pytorch/test/quantization
Andrew Or 7320ef5651 [quant][pt2] Add prepare QAT test for mobilenetv2 (#104068)
Summary:
Prepare QAT for mobilenetv2 now has matching numerics with
FX. Two changes were needed to achieve this. First, this
commit adds observer sharing for ReLU6, which is used
extensively throughout this model. Second, the tests must use
the same manual seed every time the models are called in order
to get the same results between FX and PT2, because there is a
dropout layer at the end of the model.
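The seeding issue can be illustrated with a plain-Python stand-in for dropout (a hypothetical `dropout` helper, not the actual test code): a layer that draws from the RNG produces different masks on consecutive calls, so two otherwise identical models only agree if the seed is reset before each call.

```python
import random

def dropout(xs, p=0.5):
    # Zero each element with probability p, like nn.Dropout in training mode.
    return [0.0 if random.random() < p else x for x in xs]

xs = [1.0] * 8

# Without reseeding, two calls with identical inputs can disagree
# purely because the RNG state advances between them.
random.seed(0)
a = dropout(xs)
b = dropout(xs)  # the mask may differ from a's

# Reseeding before each call makes the masks, and hence outputs, match.
random.seed(0)
c = dropout(xs)
random.seed(0)
d = dropout(xs)
assert c == d
```

The same reasoning applies to `torch.manual_seed` in the actual test: it is called immediately before each forward pass so the FX and PT2 models sample identical dropout masks.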

Test Plan: python test/test_quantization.py TestQuantizePT2EModels.test_qat_mobilenet_v2

Reviewed By: kimishpatel

Differential Revision: D46707786

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104068
Approved by: https://github.com/jerryzh168
2023-06-23 16:34:25 +00:00
ao_migration ao migration: remove package test as this behavior is tested by other things (#94422) 2023-02-13 16:33:40 +00:00
bc [BE] [3/3] Rewrite super() calls in test (#94592) 2023-02-12 22:20:53 +00:00
core [codemod][numpy] replace np.str with str (#103931) 2023-06-21 18:16:42 +00:00
eager [BE] [3/3] Rewrite super() calls in test (#94592) 2023-02-12 22:20:53 +00:00
fx [PT2][Quant] Update op names for decomposed quantized lib (#103251) 2023-06-15 04:37:58 +00:00
jit [BE] Move flatbuffer related python C bindings to script_init (#97476) 2023-03-28 17:56:32 +00:00
pt2e [quant][pt2] Add prepare QAT test for mobilenetv2 (#104068) 2023-06-23 16:34:25 +00:00
serialized [ao] fix incorrect integer cast on histogram observer bounds (#90355) 2022-12-12 20:30:44 +00:00
__init__.py