pytorch/test/quantization
Raghuraman Krishnamoorthi 14273126d2 Numeric Suite: Swap with shadow modules only for quantized part of model (#51052)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51052

Ensures that shadow modules are inserted only for the quantized modules in a model, removing redundant module insertion.
ghstack-source-id: 121041113

Test Plan: buck test caffe2/test:quantization --  'test_compare_model_stub_partial \(quantization\.test_numeric_suite\.TestEagerModeNumericSuite\)'

Reviewed By: vkuzo

Differential Revision: D26054016

fbshipit-source-id: 73fc2fd2f0239b0363f358c80e34566d06a0c7cb
2021-02-04 11:40:30 -08:00
serialized Adding a version serialization type to ConvPackedParam (#43086) 2020-08-28 15:41:30 -07:00
__init__.py remediation of S205607 2020-07-17 17:19:47 -07:00
test_backward_compatibility.py Adding a version serialization type to ConvPackedParam (#43086) 2020-08-28 15:41:30 -07:00
test_bias_correction.py Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
test_equalize.py cross_layer_equalization (#41685) 2020-07-22 08:39:23 -07:00
test_fusion_passes.py Add JIT fusion pass to fuse quantized add and relu. (#38897) 2020-05-27 14:16:57 -07:00
test_numeric_suite_fx.py compare_model_outputs_fx API implementation (#49266) 2021-02-02 10:43:25 -08:00
test_numeric_suite.py Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
test_qat_module.py [reland][quant][fix] Add bias once in conv_fused (#48593) (#48661) 2020-12-02 10:17:43 -08:00
test_quantize_fx.py [quant][graphmode][fx] Enable inception_v3 and googlenet static quant test (#51402) 2021-02-03 14:32:00 -08:00
test_quantize_jit.py split quantization jit op (#51329) 2021-01-29 07:49:53 -08:00
test_quantize.py quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
test_quantized_functional.py [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038) 2020-11-17 09:52:21 -08:00
test_quantized_module.py [quant] Add reflection padding to conv (#49011) 2021-02-03 21:44:12 -08:00
test_quantized_op.py [quant] Support 2 dim input in quantized batchnorm 1d (#51597) 2021-02-03 21:05:03 -08:00
test_quantized_tensor.py [quant] PerChannelFloatQParams support for quint4x2 dtype (#45594) 2020-10-01 23:59:53 -07:00
test_workflow_module.py mem-efficient learnable fake quantization (#49315) 2021-02-03 18:57:47 -08:00