pytorch/torch/quantization
Jerry Zhang ee5ad6ce25 [quant][graphmode] Pass debug option into insert_quant_dequant pass (#39915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39915

Some usages, e.g. add_scalar, will not support the debug option; that is, we will not
have a numerically exact representation of the final quantized model before finalize
if add_scalar is used.
A warning will be added in a later PR.
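The "numerically exact representation before finalize" that the debug option preserves boils down to keeping an explicit quantize-then-dequantize pair in the graph, so the not-yet-finalized model computes the same floating-point values the final quantized model will see. A minimal toy sketch of that round trip (purely illustrative; these helpers are hypothetical and not the actual pass, which rewrites TorchScript IR):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Affine quantization: round to the nearest integer step,
    # shift by zero_point, and clamp to the quantized range.
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # Map the integer back to its representable float value.
    return (q - zero_point) * scale

def fake_quant(x, scale, zero_point):
    # Quantize immediately followed by dequantize: the float value
    # the quantized model will actually compute with. Keeping this
    # pair in the graph is what makes the pre-finalize model
    # numerically exact w.r.t. the final quantized model.
    return dequantize(quantize(x, scale, zero_point), scale, zero_point)
```

For inputs like `fake_quant(0.1234, scale=0.1, zero_point=128)`, the value snaps to the nearest representable step (here 0.1), which is exactly what the finalized quantized op would produce. Ops that cannot carry this pair (the add_scalar case above) are where the exact preview breaks down.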

Test Plan: Imported from OSS

Differential Revision: D22013026

fbshipit-source-id: 714b938f25c10fad3dfc79f095356b9803ef4b47
2020-06-16 08:14:50 -07:00
__init__.py [quant][graphmode] Rename _quantize_script.py to quantize_script.py (#39122) 2020-05-29 12:33:40 -07:00
_numeric_suite.py [PyTorch Numeric Suite] Add module output comparison (#36701) 2020-05-03 00:04:35 -07:00
default_mappings.py layernorm: eager mode qat support (#39094) 2020-06-07 13:38:16 -07:00
fake_quantize.py graph mode qat: make fake_quantize scriptable (#39750) 2020-06-10 21:34:18 -07:00
fuse_modules.py [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749) 2020-05-19 22:48:05 -07:00
observer.py [quant][graphmode] Dynamic Quant Do not depend on input shapes (#39412) 2020-06-07 11:09:44 -07:00
qconfig.py [quant] Enable reduce_range for graphmode (#39874) 2020-06-12 16:25:58 -07:00
quantize_script.py [quant][graphmode] Pass debug option into insert_quant_dequant pass (#39915) 2020-06-16 08:14:50 -07:00
quantize.py add_observer: respect device affinity for ReLU (#39337) 2020-06-03 09:31:36 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00