pytorch/torch/quantization
Raghuraman Krishnamoorthi 904949382e Ensure that histogram observers have zero-point of zero for post ReLU activations (#37107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37107

Currently, histogram observers relax both the min and max values of the activations for performance reasons. This causes an issue for Glow, which slows down when the zero-point is not zero for post-ReLU activations.
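
A minimal sketch (not the test from this PR) of the behavior the change targets: when a HistogramObserver only ever sees non-negative (post-ReLU) activations, the quantization parameters it computes should have a zero-point of exactly zero.

```python
import torch
from torch.quantization.observer import HistogramObserver

# Default observer: quint8, per-tensor affine quantization.
obs = HistogramObserver()

# One-sided, non-negative activations, as produced after a ReLU.
x = torch.relu(torch.randn(1024))
obs(x)  # update the running histogram

scale, zero_point = obs.calculate_qparams()
# With this change, zero_point is expected to be 0 for one-sided inputs.
print(scale, zero_point)
```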
ghstack-source-id: 102768017

Test Plan: buck test caffe2/test:quantization -- 'test_histogram_observer_one_sided \(quantization\.test_quantization\.RecordHistogramObserverTest\)' --print-passing-details

Differential Revision: D21187636

fbshipit-source-id: 8d616b9e9caf2979a26a215e99434f71025e3d8b
2020-04-24 20:57:34 -07:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_numeric_suite.py Revert D21045393: [PyTorch Numeric Suite] Add module level comparison 2020-04-24 07:03:04 -07:00
_quantize_script.py Fix type annotations and make MyPy run on torch/ (#36584) 2020-04-22 14:17:08 -07:00
default_mappings.py quantized layer norm: add to static quant (#36690) 2020-04-16 18:18:02 -07:00
fake_quantize.py Per channel quantization performance improvement (#33772) 2020-02-26 10:19:25 -08:00
fuse_modules.py [quant] Enable fusion for conv modules with bias (#36173) 2020-04-08 15:53:32 -07:00
observer.py Ensure that histogram observers have zero-point of zero for post ReLU activations (#37107) 2020-04-24 20:57:34 -07:00
qconfig.py [quant][graph] Add a new observer type for dynamic quantization (#35455) 2020-03-26 17:38:21 -07:00
quantize.py Add more fusion (conv3d and batchnorm) support in PyTorch quantization flow (#33540) 2020-03-23 20:36:03 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00