Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37107

Currently, histogram observers relax both the min and max values of the activations for performance reasons. This causes an issue for Glow, where there is a slowdown if the zero-point is not zero for post-ReLU activations.

ghstack-source-id: 102768017

Test Plan: buck test caffe2/test:quantization -- 'test_histogram_observer_one_sided \(quantization\.test_quantization\.RecordHistogramObserverTest\)' --print-passing-details

Differential Revision: D21187636

fbshipit-source-id: 8d616b9e9caf2979a26a215e99434f71025e3d8b
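A minimal sketch of the behavior this change targets, assuming the `HistogramObserver` API exported from the `observer.py` module listed below (not the test from the Test Plan): feeding a one-sided, post-ReLU activation distribution to the observer and checking that the reported zero-point is zero.

```python
import torch
from torch.quantization import HistogramObserver

# Feed a one-sided (post-ReLU) activation distribution to the observer.
# With this change, the observer no longer relaxes min_val below zero
# for such inputs, so the computed zero-point should stay at 0.
obs = HistogramObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine)
x = torch.relu(torch.randn(4, 64))  # all values >= 0
obs(x)
scale, zero_point = obs.calculate_qparams()
print(scale.item(), zero_point.item())  # zero_point expected to be 0
```

Intuitively, for affine quantization the zero-point is derived from the observed minimum; if the minimum is pinned at 0 rather than relaxed below it, the zero-point maps to 0, which is the property the Glow backend relies on here.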
Files in this directory:

- __init__.py
- _numeric_suite.py
- _quantize_script.py
- default_mappings.py
- fake_quantize.py
- fuse_modules.py
- observer.py
- qconfig.py
- quantize.py
- stubs.py