pytorch/torch/quantization
Supriya Rao c04d39aaf2 [quant][bug] Histogram observer bug fix with min == max (#40310)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40310

Test Plan:
python test/test_quantization.py test_histogram_observer_same_inputs

Imported from OSS

Differential Revision: D22145908

fbshipit-source-id: c1646d9ae6738755981fe3d09c8a8e25fcb994d4
2020-06-22 10:05:10 -07:00
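The fix above concerns the degenerate case where every observed value is identical, so the recorded min equals the recorded max and a naive scale computation yields zero. A minimal pure-Python sketch of that failure mode and one common guard (an illustrative helper only, not PyTorch's actual observer implementation):

```python
def calc_qparams(min_val, max_val, qmin=0, qmax=255, eps=1e-7):
    """Compute affine quantization scale/zero_point from an observed range.

    Naively, scale = (max - min) / (qmax - qmin) becomes 0 when all
    observed inputs are identical (min == max), which breaks any later
    divide-by-scale step. One common guard is to widen the degenerate
    range around the single observed value. This helper is hypothetical,
    for illustration only -- it is not the code changed in #40310.
    """
    if min_val == max_val:
        # Widen the range so the scale stays strictly positive.
        min_val, max_val = min_val - eps, max_val + eps
    # Ensure the range contains zero so zero is exactly representable.
    min_val = min(min_val, 0.0)
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = round(qmin - min_val / scale)
    return scale, zero_point

# Identical inputs: without the min == max guard this divides by zero.
scale, zero_point = calc_qparams(0.5, 0.5)
assert scale > 0
```

The referenced test (`test_histogram_observer_same_inputs`) exercises exactly this path: feeding the observer a tensor of identical values and checking that the computed quantization parameters remain valid.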
File                 | Last commit                                                                          | Date
__init__.py          | [quant][graphmode] Rename graph mode quantization API to quantize_jit (#40212)       | 2020-06-19 18:13:37 -07:00
_numeric_suite.py    | [PyTorch Numeric Suite] Add support for dynamic LSTM (#40065)                        | 2020-06-20 07:00:13 -07:00
default_mappings.py  | quantized elu: eager mode QAT handling (#40104)                                      | 2020-06-21 09:40:46 -07:00
fake_quantize.py     | graph mode qat: make fake_quantize scriptable (#39750)                               | 2020-06-10 21:34:18 -07:00
fuse_modules.py      | [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)        | 2020-05-19 22:48:05 -07:00
observer.py          | [quant][bug] Histogram observer bug fix with min == max (#40310)                     | 2020-06-22 10:05:10 -07:00
qconfig.py           | [quant] Enable reduce_range for graphmode (#39874)                                   | 2020-06-12 16:25:58 -07:00
quantize_jit.py      | [quant][graphmode] docstrings for top level APIs (#40328)                            | 2020-06-19 22:20:23 -07:00
quantize.py          | [quant][graphmode] docstrings for top level APIs (#40328)                            | 2020-06-19 22:20:23 -07:00
stubs.py             | Factored out the default mappings                                                    | 2019-10-03 11:52:21 -07:00