pytorch/torch/quantization
Latest commit b995540a01 (Natalia Gimelshein, 2020-05-19 15:22:16 -07:00):
Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules

Test Plan: revert-hammer
Differential Revision: D21632878
Original commit changeset: 0d73398b95d7
fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
File                  Last commit                                                                      Date
__init__.py           Ignore F401 in all __init__.py without putting noqa (#25823)                     2019-10-23 15:28:13 -07:00
_numeric_suite.py     [PyTorch Numeric Suite] Add module output comparison (#36701)                    2020-05-03 00:04:35 -07:00
_quantize_script.py   [quant][graphmode] Move processing code to prepare_script (#38669)               2020-05-18 20:18:11 -07:00
default_mappings.py   [quant] Add support for Quantized Conv1d and ConvRELU1d (#38283)                 2020-05-13 16:59:13 -07:00
fake_quantize.py      fake_quant: move observer and fake_quant flags into buffers (#38368)             2020-05-18 09:30:07 -07:00
fuse_modules.py       Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules    2020-05-19 15:22:16 -07:00
observer.py           [quant] Remove get_qparams in Observers (#38435)                                 2020-05-18 20:49:33 -07:00
qconfig.py            [quant] Return default qconfig when backend is 'none' (#38407)                   2020-05-14 09:53:50 -07:00
quantize.py           Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules    2020-05-19 15:22:16 -07:00
stubs.py              Factored out the default mappings                                                2019-10-03 11:52:21 -07:00
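
The files above make up PyTorch's eager-mode quantization package at this snapshot. For orientation, a minimal sketch of the post-training static quantization flow they implement follows. The toy model, its layer names, and the calibration input are hypothetical placeholders; QuantStub/DeQuantStub (stubs.py), get_default_qconfig (qconfig.py), fuse_modules (fuse_modules.py), and prepare/convert (quantize.py) are the package's public entry points.

    # Sketch of eager-mode post-training static quantization.
    # The model and calibration data below are illustrative only.
    import torch
    import torch.nn as nn
    import torch.quantization as tq

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = tq.QuantStub()      # stubs.py: float -> quantized boundary
            self.conv = nn.Conv2d(3, 8, 3)
            self.bn = nn.BatchNorm2d(8)
            self.relu = nn.ReLU()
            self.dequant = tq.DeQuantStub()  # stubs.py: quantized -> float boundary

        def forward(self, x):
            return self.dequant(self.relu(self.bn(self.conv(self.quant(x)))))

    model = M().eval()
    model.qconfig = tq.get_default_qconfig('fbgemm')                # qconfig.py
    tq.fuse_modules(model, [['conv', 'bn', 'relu']], inplace=True)  # fuse_modules.py
    tq.prepare(model, inplace=True)   # quantize.py: attach observers (observer.py)
    with torch.no_grad():
        model(torch.randn(1, 3, 32, 32))  # calibration pass to collect statistics
    tq.convert(model, inplace=True)   # quantize.py: swap in quantized modules

Note that the reverted commit at the top of this listing is the 1d analogue of this fusion step: it would have let fuse_modules fold Conv1d/BatchNorm1d(/ReLU) sequences the way the 2d variants are folded here.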