pytorch/torch/quantization
Jerry Zhang b2291d4600 Make PerChannelMinMaxObserver scriptable using torch.jit.ignore (#29416)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29416

As titled: make PerChannelMinMaxObserver scriptable using torch.jit.ignore.

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18580906

fbshipit-source-id: 5370300b89e26c2b4662b17e51284e8708cb5843
2019-11-19 19:12:55 -08:00
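
For context, the torch.jit.ignore pattern the headline commit relies on looks roughly like the sketch below. ToyPerChannelObserver, _debug_print_range, and the channel-axis-0 assumption are hypothetical illustrations, not the actual observer.py change; the point is only that the decorator tells the scripting compiler to leave a Python-only helper uncompiled so the rest of the module can still be scripted.

```python
import torch
import torch.nn as nn

class ToyPerChannelObserver(nn.Module):
    """Hypothetical sketch of the torch.jit.ignore pattern, not the real
    PerChannelMinMaxObserver: the helper below relies on Python-only
    conveniences, so the TorchScript compiler is told to skip it."""

    @torch.jit.ignore
    def _debug_print_range(self, mins: torch.Tensor, maxs: torch.Tensor) -> None:
        # The compiler never sees this body; it always runs in the Python
        # interpreter, even when the module is scripted.
        print("per-channel min:", mins.tolist(), "max:", maxs.tolist())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = x.detach().flatten(1)          # assume dim 0 is the channel axis
        mins, _ = flat.min(dim=1)
        maxs, _ = flat.max(dim=1)
        self._debug_print_range(mins, maxs)   # left as a call back into Python
        return x

scripted = torch.jit.script(ToyPerChannelObserver())  # scripts despite the helper
scripted(torch.randn(3, 16))
```

The trade-off, per the torch.jit.ignore documentation, is that a scripted module which calls an ignored function can be run but not serialized with torch.jit.save; torch.jit.unused is the alternative when the call should simply raise instead.
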
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_quantize_script.py Convert conv_prepack to conv2d_prepack and conv_unpack to conv2d_unpack (#29529) 2019-11-11 21:54:10 -08:00
default_mappings.py Add nn.quantized.Conv3d (#29813) 2019-11-15 04:33:40 -08:00
fake_quantize.py Changing observer name 2019-10-17 11:36:03 -07:00
fuse_modules.py Rename _intrinsic to intrinsic 2019-10-02 18:53:06 -07:00
observer.py Make PerChannelMinMaxObserver scriptable using torch.jit.ignore (#29416) 2019-11-19 19:12:55 -08:00
qconfig.py Refactoring names for consistency 2019-10-16 12:18:26 -07:00
quantize.py Do not insert observers for empty sequential modules (#28384) 2019-10-21 20:32:13 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00