pytorch/torch/quantization
Jerry Zhang c475ef72f9 Change order of activation and weight in QConfig (#25950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25950

I feel that this is a more natural order.

Test Plan:
python test/test_quantizer.py

Imported from OSS

Differential Revision: D17294963

fbshipit-source-id: ed8ffdfe788a5e81966bda856e8d046ab68ee229
2019-09-11 09:51:01 -07:00
__init__.py Dynamic Quantized Linear Module (#23128) 2019-08-13 21:01:23 -07:00
fake_quantize.py Change return type of observer to two tensors (#24339; see the sketch after this listing) 2019-08-15 10:26:44 -07:00
fuse_modules.py ConvBn2d/ConvBnReLU2d (#23357) 2019-08-01 10:07:00 -07:00
observer.py Relax scale to prevent saturation in conv/linear; add test to verify the numerical precision of the quantized model with the updated observer (#25667) 2019-09-06 17:18:01 -07:00
QConfig.py Change order of activation and weight in QConfig (#25950) 2019-09-11 09:51:01 -07:00
quantize.py Add torch.nn.LSTM into the default dynamic quantize mappings (#25954) 2019-09-10 21:03:12 -07:00