pytorch/torch/quantization
Supriya Rao 0429d2c9b8 [quant][graphmode] Add new tensorlist observer for LSTM (#35893)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35893

The LSTM operator takes tensor lists as inputs for its activations and weights.
In graph mode we need a new observer that can work with tensor lists.
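
As an illustration only (not the actual observer added in this PR), a tensor-list
observer can be sketched by wrapping one per-tensor observer around each element
of the list. The `TensorListObserver` name, the `num_tensors` argument, and the
use of `MinMaxObserver` below are assumptions made for the sketch:

    import torch
    import torch.nn as nn
    from torch.quantization import MinMaxObserver

    class TensorListObserver(nn.Module):
        """Sketch: observe each tensor in a list with its own per-tensor observer."""

        def __init__(self, num_tensors):
            super().__init__()
            # One MinMaxObserver per element of the incoming tensor list
            # (illustrative choice; not the upstream implementation).
            self.observers = nn.ModuleList([MinMaxObserver() for _ in range(num_tensors)])

        def forward(self, tensor_list):
            # Record min/max statistics for every tensor in the list.
            for obs, t in zip(self.observers, tensor_list):
                obs(t)
            return tensor_list

        def calculate_qparams(self):
            # One (scale, zero_point) pair per observed tensor.
            return [obs.calculate_qparams() for obs in self.observers]

    # Usage example: observe an LSTM-style list of weight tensors.
    weights = [torch.randn(8, 4), torch.randn(8, 8)]
    obs = TensorListObserver(num_tensors=len(weights))
    obs(weights)
    print(obs.calculate_qparams())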

Test Plan:
python test/quantization/test_quantization.py ObserverTest

Imported from OSS

Differential Revision: D20830389

fbshipit-source-id: 4790f8932ae3d38446c1d942a2b3780aa91e3022
2020-04-03 10:41:28 -07:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_quantize_script.py [quant][graph] Add quant fusion for dynamic quantization (#35586) 2020-03-30 23:34:56 -07:00
default_mappings.py Add more fusion (conv3d and batchnorm) support in pytorch quantization flow (#33540) 2020-03-23 20:36:03 -07:00
fake_quantize.py Per channel quantization performance improvement (#33772) 2020-02-26 10:19:25 -08:00
fuse_modules.py Add more fusion (conv3d and batchnorm) support in pytorch quantization flow (#33540) 2020-03-23 20:36:03 -07:00
observer.py [quant][graphmode] Add new tensorlist observer for LSTM (#35893) 2020-04-03 10:41:28 -07:00
qconfig.py [quant][graph] Add a new observer type for dynamic quantization (#35455) 2020-03-26 17:38:21 -07:00
quantize.py Add more fusion (conv3d and batchnorm) support in pytorch quantization flow (#33540) 2020-03-23 20:36:03 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00