pytorch/torch/quantization
Vasiliy Kuznetsov 65df8b3886 hardswish: make it work in static quantization (#36545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36545

* adds a quantized nn.Module for Hardswish so that activation values can be observed
* modifies the hardswish op to allow specifying an output scale and zero_point
* makes the hardswish module get swapped correctly during static quantization (sketched below)
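
A minimal sketch of the eager-mode static quantization flow this change enables, assuming the torch.quantization prepare/convert APIs of this era; the toy module M and the tensor shapes are illustrative placeholders:

```python
# Minimal sketch, assuming the eager-mode torch.quantization APIs
# available at this commit; the module M and shapes are placeholders.
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.hardswish = nn.Hardswish()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)       # quantize the input
        x = self.hardswish(x)   # observed in prepare, swapped to quantized Hardswish in convert
        return self.dequant(x)  # back to float

m = M().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)   # attach observers, including on hardswish
m(torch.randn(4, 8))                          # calibration pass records activation ranges
torch.quantization.convert(m, inplace=True)   # swap to the quantized module with scale/zero_point
```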

Test Plan:
Added tests; they pass for:
* the new _out flavor of hardswish
* the QNNPACK changes
* static quant e2e (an op-level sketch of the new scale/zero_point arguments follows this list)
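
For the op-level change, a hedged sketch of calling quantized hardswish with an explicit output scale and zero_point; the exact quantized::hardswish schema shown here is inferred from the summary above and should be treated as an assumption:

```python
# Hedged sketch: the quantized hardswish op taking an output scale and
# zero_point, per the op change in the summary. The exact schema of
# torch.ops.quantized.hardswish is an assumption based on this PR.
import torch

x = torch.randn(2, 3)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
# Requantize the result directly to the requested output parameters.
qy = torch.ops.quantized.hardswish(qx, 0.05, 64)  # (input, output_scale, output_zero_point)
print(qy.dequantize())
```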

Imported from OSS

Differential Revision: D21045320

fbshipit-source-id: ab7e52f0f54a7d5923ab6f58197022cc28c12354
2020-04-15 18:02:35 -07:00
__init__.py Ignore F401 in all __init__.py without adding noqa (#25823) 2019-10-23 15:28:13 -07:00
_numeric_suite.py [PyTorch Numeric Suite] Add weight compare API (#36186) 2020-04-13 19:02:00 -07:00
_quantize_script.py [quant][graph] Add quant fusion for dynamic quantization (#35586) 2020-03-30 23:34:56 -07:00
default_mappings.py hardswish: make it work in static quantization (#36545) 2020-04-15 18:02:35 -07:00
fake_quantize.py Per channel quantization performance improvement (#33772) 2020-02-26 10:19:25 -08:00
fuse_modules.py [quant] Enable fusion for conv modules with bias (#36173) 2020-04-08 15:53:32 -07:00
observer.py [quant][graphmode] Add new tensorlist observer for LSTM (#35893) 2020-04-03 10:41:28 -07:00
qconfig.py [quant][graph] Add a new observer type for dynamic quantization (#35455) 2020-03-26 17:38:21 -07:00
quantize.py Add more fusion (conv3d and batchnorm) support in the PyTorch quantization flow (#33540) 2020-03-23 20:36:03 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00