pytorch/torch/quantization
Vasiliy Kuznetsov 2c558dba3d quantized layer norm: add to static quant (#36690)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36690

Adds the static quantization hook for LayerNorm

Test Plan:
```
python test/quantization/test_quantized_nn_mods.py ModuleAPITest.test_layer_norm
python test/quantization/test_quantization.py EagerModePostTrainingQuantTest.test_normalization
```
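
A minimal sketch of what this change enables, assuming the eager-mode `torch.quantization` post-training static quantization API: once `nn.LayerNorm` is registered in the static quant module mappings, `convert` should swap it for its quantized counterpart. The model `M` and its shapes here are illustrative, not from the PR.

```python
# Hedged sketch: post-training static quantization of a model containing
# nn.LayerNorm, using the eager-mode torch.quantization workflow.
import torch
import torch.nn as nn


class M(nn.Module):
    def __init__(self):
        super().__init__()
        # QuantStub/DeQuantStub mark the float <-> quantized boundaries.
        self.quant = torch.quantization.QuantStub()
        self.ln = nn.LayerNorm(4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.ln(x)
        return self.dequant(x)


m = M().eval()
m.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(m, inplace=True)
m(torch.randn(2, 4))  # run representative data through to calibrate observers
torch.quantization.convert(m, inplace=True)

# After convert, m.ln should be the quantized LayerNorm module.
out = m(torch.randn(2, 4))
print(type(m.ln))
```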

Imported from OSS

Differential Revision: D21055401

fbshipit-source-id: 188329f35359576d50ed0db5fb675ce68c28bf7d
2020-04-16 18:18:02 -07:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_numeric_suite.py [PyTorch Numeric Suite] Add weight compare API (#36186) 2020-04-13 19:02:00 -07:00
_quantize_script.py [quant][graph] Add quant fusion for dynamic quantization (#35586) 2020-03-30 23:34:56 -07:00
default_mappings.py quantized layer norm: add to static quant (#36690) 2020-04-16 18:18:02 -07:00
fake_quantize.py Per channel quantization performance improvement (#33772) 2020-02-26 10:19:25 -08:00
fuse_modules.py [quant] Enable fusion for conv modules with bias (#36173) 2020-04-08 15:53:32 -07:00
observer.py [quant][graphmode] Add new tensorlist observer for LSTM (#35893) 2020-04-03 10:41:28 -07:00
qconfig.py [quant][graph] Add a new observer type for dynamic quantization (#35455) 2020-03-26 17:38:21 -07:00
quantize.py Add more fusion (conv3d and batchnorm) support in pytorch quantization flow (#33540) 2020-03-23 20:36:03 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00