pytorch/torch/nn
Vasiliy Kuznetsov a27aaa49aa quant norm layers: move scale + zp to buffers (#52861)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52861

Currently, scale and zero_point (zp) in these layers are not buffers, which
means they do not get saved to the state dict.  Moving them
into buffers allows people to properly use state_dict.
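For illustration, a minimal sketch of the idea (the class name and values here are hypothetical, not the actual torch.nn.quantized modules): `register_buffer` makes a non-learnable tensor part of the module's state_dict, which is the behavior this change gives scale and zero_point.

```python
import torch
import torch.nn as nn

class QuantizedNormSketch(nn.Module):
    """Hypothetical module showing scale/zero_point as buffers."""
    def __init__(self, scale=1.0, zero_point=0):
        super().__init__()
        # register_buffer stores the tensor in state_dict without
        # making it a learnable nn.Parameter.
        self.register_buffer('scale', torch.tensor(scale))
        self.register_buffer('zero_point', torch.tensor(zero_point))

m = QuantizedNormSketch(scale=0.5, zero_point=128)
# Both tensors now round-trip through save/load via state_dict.
print(m.state_dict())  # OrderedDict([('scale', ...), ('zero_point', ...)])
```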

Note: this is a redo of https://github.com/pytorch/pytorch/pull/45313,
with BN taken out. We are skipping BN because other code depends on its
existing behavior; we should clean it up eventually.

Note: not handling backward compatibility (BC) because this is 100% broken
now, so there is no practical value in handling BC.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D26671761

fbshipit-source-id: 7615b1dd0d1ae88eeff8b1d150f3846815dc2bc9
2021-02-25 17:23:39 -08:00
backends remediation of S205607 2020-07-17 17:19:47 -07:00
intrinsic [quant] Reference option for conv module (#52316) 2021-02-24 14:54:02 -08:00
modules Added fast path in the case of no hooks (#52576) 2021-02-24 21:48:09 -08:00
parallel log newly added construction and runtime stats at randomly selected iterations (#51394) 2021-02-19 00:15:04 -08:00
qat [reland][quant][fix] Add bias once in conv_fused (#48593) (#48661) 2020-12-02 10:17:43 -08:00
quantizable MHA: Fix regression and apply bias flag to both in/out proj (#52537) 2021-02-22 14:47:12 -08:00
quantized quant norm layers: move scale + zp to buffers (#52861) 2021-02-25 17:23:39 -08:00
utils quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
__init__.py Add LazyBatchNormXd (#51862) 2021-02-09 10:29:03 -08:00
_reduction.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
common_types.py Fix mypy type hint for AdaptiveAvgPool2,3d, AdaptiveMaxPool2,3d (#49963) 2021-01-06 09:47:15 -08:00
cpp.py Annotate torch.nn.cpp (#46490) 2020-10-23 17:40:32 -07:00
functional.py fix(docs): remove redundant hardsigmoid() in docstring to show up inplace parameter (#52559) 2021-02-25 09:09:32 -08:00
functional.pyi.in Add Gaussian NLL Loss (#50886) 2021-01-22 06:56:49 -08:00
grad.py Grad input padding support for dilation argument (#33872) 2020-04-09 11:09:55 -07:00
init.py Add SELU Activation to calculate_gain (#50664) 2021-01-18 23:01:18 -08:00
parameter.py Add LazyBatchNormXd (#51862) 2021-02-09 10:29:03 -08:00
parameter.pyi Add LazyBatchNormXd (#51862) 2021-02-09 10:29:03 -08:00