pytorch/torch/quantization
Jerry Zhang 91f10a1de1 [quant][graphmode][refactor] Better API for fold_convbn (#32380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32380

We'll clone the module first, then fold the conv-bn patterns, and return the new
module, leaving the original module unmodified.
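
A minimal sketch of the folding arithmetic this refers to (an illustration, not
the actual TorchScript pass in this PR; `fold_conv_bn_eval` is a hypothetical
helper name):

    import copy
    import torch
    import torch.nn as nn

    def fold_conv_bn_eval(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
        # Return a new Conv2d whose weight/bias absorb the eval-mode BN:
        #   y = (conv(x) - running_mean) * gamma / sqrt(running_var + eps) + beta
        folded = copy.deepcopy(conv)  # clone so the input conv is untouched
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        folded.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
        bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
        folded.bias = nn.Parameter((bias - bn.running_mean) * scale + bn.bias)
        return folded

    # The original (conv, bn) pair and the folded conv agree in eval mode:
    conv, bn = nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8).eval()
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 1.5)
    x = torch.randn(1, 3, 16, 16)
    assert torch.allclose(bn(conv(x)), fold_conv_bn_eval(conv, bn)(x), atol=1e-5)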

Test Plan:
.

Imported from OSS

Differential Revision: D19508033

fbshipit-source-id: 328e91a2c9420761c904a7f2b62dab4cfaaa31ac
2020-01-24 15:46:47 -08:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_quantize_script.py [quant][graphmode][refactor] Better API for fold_convbn (#32380) 2020-01-24 15:46:47 -08:00
default_mappings.py Fix mapping white list (#30636) 2019-12-03 11:34:28 -08:00
fake_quantize.py Use default scale/zero_point in fake_quantize module instead of None (#32318) 2020-01-17 11:04:08 -08:00
fuse_modules.py Updates to quantization documentation (#30288) 2019-11-23 09:29:30 -08:00
observer.py Bug fix: Handle missing keys in observer state dict during load (#30357) 2019-11-26 06:53:45 -08:00
qconfig.py Updates to quantization documentation (#30288) 2019-11-23 09:29:30 -08:00
quantize.py Remove qconfig_dict in top level eager mode quantization API (#31972) 2020-01-10 11:04:37 -08:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00
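
For context, an eager-mode post-training quantization sketch that ties the files
above together (the model and calibration data are made-up placeholders):

    import torch
    import torch.nn as nn
    from torch.quantization import (QuantStub, DeQuantStub, fuse_modules,
                                    get_default_qconfig, prepare, convert)

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = QuantStub()      # stubs.py: float -> quantized boundary
            self.conv = nn.Conv2d(3, 8, 3)
            self.bn = nn.BatchNorm2d(8)
            self.relu = nn.ReLU()
            self.dequant = DeQuantStub()  # quantized -> float boundary

        def forward(self, x):
            return self.dequant(self.relu(self.bn(self.conv(self.quant(x)))))

    model = TinyNet().eval()
    fuse_modules(model, [['conv', 'bn', 'relu']], inplace=True)  # fuse_modules.py
    model.qconfig = get_default_qconfig('fbgemm')  # qconfig.py (observers from observer.py)
    prepare(model, inplace=True)                   # quantize.py: insert observers
    for _ in range(4):                             # calibrate on dummy data
        model(torch.randn(1, 3, 16, 16))
    convert(model, inplace=True)                   # swap in quantized modules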