pytorch/torch/quantization
Jerry Zhang 7023e13fbb Fix mapping white list (#30636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30636

Currently DeQuantStub is still in the whitelist because set union has
lower precedence than set difference.
Fixes issue: https://github.com/pytorch/pytorch/issues/29646
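The precedence pitfall described above can be sketched in a few lines (the set names below are illustrative, not the actual identifiers in default_mappings.py): in Python, `-` binds tighter than `|`, so the difference applies only to its immediate left operand, and anything subtracted from only one side of a union can leak back in.

```python
# Hypothetical module sets, for illustration only.
full = {"QuantStub", "DeQuantStub", "Linear"}
extra = {"Conv2d"}
skip = {"DeQuantStub"}

# "-" has higher precedence than "|", so this parses as
# full | (extra - skip): DeQuantStub from `full` survives.
wrong = full | extra - skip

# Parenthesize to subtract from the whole union.
right = (full | extra) - skip

print("DeQuantStub" in wrong)  # True  -- the bug
print("DeQuantStub" in right)  # False -- the fix
```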

Test Plan:
Verified locally that we don't attach a qconfig to DeQuantStub.

Imported from OSS

Differential Revision: D18775275

fbshipit-source-id: 8da07e40963555671b3d4326c9291706103f858e
2019-12-03 11:34:28 -08:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_quantize_script.py Use LinearPackedParams everywhere 2019-11-22 11:31:17 -08:00
default_mappings.py Fix mapping white list (#30636) 2019-12-03 11:34:28 -08:00
fake_quantize.py Bug fix: Handle missing keys in observer state dict during load (#30357) 2019-11-26 06:53:45 -08:00
fuse_modules.py Updates to quantization documentation (#30288) 2019-11-23 09:29:30 -08:00
observer.py Bug fix: Handle missing keys in observer state dict during load (#30357) 2019-11-26 06:53:45 -08:00
qconfig.py Updates to quantization documentation (#30288) 2019-11-23 09:29:30 -08:00
quantize.py Fix quantized ConvReLU3d test (#30266) 2019-11-25 14:52:32 -08:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00