pytorch/torch/quantization
Jerry Zhang 3a02ed822b Remove insert_prepack_unpack and fold_prepack for now (#30909)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30909

`fold_prepack` no longer works after we changed `scale` and `zero_point`
to be attributes, and since the freeze API is coming up, I don't want to
spend time fixing a pass that will be thrown away later anyway.
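
(For context, a hedged sketch of what storing `scale` and `zero_point` as module attributes looks like; the module below is a hypothetical illustration, not the actual `_quantize_script.py` internals. Once the values are attribute reads in the scripted graph rather than constants, a folding pass like `fold_prepack` can no longer treat them as foldable constants.)

```python
import torch

# Hypothetical illustration: scale/zero_point held as buffers, i.e. module
# attributes. In the scripted graph they appear as prim::GetAttr reads
# rather than constants.
class Quantize(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('scale', torch.tensor(0.1))
        self.register_buffer('zero_point', torch.tensor(0, dtype=torch.long))

    def forward(self, x):
        return torch.quantize_per_tensor(
            x, float(self.scale), int(self.zero_point), torch.quint8)

m = torch.jit.script(Quantize())
print(m.graph)  # scale/zero_point are attribute reads, not constants
```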

Test Plan:
.

Imported from OSS

Differential Revision: D18864537

fbshipit-source-id: 649e6b91f2b04b8babacc0afb6bc1530ed7259d3
2019-12-12 07:44:31 -08:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
_quantize_script.py Remove insert_prepack_unpack and fold_prepack for now (#30909) 2019-12-12 07:44:31 -08:00
default_mappings.py Fix mapping white list (#30636) 2019-12-03 11:34:28 -08:00
fake_quantize.py Bug fix: Handle missing keys in observer state dict during load (#30357) 2019-11-26 06:53:45 -08:00
fuse_modules.py Updates to quantization documentation (#30288) 2019-11-23 09:29:30 -08:00
observer.py Bug fix: Handle missing keys in observer state dict during load (#30357) 2019-11-26 06:53:45 -08:00
qconfig.py Updates to quantization documentation (#30288) 2019-11-23 09:29:30 -08:00
quantize.py Fix quantized ConvReLU3d test (#30266) 2019-11-25 14:52:32 -08:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00
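
Taken together, these files implement the eager-mode post-training quantization flow: `qconfig.py` supplies observer configurations, `fuse_modules.py` fuses adjacent ops, `quantize.py` drives prepare/convert, and `stubs.py` marks quantize/dequantize points. A minimal sketch under the assumption of a toy Conv+ReLU model (the model itself is hypothetical; the API calls are the ones these files provide):

```python
import torch
import torch.quantization as tq

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # stubs.py: marks the quantize point
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.relu = torch.nn.ReLU()
        self.dequant = tq.DeQuantStub()  # stubs.py: marks the dequantize point

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

m = M().eval()
tq.fuse_modules(m, [['conv', 'relu']], inplace=True)  # fuse_modules.py
m.qconfig = tq.get_default_qconfig('fbgemm')          # qconfig.py
tq.prepare(m, inplace=True)     # quantize.py: insert observers (observer.py)
m(torch.randn(1, 1, 4, 4))      # calibration pass records activation ranges
tq.convert(m, inplace=True)     # quantize.py: swap in quantized modules
```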