pytorch/torch/quantization
Latest commit: b7a9bc0802 (2020-08-07 23:04:40 -07:00) by Mike Ruberry
Revert D22217029: Add fake quantize operator that works in backward pass

Test Plan: revert-hammer
Differential Revision: D22217029 (48e978ba18)
Original commit changeset: 7055a2cdafcf
fbshipit-source-id: f57a27be412c6fbfd5a5b07a26f758ac36be3b67
File | Last commit message | Date
__init__.py | [quant] Expose register activation post process hook function to user (#42342) | 2020-08-03 12:28:42 -07:00
_equalize.py | cross_layer_equalization (#41685) | 2020-07-22 08:39:23 -07:00
_learnable_fake_quantize.py | Extending Learnable Fake Quantize module to support gradient scaling and factory (partial) construction (#41969) | 2020-07-29 10:22:26 -07:00
_numeric_suite.py | Remove unused Logger in get_matching_activations (#41023) | 2020-07-07 00:33:07 -07:00
default_mappings.py | qat eager: remove unneeded modules (#40396) | 2020-06-22 17:45:51 -07:00
fake_quantize.py | graph mode qat: make fake_quantize scriptable (#39750) | 2020-06-10 21:34:18 -07:00
fuse_modules.py | Quantization: preserving pre and post forward hooks (#37233) | 2020-07-13 12:41:24 -07:00
observer.py | Speed up HistogramObserver by vectorizing critical path (#41041) | 2020-08-07 12:29:23 -07:00
qconfig.py | Revert D22217029: Add fake quantize operator that works in backward pass | 2020-08-07 23:04:40 -07:00
quantize_jit.py | [quant][graphmode] Enable inplace option for top level API (#40414) | 2020-06-23 16:42:48 -07:00
quantize.py | [quant] Expose register activation post process hook function to user (#42342) | 2020-08-03 12:28:42 -07:00
stubs.py | Factored out the default mappings | 2019-10-03 11:52:21 -07:00
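Taken together, these files form PyTorch's eager-mode quantization toolkit as of this snapshot (roughly the v1.6 era). Below is a minimal sketch of post-training static quantization using them; the toy model M is hypothetical and for illustration only, but the torch.quantization calls (QuantStub/DeQuantStub from stubs.py, get_default_qconfig from qconfig.py, fuse_modules from fuse_modules.py, prepare/convert from quantize.py, with observers from observer.py inserted along the way) should match the public API of that release.

```python
import torch
import torch.nn as nn
import torch.quantization

# Hypothetical toy model, for illustration only.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # stubs.py: fp32 -> int8 boundary
        self.conv = nn.Conv2d(1, 1, 1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # stubs.py: int8 -> fp32 boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = M().eval()

# qconfig.py: choose observers for activations and weights ('fbgemm' = x86 server backend).
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# fuse_modules.py: fold conv + relu into a single intrinsic module before quantizing.
model = torch.quantization.fuse_modules(model, [['conv', 'relu']])

# quantize.py: insert observers (observer.py), run calibration data through them,
# then swap float modules for quantized kernels.
model = torch.quantization.prepare(model)
model(torch.randn(1, 1, 4, 4))  # calibration pass
model = torch.quantization.convert(model)
```

For quantization-aware training, fake_quantize.py supplies modules that simulate quantization in the forward pass while letting gradients flow to the float weights; the workflow swaps prepare for its QAT counterpart, again as a sketch under the same assumptions:

```python
# QAT variant: fake-quant modules (fake_quantize.py) stand in for plain observers.
qat_model = M().eval()
qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
qat_model = torch.quantization.fuse_modules(qat_model, [['conv', 'relu']])
qat_model = torch.quantization.prepare_qat(qat_model.train())
# ... a few training steps here let the scale/zero-point estimates settle ...
qat_model = torch.quantization.convert(qat_model.eval())
```

quantize_jit.py exposes the analogous top-level entry point for graph-mode quantization of TorchScript models, per the [quant][graphmode] commit listed above.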