pytorch/torch/quantization
Jerry Zhang ec9e6e07bc [quant][graphmode][fx] Add support for general value ops (#43439)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43439

Porting op tests from test_quantize_jit.py

Test Plan:
TestQuantizeFxOps

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D23278585

fbshipit-source-id: ad29f39482cf4909068ce29555470ef430ea17f6
2020-08-22 08:52:28 -07:00
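For context on the commit above, the following is a minimal sketch of how FX graph-mode quantization is driven and exercised by TestQuantizeFxOps. The prepare_fx/convert_fx entry points and the qconfig_dict layout shown here are assumptions based on the API that later became public as torch.quantization.quantize_fx (at this commit the code lives in the private _quantize_fx.py), and exact signatures have shifted across releases (newer releases also take an example_inputs argument); the model is illustrative only.

```python
# Hedged sketch: FX graph-mode quantization flow, assuming the
# prepare_fx/convert_fx entry points and the era-specific two-argument
# prepare_fx(model, qconfig_dict) form. Not the exact API at this commit.
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx  # assumed import path

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3)

    def forward(self, x):
        x = self.conv(x)
        # "General value ops" such as flatten pass quantization parameters
        # through from their input rather than requiring their own observers.
        return torch.flatten(x, 1)

model = M().eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}
prepared = prepare_fx(model, qconfig_dict)     # trace the module and insert observers
prepared(torch.randn(1, 3, 8, 8))              # calibration pass
quantized = convert_fx(prepared)               # lower to quantized ops
```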
fx [quant][graphmode][fx] Add support for general value ops (#43439) 2020-08-22 08:52:28 -07:00
__init__.py [quant] Expose register activation post process hook function to user (#42342) 2020-08-03 12:28:42 -07:00
_correct_bias.py Bias Correction Implementation (#41845) 2020-08-20 21:40:33 -07:00
_equalize.py cross_layer_equalization (#41685) 2020-07-22 08:39:23 -07:00
_learnable_fake_quantize.py Extending Learnable Fake Quantize module to support gradient scaling and factory (partial) construction (#41969) 2020-07-29 10:22:26 -07:00
_numeric_suite.py Remove unused Logger in get_matching_activations (#41023) 2020-07-07 00:33:07 -07:00
_quantize_fx.py [quant][graphmode][fx][test] Add per op test for graph mode quant on fx (#43229) 2020-08-20 17:32:02 -07:00
default_mappings.py [quant][graphmode][fx] Add support for instance_norm (#43377) 2020-08-21 18:32:50 -07:00
fake_quantize.py [quant][doc] Print more info for fake quantize module (#43031) 2020-08-13 20:27:36 -07:00
fuse_modules.py [quant] Make OP_LIST_TO_FUSER_METHOD public (#43286) 2020-08-20 20:19:13 -07:00
observer.py [quant] Enable from_float for quantized Embedding_Bag (#43176) 2020-08-21 11:46:03 -07:00
qconfig.py [quant] Enable from_float for quantized Embedding_Bag (#43176) 2020-08-21 11:46:03 -07:00
quantize_jit.py [quant][graphmode] Enable inplace option for top level API (#40414) 2020-06-23 16:42:48 -07:00
quantize.py [quant] Enable from_float for quantized Embedding_Bag (#43176) 2020-08-21 11:46:03 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00
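The modules listed above also implement the eager-mode workflow; a minimal sketch of that flow follows, using the public torch.quantization APIs (QuantStub/DeQuantStub from stubs.py, get_default_qconfig from qconfig.py, fuse_modules from fuse_modules.py, prepare/convert from quantize.py). The model and layer names are illustrative only.

```python
# Hedged sketch: eager-mode post-training static quantization using the
# modules in this directory; the toy model below is an assumption for illustration.
import torch
import torch.nn as nn
from torch.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, fuse_modules, prepare, convert,
)

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # stubs.py: float -> quantized boundary
        self.conv = nn.Conv2d(3, 3, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # quantized -> float boundary

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = M().eval()
model.qconfig = get_default_qconfig("fbgemm")    # qconfig.py: observer/fake-quant config
model = fuse_modules(model, [["conv", "relu"]])  # fuse_modules.py: fuse conv + relu
prepared = prepare(model)                        # observer.py: insert observers
prepared(torch.randn(1, 3, 8, 8))                # calibration pass
quantized = convert(prepared)                    # quantize.py: swap in quantized modules
```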