pytorch/torch/quantization
Jerry Zhang cb7f35d47a [quant][refactor] Checking activation_dtype instead of activation_post_process (#62489)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62489

Addressing comment from previous PR: https://github.com/pytorch/pytorch/pull/62374#discussion_r679354145

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: iramazanli

Differential Revision: D30053980

fbshipit-source-id: 79c216410282eccd6f0a8f24e38c55c4d18ec0d0
2021-08-10 12:17:36 -07:00
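The refactor named in this commit replaces branching on an instantiated observer (`activation_post_process`) with branching on the configured activation dtype. A minimal torch-free sketch of the idea, using hypothetical stand-ins for the real `QConfig` and observer types (only `activation_dtype` mirrors an actual helper in `torch/quantization/utils.py`):

```python
from collections import namedtuple

# Hypothetical stand-ins for torch.quantization types, for illustration only.
QConfig = namedtuple("QConfig", ["activation", "weight"])

class FakeObserver:
    def __init__(self, dtype):
        self.dtype = dtype

def observer_ctor(dtype):
    # Observer "constructors" in a qconfig are callables returning an observer.
    return lambda: FakeObserver(dtype)

def activation_dtype(qconfig):
    # Mirrors the helper in torch/quantization/utils.py: instantiate the
    # activation observer constructor and read its dtype.
    return qconfig.activation().dtype

qconfig = QConfig(activation=observer_ctor("quint8"),
                  weight=observer_ctor("qint8"))

# Before the refactor, code kept an activation_post_process instance around
# just to branch on its dtype; after, it branches on activation_dtype(qconfig).
is_statically_quantized = activation_dtype(qconfig) in ("quint8", "qint8")
```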
fx [quant][refactor] Checking activation_dtype instead of activation_post_process (#62489) 2021-08-10 12:17:36 -07:00
ns ns for fx: add ref_node_target_type (#62685) 2021-08-05 09:26:10 -07:00
__init__.py [quant][graphmode] relax the constraint for supported_dtypes for reference option (Linear and Conv) (#62348) 2021-07-29 16:31:04 -07:00
_correct_bias.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_equalize.py [quant] Eager mode equalization support for ConvReLU and LinearReLU (#58792) 2021-05-24 17:25:13 -07:00
_learnable_fake_quantize.py [docs] Fix backticks in docs (#60474) 2021-06-24 06:27:41 -07:00
_numeric_suite_fx.py ns for fx: add ref_node_target_type (#62685) 2021-08-05 09:26:10 -07:00
_numeric_suite.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
fake_quantize.py [quant] Update get_default_qat_qconfig to return the fused observer+fake_quant module (#62702) 2021-08-10 09:28:49 -07:00
fuse_modules.py Add lint for unqualified noqa (#56272) 2021-04-19 13:16:18 -07:00
fuser_method_mappings.py fix docstring for fusing functions (#58638) 2021-05-24 18:27:22 -07:00
observer.py [quant] add reduce_range option to FusedMovingAvgFakeQuantize module (#62863) 2021-08-10 09:27:01 -07:00
qconfig.py [quant] Update get_default_qat_qconfig to return the fused observer+fake_quant module (#62702) 2021-08-10 09:28:49 -07:00
quant_type.py [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) 2020-10-27 21:41:33 -07:00
quantization_mappings.py [quant][graphmode][fx] Produce reference linear module in convert (#60152) 2021-06-20 20:08:12 -07:00
quantize_fx.py [quant] Input Weight Equalization - prepare modifications (#59747) 2021-06-16 22:32:28 -07:00
quantize_jit.py Enable the quantization on XPU devices (#54857) 2021-05-20 17:02:13 -07:00
quantize.py Support for reference convert_fx working on gpu 2021-07-23 10:30:38 -07:00
stubs.py type check for torch.quantization.stubs (#46475) 2020-10-16 15:34:23 -07:00
utils.py [quant] add reduce_range option to FusedMovingAvgFakeQuantize module (#62863) 2021-08-10 09:27:01 -07:00