pytorch/torch/ao/quantization
andrewor14 6988e40b48 [quant][fx] Lower operator.matmul in convert_fx (#113954)
Summary: We support lowering `torch.matmul` but not
`operator.matmul`. This commit adds support for the latter,
which enables lowering the shorthand `@`. This addresses
https://github.com/pytorch/pytorch/issues/111450.

Test Plan:
python test/test_quantization.py TestQuantizeFx

Reviewers: jerryzh168

Subscribers: jerryzh168, supriyar
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113954
Approved by: https://github.com/jerryzh168
2023-12-12 00:34:58 +00:00
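Context for the fix above: Python's `@` operator dispatches through `operator.matmul` (i.e. `__matmul__`), which is a distinct callable from `torch.matmul`. A tracer that records call targets (such as torch.fx) therefore sees `operator.matmul` for `a @ b`, so a lowering pass that only matches `torch.matmul` misses the shorthand. A minimal torch-free sketch of the dispatch (the `Mat` class here is a hypothetical stand-in for a tensor type):

```python
import operator

# `a @ b` dispatches to __matmul__, and operator.matmul is the
# functional form of that same protocol -- it is NOT torch.matmul.
# A lowering pass keyed only on torch.matmul would not match it.
class Mat:
    def __init__(self, v):
        self.v = v

    def __matmul__(self, other):
        return Mat(self.v * other.v)

a, b = Mat(3), Mat(4)

# Both spellings hit the same __matmul__ slot:
assert (a @ b).v == 12
assert operator.matmul(a, b).v == 12
```

This is why the convert_fx lowering tables need entries for both `torch.matmul` and `operator.matmul` to cover graphs traced from code that uses `@`.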
backend_config [quant][executorch] Support inception_v4 in examples (#108382) 2023-09-08 17:39:31 +00:00
experimental upgrade lintrunner to the lowest supported versions on python 3.12 (#113562) 2023-11-15 18:12:01 +00:00
fx [quant][fx] Lower operator.matmul in convert_fx (#113954) 2023-12-12 00:34:58 +00:00
pt2e [Quant] [PT2] Enable batchnorm in _move_exported_model_to_eval (#114547) 2023-12-06 19:51:22 +00:00
quantizer [quant][pt2e] XNNPACKQuantizer skip inserting observers for non-float Tensors (#114999) 2023-12-07 22:13:36 +00:00
__init__.py [quant][pt2e] Add generate_numeric_debug_handle pass (#114315) 2023-12-01 03:38:17 +00:00
_correct_bias.py docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) 2023-11-15 00:59:44 +00:00
_equalize.py docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) 2023-11-15 00:59:44 +00:00
_learnable_fake_quantize.py docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) 2023-11-15 00:59:44 +00:00
fake_quantize.py [quant][pt2e][xnnpack] Add support for QAT dynamic quantization for linear in XNNPACKQuantizer (#113288) 2023-12-04 23:06:38 +00:00
fuse_modules.py docstyle _correct_bias.py _equalize.py _learnable_fake_quantize.py backend_config experimental fake_quantize.py fuse_modules.py fuser_method_mappings.py (#112992) 2023-11-15 00:59:44 +00:00
fuser_method_mappings.py Update mypy to 1.7.0 (#114160) 2023-11-28 06:45:55 +00:00
observer.py [ao] making hist_obs handle torch.inf and closeby values (#103467) 2023-12-08 21:41:31 +00:00
pattern.md
qconfig_mapping.py [ao] fixing quantized prelu workflow (#103455) 2023-06-23 16:45:40 +00:00
qconfig.py Back out "Enable pickling model prepared with QAT qconfig" (#110392) 2023-10-05 14:41:00 +00:00
quant_type.py [BE] Enable ruff's UP rules and autoformat ao/ (#105430) 2023-07-19 13:44:37 +00:00
quantization_mappings.py [BE] Enable ruff's UP rules and autoformat ao/ (#105430) 2023-07-19 13:44:37 +00:00
quantize_fx.py [quant][pt2e] Disable remove_qconfig (#111000) 2023-10-11 19:43:46 +00:00
quantize_jit.py Fix typos under torch/ao directory (#97679) 2023-04-10 22:25:15 +00:00
quantize_pt2e.py [quant][pt2e] Add transform_for_annotation method in Quantizer (#113115) 2023-11-09 20:23:29 +00:00
quantize.py [ao] Support Subclasses of FloatFunctional in eager mode prepare (#109646) 2023-09-20 08:09:55 +00:00
stubs.py [codemod] Replace hasattr with getattr in caffe2/torch/ao/quantization/stubs.py (#100597) 2023-05-04 16:36:23 +00:00
utils.py [BE]: Update lintrunner mypy to 1.6.0 (#111375) 2023-10-17 01:22:06 +00:00