pytorch/torch/ao/nn
Jerry Zhang 09ebdf44fa [quant][pt2e] Fix a bug in reference quantized module (decomposed mode) (#98903)
Summary:
Fixed quant_min/quant_max for per-channel quantized weights in the reference quantized module in decomposed mode.
This bug was triggered while onboarding an internal model.
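To illustrate the class of bug this fixes, here is a minimal, hypothetical sketch of per-channel weight quantization in "decomposed" form (plain Python, not the actual PyTorch operator): the quant_min/quant_max clamp bounds must be derived from the weight's dtype range (e.g. int8: -128..127), not hardcoded or taken from the activation's range. All names below are illustrative, not from the PR.

```python
# Hypothetical sketch: per-channel quantization with explicit quant_min/quant_max,
# as in the decomposed quantized ops. The bounds come from the weight dtype.

DTYPE_RANGES = {"int8": (-128, 127), "uint8": (0, 255)}

def quantize_per_channel(weight, scales, zero_points, axis, quant_min, quant_max):
    """Quantize a 2-D weight (list of rows) along axis 0, one scale/zp per row."""
    assert axis == 0, "sketch only handles axis 0"
    out = []
    for row, s, zp in zip(weight, scales, zero_points):
        # round to nearest, shift by zero point, then clamp to [quant_min, quant_max]
        q_row = [max(quant_min, min(quant_max, round(x / s) + zp)) for x in row]
        out.append(q_row)
    return out

# Derive the clamp range from the weight dtype instead of hardcoding it.
qmin, qmax = DTYPE_RANGES["int8"]
w = [[0.5, -1.0], [2.0, 0.25]]
qw = quantize_per_channel(w, scales=[0.01, 0.02], zero_points=[0, 0],
                          axis=0, quant_min=qmin, quant_max=qmax)
```

Using the wrong bounds (say, uint8's 0..255 for an int8 weight) would silently clamp all negative weight values to zero, which is the kind of mismatch a per-channel quant_min/quant_max fix addresses.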

Test Plan:
python test/test_quantization.py TestQuantizeFx.test__convert_to_reference_decomposed_fx_per_channel_quant_module

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98903
Approved by: https://github.com/andrewor14
2023-04-13 21:55:45 +00:00
intrinsic   | Update conv_fused.py (#95551) | 2023-03-13 23:42:34 +00:00
qat         | [ao] fixing public v private for torch.ao.nn.X (#87883) | 2022-12-15 03:03:07 +00:00
quantizable | [AO] Update qLSTM implementation to remove unsupported backend ops (#96436) | 2023-03-14 17:58:34 +00:00
quantized   | [quant][pt2e] Fix a bug in reference quantized module (decomposed mode) (#98903) | 2023-04-13 21:55:45 +00:00
sparse      | AO migration: replace torch internal callsites (#94170) | 2023-02-07 02:32:23 +00:00
__init__.py | [quant][ao_migration] nn.intrinsic migration to ao (#84842) | 2022-09-28 23:54:29 +00:00