pytorch/test/quantization
Jerry Zhang b8eef500a6 Fix attr check for quantization spec (#135736)
Summary:
Previously we only checked dtype and is_dynamic to decide whether two quantization specs are equivalent. This may not work in some cases, e.g. when people use a different qscheme or different quant_min/quant_max values.

This PR adds checks for the other fields as well.
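
Below is a minimal sketch of the stricter equivalence check described above; the QuantizationSpec dataclass and specs_equivalent helper are hypothetical illustrations for this summary, not the actual PyTorch implementation:

```python
# Hypothetical sketch: compare quantization specs on all fields, not just
# dtype and is_dynamic. Field names mirror the ones mentioned in the summary.
from dataclasses import dataclass, fields
from typing import Optional

import torch


@dataclass
class QuantizationSpec:
    dtype: torch.dtype
    is_dynamic: bool = False
    qscheme: Optional[torch.qscheme] = None
    quant_min: Optional[int] = None
    quant_max: Optional[int] = None


def specs_equivalent(a: QuantizationSpec, b: QuantizationSpec) -> bool:
    # Old behavior: only dtype and is_dynamic were compared, so specs that
    # differed in qscheme or quant_min/quant_max were wrongly treated as equal.
    # New behavior: compare every field of the spec.
    return all(
        getattr(a, f.name) == getattr(b, f.name) for f in fields(QuantizationSpec)
    )


# Same dtype and is_dynamic, but different qscheme: no longer considered equal.
a = QuantizationSpec(torch.int8, qscheme=torch.per_tensor_affine)
b = QuantizationSpec(torch.int8, qscheme=torch.per_channel_symmetric)
assert not specs_equivalent(a, b)
```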

Test Plan:
regression tests

Differential Revision: [D62530974](https://our.internmc.facebook.com/intern/diff/D62530974)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135736
Approved by: https://github.com/sxu
2024-09-13 23:01:22 +00:00
ao_migration Enable UFMT on all of test/quantization/ao_migration & bc (#123994) 2024-04-13 06:36:10 +00:00
bc Fix failures when default is flipped for weights_only (#127627) 2024-08-16 00:22:43 +00:00
core Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401) 2024-09-08 04:16:24 +00:00
eager Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
fx Fix failures when default is flipped for weights_only (#127627) 2024-08-16 00:22:43 +00:00
jit Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
pt2e Fix attr check for quantization spec (#135736) 2024-09-13 23:01:22 +00:00
serialized
__init__.py