pytorch/torch/ao
Ben Koopman 4d576442e9 Fix incorrect get_default_qat_qconfig in prepare_qat_fx docs. (#155100)
Fixes #144522

## Description

The FX QAT docs for `prepare_qat_fx` incorrectly used `get_default_qat_qconfig` where `get_default_qat_qconfig_mapping` is needed to build a `qconfig_mapping`.

The previous example code used `get_default_qat_qconfig`, so a single `qconfig` was incorrectly passed to `prepare_qat_fx`. `prepare_qat_fx` requires a `qconfig_mapping`, not a single `qconfig`.
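A minimal sketch of the corrected usage, assuming a toy single-linear module and the `fbgemm` backend (both illustrative choices, not from the original docs):

```python
import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx


class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)


# QAT requires the model to be in train mode before tracing.
model = ToyModel().train()

# Correct: build a QConfigMapping, not a single QConfig.
qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm")

example_inputs = (torch.randn(1, 4),)
prepared = prepare_qat_fx(model, qconfig_mapping, example_inputs)
```

`prepare_qat_fx` symbolically traces the model and inserts fake-quantize modules according to the mapping, returning a `torch.fx.GraphModule` ready for QAT fine-tuning.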

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155100
Approved by: https://github.com/jerryzh168
2025-06-04 18:51:40 +00:00
nn [BE][Ez]: Remove unneeded mypy suppressions (#154800) 2025-06-01 06:10:41 +00:00
ns BE: Type previously untyped decorators (#154515) 2025-05-29 00:36:34 +00:00
pruning Fix more URLs (#153277) 2025-05-14 16:23:50 +00:00
quantization Fix incorrect get_default_qat_qconfig in prepare_qat_fx docs. (#155100) 2025-06-04 18:51:40 +00:00
__init__.py [BE][Easy] improve submodule discovery for torch.ao type annotations (#144680) 2025-01-13 17:16:19 +00:00