pytorch/test/quantization
andrewor14 0e551bbcd7 [quant][pt2] Preserve source_fn_stack after QAT fusion (#110899)
Test Plan:
python test/test_quantization.py TestQuantizePT2EQAT.test_qat_preserve_source_fn_stack

Reviewers: jerryzh168, kimishpatel

Subscribers: jerryzh168, kimishpatel, supriyar

Differential Revision: [D50101253](https://our.internmc.facebook.com/intern/diff/D50101253)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110899
Approved by: https://github.com/jerryzh168
2023-10-11 02:55:52 +00:00
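The commit above keeps source_fn_stack metadata on graph nodes intact when conv-bn patterns are fused during PT2E QAT preparation. Below is a minimal sketch of how that metadata can be inspected after prepare/convert; it assumes the PT2E quantization API of this era (capture_pre_autograd_graph, prepare_qat_pt2e, convert_pt2e, XNNPACKQuantizer) and the ConvBN toy module is hypothetical, not the actual regression test referenced in the test plan.

    import torch
    from torch._export import capture_pre_autograd_graph
    from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
    from torch.ao.quantization.quantizer.xnnpack_quantizer import (
        XNNPACKQuantizer,
        get_symmetric_quantization_config,
    )

    # Hypothetical toy model with a conv-bn pattern that QAT fusion rewrites.
    class ConvBN(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 3, 3)
            self.bn = torch.nn.BatchNorm2d(3)

        def forward(self, x):
            return self.bn(self.conv(x))

    example_inputs = (torch.randn(1, 3, 8, 8),)
    model = capture_pre_autograd_graph(ConvBN(), example_inputs)

    quantizer = XNNPACKQuantizer().set_global(
        get_symmetric_quantization_config(is_qat=True)
    )
    model = prepare_qat_pt2e(model, quantizer)  # fuses conv + bn for QAT
    model = convert_pt2e(model)

    # After fusion, nodes derived from the original conv/bn calls should still
    # carry source_fn_stack entries pointing back to those source modules.
    for node in model.graph.nodes:
        stack = node.meta.get("source_fn_stack")
        if stack:
            print(node.name, [name for name, _ in stack])

The command in the test plan above runs the corresponding regression test that checks this behavior in the PyTorch test suite.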
ao_migration ao migration: remove package test as this behavior is tested by other things (#94422) 2023-02-13 16:33:40 +00:00
bc [BE] Enable ruff's UP rules and autoformat test/ (#105434) 2023-07-19 20:36:06 +00:00
core [ao] fixing multihead attention convert size (#110407) 2023-10-03 08:49:12 +00:00
eager [pytorch][ao] Add torch.matmul in FloatFunctional/QFunctional (#106831) 2023-08-10 22:43:36 +00:00
fx Back out "Enable pickling model prepared with QAT qconfig" (#110392) 2023-10-05 14:41:00 +00:00
jit Reland: Remove remaining global set_default_dtype calls from tests (#108088) 2023-09-07 03:04:34 +00:00
pt2e [quant][pt2] Preserve source_fn_stack after QAT fusion (#110899) 2023-10-11 02:55:52 +00:00
serialized [ao] fix incorrect integer cast on histogram observer bounds (#90355) 2022-12-12 20:30:44 +00:00
__init__.py