pytorch/torch/ao
andrewor14 0e551bbcd7 [quant][pt2] Preserve source_fn_stack after QAT fusion (#110899)

Test Plan:
python test/test_quantization.py TestQuantizePT2EQAT.test_qat_preserve_source_fn_stack

Reviewers: jerryzh168, kimishpatel

Subscribers: jerryzh168, kimishpatel, supriyar

Differential Revision: [D50101253](https://our.internmc.facebook.com/intern/diff/D50101253)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110899
Approved by: https://github.com/jerryzh168
2023-10-11 02:55:52 +00:00
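
The commit above makes PT2 export-mode QAT fusion (e.g. conv + batchnorm rewriting) keep the `source_fn_stack` metadata of the original nodes so that downstream passes can still trace fused nodes back to their source modules. Below is a minimal sketch of how one might observe this, assuming the PT2E QAT APIs available around this commit (`capture_pre_autograd_graph`, `prepare_qat_pt2e`, `XNNPACKQuantizer`); the toy model and the printing logic are illustrative and not the actual code of the test named in the Test Plan:

    import torch
    from torch._export import capture_pre_autograd_graph
    from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e
    from torch.ao.quantization.quantizer.xnnpack_quantizer import (
        XNNPACKQuantizer,
        get_symmetric_quantization_config,
    )

    # Hypothetical toy model containing the conv + bn pattern that QAT fusion rewrites.
    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 3, 3)
            self.bn = torch.nn.BatchNorm2d(3)

        def forward(self, x):
            return self.bn(self.conv(x))

    example_inputs = (torch.randn(1, 3, 8, 8),)
    m = capture_pre_autograd_graph(M(), example_inputs)

    quantizer = XNNPACKQuantizer().set_global(
        get_symmetric_quantization_config(is_qat=True)
    )
    # prepare_qat_pt2e runs QAT fusion; after this commit, fused nodes should
    # still carry the source_fn_stack metadata of the nodes they replaced.
    m = prepare_qat_pt2e(m, quantizer)

    for n in m.graph.nodes:
        stack = n.meta.get("source_fn_stack")
        if stack:
            # Each entry is a (name, module/function type) pair, e.g. nn.Conv2d.
            print(n.name, "->", stack[-1])
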
Name         | Last commit message                                                                                                  | Last commit date
nn           | [ao] fixing multihead attention convert size (#110407)                                                              | 2023-10-03 08:49:12 +00:00
ns           | [BE]: Update Ruff to 0.0.280 (#105724)                                                                              | 2023-07-22 23:03:34 +00:00
pruning      | add pruning method: Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (#95689) | 2023-08-02 16:24:42 +00:00
quantization | [quant][pt2] Preserve source_fn_stack after QAT fusion (#110899)                                                     | 2023-10-11 02:55:52 +00:00
__init__.py  | [refactor] Renaming ao.sparsity to ao.pruning (#84867)                                                               | 2022-10-07 00:58:41 +00:00