pytorch/test/quantization
Mu-Chu Lee 966ae943df Add wrapper for fbgemm quantization operations (#122763)
Summary:
We add wrappers for fbgemm's packing ops so they can be passed through PT2 to
the lowering phase of AOTInductor.
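
As a point of reference, below is a minimal sketch of the fbgemm fp16 pack +
dynamic-linear path these wrappers target. It uses the long-standing
quantized::linear_prepack_fp16 / quantized::linear_dynamic_fp16 ops rather than
the new wrapper ops added in this PR, and assumes a PyTorch build with FBGEMM:

```python
import torch

# fp32 activations, weight, and bias; fbgemm packs the weight into its fp16 GEMM layout.
x = torch.randn(4, 8)
w = torch.randn(16, 8)
b = torch.randn(16)

# Pack the weight (and bias) for fbgemm's fp16 kernel. The result is an opaque
# packed-params object, which is awkward to carry through PT2 tracing; the
# wrapper ops in this PR are meant to address that.
packed = torch.ops.quantized.linear_prepack_fp16(w, b)

# Dynamic fp16 linear: fp32 input, fp16-packed weight, fp32 output.
y = torch.ops.quantized.linear_dynamic_fp16(x, packed)
print(y.shape)  # torch.Size([4, 16])
```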

Test Plan:
Included in this commit:
test_quantized_ops::test_wrapped_fbgemm_linear_fp16

Differential Revision: [D55433204](https://our.internmc.facebook.com/intern/diff/D55433204)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122763
Approved by: https://github.com/jerryzh168
ghstack dependencies: #122762
2024-03-28 18:41:18 +00:00
Name            Last commit                                                                            Date
ao_migration
bc              [BE] Enable ruff's UP rules and autoformat test/ (#105434)                             2023-07-19 20:36:06 +00:00
core            Add wrapper for fbgemm quantization operations (#122763)                               2024-03-28 18:41:18 +00:00
eager           [BE]: Update flake8 to v6.1.0 and fix lints (#116591)                                  2024-01-03 06:04:44 +00:00
fx              [BE]: Apply RUF025 dict.fromkeys preview rule (#118637)                                2024-01-30 20:46:54 +00:00
jit             [quant] Remove deprecated torch.jit.quantized APIs (#118406)                           2024-01-27 18:32:45 +00:00
pt2e            [quant][pt2e] Enable observer sharing between different quantization specs (#122734)   2024-03-27 16:45:19 +00:00
serialized
__init__.py