pytorch/test/quantization
Sampath Victor 783a9dcb6d [6/n] Quantization with min & max bounds support - using fbgemm changes in ATen (#162924)
Summary:
This diff uses the FBGEMM changes made in D78181177 & D81858256 to support using caller-provided per-row min/max values when quantizing float/half tensors to 8-bit, 4-bit & 2-bit formats in the ATen library.

Please find more context on this here: https://fburl.com/gdoc/yutf32a0
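For orientation, here is a minimal pure-PyTorch sketch of what rowwise quantization with caller-provided per-row min/max bounds looks like conceptually. The function name, signature, and output layout below are hypothetical and do not reproduce the actual ATen/FBGEMM operators touched by this diff (which also handle 4-bit and 2-bit packing); it only illustrates the idea of clamping each row to externally supplied bounds instead of its observed range.

```python
# Conceptual sketch only: rowwise affine quantization using provided per-row
# min/max bounds. The helper name and signature are hypothetical; the real
# ATen/FBGEMM kernels from D78181177 / D81858256 are not shown here.
import torch


def rowwise_quantize_with_bounds(x, row_min, row_max, num_bits=8):
    """Quantize a 2D float/half tensor row by row using supplied bounds."""
    qmax = (1 << num_bits) - 1                        # 255 / 15 / 3 for 8/4/2-bit
    row_min = row_min.to(torch.float32).unsqueeze(1)  # (rows, 1)
    row_max = row_max.to(torch.float32).unsqueeze(1)
    scale = (row_max - row_min).clamp(min=1e-8) / qmax
    # Values outside [row_min, row_max] are clamped into the quantized range.
    q = torch.clamp(torch.round((x.float() - row_min) / scale), 0, qmax)
    return q.to(torch.uint8), scale.squeeze(1), row_min.squeeze(1)


# Example: quantize with externally supplied bounds rather than observed min/max.
x = torch.randn(4, 16, dtype=torch.half)
q, scale, offset = rowwise_quantize_with_bounds(
    x, row_min=torch.full((4,), -2.0), row_max=torch.full((4,), 2.0))
```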

Test Plan:
```
buck test mode/opt caffe2/torch/fb/model_transform/splitting/tests:split_dispatcher_test
```
https://www.internalfb.com/intern/testinfra/testrun/7881299640979446

Please refer to D80905814's test plan for integration testing.

Rollback Plan:

Differential Revision: D81327342

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162924
Approved by: https://github.com/jerryzh168
2025-09-25 02:52:04 +00:00
ao_migration [BE][PYFMT] migrate PYFMT for test/[i-z]*/ to ruff format (#144556) 2025-07-29 03:26:09 +00:00
bc Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
core [6/n] Quantization with min & max bounds support - using fbgemm changes in ATen (#162924) 2025-09-25 02:52:04 +00:00
eager Revert "unskipped mobilenet_v3 quantization and mobilenet_v2 quantization plus tests from https://github.com/pytorch/pytorch/issues/125438 (#157786)" 2025-08-07 13:09:33 +00:00
fx [BE] fix remaining flake8 v7 warnings (#159044) 2025-07-25 02:56:34 +00:00
jit [BE][PYFMT] migrate PYFMT for test/[i-z]*/ to ruff format (#144556) 2025-07-29 03:26:09 +00:00
pt2e update test_quantization tests to run weekly (#163077) 2025-09-24 11:31:11 +00:00
serialized
__init__.py