pytorch/torch/ao
Manuel Candales 6d8a7d6e58 [pytorch] optional zero points on dequantize per channel (#121724)
Summary:
X-link: https://github.com/pytorch/executorch/pull/2364

bypass-github-export-checks

Test Plan: sandcastle

Reviewed By: mikekgfb

Differential Revision: D54709217

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121724
Approved by: https://github.com/mikekgfb
2024-03-12 19:54:11 +00:00
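The commit title above refers to making the zero-point argument optional in per-channel dequantization, where a missing zero point is treated as zero for every channel. As a rough illustration of that semantics (a minimal sketch only, not PyTorch's actual implementation; the helper name and signature are hypothetical):

```python
import torch

def dequantize_per_channel(x_q, scales, zero_points, axis):
    # Hypothetical helper sketching per-channel dequantization:
    #   x_fp[..., c, ...] = (x_q[..., c, ...] - zero_points[c]) * scales[c]
    # Passing zero_points=None is the "optional zero points" case and is
    # equivalent to an all-zero zero-point tensor.
    shape = [1] * x_q.dim()
    shape[axis] = -1  # broadcast per-channel params along `axis`
    scales = scales.reshape(shape)
    if zero_points is None:
        return x_q.to(torch.float32) * scales
    zero_points = zero_points.reshape(shape).to(torch.float32)
    return (x_q.to(torch.float32) - zero_points) * scales
```

With this sketch, omitting the zero points produces the same result as supplying an explicit all-zero tensor, which is the behavioral contract the optional argument relies on.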
nn/            Update Quantizable LSTM to support QAT (#121448)                     2024-03-08 18:55:50 +00:00
ns/            Enable possibly-undefined error code (#118533)                       2024-01-30 21:07:01 +00:00
pruning/       Enable possibly-undefined error code (#118533)                       2024-01-30 21:07:01 +00:00
quantization/  [pytorch] optional zero points on dequantize per channel (#121724)   2024-03-12 19:54:11 +00:00
__init__.py    [refactor] Renaming ao.sparsity to ao.pruning (#84867)               2022-10-07 00:58:41 +00:00