pytorch/test/quantization
Xia, Weiwen 1a722f62c2 [Quant][X86] add an op to compute uint8 batch norm 2d (#152811)
**Summary**
This PR adds a new op, `onednn.qbatch_norm2d`, which accepts uint8 inputs on the CPU device (instead of QuantizedCPU).
The new op is implemented with AVX512 instructions and provides performance similar to its QuantizedCPU counterpart, `quantized.batch_norm2d`.
Besides uint8, the op also supports fp32, fp16, and bf16 output dtypes.
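For context, the semantics of a uint8 batch norm 2d can be sketched as dequantize, apply inference-mode batch norm, then requantize. The sketch below is a plain NumPy reference, not the actual `onednn.qbatch_norm2d` implementation; the function name, argument order, and per-tensor scale/zero-point quantization scheme are assumptions for illustration.

```python
import numpy as np

def qbatch_norm2d_ref(x_u8, x_scale, x_zp, weight, bias, mean, var, eps,
                      y_scale, y_zp):
    """Reference semantics for uint8 batch norm 2d on NCHW input.

    Hypothetical signature; the real onednn.qbatch_norm2d arguments may
    differ. Dequantize -> batch norm with running stats -> requantize.
    """
    # dequantize per-tensor: (q - zero_point) * scale
    x = (x_u8.astype(np.float32) - x_zp) * x_scale
    # per-channel affine batch norm, channels on axis 1
    c = x.shape[1]
    shape = (1, c, 1, 1)
    y = (x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + eps)
    y = y * weight.reshape(shape) + bias.reshape(shape)
    # requantize to uint8 with saturation
    return np.clip(np.round(y / y_scale) + y_zp, 0, 255).astype(np.uint8)

# identity-like parameters: a quantized zero stays at the zero point
x = np.full((1, 1, 2, 2), 128, dtype=np.uint8)
ones = np.ones(1, dtype=np.float32)
zeros = np.zeros(1, dtype=np.float32)
out = qbatch_norm2d_ref(x, 0.1, 128, ones, zeros, zeros, ones, 0.0, 0.1, 128)
```

With these identity parameters, every output element equals the output zero point (128). A uint8-in, fp32/fp16/bf16-out variant would simply skip the final requantize step.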

**Test plan**
```
pytest test/quantization/core/test_quantized_op.py -k test_int8_batch_norm_onednn
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152811
Approved by: https://github.com/leslie-fang-intel, https://github.com/jerryzh168, https://github.com/jgong5
ghstack dependencies: #152411
2025-05-16 06:13:40 +00:00
ao_migration   PEP585 update - test (#145176)                                    2025-01-22 04:48:28 +00:00
bc             PEP585 update - test (#145176)                                    2025-01-22 04:48:28 +00:00
core           [Quant][X86] add an op to compute uint8 batch norm 2d (#152811)   2025-05-16 06:13:40 +00:00
eager          [Easy] enable PYFMT for torch/quantization/eager (#150761)        2025-04-18 05:53:33 +00:00
fx             Add test coverage (#149182)                                       2025-03-14 09:38:29 +00:00
jit            PEP585 update - test (#145176)                                    2025-01-22 04:48:28 +00:00
pt2e           [BE]: Update ruff to 0.11.8 (#153249)                             2025-05-12 18:30:52 +00:00
serialized
__init__.py