pytorch/test/quantization
leslie-fang-intel b6fc7af8a0 Enable oneDNN QConv FP32/BF16 output (#112010)
**Summary**

- PR 1 in the series enabling int8-mixed-bf16 PT2E PTQ with Inductor: https://github.com/pytorch/pytorch/issues/111640.
- Enable the oneDNN QConv ops and their fused variants (relu, add, add_relu) to produce BFloat16 or Float32 output (see the sketch below for how this fits the end-to-end recipe).

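For context, here is a minimal sketch of the int8-mixed-bf16 PT2E PTQ flow this PR feeds into, assuming the PyTorch 2.1-era export and quantization APIs (`capture_pre_autograd_graph`, `X86InductorQuantizer`). The toy model, shapes, and one-shot calibration are illustrative, and since this is PR 1 of the series, the full Inductor lowering depends on the follow-up PRs.

```python
import torch
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e

# Illustrative toy model: a conv + relu pattern of the kind that maps onto
# the fused oneDNN qconv-with-relu kernel touched by this PR.
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, 3)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

m = M().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# Export, annotate with the X86 Inductor quantizer, calibrate, convert.
exported = capture_pre_autograd_graph(m, example_inputs)
quantizer = xiq.X86InductorQuantizer()
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)  # one calibration pass with representative data
converted = convert_pt2e(prepared)

# Per the recipe in issue #111640, compiling and running under CPU bf16
# autocast is intended to select the BFloat16-output variants of the fused
# oneDNN qconv kernels that this PR enables.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    optimized = torch.compile(converted)
    out = optimized(*example_inputs)
```
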
**Test Plan**
```
python -u -m pytest -s -v test_quantized_op.py -k test_qconv1d_pt2e
python -u -m pytest -s -v test_quantized_op.py -k test_qconv2d_pt2e
python -u -m pytest -s -v test_quantized_op.py -k test_qconv3d_pt2e
python -u -m pytest test_quantized_op.py -k test_qconv2d_relu_pt2e
python -u -m pytest test_quantized_op.py -k test_qconv2d_add_pt2e
python -u -m pytest test_quantized_op.py -k test_qconv2d_add_relu_pt2e
python -u -m pytest test_quantized_op.py -k test_qconv2d_add_relu_float_output_pt2e
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112010
Approved by: https://github.com/jerryzh168, https://github.com/jgong5
2023-11-03 08:16:45 +00:00
| Name | Latest commit | Date |
| --- | --- | --- |
| `ao_migration` | ao migration: remove package test as this behavior is tested by other things (#94422) | 2023-02-13 16:33:40 +00:00 |
| `bc` | [BE] Enable ruff's UP rules and autoformat test/ (#105434) | 2023-07-19 20:36:06 +00:00 |
| `core` | Enable oneDNN QConv FP32/BF16 output (#112010) | 2023-11-03 08:16:45 +00:00 |
| `eager` | [pytorch][ao] Add torch.matmul in FloatFunctional/QFunctional (#106831) | 2023-08-10 22:43:36 +00:00 |
| `fx` | Back out "Enable pickling model prepared with QAT qconfig" (#110392) | 2023-10-05 14:41:00 +00:00 |
| `jit` | Revert "Remove deprecated fbgemm operators (#104535)" | 2023-10-25 16:34:16 +00:00 |
| `pt2e` | [Quant] [PT2] Add ConvBNAdd(ReLU) Annotation into X86InductorQuantizer (#111281) | 2023-11-02 02:05:49 +00:00 |
| `serialized` | [ao] fix incorrect integer cast on histogram observer bounds (#90355) | 2022-12-12 20:30:44 +00:00 |
| `__init__.py` | | |