pytorch/test/quantization
leslie-fang-intel 8c1f65dc2b [Quant] [PT2] Add Hardtanh and ReLU6 into X86InductorQuantizer Conv2d Unary Annotation (#114579)
**Summary**
Add `Hardtanh` and `ReLU6` to the X86InductorQuantizer Conv2d unary annotation, so that `conv2d -> hardtanh` and `conv2d -> relu6` patterns are annotated for quantization.

**Test Plan**
```
python -m pytest test_x86inductor_quantizer.py -k test_conv2d_unary
python -m pytest test_x86inductor_quantizer.py -k test_qat_conv2d_unary
```
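Below is a minimal sketch of how the newly annotated pattern is exercised through the PT2E quantization flow. The module and variable names are illustrative, and the graph-capture API (`capture_pre_autograd_graph`) is the one available around this commit's PyTorch version; it may differ in other releases.

```python
# Sketch: quantize a Conv2d -> ReLU6 (Hardtanh) model with X86InductorQuantizer.
# Assumes a PyTorch build with the PT2E quantization APIs; the capture API
# may differ across versions.
import torch
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import X86InductorQuantizer


class ConvReLU6(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, kernel_size=3)
        # nn.Hardtanh(0.0, 6.0) hits the same unary annotation path
        self.act = torch.nn.ReLU6()

    def forward(self, x):
        return self.act(self.conv(x))


example_inputs = (torch.randn(1, 3, 32, 32),)
model = ConvReLU6().eval()

# Capture the FX graph, annotate conv2d -> relu6 as a quantizable unary pattern,
# calibrate, then convert to the quantized graph.
exported = capture_pre_autograd_graph(model, example_inputs)
quantizer = X86InductorQuantizer()
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)  # calibration pass
converted = convert_pt2e(prepared)
```

With the companion change in the same stack (#114578, QConv2d with hardtanh post op), Inductor can then lower the annotated `conv2d -> hardtanh/relu6` pattern to a quantized convolution with a fused unary post-op.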

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114579
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #114578
2023-11-28 07:18:00 +00:00
| Name | Latest commit | Date |
| --- | --- | --- |
| ao_migration | ao migration: remove package test as this behavior is tested by other things (#94422) | 2023-02-13 16:33:40 +00:00 |
| bc | [BE] Enable ruff's UP rules and autoformat test/ (#105434) | 2023-07-19 20:36:06 +00:00 |
| core | [Quant] Enable QConv2d with hardtanh post op (#114578) | 2023-11-28 07:13:01 +00:00 |
| eager | [ao] updating embedding_bag support for fx and eager (#107623) | 2023-11-21 03:54:00 +00:00 |
| fx | [ao] updating embedding_bag support for fx and eager (#107623) | 2023-11-21 03:54:00 +00:00 |
| jit | Revert "Remove deprecated fbgemm operators (#104535)" | 2023-10-25 16:34:16 +00:00 |
| pt2e | [Quant] [PT2] Add Hardtanh and ReLU6 into X86InductorQuantizer Conv2d Unary Annotation (#114579) | 2023-11-28 07:18:00 +00:00 |
| serialized | [ao] fix incorrect integer cast on histogram observer bounds (#90355) | 2022-12-12 20:30:44 +00:00 |
| __init__.py | | |