pytorch/test/quantization
Xia, Weiwen 9827d677b4 [Quant][PT2E][X86] annotate and convert for linear_dynamic_fp16 (#141480)
Annotate the linear node for `linear_dynamic_fp16` with `X86InductorQuantizer`.
After `convert_pt2e`, the pattern will be:
```
  x
  |
linear <- to_fp32 <- to_fp16 <- w
```
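
For context, a minimal sketch of driving this recipe end-to-end through the PT2E flow (prepare, then convert) is shown below. The dynamic-fp16 config getter name `get_x86_inductor_linear_dynamic_fp16_config` is an assumption inferred from the PR title and may differ from the actual API added in this change:

```python
import torch
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.export import export_for_training


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 8)

    def forward(self, x):
        return self.linear(x)


example_inputs = (torch.randn(2, 16),)
# Capture the eval-mode model with torch.export before quantization.
m = export_for_training(M().eval(), example_inputs).module()

quantizer = xiq.X86InductorQuantizer()
# Assumed config getter for the linear_dynamic_fp16 recipe (name not confirmed
# against the PR; substitute the actual helper if it differs).
quantizer.set_global(xiq.get_x86_inductor_linear_dynamic_fp16_config())

# prepare_pt2e lets the quantizer annotate the linear node;
# convert_pt2e then lowers it to: linear <- to_fp32 <- to_fp16 <- w
m = prepare_pt2e(m, quantizer)
m = convert_pt2e(m)
m(*example_inputs)
```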

**Test plan**
```
pytest test/quantization/pt2e/test_x86inductor_quantizer.py -k test_linear_dynamic_fp16
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141480
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2024-11-29 07:48:39 +00:00
| Directory/File | Latest commit | Date |
| --- | --- | --- |
| ao_migration | Enable UFMT on all of test/quantization/ao_migration & bc (#123994) | 2024-04-13 06:36:10 +00:00 |
| bc | Fix failures when default is flipped for weights_only (#127627) | 2024-08-16 00:22:43 +00:00 |
| core | Enable UBSAN tests (#141672) | 2024-11-28 01:55:15 +00:00 |
| eager | Replace clone.detach with detach.clone (#140264) | 2024-11-13 07:01:02 +00:00 |
| fx | Fix failures when default is flipped for weights_only (#127627) | 2024-08-16 00:22:43 +00:00 |
| jit | Add None return type to `__init__` -- tests (#132352) | 2024-08-01 15:44:51 +00:00 |
| pt2e | [Quant][PT2E][X86] annotate and convert for linear_dynamic_fp16 (#141480) | 2024-11-29 07:48:39 +00:00 |
| serialized | | |
| `__init__.py` | | |