pytorch/test/quantization
Xia, Weiwen b5bfbba184 [Quant][CPU] fix fake_quantize_per_tensor_affine of inf values (#155109)
Fixes #154328

**Summary**
Failure cause:
The input value is infinity (float), and converting it to int64_t is undefined behavior. On x86 it gets converted to the minimum value of int64_t, which is not the expected result.

Fix:
Clamp `(input * inv_scale + zero_point)` to `[quant_min, quant_max]` before converting it to int64_t, so infinite inputs saturate to the quantization bounds.
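The fix can be sketched in pure Python (a minimal reference model of the fake-quantize math, not the actual C++ kernel; the function name and signature here are illustrative):

```python
def fake_quantize_value(x, scale, zero_point, quant_min, quant_max):
    # Reference fake-quantize: quantize, clamp, then dequantize.
    # The clamp happens BEFORE the integer conversion, so +/-inf
    # saturates to quant_max / quant_min instead of hitting the
    # undefined float -> int64_t cast (which wraps to INT64_MIN on x86).
    inv_scale = 1.0 / scale
    q = x * inv_scale + zero_point
    q = min(max(q, quant_min), quant_max)  # clamp handles +/-inf safely
    q = round(q)                           # now guaranteed finite
    return (q - zero_point) * scale

# +inf saturates to quant_max * scale, -inf to quant_min * scale:
print(fake_quantize_value(float("inf"), 0.1, 0, -128, 127))
print(fake_quantize_value(float("-inf"), 0.1, 0, -128, 127))
print(fake_quantize_value(1.0, 0.1, 0, -128, 127))
```

Without the clamp, `round(x * inv_scale + zero_point)` would still be infinite when `x` is infinite, and the subsequent integer conversion in the C++ kernel is where the undefined behavior occurred.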

**Test plan**
```
pytest test/quantization/core/test_workflow_ops.py -k test_fake_quantize_per_tensor_affine_inf
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155109
Approved by: https://github.com/leslie-fang-intel, https://github.com/jerryzh168
2025-06-26 01:24:36 +00:00
ao_migration Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
bc Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
core [Quant][CPU] fix fake_quantize_per_tensor_affine of inf values (#155109) 2025-06-26 01:24:36 +00:00
eager Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
fx Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
jit Add __main__ guards to quantization tests (#154728) 2025-06-10 19:46:07 +00:00
pt2e Typo fixes for "overridden" in comments and function names (#155944) 2025-06-14 03:37:38 +00:00
serialized
__init__.py