pytorch/torch/ao/quantization
yintong-lu 3361908fc5 torch/ao/quantization/utils.py: Moving eps to targeted device to avoid device mismatch issue (#135204)
MOTIVATION

We recently verified some quantization tests on devices other than CPU (e.g., CUDA and Intel Gaudi devices, identified as 'hpu'). We hit a device mismatch error because eps is a tensor created on the CPU, while the other tensors (min_val_neg, max_val_pos, scale, zero_point) are moved to the targeted _device_.

CHANGES

Move eps to the _device_ of the other tensors (a minimal sketch of the pattern follows below).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135204
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2024-10-15 14:58:55 +00:00
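
The sketch below illustrates the device-mismatch pattern this change addresses. It is a hedged, self-contained example: the function name `compute_scale_zero_point` and its signature are illustrative stand-ins, not the actual API in torch/ao/quantization/utils.py.

```python
import torch


def compute_scale_zero_point(
    min_val: torch.Tensor,
    max_val: torch.Tensor,
    quant_min: int = 0,
    quant_max: int = 255,
    # eps is created once on the CPU, mirroring the situation described above.
    eps: torch.Tensor = torch.tensor([torch.finfo(torch.float32).eps]),
):
    # min_val/max_val may live on CUDA or HPU; move eps to the same device
    # before mixing it into the computation (this is the essence of the fix).
    eps = eps.to(min_val.device)

    min_val_neg = torch.min(min_val, torch.zeros_like(min_val))
    max_val_pos = torch.max(max_val, torch.zeros_like(max_val))

    scale = (max_val_pos - min_val_neg) / float(quant_max - quant_min)
    # Without the eps.to(...) above, this torch.max would raise a
    # device-mismatch error whenever min_val/max_val are on a non-CPU device.
    scale = torch.max(scale, eps)

    zero_point = quant_min - torch.round(min_val_neg / scale).to(torch.int)
    zero_point = torch.clamp(zero_point, quant_min, quant_max)
    return scale, zero_point
```

For example, calling it with `min_val` and `max_val` on a CUDA or HPU tensor keeps the `torch.max(scale, eps)` step on a single device instead of mixing a CPU eps with device-resident operands.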
backend_config [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
experimental [CODEMOD][caffe2] use npt.NDArray instead of np.ndarray in type annotations (#136288) 2024-09-19 12:40:36 +00:00
fx Add missing mappings to support torch.uint16 in quantization and export (#136547) 2024-10-01 00:01:01 +00:00
pt2e Default to use training IR (#137804) 2024-10-11 22:34:28 +00:00
quantizer Make PT2E work with both IR simultaneously (#135769) 2024-10-02 21:05:22 +00:00
__init__.py [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
_correct_bias.py [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
_equalize.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
_learnable_fake_quantize.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
fake_quantize.py Add None return type to init (#132335) 2024-08-01 15:26:45 +00:00
fuse_modules.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
fuser_method_mappings.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
observer.py Add uint16 support for observer (#136238) 2024-09-18 23:52:18 +00:00
pattern.md
qconfig_mapping.py Add None return type to init (#132335) 2024-08-01 15:26:45 +00:00
qconfig.py Revise CPU vectorization ISA support API (#135075) 2024-09-05 12:14:56 +00:00
quant_type.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
quantization_mappings.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
quantize_fx.py Add None return type to init (#132335) 2024-08-01 15:26:45 +00:00
quantize_jit.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
quantize_pt2e.py Change to export_for_training in quantize_pt2e tests (#137233) 2024-10-04 18:33:02 +00:00
quantize.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
stubs.py [BE] enable UFMT for torch/ao/quantization/ (#128863) 2024-07-25 04:17:54 +00:00
utils.py torch/ao/quantization/utils.py: Moving eps to targeted device to avoid device mismatch issue (#135204) 2024-10-15 14:58:55 +00:00