pytorch/torch/ao
yintong-lu 3361908fc5 torch/ao/quantization/utils.py: Moving eps to targeted device to avoid device mismatch issue (#135204)
MOTIVATION

We recently verified some quantization tests on devices other than CPU (e.g. CUDA, and Intel Gaudi devices identified as 'hpu'). We noticed a device-mismatch error because eps is a tensor created on the CPU, while the other tensors (min_val_neg, max_val_pos, scale, zero_point) are moved to the targeted device.

CHANGES

Move eps to the device of the other tensors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135204
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
2024-10-15 14:58:55 +00:00
nn [QAT] Make Fused modules torchscriptable (#136285) 2024-09-28 03:46:19 +00:00
ns [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
pruning [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
quantization torch/ao/quantization/utils.py: Moving eps to targeted device to avoid device mismatch issue (#135204) 2024-10-15 14:58:55 +00:00
__init__.py [BE] enable UFMT for torch/ao/ (#128864) 2024-07-25 11:30:14 +00:00