pytorch/torch/quantization
Vasiliy Kuznetsov c193bd41f5 fake_quantize: respect device affinity (#39031)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39031

Makes the eager mode QAT prepare logic respect device affinity.
This fixes an issue where, for a module on `cuda:0`, running QAT
prepare would add observers on `cpu`. Observers are now created
on the module's original device.
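The idea behind the fix can be sketched roughly as follows. This is a minimal illustration, not the actual `prepare` implementation; the helper name `add_observer_respecting_device` and the use of `add_module` here are assumptions for the sketch:

```python
import torch
import torch.nn as nn

def add_observer_respecting_device(module, observer_ctor):
    # Hypothetical helper (not the real torch.quantization code): attach an
    # observer to `module` on the same device as the module's params/buffers.
    devices = {p.device for p in module.parameters()}
    devices |= {b.device for b in module.buffers()}
    # Default to CPU if the module has no parameters or buffers at all.
    device = next(iter(devices)) if devices else torch.device("cpu")
    # Create the observer, then move it to the module's original device
    # instead of leaving it on the default (CPU) device.
    observer = observer_ctor().to(device)
    module.add_module("activation_post_process", observer)
    return module

# A Linear on CPU gets its observer on CPU; on `cuda:0` it would get `cuda:0`.
m = add_observer_respecting_device(nn.Linear(4, 4), torch.quantization.MinMaxObserver)
print(m.activation_post_process)
```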

Test Plan:
```
python test/test_quantization.py TestDistributed.test_device_affinity
```

Imported from OSS

Differential Revision: D21729272

fbshipit-source-id: 5537bf3977ddc23412184941978bf0d1cc6fb479
2020-06-01 08:55:14 -07:00
File                   Last commit message                                                             Date
__init__.py            [quant][graphmode] Rename _quantize_script.py to quantize_script.py (#39122)   2020-05-29 12:33:40 -07:00
_numeric_suite.py      [PyTorch Numeric Suite] Add module output comparison (#36701)                   2020-05-03 00:04:35 -07:00
default_mappings.py    [quant] Add support for Quantized Conv1d and ConvRELU1d (#38283)                2020-05-13 16:59:13 -07:00
fake_quantize.py       fake_quant: move observer and fake_quant flags into buffers (#38368)            2020-05-18 09:30:07 -07:00
fuse_modules.py        [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)   2020-05-19 22:48:05 -07:00
observer.py            fake_quant: make qparams shape consistent (#38587)                              2020-05-21 19:08:08 -07:00
qconfig.py             [quant] Return default qconfig when backend is 'none' (#38407)                  2020-05-14 09:53:50 -07:00
quantize_script.py     [quant][graphmode] Rename _quantize_script.py to quantize_script.py (#39122)   2020-05-29 12:33:40 -07:00
quantize.py            fake_quantize: respect device affinity (#39031)                                 2020-06-01 08:55:14 -07:00
stubs.py               Factored out the default mappings                                               2019-10-03 11:52:21 -07:00