pytorch/torch/quantization
Yukio Siraichi 27048c1dfa Remove legacy constructor calls from _torch_ folder. (#53889)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53146
Related to https://github.com/pytorch/pytorch/issues/47112

As mentioned in https://github.com/pytorch/pytorch/issues/47112, the plan is to:

1. Verify that all `torch.Tensor()` scenarios are covered by other functions
2. Scrub internal `torch.Tensor()` uses
3. Update the docs and emit a `TORCH_WARN_ONCE` warning if someone uses `torch.Tensor()`

In this PR, I replaced all legacy `torch.Tensor()` constructor calls in the _torch_ folder.
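
For reference, the kind of replacement involved looks roughly like the sketch below (a minimal illustration, not taken from the diff; the exact factory function chosen depends on what each call site intended):

```python
import torch

# Legacy constructor: always produces a float32 tensor, and its meaning
# depends on the argument types (sizes vs. data), which is why it is
# being phased out.
a = torch.Tensor(2, 3)        # uninitialized 2x3 float32 tensor
b = torch.Tensor([1, 2, 3])   # copies the list, silently casting to float32

# Typical replacements that state the intent explicitly:
a = torch.empty(2, 3)                             # uninitialized storage
b = torch.tensor([1, 2, 3], dtype=torch.float32)  # torch.tensor would infer int64 without the dtype
```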

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53889

Reviewed By: walterddr, zou3519

Differential Revision: D27190743

Pulled By: jbschlosser

fbshipit-source-id: 7ecc201d57935b8dbb98ae3718b60d95cb55a010
2021-03-19 15:20:19 -07:00
| File | Last commit message | Last commit date |
| --- | --- | --- |
| `fx` | Remove legacy constructor calls from _torch_ folder. (#53889) | 2021-03-19 15:20:19 -07:00 |
| `ns` | Remove legacy constructor calls from _torch_ folder. (#53889) | 2021-03-19 15:20:19 -07:00 |
| `__init__.py` | [quant][fix] Fix quant type classification for float_qparam qconfig (#48069) | 2020-11-18 18:22:08 -08:00 |
| `_correct_bias.py` | Remove py2 compatible future imports (#44735) | 2020-09-16 12:55:57 -07:00 |
| `_equalize.py` | | |
| `_learnable_fake_quantize.py` | mem-efficient learnable fake quantization (#49315) | 2021-02-03 18:57:47 -08:00 |
| `_numeric_suite_fx.py` | compare_model_outputs_fx API implementation (#49266) | 2021-02-02 10:43:25 -08:00 |
| `_numeric_suite.py` | ns_eager: rename Logger I/O var names to logger_cls (#51359) | 2021-02-09 22:30:44 -08:00 |
| `fake_quantize.py` | memory efficient per-channel fq: use it everywhere, delete old version (#51265) | 2021-01-28 19:42:25 -08:00 |
| `fuse_modules.py` | quantization: Linear + BatchNorm1d fusion (#50748) | 2021-01-20 12:59:02 -08:00 |
| `fuser_method_mappings.py` | quantization: Linear + BatchNorm1d fusion (#50748) | 2021-01-20 12:59:02 -08:00 |
| `observer.py` | update HistogramObserver to be scriptable (#51081) | 2021-01-27 07:27:03 -08:00 |
| `qconfig.py` | [quant][graphmode][fx] Add reference option support for linear_static_fp16 (#52650) | 2021-02-27 08:25:44 -08:00 |
| `quant_type.py` | [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) | 2020-10-27 21:41:33 -07:00 |
| `quantization_mappings.py` | [quantization] Add some support for 3d operations (#50003) | 2021-03-10 16:40:35 -08:00 |
| `quantize_fx.py` | [quant][fx] add _remove_qconfig flag to convert_fx (#53166) | 2021-03-03 12:58:05 -08:00 |
| `quantize_jit.py` | [TorchScript] Support user defined classes as constants (#5062) | 2020-11-16 20:52:02 -08:00 |
| `quantize.py` | [quant] Factoring out the list of no_observers (#50459) | 2021-02-17 12:38:30 -08:00 |
| `stubs.py` | type check for torch.quantization.stubs (#46475) | 2020-10-16 15:34:23 -07:00 |
| `utils.py` | [quant][graphmode][fx] Fix a condition check for CopyNode (#53585) | 2021-03-11 09:32:20 -08:00 |