pytorch/torch/ao/quantization
Latest commit 8f7e3791ef by Nikita Shulga: Make PyTorch importable on python-3.7.0 (#78500)
By stringifying the `typing.OrderedDict` annotation, since [`typing.OrderedDict`](https://docs.python.org/3.10/library/typing.html#typing.OrderedDict) was only introduced in Python 3.7.2.

See the similar fix in 21a82fb519.

Partially addresses https://github.com/pytorch/pytorch/issues/78499
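
For context, the technique is just a PEP 484 forward reference: a quoted annotation is stored as a plain string and never evaluated at import time. A minimal sketch of the idea (the function and names below are hypothetical, not the actual `utils.py` code):

```python
import typing  # only needed once the annotation is actually resolved
from collections import OrderedDict

# Because the return annotation is a string, Python stores it without
# evaluating it, so importing this module succeeds on Python 3.7.0 and
# 3.7.1, where the name typing.OrderedDict does not exist yet. The
# unquoted form `-> typing.OrderedDict[str, str]` would raise
# AttributeError at import time on those releases.
def ordered_name_map(names) -> "typing.OrderedDict[str, str]":
    result = OrderedDict()
    for name in names:
        result[name] = name.lower()
    return result
```

The string is only resolved if something evaluates it explicitly (e.g. via `typing.get_type_hints`), and static type checkers running on newer interpreters understand it either way.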

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78500
Approved by: https://github.com/atalman
2022-05-31 06:11:30 +00:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `_dbr` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `backend_config` | [fx2trt] Fix dummy weight initialization in conv1d converter (#78402) | 2022-05-27 04:48:45 +00:00 |
| `fx` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `__init__.py` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `_correct_bias.py` | [quant] Fix the parts that were missing after initial migration (#66058) | 2021-10-05 11:45:37 -07:00 |
| `_equalize.py` | [quant] AO migration of the _correct_bias.py, _equalize.py, and _learnable_fake_quantize.py (#64917) | 2021-09-15 18:15:39 -07:00 |
| `_learnable_fake_quantize.py` | [quant] Fix the parts that were missing after initial migration (#66058) | 2021-10-05 11:45:37 -07:00 |
| `_quantize_dbr.py` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `fake_quantize.py` | [quant][better-engineering][bc-breaking] Removed quant_min/quant_max from fake_quant modules | 2022-05-11 14:23:05 +00:00 |
| `fuse_modules.py` | [ao][sparsity] Composability of fusion and sparsity (#74847) | 2022-04-08 00:44:12 +00:00 |
| `fuser_method_mappings.py` | [Quant][fx] Decouple prepare_*fx from training/eval modes (#75401) | 2022-04-08 15:34:08 +00:00 |
| `observer.py` | [quant][fx][improvement] Renamed default_affine_fixed_qparams_observer and default_symmetric_fixed_qparams_observer (#76637) | 2022-05-04 02:39:20 +00:00 |
| `pattern.md` | [quant][refactor] Move pattern type definition to ao/quantization/utils.py (#68769) | 2021-12-07 11:00:22 -08:00 |
| `qconfig_mapping_utils.py` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `qconfig_mapping.py` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `qconfig.py` | [Quant][fx] Fix get_default_qconfig_dict for fused modules | 2022-04-15 22:37:26 +00:00 |
| `quant_type.py` | [quant] AO migration of the quant_types.py (phase 1) (#64916) | 2021-09-15 17:30:00 -07:00 |
| `quantization_mappings.py` | [quant][fx][improvement] Renamed default_affine_fixed_qparams_observer and default_symmetric_fixed_qparams_observer (#76637) | 2022-05-04 02:39:20 +00:00 |
| `quantization_types.py` | [quant][fx] Move backend_config folder to torch.ao.quantization | 2022-04-19 15:38:57 +00:00 |
| `quantize_fx.py` | [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452) | 2022-05-30 18:30:07 +00:00 |
| `quantize_jit.py` | [quant] Fix the parts that were missing after initial migration (#66058) | 2021-10-05 11:45:37 -07:00 |
| `quantize.py` | [ao][sparsity] make sparsity compose with PTQ convert (#74846) | 2022-04-06 04:27:16 +00:00 |
| `stubs.py` | quantization: fix bug in QuantWrapper with DeQuant qconfig (#73671) | 2022-03-03 15:31:53 +00:00 |
| `utils.py` | Make PyTorch importable on python-3.7.0 (#78500) | 2022-05-31 06:11:30 +00:00 |
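
Many of the rows above point at the bc-breaking change in #78452, which replaced the `qconfig_dict` argument of the FX quantization workflow with a `QConfigMapping` object. A minimal before/after sketch, assuming a PyTorch release that includes this change and the `example_inputs` argument to `prepare_fx` (the toy model is hypothetical):

```python
import torch
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.qconfig_mapping import QConfigMapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

# Hypothetical toy model; post-training quantization expects eval mode.
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()

# Before #78452: qconfig_dict = {"": get_default_qconfig("fbgemm")}
# After #78452: a QConfigMapping built with chainable setters.
qconfig_mapping = QConfigMapping().set_global(get_default_qconfig("fbgemm"))

example_inputs = (torch.randn(1, 4),)
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)  # calibration pass
quantized = convert_fx(prepared)
```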