pytorch/torch/quantization/fx/_equalize.py
Jerry Zhang 508845f2b5 [quant] AO migration of the torch/quantization/quantize_fx.py and torch/quantization/fx/* (#65033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65033

1. Move the file:
```
hg mv caffe2/torch/quantization/fx caffe2/torch/ao/quantization/fx
hg mv caffe2/torch/quantization/quantize_fx.py caffe2/torch/ao/quantization/quantize_fx.py
```
2. Create new files
```
touch caffe2/torch/quantization/quantize_fx.py
touch caffe2/torch/quantization/fx/__init__.py
```
3. Import the migrated symbols in the new files.
4. Add tests to test/quantization/ao_migration/test_quantization_fx.py;
this is needed because quantize_fx and fx/*.py contain some fx imports.
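The migration check in step 4 can be sketched roughly as follows. The helper name and signature here are illustrative assumptions, not the actual test code under test/quantization/ao_migration/:

```python
import importlib

def check_migration(old_name, new_name, symbols):
    """Assert that each symbol re-exported by the legacy module is the
    very same object as the one exposed at its new location."""
    old_mod = importlib.import_module(old_name)
    new_mod = importlib.import_module(new_name)
    for sym in symbols:
        # Identity (not equality) guarantees the shim forwards, not copies
        assert getattr(old_mod, sym) is getattr(new_mod, sym), sym

# Example usage (requires torch; module/symbol names taken from this PR):
# check_migration(
#     "torch.quantization.fx._equalize",
#     "torch.ao.quantization.fx._equalize",
#     ["reshape_scale", "calculate_equalization_scale"],
# )
```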

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: vkuzo, z-a-f

Differential Revision: D30949749

fbshipit-source-id: 9e5d4d039c8a0a0820bc9040e224f0d2c26886d3
2021-09-22 09:29:15 -07:00


# flake8: noqa: F401
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry/functionality, please add it to the
appropriate file under `torch/ao/quantization/fx/`, while adding an import
statement here.
"""
from torch.ao.quantization.fx._equalize import (
    reshape_scale,
    _InputEqualizationObserver,
    _WeightEqualizationObserver,
    calculate_equalization_scale,
    EqualizationQConfig,
    input_equalization_observer,
    weight_equalization_observer,
    default_equalization_qconfig,
    fused_module_supports_equalization,
    nn_module_supports_equalization,
    node_supports_equalization,
    is_equalization_observer,
    get_op_node_and_weight_eq_obs,
    maybe_get_weight_eq_obs_node,
    maybe_get_next_input_eq_obs,
    maybe_get_next_equalization_scale,
    scale_input_observer,
    scale_weight_node,
    scale_weight_functional,
    clear_weight_quant_obs_node,
    remove_node,
    update_obs_for_equalization,
    convert_eq_obs,
    _convert_equalization_ref,
    get_layer_sqnr_dict,
    get_equalization_qconfig_dict,
)
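The re-export above is the standard compatibility-shim pattern: the legacy import path keeps working because the old module simply forwards to the new one. A minimal self-contained sketch, using made-up module names (`pkg.feature`, `pkg.ao.feature`) rather than the real PyTorch modules:

```python
import sys
import types

# Stand-in for the *new* location, e.g. torch/ao/quantization/fx/_equalize.py
new_mod = types.ModuleType("pkg.ao.feature")
new_mod.reshape_scale = lambda scale: scale  # stand-in for a migrated symbol
sys.modules["pkg.ao.feature"] = new_mod

# Stand-in for the *legacy* shim, e.g. torch/quantization/fx/_equalize.py,
# whose entire body is just "from pkg.ao.feature import reshape_scale"
legacy_mod = types.ModuleType("pkg.feature")
exec("from pkg.ao.feature import reshape_scale", legacy_mod.__dict__)
sys.modules["pkg.feature"] = legacy_mod

# Both import paths now resolve to the exact same function object
from pkg.feature import reshape_scale as legacy_fn
from pkg.ao.feature import reshape_scale as new_fn
assert legacy_fn is new_fn
```

Because the shim binds the same objects rather than copying them, monkey-patching or isinstance checks against either path behave identically during the migration window.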