pytorch/torch/quantization/fx/quantization_patterns.py
andrewor14 d80056312a [Quant][fx][bc-breaking] Rename fx/*patterns.py (#89872)
Summary: This commit renames fx/quantization_patterns.py
to fx/quantize_handler.py, and fx/fusion_patterns.py to
fx/fuse_handler.py. This is because these files contain
only QuantizeHandler and FuseHandler respectively, so the
new names are more descriptive. A future commit will
further break BC by removing all the empty *QuantizeHandler
classes.

BC-breaking notes:

The following classes under the
`torch.ao.quantization.fx.quantization_patterns` namespace
are migrated to the `torch.ao.quantization.fx.quantize_handler`
namespace:
```
QuantizeHandler
BinaryOpQuantizeHandler
CatQuantizeHandler
ConvReluQuantizeHandler
LinearReLUQuantizeHandler
BatchNormQuantizeHandler
EmbeddingQuantizeHandler
RNNDynamicQuantizeHandler
DefaultNodeQuantizeHandler
FixedQParamsOpQuantizeHandler
CopyNodeQuantizeHandler
GeneralTensorShapeOpQuantizeHandler
CustomModuleQuantizeHandler
StandaloneModuleQuantizeHandler
```

The following classes under the
`torch.ao.quantization.fx.fusion_patterns` namespace are
migrated to the `torch.ao.quantization.fx.fuse_handler`
namespace:
```
DefaultFuseHandler
FuseHandler
```
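The two renames above can be captured as a small lookup table; this is a hypothetical helper for downstream code (not part of the PR), built only from the old→new paths listed in these notes:

```python
# Old -> new module paths, taken from this PR's BC-breaking notes.
_RENAMES = {
    "torch.ao.quantization.fx.quantization_patterns":
        "torch.ao.quantization.fx.quantize_handler",
    "torch.ao.quantization.fx.fusion_patterns":
        "torch.ao.quantization.fx.fuse_handler",
}

def new_module_path(old: str) -> str:
    """Return the post-rename module path for a legacy fx module path.

    Paths that were not renamed by this PR are returned unchanged.
    """
    return _RENAMES.get(old, old)
```

A caller migrating imports could map each legacy path through `new_module_path` before rewriting its import statements.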

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Reviewers: jerryzh168, vkuzo

Subscribers: jerryzh168, vkuzo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89872
Approved by: https://github.com/jerryzh168
2022-12-01 17:37:07 +00:00


# flake8: noqa: F401
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry or new functionality, please add it to the
appropriate file under `torch/ao/quantization/fx/`, and add an import statement
here.
"""
from torch.ao.quantization.fx.quantize_handler import (
QuantizeHandler,
BinaryOpQuantizeHandler,
CatQuantizeHandler,
ConvReluQuantizeHandler,
LinearReLUQuantizeHandler,
BatchNormQuantizeHandler,
EmbeddingQuantizeHandler,
RNNDynamicQuantizeHandler,
DefaultNodeQuantizeHandler,
FixedQParamsOpQuantizeHandler,
CopyNodeQuantizeHandler,
CustomModuleQuantizeHandler,
GeneralTensorShapeOpQuantizeHandler,
StandaloneModuleQuantizeHandler
)
# Rewrite __module__ on each re-exported class so that introspection and
# pickling still report the legacy location.
for _handler in (
    QuantizeHandler,
    BinaryOpQuantizeHandler,
    CatQuantizeHandler,
    ConvReluQuantizeHandler,
    LinearReLUQuantizeHandler,
    BatchNormQuantizeHandler,
    EmbeddingQuantizeHandler,
    RNNDynamicQuantizeHandler,
    DefaultNodeQuantizeHandler,
    FixedQParamsOpQuantizeHandler,
    CopyNodeQuantizeHandler,
    CustomModuleQuantizeHandler,
    GeneralTensorShapeOpQuantizeHandler,
    StandaloneModuleQuantizeHandler,
):
    _handler.__module__ = "torch.quantization.fx.quantization_patterns"
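The file above is a re-export shim: it imports each class from the new location and patches `__module__` so that `repr()` and pickling still report the legacy path. A minimal self-contained sketch of the same technique, using a stand-in class rather than a real torch handler:

```python
# DummyHandler is a stand-in for a class imported from the new module
# location; the module path string matches the legacy one used above.
class DummyHandler:
    """Stand-in for a re-exported handler class."""
    pass

# Patch __module__ so introspection reports the legacy location.
DummyHandler.__module__ = "torch.quantization.fx.quantization_patterns"
```

After the patch, `repr(DummyHandler)` shows the legacy path, which is what keeps old pickles and string-based lookups working across the rename.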