mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-06 12:20:52 +01:00
Summary:

Do the following renames:
- `torch.quantization` -> `torch.ao.quantization`
- `torch.nn.quantized` -> `torch.ao.nn.quantized`
- `torch.nn.quantizable` -> `torch.ao.nn.quantizable`
- `torch.nn.qat` -> `torch.ao.nn.qat`
- `torch.nn.intrinsic` -> `torch.ao.nn.intrinsic`

Then, do `torch.ao.nn.quantized._reference` -> `torch.ao.nn.quantized.reference` to clean up the aftermath of https://github.com/pytorch/pytorch/pull/84974.

Then, manually update `test/test_module_init.py` to fix hanging whitespace left behind by the replace.

Run this script to do the replacements: https://gist.github.com/vkuzo/7f7afebf8c31b9ba48306223e68a1c82

This is for https://github.com/pytorch/pytorch/issues/81667

Test plan: CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94170
Approved by: https://github.com/jerryzh168
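The renames above are mechanical path substitutions, which is why a script can do them. As a rough illustration (the linked gist may differ), the replacement can be sketched as an ordered list of string rewrites, with the `_reference` cleanup applied last so it runs against the already-renamed `torch.ao.*` paths:

```python
# Hypothetical sketch of the kind of mechanical rename the linked gist
# performs; the real script may differ. Order matters: the `_reference`
# cleanup targets the *new* torch.ao.* path, so it must run last.
RENAMES = [
    ("torch.quantization", "torch.ao.quantization"),
    ("torch.nn.quantized", "torch.ao.nn.quantized"),
    ("torch.nn.quantizable", "torch.ao.nn.quantizable"),
    ("torch.nn.qat", "torch.ao.nn.qat"),
    ("torch.nn.intrinsic", "torch.ao.nn.intrinsic"),
    # Applied after the moves above, so it matches the renamed path.
    ("torch.ao.nn.quantized._reference", "torch.ao.nn.quantized.reference"),
]

def apply_renames(source: str) -> str:
    """Apply each old -> new path rewrite, in order, to a source string."""
    for old, new in RENAMES:
        source = source.replace(old, new)
    return source
```

Note that plain `str.replace` is idempotent here only because no new path contains an old path as a substring (e.g. `torch.ao.quantization` does not contain `torch.quantization`).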
25 lines
913 B
Python
# flake8: noqa: F401
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry/functionality, please add it to
`torch/ao/quantization/fuse_modules.py`, while adding an import statement
here.
"""

from torch.ao.quantization.fuse_modules import fuse_modules
from torch.ao.quantization.fuse_modules import fuse_known_modules
from torch.ao.quantization.fuse_modules import get_fuser_method

# for backward compatibility
from torch.ao.quantization.fuser_method_mappings import fuse_conv_bn
from torch.ao.quantization.fuser_method_mappings import fuse_conv_bn_relu

# TODO: These functions are not used outside `fuse_modules.py`.
# Keeping them here for now; remove them later.
from torch.ao.quantization.fuse_modules import (
    _fuse_modules,
    _get_module,
    _set_module,
)
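The file above is a pure re-export shim: the legacy module path keeps working because it imports everything from the new location. The pattern can be sketched in isolation (module names here, `ao_fuse_modules` and `legacy_fuse_modules`, are stand-ins for illustration, not PyTorch APIs):

```python
# Minimal sketch of the backward-compatibility shim pattern this file uses:
# the legacy module location re-imports names from the new location, so old
# call sites resolve to the very same objects after the code moves.
import sys
import types

# Stand-in for the new home (analogous to torch/ao/quantization/fuse_modules.py).
new_mod = types.ModuleType("ao_fuse_modules")

def fuse_modules(model, modules_to_fuse):
    """Toy stand-in: just records which submodules would be fused."""
    return {"model": model, "fused": modules_to_fuse}

new_mod.fuse_modules = fuse_modules
sys.modules["ao_fuse_modules"] = new_mod

# Stand-in for the legacy home (analogous to torch/quantization/fuse_modules.py):
# nothing but re-exports, exactly like the file above.
legacy_mod = types.ModuleType("legacy_fuse_modules")
from ao_fuse_modules import fuse_modules as _fuse_modules_reexport
legacy_mod.fuse_modules = _fuse_modules_reexport
sys.modules["legacy_fuse_modules"] = legacy_mod

# Both import paths now resolve to the same function object.
assert legacy_mod.fuse_modules is new_mod.fuse_modules
```

Because the shim binds the same function objects rather than copying behavior, fixes made in the new location are automatically visible through the old import path.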