[ao_migration] torch/nn/intrinsic: torch.quantization -> torch.ao.quantization (#65903)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65903

This changes the imports in `caffe2/torch/nn/intrinsic` to use the new import locations.

```
codemod -d torch/nn/intrinsic --extensions py 'torch.quantization' 'torch.ao.quantization'
```
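The codemod above is a literal substring rewrite. A minimal Python sketch of the rule it applies (the `QConfig` import below is only an illustrative example, not taken from this diff):

```python
def migrate(source: str) -> str:
    """Rewrite the deprecated quantization module path to its AO location.

    A plain substring replacement suffices here: "torch.ao.quantization"
    does not itself contain "torch.quantization" as a substring, so the
    rewrite is also idempotent (re-running it changes nothing).
    """
    return source.replace("torch.quantization", "torch.ao.quantization")


# Example (QConfig is a hypothetical stand-in for any imported name):
print(migrate("from torch.quantization import QConfig"))
# from torch.ao.quantization import QConfig
```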

Test Plan: `python test/run_test.py`

Reviewed By: albanD

Differential Revision: D31301195

fbshipit-source-id: a5a9d84cb1ac33df6c90ee03cda3e2f1c5d5ff51
Author: Zafar Takhirov, 2021-10-08 16:16:01 -07:00 (committed by Facebook GitHub Bot)
parent 2daae532bd
commit a28b038af4


```
@@ -188,7 +188,7 @@ class _ConvBnNd(nn.modules.conv._ConvNd, nni._FusedModule):
     def from_float(cls, mod):
         r"""Create a qat module from a float module or qparams_dict
-        Args: `mod` a float module, either produced by torch.quantization utilities
+        Args: `mod` a float module, either produced by torch.ao.quantization utilities
         or directly from user
         """
         # The ignore is because _FLOAT_MODULE is a TypeVar here where the bound
```