Commit Graph

47 Commits

Author SHA1 Message Date
Jerry Zhang
214a6500e3 [quant][docs] Additional fixes for quantize_fx docs (#84587)
Summary:
Some more clarifications for the arguments, including links to the object docs (QConfigMapping, BackendConfig) and added types in the docs

Test Plan:
```
cd docs
make html
```
and

visual inspection for the generated docs

Reviewers:

Subscribers:

Tasks:

Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84587
Approved by: https://github.com/vkuzo
2022-09-09 15:23:23 +00:00
zaf
c92e5ac95b [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: To avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location.
However, the following files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp
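A namespace move like this typically keeps the legacy import path working by aliasing the old dotted name to the relocated module. A minimal sketch of that aliasing technique, with invented package names (this is illustrative, not PyTorch's actual migration machinery):

```python
import importlib
import sys
import types

# Create the module at its new location; the class is a stand-in for a
# migrated module such as a quantized Linear.
new_mod = types.ModuleType("ao_pkg.nn.quantized.modules")
new_mod.Linear = type("Linear", (), {})

# Register the new location, then alias the legacy dotted path to the
# very same module object so old imports keep resolving.
sys.modules["ao_pkg.nn.quantized.modules"] = new_mod
sys.modules["nn_pkg.quantized.modules"] = new_mod  # legacy path

# Importing via the legacy path yields the migrated module unchanged.
legacy = importlib.import_module("nn_pkg.quantized.modules")
print(legacy.Linear is new_mod.Linear)  # True
```

In practice the legacy module would also emit a deprecation warning on import; that part is omitted here.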

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
PyTorch MergeBot
6a9c02339d Revert "[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)"
This reverts commit 432f037498.

Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to Reverting for breaking (trunk-only) ios build
2022-08-22 07:32:37 +00:00
zaf
432f037498 [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: To avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location.
However, the following files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00
Jerry Zhang
bce1540f1f [quant][fx] Add more detailed docs for prepare_fx/prepare_qat_fx/convert_fx (#83132)
Summary:
As titled: adds more detailed docs for prepare_fx/prepare_qat_fx/convert_fx.

Test Plan:
visual inspection of generated docs page
https://pytorch.org/docs/stable/quantization-support.html

Reviewers:

Subscribers:

Tasks:

Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83132
Approved by: https://github.com/andrewor14
2022-08-11 16:20:30 +00:00
Andrew Or
782f3489c6 [Quant][fx][bc-breaking] Integrate BackendConfig with quantization flow (part 2) (#82557)
This is part 2 of the effort to replace `backend_config_dict` with
a python config object, a more formal and robust API that leads to
better user experience. This commit integrates the `BackendConfig`
implemented in part 1 (https://github.com/pytorch/pytorch/pull/81469)
with the existing FX graph mode quantization flow.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

BC-breaking Notes:

Before:
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.backend_config import ObservationType
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

dtype_config = {
    "input_dtype": torch.quint8,
    "output_dtype": torch.quint8,
    "weight_dtype": torch.qint8,
    "bias_dtype": torch.float,
}

backend_config_dict = {
    "name": "my_backend",
    "configs": [{
        "pattern": torch.nn.Linear,
        "observation_type": ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
        "dtype_configs": [dtype_config],
        "root_module": torch.nn.Linear,
        "reference_quantized_module": torch.nn.quantized._reference.Linear,
        "qat_module": torch.nn.qat.Linear,
    }]
}

m = MyModel()
qconfig_mapping = get_default_qconfig_mapping()
example_inputs = (torch.rand(3, 3),)
m = prepare_fx(
    m, qconfig_mapping, example_inputs,
    backend_config_dict=backend_config_dict)
m = convert_fx(m, backend_config_dict=backend_config_dict)
```

After:
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.backend_config import (
    BackendConfig,
    BackendPatternConfig,
    DTypeConfig,
    ObservationType,
)
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

dtype_config = DTypeConfig(
    input_dtype=torch.quint8,
    output_dtype=torch.quint8,
    weight_dtype=torch.qint8,
    bias_dtype=torch.float,
)

backend_config = BackendConfig("my_backend").set_backend_pattern_config(
    BackendPatternConfig(torch.nn.Linear)
        .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
        .add_dtype_config(dtype_config)
        .set_root_module(torch.nn.Linear)
        .set_reference_quantized_module(torch.nn.quantized._reference.Linear)
        .set_qat_module(torch.nn.qat.Linear))

m = MyModel()
qconfig_mapping = get_default_qconfig_mapping()
example_inputs = (torch.rand(3, 3),)
m = prepare_fx(m, qconfig_mapping, example_inputs, backend_config=backend_config)
m = convert_fx(m, backend_config=backend_config)
```

Reviewers: jerryzh168

Subscribers: jerryzh168, supriyar

Differential Revision: [D38471932](https://our.internmc.facebook.com/intern/diff/D38471932)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82557
Approved by: https://github.com/jerryzh168
2022-08-08 18:55:50 +00:00
Andrew Or
c657c3d3ab [Quant][fx] Rename convert_to_reference to convert_to_reference_fx (#81326)
Summary: This commit renames the convert_to_reference function to
convert_to_reference_fx, which is more descriptive and matches
prepare_fx and prepare_qat_fx better.

Test Plan:
python test/test_quantization.py TestQuantizeFx

Reviewers: jerryzh168

Subscribers: jerryzh168

Differential Revision: [D37787876](https://our.internmc.facebook.com/intern/diff/D37787876)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81326
Approved by: https://github.com/jerryzh168
2022-07-13 22:18:46 +00:00
Jerry Zhang
1a7e560ade [quant] Refactor quantization tracer to a separate file (#80268)
Summary:
As titled; we need to reuse the tracer in some other places.

Test Plan:
python test/test_quantization.py TestQuantizeFx

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D37435748](https://our.internmc.facebook.com/intern/diff/D37435748)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80268
Approved by: https://github.com/vkuzo
2022-06-30 00:49:57 +00:00
Andrew Or
17104d3d7f [Quant][fx][bc-breaking] Replace is_reference with convert_to_reference (#80091)
Summary: This PR removes the is_reference flag from the existing
convert_fx API and replaces it with a new convert_to_reference
function. This separates (1) converting the prepared model to a
reference model from (2) lowering the reference model to a quantized
model, enabling users to call their custom lowering function for
custom backends. For the native fbgemm backend, for example, the
following are equivalent:

```
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

prepared = prepare_fx(model, ...)
quantized = convert_fx(prepared, ...)
```

```
from torch.ao.quantization.fx import lower_to_fbgemm
from torch.ao.quantization.quantize_fx import (
    prepare_fx,
    convert_to_reference
)

prepared = prepare_fx(model, ...)
reference = convert_to_reference(prepared, ...)
quantized = lower_to_fbgemm(reference, ...)
```

Note that currently `lower_to_fbgemm` takes in two other arguments
that are difficult for users to provide. A future commit will remove
these arguments to make the helper function more user friendly.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Reviewers: jerryzh168, vkuzo

Subscribers: jerryzh168, vkuzo

Differential Revision: [D37359946](https://our.internmc.facebook.com/intern/diff/D37359946)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80091
Approved by: https://github.com/jerryzh168
2022-06-29 23:01:27 +00:00
Andrew Or
8aedd8fb25 [Quant][fx] Hide equalization_config from prepare APIs (#80164)
Summary: This PR hides the equalization_config argument from
prepare_fx. This is a private API that we do not wish to expose
to users or maintain backward compatibility for.

Test Plan:
python test/test_quantization.py TestEqualizeFx

Reviewers: jerryzh168

Subscribers: jerryzh168

Differential Revision: [D37394353](https://our.internmc.facebook.com/intern/diff/D37394353)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80164
Approved by: https://github.com/jerryzh168
2022-06-28 04:20:34 +00:00
Andrew Or
78144b9f35 [Quant][fx][bc-breaking] Replace *custom_config_dict with config objects
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79066

Following https://github.com/pytorch/pytorch/pull/78452,
this commit replaces the following config dicts with python objects:

- prepare_custom_config_dict -> PrepareCustomConfig
- convert_custom_config_dict -> ConvertCustomConfig
- fuse_custom_config_dict -> FuseCustomConfig

This leads to better type safety and better user experience in
notebook settings due to improved auto completion. The new APIs
are as follows:

```
from torch.ao.quantization.fx.custom_config import PrepareCustomConfig

prepare_custom_config = PrepareCustomConfig() \
    .set_float_to_observed_mapping(float_class, observed_class) \
    .set_non_traceable_module_names(["mod1", "mod2"]) \
    .set_non_traceable_module_classes([class1, class2]) \
    .set_input_quantized_indexes([0, 1]) \
    .set_output_quantized_indexes([0]) \
    .set_preserved_attributes(["attr1", "attr2"])

convert_custom_config = ConvertCustomConfig() \
    .set_observed_to_quantized_mapping(observed_class, quantized_class) \
    .set_preserved_attributes(["attr1", "attr2"])

model = prepare_fx(
    model,
    qconfig_mapping,
    example_inputs,
    prepare_custom_config=prepare_custom_config)

model(data)

model = convert_fx(model, convert_custom_config=convert_custom_config)
```

For backwards compatibility, prepare_fx, prepare_qat_fx, and
convert_fx will continue to accept Dicts, which will be converted
to the relevant *CustomConfig object internally.

Note that this commit does not modify existing tests to use the
new API; they will continue to pass in Dicts as before, which still
works but triggers a deprecation warning. This will be handled in
a future commit.

Differential Revision: [D37088095](https://our.internmc.facebook.com/intern/diff/D37088095/)

Approved by: https://github.com/jerryzh168
2022-06-16 17:50:07 +00:00
Andrew Or
e41389f84b [Quant][docs] Replace qconfig_dict with QConfigMapping in docs
Summary: https://github.com/pytorch/pytorch/pull/78452 replaced
qconfig_dict with QConfigMapping as the default API for prepare_fx,
prepare_qat_fx, and convert_fx. We should update the docs to reflect
this change as well.

Test Plan:
```
cd docs
make html
cd build/html
python -m http.server
```

Reviewers: jerryzh168, vkuzo

Subscribers: jerryzh168, vkuzo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78533

Approved by: https://github.com/vkuzo
2022-06-01 15:10:48 +00:00
Andrew Or
c7b4eec233 [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452)
**Summary:** Previously, FX graph mode quantization configurations
were specified through a dictionary of qconfigs. However, this
API was not in line with other core APIs in PyTorch. This commit
replaces this dictionary with a config object that users will
create and pass to prepare and convert. This leads to better
type safety and better user experience in notebook settings
due to improved auto completion.

The new API is as follows:

```
from torch.ao.quantization import QConfigMapping
from torch.ao.quantization.quantize_fx import prepare_fx

qconfig_mapping = (QConfigMapping()
    .set_global(qconfig)
    .set_object_type(torch.nn.Linear, qconfig)
    .set_module_name_regex("foo.*bar", qconfig)
    .set_module_name("mod", qconfig))

prepare_fx(model, qconfig_mapping)
```

For backwards compatibility, `prepare_fx`, `prepare_qat_fx`,
and `convert_fx` will continue to accept qconfig_dicts, which
will be converted to `QConfigMapping` objects internally.

Note that this commit does not modify existing tests to use the
new API; they will continue to pass in qconfig_dict as before,
which still works but triggers a deprecation warning. This will
be handled in a future commit.
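The dict-to-object conversion described above can be sketched with a minimal fluent mapping class. The names and the legacy dict shape below are invented for illustration; they are not the actual QConfigMapping implementation:

```python
class ConfigMapping:
    """Toy fluent config object in the style described by the commit."""

    def __init__(self):
        self.global_cfg = None
        self.object_type_cfgs = {}

    def set_global(self, cfg):
        self.global_cfg = cfg
        return self  # returning self enables method chaining

    def set_object_type(self, obj_type, cfg):
        self.object_type_cfgs[obj_type] = cfg
        return self

    @classmethod
    def from_dict(cls, d):
        # Hypothetical legacy dict form: {"": global, "object_type": [(t, cfg)]}
        mapping = cls().set_global(d.get(""))
        for obj_type, cfg in d.get("object_type", []):
            mapping.set_object_type(obj_type, cfg)
        return mapping

# New-style fluent construction and legacy-dict conversion agree:
new_style = ConfigMapping().set_global("qconfig").set_object_type("Linear", "qconfig")
legacy = ConfigMapping.from_dict({"": "qconfig", "object_type": [("Linear", "qconfig")]})
print(new_style.object_type_cfgs == legacy.object_type_cfgs)  # True
```

The `from_dict` classmethod is what lets the prepare/convert entry points keep accepting the old dict form while deprecating it.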

**Test Plan:**
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

**Reviewers:** jerryzh168, vkuzo

**Subscribers:** jerryzh168, vkuzo

Differential Revision: D36747998

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78452
Approved by: https://github.com/jerryzh168
2022-05-30 18:30:07 +00:00
Jerry Zhang
416899d1a9 [quant][fx][bc-breaking] Add required example_args argument to prepare_fx and prepare_qat_fx (#249) (#77608)
Summary:
X-link: https://github.com/facebookresearch/d2go/pull/249

X-link: https://github.com/fairinternal/ClassyVision/pull/104

X-link: https://github.com/pytorch/benchmark/pull/916

X-link: https://github.com/facebookresearch/ClassyVision/pull/791

X-link: https://github.com/facebookresearch/mobile-vision/pull/68

FX Graph Mode Quantization needs to know whether an FX node is a floating point Tensor before it can decide whether to
insert an observer/fake_quantize module, since we only insert observer/fake_quantize modules for floating point Tensors.
Currently we support this with hacks such as NON_OBSERVABLE_ARG_DICT (https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/fx/utils.py#L496), but this approach is fragile and we do not plan to maintain it long term in the PyTorch code base.

As we discussed in the design review, we need to ask users to provide example args and example keyword args
so that we can infer the types in a more robust way. This PR changes the prepare_fx and prepare_qat_fx APIs to require users to provide
example arguments through `example_inputs`. Note that this API does not support kwargs. Kwargs could make https://github.com/pytorch/pytorch/pull/76496#discussion_r861230047 (comment) simpler, but
they will be rare, and even then we can work around them with positional arguments. Also, torch.jit.trace (https://pytorch.org/docs/stable/generated/torch.jit.trace.html) and ShapeProp (https://github.com/pytorch/pytorch/blob/master/torch/fx/passes/shape_prop.py#L140) take only positional args, so we will use a single `example_inputs` argument for now.

If needed, we can extend the API with an optional `example_kwargs`, e.g. in cases where the forward function has many arguments and it
makes more sense to pass them by keyword.
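The type inference that `example_inputs` enables can be sketched very simply: propagating the example arguments tells the pass which values are floating point tensors, which is exactly what decides observer insertion. Everything below (the helper name, the `FakeTensor` stand-in) is invented for illustration:

```python
def infer_float_tensor_args(example_inputs):
    """Return True per argument that a float observer could attach to."""
    return [
        hasattr(arg, "dtype") and "float" in str(arg.dtype)
        for arg in example_inputs
    ]

class FakeTensor:
    """Minimal stand-in carrying only a dtype, like a real tensor would."""
    def __init__(self, dtype):
        self.dtype = dtype

# A float tensor, an integer tensor, and a plain Python int:
args = (FakeTensor("float32"), FakeTensor("int64"), 3)
print(infer_float_tensor_args(args))  # [True, False, False]
```

Inspecting concrete example values like this is more robust than pattern-matching on node names, which is the fragility the commit is removing.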

BC-breaking Note:
Before:
```python
m = resnet18(...)
m = prepare_fx(m, qconfig_dict)
# or
m = prepare_qat_fx(m, qconfig_dict)
```
After:
```python
m = resnet18(...)
m = prepare_fx(m, qconfig_dict, example_inputs=(torch.randn(1, 3, 224, 224),))
# or
m = prepare_qat_fx(m, qconfig_dict, example_inputs=(torch.randn(1, 3, 224, 224),))
```

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels

Imported from OSS


Reviewed By: vkuzo, andrewor14

Differential Revision: D35984526

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77608
Approved by: https://github.com/dzdang
2022-05-21 21:03:48 +00:00
Jerry Zhang
74454bdb46 [quant][fx] Move backend_config folder to torch.ao.quantization
Summary:
Following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md, we implemented
the backend configuration for the fbgemm/qnnpack backend. Currently it lives under the fx folder, but we would like to use it for all
workflows, including eager mode, FX graph mode, and define-by-run quantization, so this PR moves it to the torch.ao.quantization
namespace where it can be shared.
It also moves some FX-specific utility functions to fx/backend_config_utils.py; some files are kept in the fx folder (quantize_handler.py and fuse_handler.py).

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels
python test/test_quantization.py TestAOMigrationQuantization
python test/test_quantization.py TestAOMigrationQuantizationFx

Reviewers:

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75823

Approved by: https://github.com/vkuzo
2022-04-19 15:38:57 +00:00
Jerry Zhang
72d3d160fb [quant][fx] Remove additional_object_mapping from the docs (#75389)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75389

This seems to have been removed before, so this PR is not marked as bc-breaking. This use case
is now enabled with the backend_config_dict API.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35451960

fbshipit-source-id: 21a8f19c1968af44bf4fa603f16ee8c6f5080e5a
(cherry picked from commit 2862f17b57f846b55736bc6b5d10df4256567adf)
2022-04-11 10:40:11 +00:00
Jerry Zhang
bcf6974c20 [quant][fx] Remove "additional_fuser_method_mapping" key from prepare_custom_config_dict (#75388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75388

This is now replaced with backend_config_dict; we don't want to expose this implementation detail to
users. We will add docs for backend_config_dict later.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35451958

fbshipit-source-id: 86e482d0782470ea02408836755cfc8531b8f66e
(cherry picked from commit 072541824b454e30df2b48758f465ebd814b436e)
2022-04-11 05:30:52 +00:00
Jerry Zhang
55d479aca5 [quant][fx][bc-breaking] Remove "additional_qat_mapping" key from prepare_custom_config_dict (#75387)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75387

This is now replaced with backend_config_dict; we don't want to expose this implementation detail to
users. We will add docs for backend_config_dict later.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35451955

fbshipit-source-id: 77ede61f1d8f169dc1e1e6d847244ba99a97ab76
(cherry picked from commit 953576259fdc8827437acb6f5d04e584e37a7d64)
2022-04-11 05:03:49 +00:00
Jerry Zhang
f42bdff016 [quant][fx][bc-breaking] Remove "additional_quant_pattern" key from prepare_custom_config_dict (#75386)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75386

This is now replaced with backend_config_dict; we don't want to expose this implementation detail to
users. We will add docs for backend_config_dict later.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: ezyang

Differential Revision: D35451957

fbshipit-source-id: 52ebb5fb20cd96c1f21410b07c3d0c448c58cdba
(cherry picked from commit ccb38026f14644f9eb43335b7a7de5568c556394)
2022-04-09 16:43:41 +00:00
Jerry Zhang
689cec9493 [quant][fx] Remove "additional_fusion_pattern" from prepare_custom_config_dict (#75377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75377

This is in `prepare_custom_config_dict`, but we never talked about it before and we did not find
use cases internally, so it should be OK to remove.

We can now serve the same use case with the `backend_config_dict` API.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35451961

fbshipit-source-id: 8a44c4518eecd50fab7ea2ff06697527b1cdb049
(cherry picked from commit 964183ed26bd8f367a4cf7fcc991eb519dc31a58)
2022-04-09 05:31:19 +00:00
Andrew Or
0bdf9a9833 [Quant][fx] Decouple prepare_*fx from training/eval modes (#75401)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75401

This commit removes asserts that require prepare_fx to
be run in eval mode and prepare_qat_fx to be run in training mode.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_prepare_mode

Imported from OSS

Reviewed By: vkuzo, jerryzh168

Differential Revision: D35457100

fbshipit-source-id: 13a55b13d9e389991f69c06c6a70bc51cdebba36
(cherry picked from commit fb0685e0873dc8e807da3213be403b51e8b4a687)
2022-04-08 15:34:08 +00:00
Jerry Zhang
975c9f15bd [quant] Rename _convert_do_not_use.py to convert.py (#74322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74322

As titled; also changes all references to _convert_do_not_use.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestAOMigrationQuantizationFx

Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34936430

fbshipit-source-id: c96fb887847383bf47f0ec4219127e96e2b63b2d
(cherry picked from commit 8ad5a9e031e6ca4ede2656d9b2f7906a82b57c1c)
2022-03-17 18:57:08 +00:00
Jerry Zhang
7ddf212f33 [quant][fx] Fully align convert with the reference model design and simplify the implementation (#73863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73863

This PR fully aligns the convert function with the design: https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
and simplifies its implementation by always producing a reference quantized model (with reference patterns) first,
then lowering the model to a quantized model that is runnable with a PyTorch native backend (fbgemm/qnnpack).

This makes convert.py much easier to understand than the previous implementation, and we are able to remove the majority
of the code in quantization_patterns.py as well (in follow-up PRs).
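The two-step convert described above can be sketched as a tiny pipeline: a backend-agnostic reference step followed by a backend-specific lowering step. Strings stand in for real graph modules; the function names are illustrative:

```python
def convert_to_reference(observed_model):
    """Backend-agnostic step: observers become quant-dequant reference patterns."""
    return f"reference({observed_model})"

def lower_for_backend(reference_model, backend):
    """Backend-specific step: reference patterns become backend kernels."""
    return f"{backend}({reference_model})"

def convert(observed_model, backend="fbgemm"):
    # Always produce the reference model first, then lower it, so every
    # backend shares one code path up to the lowering boundary.
    reference = convert_to_reference(observed_model)
    return lower_for_backend(reference, backend)

print(convert("observed_model"))  # fbgemm(reference(observed_model))
```

The payoff of the split is that a custom backend only needs to supply its own `lower_for_backend`, reusing the shared reference step.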

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```
and other internal/oss regression tests

Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34778506

fbshipit-source-id: 0678b66addf736039a8749b352f6f569caca962b
(cherry picked from commit 33ec9caf23f3ab373d827117efbd9db0668b2437)
2022-03-11 17:11:30 +00:00
Jerry Zhang
d39ad0543a [quant][fx] Remove Fuser class in fusion implementation (#73470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73470

att, this does not affect user apis since we are only exposing fuse_fx as a public api

Test Plan:
python test/test_quantization.py TestFuseFx

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D34495260

fbshipit-source-id: 3aa253bc7190e50acc7229186f210901ebc5481b
(cherry picked from commit a88517ff6feff7abbece2234d82fd53e33702237)
2022-03-01 09:29:21 +00:00
Jerry Zhang
16554bec1b [quant][fx][fix] Fix get_module_type for fusion (#72735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72735

We use `get_matched_types` to get the (type) pattern from matched modules,
and we need to use MatchAllNode instead of type(MatchAllNode) to query the fuser_method for the pattern.
Test Plan:
TODO

Imported from OSS

Reviewed By: raghuramank10000

Differential Revision: D34180705

fbshipit-source-id: db9b6e791a9f26b70079fddc95fce033052199ab
(cherry picked from commit 01d38afabcb1bfc207dee7d49ee13df500d32fdf)
2022-02-25 18:37:31 +00:00
Jerry Zhang
082ff25f37 [reland][bc-breaking][quant][be] Refactor fuser_method to include is_qat argument" (#71956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71956

Pull Request resolved: https://github.com/facebookresearch/mobile-vision/pull/59

Original commit changeset: f3912e210e8c

Original Phabricator Diff: D33178977 (ef501e8fed)

Test Plan:
Please see original diff for test plans


Reviewed By: andrewor14

Differential Revision: D33833203

fbshipit-source-id: 74a8f22730b00aafa6a173b208e635c1d696959e
(cherry picked from commit fb88772b18)
2022-01-31 23:02:22 +00:00
Nikita Shulga
56511f859a Revert D33178977: [bc-breaking][quant][be] Refactor fuser_method to include is_qat argument
Test Plan: revert-hammer

Differential Revision:
D33178977 (ef501e8fed)

Original commit changeset: 0c1499c45526

Original Phabricator Diff: D33178977 (ef501e8fed)

fbshipit-source-id: f3912e210e8c588fdbdc9c3c5f4acf2aa8fe6678
(cherry picked from commit cd62183414)
2022-01-27 03:29:40 +00:00
Jerry Zhang
ef501e8fed [bc-breaking][quant][be] Refactor fuser_method to include is_qat argument (#70009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70009

Currently we rely on module.training to decide whether we do QAT fusion or PTQ fusion. This is
not ideal since the training flag has nothing to do with quantization, so this PR introduces an extra flag, `is_qat`,
to control it.

Note: currently we still have the constraint that when `is_qat` is True the modules must be in training mode; we
can relax this constraint later.
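The refactored fuser-method shape described above can be sketched as follows: the fusion choice is driven by an explicit `is_qat` flag passed as the first argument, rather than by inspecting module.training. The function name matches the commit's domain, but the body and string results are stand-ins for real fused modules:

```python
def fuse_conv_bn(is_qat, conv, bn):
    # The explicit flag, not the training mode of the modules,
    # selects between the QAT and PTQ fused variants.
    variant = "QAT" if is_qat else "PTQ"
    return f"{variant}ConvBn({conv}, {bn})"

print(fuse_conv_bn(True, "conv", "bn"))   # QATConvBn(conv, bn)
print(fuse_conv_bn(False, "conv", "bn"))  # PTQConvBn(conv, bn)
```

Making the flag explicit is what allows the later relaxation the note mentions: eval-mode modules could still request QAT fusion once the training-mode constraint is dropped.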

Test Plan:
```
python test/test_quantization.py TestFuseFx
python test/test_quantization.py TestFusion
```

Imported from OSS

**Static Docs Preview: classyvision**
|[Full Site](https://our.intern.facebook.com/intern/staticdocs/eph/D33178977/V36/classyvision/)|

|**Modified Pages**|

Reviewed By: mruberry

Differential Revision: D33178977

fbshipit-source-id: 0c1499c45526971140d9ad58e2994d1edf5ad770
(cherry picked from commit 2d51f9fb28)
2022-01-26 23:33:28 +00:00
Vasiliy Kuznetsov
c3570fd945 fx quant: preserve node stack trace throughout prepare and convert (#70757)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70757

This is an initial PR on a way to preserve stack traces throughout FX
graph mode quantization.  It preserves the stack traces for ops
for all of the quantize handlers. A future PR will add stack traces
for dtype transitions.
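The preservation mechanism can be sketched without torch: when a transform replaces a node, the original node's stack-trace metadata is copied onto the replacement so debugging provenance survives the rewrite. The `Node` class and `quantize_node` helper below are illustrative stand-ins, not torch.fx APIs:

```python
class Node:
    """Toy graph node carrying an op name and optional stack-trace metadata."""
    def __init__(self, op, stack_trace=None):
        self.op = op
        self.stack_trace = stack_trace

def quantize_node(node):
    # The rewrite creates a fresh node, so metadata must be copied
    # explicitly or the user-facing trace is silently lost.
    new_node = Node(f"quantized_{node.op}")
    new_node.stack_trace = node.stack_trace
    return new_node

original = Node("linear", stack_trace='File "model.py", line 12, in forward')
quantized = quantize_node(original)
print(quantized.stack_trace == original.stack_trace)  # True
```

Real torch.fx nodes carry a `stack_trace` string in the same spirit; the point of the commit is that every quantize handler performs this copy.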

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_stack_trace_preserved
```

Note: the above only tests a single case. In a future PR, once we
expand coverage, we can expand the utility functions to check for stack
traces on all tests.

Imported from OSS

Differential Revision: D33432485

Reviewed By: jerryzh168

Pulled By: vkuzo

fbshipit-source-id: 56c56850393132487430a850fa1def826a9c39c0
(cherry picked from commit c11155b31e)
2022-01-24 14:15:43 +00:00
Jerry Zhang
cfc71f56e4 [quant][fx][graphmode] Support standalone module in _convert_do_not_use (#70151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70151

this supports converting an observed standalone module to quantized standalone module
in the new convert flow (convert observers to quant-dequant operators)

Test Plan:
```
python test/test_quant_trt.py TestConvertFxDoNotUse
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33205163

fbshipit-source-id: 01ea44fb2a8ffe30bec1dd5678e7a72797bafafc
2021-12-30 12:31:03 -08:00
Jerry Zhang
c627211651 [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69959

GraphModule is an implementation detail; we don't want to expose it in quantization APIs.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_quantized_model_type

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33119103

fbshipit-source-id: d8736ff08b42ee009d6cfd74dcb3f6150f71f3d2
2021-12-29 20:33:32 -08:00
Jerry Zhang
656d2a7bf6 [quant][fx][graphmode] Add backend_config_dict for standalone module (#70150)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70150

This PR allows users to specify backend_config_dict for standalone modules, in both the prepare and convert steps.
We are adding this now to allow prototyping for some of our customer use cases; a test for the code path will be added in
a separate PR.

Test Plan:
regression tests
```
python test/test_quantization.py TestQuantizeFx
```
A test that specifies backend_config for a module will be added in a separate PR for the use case we have in mind,
since it requires other features.

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33205162

fbshipit-source-id: a657cef8e49d99b6a43653141521dc87c33bfd89
2021-12-22 21:18:39 -08:00
Jerry Zhang
94abf120c8 [quant][fx][graphmode][be] Use is_qat instead of model.training as a flag for qat (#69878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69878

But we'll still verify that model.training is True when the user calls the prepare_qat API.
Relaxing this condition might also mean changing the API for methods in fuser_method_mapping
to take an additional flag for QAT (currently we just have different fusions for training/eval). I don't think
this is P0; we can revisit it if there is a need in the future.
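The commit notes that such a flag would be an API change for fuser methods; purely as an illustrative sketch (placeholder classes, not the real intrinsic modules or the actual fuser_method_mapping API), an is_qat-aware fuser method could look like:

```python
# Illustrative sketch only: placeholder classes standing in for the
# real fused modules; not the actual fuser_method_mapping API.
class LinearReLU:
    """Stand-in for an eval-mode fused Linear+ReLU module."""

class QATLinearReLU:
    """Stand-in for a QAT fused Linear+ReLU module."""

def fuse_linear_relu(is_qat, linear, relu):
    # Dispatch on an explicit is_qat flag rather than model.training,
    # mirroring the flag-based approach discussed above.
    return QATLinearReLU() if is_qat else LinearReLU()
```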

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33080988

fbshipit-source-id: b13715b91f10454948199323c5d81ef88bb3517f
2021-12-18 00:00:46 -08:00
Jerry Zhang
05946051f8 [quant][graphmode] initial support for fusion pattern in backend_config_dict (#69335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69335

This PR added support for configuring fusion with:
"pattern", "fuser_method"

This currently only works for a simple sequence of two-op patterns; we will extend it in future PRs.
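As a rough sketch of such a configuration (the key names follow the description above, but the classes and fuser function are placeholders and the exact schema may differ from the real backend_config_dict):

```python
# Placeholder module types standing in for torch.nn.Linear / torch.nn.ReLU.
class Linear:
    pass

class ReLU:
    pass

class LinearReLU:
    """Placeholder fused module produced by the fuser method."""
    def __init__(self, linear, relu):
        self.linear, self.relu = linear, relu

def fuse_linear_relu(linear, relu):
    # "fuser_method": receives the matched modules, returns the fused one.
    return LinearReLU(linear, relu)

# A single two-op fusion entry: a "pattern" plus its "fuser_method".
backend_config_dict = {
    "configs": [
        {
            "pattern": (ReLU, Linear),  # Linear followed by ReLU
            "fuser_method": fuse_linear_relu,
        },
    ],
}
```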

Test Plan:
regression test on linear-relu fusion:
```
python test/fx2trt/test_quant_trt.py TestQuantizeFxTRTOps
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32816164

fbshipit-source-id: f300b7b96b36908cb94a50a8a17e0e15032509eb
2021-12-07 16:54:42 -08:00
Jerry Zhang
adc21f1966 [quant] Fix docs build (#67169)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67169

Looks like the doc error only appears after it's landed

Test Plan: Imported from OSS

Reviewed By: seemethere

Differential Revision: D31890431

fbshipit-source-id: d40cba082712c4b35704ea15d82fbc4749f85aec
2021-10-25 08:02:26 -07:00
Jerry Zhang
364c4959c3 [quant] Fix docs error in convert_fx (#67152)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67152

Test Plan:
```
cd docs
make html
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D31884570

fbshipit-source-id: 2b521f617c93f6fa08da3387df2d25497293eee6
2021-10-24 19:26:45 -07:00
Jerry Zhang
313939c9c6 [quant] Fix lint errors (#67138)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67138

Test Plan:
ossci

Imported from OSS

Reviewed By: supriyar

Differential Revision: D31879558

fbshipit-source-id: 271905d3d254c906aa78bae9f2bd411f9d57e1e8
2021-10-23 11:26:25 -07:00
Jerry Zhang
2d81d5ab0a [quant][graphmode][fx] Remove fbgemm_backend_config_dict for now (#67066)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67066

We'll add it back later when the API is ready

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31849079

fbshipit-source-id: 0c00d08510166b2d897cf1562c7276527319b05c
2021-10-22 21:57:56 -07:00
Supriya Rao
8460fa5707 [quant][fx] Add an option in convert_fx to accept qconfig_dict to skip quantization (#66878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66878

Currently convert_fx quantizes all layers that have been prepared, depending on the prepare qconfig_dict.
This PR adds support for accepting a variation of qconfig_dict in convert_fx that can be used to skip quantizing certain layers.

This makes it possible to prepare/observe all operators and then quantize only a subset of them (based on quantization error), avoiding multiple prepare passes.

The qconfig_dict passed to convert_fx can only have its values set to `None`, with the keys being the same as what is allowed in the prepare qconfig_dict.
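A minimal sketch of what such a convert-time qconfig_dict and its `None`-only constraint could look like (the layer and type names are hypothetical, and the toy checker below is not the real validation code):

```python
# Hypothetical convert-time qconfig_dict: keys mirror the prepare-time
# schema, but every qconfig value must be None ("do not quantize").
convert_qconfig_dict = {
    "module_name": [
        ("head.linear", None),   # leave this specific layer in fp32
    ],
    "object_type": [
        ("nn.LSTM", None),       # skip quantizing every module of this type
    ],
}

def only_none_qconfigs(qconfig_dict):
    # Toy check of the constraint described above: values may only be None.
    return all(
        qconfig is None
        for entries in qconfig_dict.values()
        for _, qconfig in entries
    )
```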

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_convert_qconfig_dict

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D31808247

fbshipit-source-id: a4f5dca1090f0083fc3fea14aff56924033eb24f
2021-10-22 21:18:15 -07:00
Jerry Zhang
8ea985f240 [quant][fx][graphmode] Rename files and functions for convert and add do_not_use suffix (#66955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66955

The new convert function is not meant to be used by users; it's a temporary function that
we are using to build up the new convert path. We will bring it to feature parity with the old path
and deprecate the old path after that.

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D31810488

fbshipit-source-id: 2f65a110506683123350e619c48df090a15570fc
2021-10-21 22:17:28 -07:00
Jerry Zhang
a89851a0d9 [quant][fx][graphmode] Adding a new convert function that produces reference pattern by default (#66925)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66925

The current convert_fx implementation uses "The Interpreter Pattern" from https://pytorch.org/docs/stable/fx.html
Two things have changed that make the approach in this PR possible and needed:
1) The original convert implementation was developed for the initial prototype, when FX did not allow mutations; FX now
supports mutations.
2) The original convert needs to handle a lot of fbgemm/qnnpack-specific logic, which is not needed for reference patterns.

Therefore it makes sense for us to write a new convert function just for reference patterns; the implementation
is significantly easier to understand than the original convert implementation.

Current support:
* we should be able to support all non-weighted ops like relu, add etc.

Missing:
* linear and conv
* some advanced features like standalone modules, input_quantized_idxs etc.

We will add linear and conv support and start defining the backend_config_dict based on this version of convert.

Test Plan:
python test/test_quantization.py TestQuantizeFxOpsNew

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31786241

fbshipit-source-id: 2a32156eb6d3c5271cb44906cd863055785fb5d4
2021-10-20 18:54:30 -07:00
Eshika Shah
17f07c310b Fix type checking errors in torch/ao/quantization/quantize_fx.py (#66804)
Summary:
- [x] Fix the Pyre type checking errors in `torch/ao/quantization/quantize_fx.py`
```
torch/quantization/quantize_fx.py:41:8 Incompatible variable type [9]: fuse_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:143:16 Incompatible variable type [9]: prepare_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:144:16 Incompatible variable type [9]: equalization_qconfig_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:206:8 Incompatible variable type [9]: prepare_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:230:12 Incompatible variable type [9]: fuse_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:268:8 Incompatible variable type [9]: prepare_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:269:8 Incompatible variable type [9]: equalization_qconfig_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:427:8 Incompatible variable type [9]: prepare_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:464:8 Incompatible variable type [9]: convert_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:486:8 Incompatible variable type [9]: convert_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
torch/quantization/quantize_fx.py:547:8 Incompatible variable type [9]: convert_custom_config_dict is declared to have type `Dict[str, typing.Any]` but is used as type `None`.
```
Fixes the issue: [MLH-Fellowship/pyre-check/issues/76](https://github.com/MLH-Fellowship/pyre-check/issues/76)
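The usual fix for this class of Pyre error (sketched here with a generic function, not the actual quantize_fx signatures) is to declare the parameter as `Optional` and normalize the `None` default inside the body:

```python
from typing import Any, Dict, Optional

# Before: `config: Dict[str, Any] = None` -- the declared type does not
# admit the None default, which is what Pyre error [9] flags.
# After: make the Optional explicit, then fill in the real default.
def prepare(config: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    if config is None:
        config = {}
    return config
```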

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66804

Reviewed By: onionymous

Differential Revision: D31738171

Pulled By: 0xedward

fbshipit-source-id: 00d4c5749c469aff39a1531365461ced747e52fc
2021-10-19 09:45:18 -07:00
Vasiliy Kuznetsov
8b1258698e Improve quantization API docs (#66379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66379

Description:

Creates a quantization API reference and fixes all the docblock errors.

This is #66122 to #66210 squashed together

Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```

Reviewed By: ejguan

Differential Revision: D31543172

Pulled By: vkuzo

fbshipit-source-id: 9131363d6528337e9f100759654d3f34f02142a9
2021-10-11 18:46:11 -07:00
Mike Ruberry
9971113340 Revert D31447612: Create a documentation page for FX graph mode quantization APIs
Test Plan: revert-hammer

Differential Revision:
D31447612 (a89ac3138e)

Original commit changeset: 07d0a6137f15

fbshipit-source-id: f2cba7d835011500580b4ab9cff72171280ee18b
2021-10-10 01:51:13 -07:00
Vasiliy Kuznetsov
a89ac3138e Create a documentation page for FX graph mode quantization APIs (#66122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66122

Description:

Adds a documentation page for FX graph mode quantization APIs which
reads from the docstrings in `quantize_fx`, and links it from the main
quantization documentation page.

Also, updates the docstrings in `quantize_fx` to render well with reStructuredText.

Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```

Reviewed By: dagitses

Differential Revision: D31447612

Pulled By: vkuzo

fbshipit-source-id: 07d0a6137f1537af82dce0a729f9617efaa714a0
2021-10-09 06:44:38 -07:00
Zafar
0d020effab [quant] Fix the parts that were missing after initial migration (#66058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66058

After the initial migration from `torch.quantization` to `torch.ao.quantization`, some of the files did not change.
This happened because the migration was done in parallel, and some of the files were landed while the others were still in the original location.
This is the last fix in the AO migration phase 1, which completely enables the ao.quantization namespace.

Test Plan: `python test/test_quantization.py`

Reviewed By: vkuzo

Differential Revision: D31366066

Pulled By: z-a-f

fbshipit-source-id: bf4a74885be89d098df2d87e685795a2a64026c5
2021-10-05 11:45:37 -07:00
Jerry Zhang
508845f2b5 [quant] AO migration of the torch/quantization/quantize_fx.py and torch/quantization/fx/* (#65033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65033

1. Move the file:
```
hg mv caffe2/torch/quantization/fx caffe2/torch/ao/quantization/fx
hg mv caffe2/torch/quantization/quantize_fx.py caffe2/torch/ao/quantization/quantize_fx.py
```
2. Create new files
```
touch caffe2/torch/quantization/quantize_fx.py
touch caffe2/torch/quantization/fx/__init__.py
```
3. import things in the new files
4. add tests to test/quantization/ao_migration/test_quantization_fx.py;
this is needed because we have some FX imports in quantize_fx and fx/*.py

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: vkuzo, z-a-f

Differential Revision: D30949749

fbshipit-source-id: 9e5d4d039c8a0a0820bc9040e224f0d2c26886d3
2021-09-22 09:29:15 -07:00