Commit Graph

5 Commits

Author SHA1 Message Date
Jerry Zhang
214a6500e3 [quant][docs] Additional fixes for quantize_fx docs (#84587)
Summary:
Adds further clarification for the arguments, including links to the object docs (QConfigMapping, BackendConfig) and types in the docs.
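
As a hedged sketch of the kind of change (the signature below is simplified and the cross-reference paths are illustrative, not the actual diff), an argument description gains a type and a link to the object docs:

```
def prepare_fx(model, qconfig_mapping, example_inputs, backend_config=None):
    """Prepare a model for quantization (signature simplified for illustration).

    Args:
        qconfig_mapping (:class:`~torch.ao.quantization.qconfig_mapping.QConfigMapping`):
            mapping from ops/module names to their quantization configs
        backend_config (:class:`~torch.ao.quantization.backend_config.BackendConfig`):
            config describing how operators are quantized on a target backend
    """
```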

Test Plan:
```
cd docs
make html
```
and visual inspection of the generated docs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84587
Approved by: https://github.com/vkuzo
2022-09-09 15:23:23 +00:00
Sergii Dymchenko
591222f5d9 Fix use-dict-literal lint (#83718)
Fix use-dict-literal pylint suggestions by changing `dict()` calls to `{}` literals. This PR makes the change in every Python file except test/jit/test_list_dict.py, where the intent appears to be to test the dict constructor itself.
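
As a minimal illustration of the change (example code is mine, not from the diff):

```
# Before: building a dict by calling the constructor
options = dict(input_dtype="quint8", output_dtype="quint8")

# After: the equivalent dict literal, which pylint's use-dict-literal check suggests;
# it avoids a global name lookup and a function call
options = {"input_dtype": "quint8", "output_dtype": "quint8"}
```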
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83718
Approved by: https://github.com/albanD
2022-08-24 00:26:46 +00:00
Andrew Or
782f3489c6 [Quant][fx][bc-breaking] Integrate BackendConfig with quantization flow (part 2) (#82557)
This is part 2 of the effort to replace `backend_config_dict` with
a Python config object, a more formal and robust API that leads to
better user experience. This commit integrates the `BackendConfig`
implemented in part 1 (https://github.com/pytorch/pytorch/pull/81469)
with the existing FX graph mode quantization flow.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

BC-breaking Notes:

Before:
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.backend_config import ObservationType
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

dtype_config = {
    "input_dtype": torch.quint8,
    "output_dtype": torch.quint8
    "weight_dtype": torch.qint8,
    "bias_dtype": torch.float,
}

backend_config_dict = {
    "name": "my_backend",
    "configs": [{
        "pattern": torch.nn.Linear,
        "observation_type": ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
        "dtype_configs": [dtype_config],
        "root_module": torch.nn.Linear,
        "reference_quantized_module": torch.nn.quantized._reference.Linear,
        "qat_module": torch.nn.qat.Linear,
    }]
}

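# MyModel stands in for a user-defined torch.nn.Module (definition omitted here)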
m = MyModel()
qconfig_mapping = get_default_qconfig_mapping()
example_inputs = (torch.rand(3, 3),)
m = prepare_fx(
    m, qconfig_mapping, example_inputs,
    backend_config_dict=backend_config_dict)
m = convert_fx(m, backend_config_dict=backend_config_dict)
```

After:
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.backend_config import (
    BackendConfig,
    BackendPatternConfig,
    DTypeConfig,
    ObservationType,
)
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

dtype_config = DTypeConfig(
    input_dtype=torch.quint8,
    output_dtype=torch.quint8,
    weight_dtype=torch.qint8,
    bias_dtype=torch.float,
)

backend_config = BackendConfig("my_backend").set_backend_pattern_config(
    BackendPatternConfig(torch.nn.Linear)
        .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
        .add_dtype_config(dtype_config)
        .set_root_module(torch.nn.Linear)
        .set_reference_quantized_module(torch.nn.quantized._reference.Linear)
        .set_qat_module(torch.nn.qat.Linear))

m = MyModel()
qconfig_mapping = get_default_qconfig_mapping()
example_inputs = (torch.rand(3, 3),)
m = prepare_fx(m, qconfig_mapping, example_inputs, backend_config=backend_config)
m = convert_fx(m, backend_config=backend_config)
```

Reviewers: jerryzh168

Subscribers: jerryzh168, supriyar

Differential Revision: [D38471932](https://our.internmc.facebook.com/intern/diff/D38471932)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82557
Approved by: https://github.com/jerryzh168
2022-08-08 18:55:50 +00:00
Vasiliy Kuznetsov
c15fca1137 quant doc: improve rendered documentation for backend_config_dict
Summary:

This improves the documentation page for backend_config_dict to render
the configurations in a human-readable format, such as

```
{
  'pattern': torch.nn.modules.pooling.AdaptiveAvgPool1d,
  'dtype_configs': [
    {
      'input_dtype': torch.quint8,
      'output_dtype': torch.quint8,
    },
    {
      'input_dtype': torch.float16,
      'weight_dtype': torch.float16,
      'bias_dtype': torch.float16,
      'output_dtype': torch.float16,
    },
  ],
  'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
},
```

The results are also now sorted alphabetically by the normalized name of
the root op in the pattern.

A couple of utility functions are created to help with this. If in the future
we convert backend_config_dict to use typed objects, we can move this logic
to the objects at that time.
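
As a rough sketch of the idea (the helper names below are hypothetical, not the PR's actual utilities):

```
import torch
from torch.ao.quantization.backend_config import ObservationType

def _render_value(v):
    # Classes render as qualified names; torch dtypes and enums already str() nicely
    if isinstance(v, type):
        return f"{v.__module__}.{v.__qualname__}"
    return str(v)

def render_entry(entry, indent=0):
    # Recursively pretty-print one backend_config_dict entry
    pad = "  " * indent
    lines = [pad + "{"]
    for key, value in entry.items():
        if isinstance(value, list):
            lines.append(f"{pad}  '{key}': [")
            for item in value:
                lines.append(render_entry(item, indent + 2) + ",")
            lines.append(f"{pad}  ],")
        else:
            lines.append(f"{pad}  '{key}': {_render_value(value)},")
    lines.append(pad + "}")
    return "\n".join(lines)

def root_op_name(pattern):
    # Sort key: the normalized name of the root op in the pattern
    root = pattern[-1] if isinstance(pattern, tuple) else pattern
    return getattr(root, "__name__", str(root)).lower()

configs = [
    {
        "pattern": torch.nn.Linear,
        "observation_type": ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
        "dtype_configs": [{"input_dtype": torch.quint8, "output_dtype": torch.quint8}],
    },
    {
        "pattern": torch.nn.AdaptiveAvgPool1d,
        "observation_type": ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
        "dtype_configs": [{"input_dtype": torch.quint8, "output_dtype": torch.quint8}],
    },
]
for config in sorted(configs, key=lambda c: root_op_name(c["pattern"])):
    print(render_entry(config) + ",")
```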

Test plan:

```
cd docs
make html
cd build
python -m http.server
# renders correctly, example: https://gist.github.com/vkuzo/76adfc7c89e119c59813a733fa2cd56f
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77535

Approved by: https://github.com/andrewor14
2022-05-18 11:46:07 +00:00
Jerry Zhang
74454bdb46 [quant][fx] Move backend_config folder to torch.ao.quantization
Summary:
Following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md, we implemented
the backend configuration for the fbgemm/qnnpack backends. It currently lives under the fx folder, but we'd like to use it for all
workflows, including eager mode, FX graph mode, and define-by-run quantization, so this PR moves it to the torch.ao.quantization
namespace where it can be shared.
This PR also moves some FX-specific utility functions to fx/backend_config_utils.py; some files (quantize_handler.py and fuse_handler.py) remain in the fx folder.
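
For example, after this PR the configs are importable from the shared namespace (the old path in the comment is illustrative):

```
# Before this PR the configs lived under the FX-specific namespace:
# from torch.ao.quantization.fx.backend_config import ObservationType  # old location (illustrative)

# After this PR they live in a shared namespace usable by all workflows:
from torch.ao.quantization.backend_config import ObservationType
```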

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels
python test/test_quantization.py TestAOMigrationQuantization
python test/test_quantization.py TestAOMigrationQuantizationFx

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75823

Approved by: https://github.com/vkuzo
2022-04-19 15:38:57 +00:00