Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66484
https://github.com/pytorch/pytorch/pull/50748 added linear - bn1d fusion
in Eager mode, for PTQ only. This PR also enables it in FX graph mode.
We reuse the existing conv-bn-relu fusion handler, renaming `conv` to
`conv_or_linear` for readability.
The QAT version is saved for a future PR, for both eager and FX graph.
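For reference, the fusion folds the BatchNorm1d statistics into the linear weight and bias; a minimal eval-mode sketch of the math (not the actual fusion handler) looks like this:
```
import torch
import torch.nn as nn

def fold_linear_bn_eval(linear: nn.Linear, bn: nn.BatchNorm1d) -> nn.Linear:
    # Eval-mode folding: bn(linear(x)) == fused(x) once BN uses its running stats.
    assert not (linear.training or bn.training), "folding is only valid in eval mode"
    w = linear.weight
    b = linear.bias if linear.bias is not None else torch.zeros(linear.out_features)
    fused = nn.Linear(linear.in_features, linear.out_features)
    with torch.no_grad():
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # per output channel
        fused.weight.copy_(w * scale.reshape(-1, 1))
        fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)
    return fused
```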
Test Plan:
```
python test/test_quantization.py TestFuseFx.test_fuse_linear_bn_eval
```
Imported from OSS
Reviewed By: bdhirsh
Differential Revision: D31575392
fbshipit-source-id: f69d80ef37c98cbc070099170e335e250bcdf913
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65033
1. Move the files:
```
hg mv caffe2/torch/quantization/fx caffe2/torch/ao/quantization/fx
hg mv caffe2/torch/quantization/quantize_fx.py caffe2/torch/ao/quantization/quantize_fx.py
```
2. Create new files
```
touch caffe2/torch/quantization/quantize_fx.py
touch caffe2/torch/quantization/fx/__init__.py
```
3. Import the relevant objects in the new files so the old `torch.quantization` paths keep working (see the sketch after this list)
4. Add tests to test/quantization/ao_migration/test_quantization_fx.py,
since we have FX-specific imports in quantize_fx.py and fx/*.py
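For illustration, a minimal sketch (only a few public names shown, as an assumption about what gets re-exported) of the shim created at the old location in steps 2-3:
```
# torch/quantization/quantize_fx.py -- shim at the old location
# Re-export the public API from the new torch.ao.quantization location so that
# existing `torch.quantization.quantize_fx` imports keep working.
from torch.ao.quantization.quantize_fx import (  # noqa: F401
    fuse_fx,
    prepare_fx,
    prepare_qat_fx,
    convert_fx,
)
```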
Test Plan: buck test mode/dev //caffe2/test:quantization
Reviewed By: vkuzo, z-a-f
Differential Revision: D30949749
fbshipit-source-id: 9e5d4d039c8a0a0820bc9040e224f0d2c26886d3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65109
Previously we set the dtype for sub from the qconfig because it is matched with a QuantizeHandler.
This is incorrect: the dtype for sub is decided by whether its output is quantized or not,
so we added a check of is_output_quantized when deciding the dtype for the output of sub.
Follow-up: is_output_quantized now depends on is_reference, which is confusing and may cause problems down the road; we should remove this dependency in the future.
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_sub_scalar
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30977826
fbshipit-source-id: 551fd63bd61b43b3c3415944ff73174e3a21cc8a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64919
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly. This migrates the quantization utilities.
ghstack-source-id: 138303325
Test Plan: `buck test mode/dev //caffe2/test:quantization`
Reviewed By: jerryzh168
Differential Revision: D30899082
fbshipit-source-id: 85eb38c419e417147e71758b682cd095308dd0c9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64917
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates from torch.quantization to torch.ao.quantization the following files:
- `_correct_bias.py`
- `_equalize.py`
- `_learnable_fake_quantize.py`
**Note:** These files are migrated completely without any deprecation warning. The old location is thus silently deprecated.
Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestBiasCorrection`
Reviewed By: vkuzo
Differential Revision: D30898565
fbshipit-source-id: 1d39be2539dd1adfcb42e16bdcc0daf5c8316bbd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64916
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates quant_type.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization namespace will be deprecated.
Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization`
Reviewed By: vkuzo
Differential Revision: D30898422
fbshipit-source-id: 3e6126b49f0565a4136d6928cea9eb25368927ff
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64913
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates fuse_module.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization namespace will be deprecated.
Test Plan: `buck test mode/dev //caffe2/test:quantization`
Reviewed By: vkuzo
Differential Revision: D30882819
fbshipit-source-id: 1926ad6aa49136aceb5b625dcef4bfde3a2860d4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64981
This would have caused errors when observer.py was moved to ao.
See: D30391189
ghstack-source-id: 138118430
Test Plan:
buck test mode/opt //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_dynamic_quant_multi_uses (quantization.jit.test_quantize_jit.TestQuantizeDynamicJitPasses)'
buck test mode/opt //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_save_load_state_dict_script (quantization.core.test_workflow_module.TestObserver)'
Reviewed By: supriyar
Differential Revision: D30432008
fbshipit-source-id: 754727a89c78f6ceada6f8ff92c304f3953f38fc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64910
This bled through from the original location. Removing it is not just a refactor; it also prevents potential circular imports.
ghstack-source-id: 138112663
Test Plan: `buck test mode/dev //caffe2/test:quantization`
Reviewed By: vkuzo
Differential Revision: D30882924
fbshipit-source-id: 8652a334a5186c635761ea5e50f978d1f1078c12
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64623
The config API will change, but we'll add configs for TensorRT gradually to unblock experimentation.
Test Plan:
python torch/fx/experimental/fx2trt/example/unittests.py
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30800474
fbshipit-source-id: 3c4640de1205a0f19b62943ab84f386d80394ec2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64814
1. move the file
```
hg mv caffe2/torch/quantization/fake_quantize.py caffe2/torch/ao/quantization/
```
2. create a new file in the old location and copy the imports
3. fix all callsites inside `torch`
Test Plan:
```
buck test mode/dev //caffe2/test:quantization
```
Reviewed By: z-a-f
Differential Revision: D30866792
fbshipit-source-id: 7a221cb46c0ab01f1c5de9be061f09ecc83ce23e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64861
1. move the file
```
hg mv caffe2/torch/quantization/stubs.py caffe2/torch/ao/quantization/
```
2. create a new file in the old location and copy the imports
3. fix all call sites inside `torch`
ghstack-source-id: 137885365
Test Plan: buck test mode/dev //caffe2/test:quantization
Reviewed By: jerryzh168
Differential Revision: D30879678
fbshipit-source-id: a2d24f25d01064212aca15e94e8c78240ba48953
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64603
We'll insert an observer only when both the operator and the dtype are supported.
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_sub_scalar
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30797025
fbshipit-source-id: a77c21e2749405534fc245374cf33a0657a3d2c8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64445
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates quantize.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization namespace will be deprecated.
Test Plan: `buck test mode/dev //caffe2/test:quantization`
Reviewed By: HDCharles
Differential Revision: D30734870
fbshipit-source-id: dc204f3cc46bff2cc81c95159eab9d333b43bb4b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64288
This is just to set up the file structure and unblock experimentation.
The format for backend_config_dict will change in the future.
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: zou3519
Differential Revision: D30699457
fbshipit-source-id: 28211a4def05d34757850c045a36e311f54760fe
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64135
We want to start aligning the API with the design in https://github.com/pytorch/pytorch/wiki/Extending-PyTorch-Quantization-to-Custom-Backends
We plan to gradually move things from `prepare_custom_config_dict` and `convert_custom_config_dict`
to `backend_config_dict` and allow custom backend developers to define their own way of quantizing operators.
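Purely as an illustration of the direction (the schema below is hypothetical; the real config API was explicitly still in flux), backend_config_dict is meant to carry per-operator quantization descriptions supplied by the backend:
```
import torch

# Hypothetical sketch only -- none of these keys are the final schema.
backend_config_dict = {
    "name": "example_backend",
    "configs": [
        {
            "pattern": torch.nn.Linear,
            "dtype_configs": [
                {
                    "input_dtype": torch.quint8,
                    "weight_dtype": torch.qint8,
                    "output_dtype": torch.quint8,
                },
            ],
        },
    ],
}
```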
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: zou3519
Differential Revision: D30699456
fbshipit-source-id: e3c068da8d3da2270f57719f7159cc71cafa8598
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64270
Before this PR, layer types were populated by doing
`str(module_instance)` and `str(function)`. This resulted
in moderately readable strings for modules, and poorly readable
strings for functions.
This PR switches the logic to use the `torch.typename` utility instead.
The results are significantly more readable.
Example function type:
```
# before
'<built-in method linear of PyCapsule object at 0x7fe9b20ce7b0>'
# after
'torch._ops.quantized.PyCapsule.linear'
```
Example module type:
```
# before
"<class 'torch.nn.quantized.modules.conv.Conv2d'>"
# after
'torch.nn.quantized.modules.conv.Conv2d'
```
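For reference, the same difference can be reproduced with a stock float module (the quantized module above prints its torch.nn.quantized path in the same way):
```
import torch

m = torch.nn.Conv2d(1, 1, 1)
print(str(type(m)))       # <class 'torch.nn.modules.conv.Conv2d'>
print(torch.typename(m))  # torch.nn.modules.conv.Conv2d
```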
Test Plan:
Manually inspect NS results for modules and functions, verify they are
more readable.
Imported from OSS
Differential Revision: D30669545
Reviewed By: jerryzh168
Pulled By: vkuzo
fbshipit-source-id: 60959e5cafa0a4992b083bf99f5d8260f9acdac0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63828
Added a reference quantized conv module for the custom backend flow. The reference quantized module will
have the following code:
```
w(float) -- quant - dequant \
x(float) ------------- F.conv2d ---
```
In the full model, we will see
```
w(float) -- quant - *dequant \
x -- quant --- *dequant -- *F.conv2d --- *quant - dequant
```
and the backend should be able to fuse the ops with `*` into a quantized conv2d
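For illustration, a minimal sketch of the reference pattern above (per-tensor weight quantization for simplicity; a hypothetical helper, not the actual module code):
```
import torch
import torch.nn.functional as F

def reference_conv2d(x, w_fp32, w_scale, w_zero_point, bias=None):
    # weight path: quant -> dequant, then the ordinary float conv
    w_q = torch.quantize_per_tensor(w_fp32, w_scale, w_zero_point, torch.qint8)
    return F.conv2d(x, w_q.dequantize(), bias)

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
out = reference_conv2d(x, w, w_scale=0.1, w_zero_point=0)
```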
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_conv_linear_reference
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30504749
fbshipit-source-id: e1d8c43a0e0d6d9ea2375b8ca59a9c0f455514fb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64086
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates `quantize.py` from torch.quantization to `torch.ao.quantization`.
At this point both locations will be supported. Eventually the torch.quantization namespace will be deprecated.
Test Plan: `buck test mode/opt //caffe2/test:quantization`
Reviewed By: jerryzh168, raghuramank100
Differential Revision: D30055886
fbshipit-source-id: 8ef7470f9fa640c0042bef5bb843e7a05ecd0b9f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63627
Added a reference quantized linear module for the custom backend flow. The reference quantized module will
have the following code:
```
w(float) -- quant - dequant \
x(float) ------------- F.linear ---
```
In the full model, we will see
```
w(float) -- quant - *dequant \
x -- quant --- *dequant -- *F.linear --- *quant - dequant
```
and the backend should be able to fuse the ops with `*` into a quantized linear
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_conv_linear_reference
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30504750
fbshipit-source-id: 5729921745c2b6a0fb344efc3689f3b170e89500
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63826
Support the conversion of the intrinsic LinearReLU module to the quantized dynamic LinearReLU module.
Verify that the support works for both module and functional linear fusion.
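A rough eager-mode sketch of this flow (assuming default settings and the current torch.ao.quantization namespace; the fused module converts to the quantized dynamic LinearReLU):
```
import torch
import torch.nn as nn
import torch.ao.quantization as tq

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU()).eval()
fused = tq.fuse_modules(model, [["0", "1"]])  # Linear + ReLU -> nn.intrinsic.LinearReLU
quantized = tq.quantize_dynamic(fused, {nn.intrinsic.LinearReLU}, dtype=torch.qint8)
print(quantized)
```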
Test Plan:
python test/test_quantization.py test_dynamic_with_fusion
Imported from OSS
Reviewed By: iramazanli
Differential Revision: D30503513
fbshipit-source-id: 70446797e9670dfef7341cba2047183d6f88b70f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63799
Add a new module that can be used for a module swap with the nni.LinearReLU module in the convert function.
Supports INT8 currently (since the FP16 op doesn't have ReLU fusion yet).
Fixes #55393
Test Plan:
python test/test_quantization.py test_dynamic_fusion
Imported from OSS
Reviewed By: heitorschueroff
Differential Revision: D30502812
fbshipit-source-id: 3668e4f001a0626d469e17ac323acf582ee28a51
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63501
Currently some of the ops are considered to work with both float and quantized input,
so we may produce patterns like "quant - some_op - dequant". This might not work well with the backend;
we may consider changing everything to produce "quant - dequant - some_op - quant - dequant" instead
in the future. This PR fixes it for maxpool and flatten only, to unblock ResNet benchmarking on TensorRT.
Test Plan:
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: mruberry
Differential Revision: D30402788
fbshipit-source-id: 892c5ff6552775070e2c1453f65846590fb12735
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62861
This PR adds a lower_to_native_backend function to lower a quantized reference model
to a model that uses fbgemm/qnnpack ops. We'll gradually add support and remove
the fbgemm/qnnpack specific handling in quantization_patterns.py
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30165828
fbshipit-source-id: de1149cd7e7c1840c17c251cd4d35004afd015b7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62698
We also removed the special handling in match_utils for binary ops
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30093781
fbshipit-source-id: 58cc972de8211a80dd4d111e25dc4ad36057933f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63513
Updating these names per Jerry's nits in the previous PR.
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D30406710
fbshipit-source-id: a9f1577a2b8c4a93f5005e0f6278b7d7348d8b66
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63384
In some cases, changes to the qconfig on a module would cause the
fusions to fail. This bugfix solves that problem by adding a
qconfig_function_comparison that compares the functions within the
qconfig rather than the modules the qconfigs are attached to. The comparison
looks at the partial object within QConfig.activation/weight.p and
compares args, keywords and func. This has to be done manually
because partial doesn't implement __eq__, so == falls back to identity (is).
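A small self-contained illustration of the underlying Python behavior (hypothetical helper name, not the actual comparison function added here):
```
from functools import partial

def partials_equal(p1: partial, p2: partial) -> bool:
    # functools.partial does not define __eq__, so `==` falls back to identity;
    # compare the pieces explicitly instead.
    return p1.func == p2.func and p1.args == p2.args and p1.keywords == p2.keywords

a = partial(int, base=2)
b = partial(int, base=2)
assert a != b               # distinct objects, identity comparison
assert partials_equal(a, b)
```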
Test Plan:
python test/test_quantization.py TestFuseFx.test_problematic_fuse_example
Imported from OSS
Reviewed By: supriyar, ejguan
Differential Revision: D30386264
fbshipit-source-id: 51e358c021c39d6f48dc12ad2a82b2838677b9de
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63376
Previously, when a tuple was an argument to a quantizable op, it would be transformed into a list by mistake;
this PR fixes that.
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_preserve_tuple
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D30357642
fbshipit-source-id: 82d10805d9c00c003cc99983dca68b6455ff7b2e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62608
Insert an extra fixed-qparam fake quant at the output of fixed qparam ops (e.g. sigmoid) in fbgemm,
so that we can produce reference patterns for these ops.
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: iramazanli
Differential Revision: D30053978
fbshipit-source-id: c527944b6e791bb4d45ebe96265af52794203695
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63343
The previous implementation had a bug where we were trying to modify an ordered dict value while iterating through it.
This fixes it by creating a copy before modifying it.
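For illustration, the general bug pattern and the fix (not the actual FX code):
```
d = {"a": 1, "b": 2}

# Buggy: adding entries while iterating over the dict raises
# "RuntimeError: dictionary changed size during iteration".
# for k in d:
#     d[k + "_copy"] = d[k]

# Fix: iterate over a copy of the keys, then mutate freely.
for k in list(d):
    d[k + "_copy"] = d[k]
```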
Test Plan:
python test/test_quantization.py TestQuantizeFx.test_qconfig_qat_module_type
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D30346116
fbshipit-source-id: 0e33dad1163e8bff3fd363bfd04de8f7114d7a3a
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63326
Currently `get_callable_args` has the side effect of mutating the input _PartialWrapper. When that input is one of the global defaults, there are all sorts of lifetime issues that crop up. (Details in the linked issue.) So far as I can tell, we only need to make a constructor which is module (and by extension device) aware, so making a fresh one should have the same effect without leaking the last call's module.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63374
Test Plan: the repro in https://github.com/pytorch/pytorch/issues/63326 now reports no leaked Tensors, and all quantization tests pass locally.
Reviewed By: HDCharles
Differential Revision: D30359360
Pulled By: robieta
fbshipit-source-id: aef33261ac49952d8d90da868a57ab063dfc456e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63107
Moving this function because the functionality would be useful outside of NS (Numeric Suite).
ghstack-source-id: 135727260
Test Plan: buck test //caffe2/test:quantization_fx mode/dev-nosan --keep-going --config client.id=nuclide --show-full-output -- suite
Reviewed By: supriyar
Differential Revision: D30260735
fbshipit-source-id: 58deabdd0f3b03b0ee7ee92be0548a0945084d65
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63043
In version 1 we use the fused module/operator during QAT. Making this the default for all QAT runs going forward.
Older models saved after prepare_qat_fx can still load their state_dict into a model prepared using version 1.
The state_dict will still have the same attribute for the observer/fake_quant modules.
There may be some numerics difference between the old observer code in observer.py and the new fused module that was
re-written in C++/CUDA to perform observe + fake_quantize.
This PR also updates the test to check for the new module instead of the default FakeQuantize module.
Note: there are also some changes to make the operator work for multi-dim per-channel quantization, plus an updated test for that.
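For reference, the fused-module qconfig can also be requested explicitly (assuming the fbgemm backend; version 1 is the fused observer + fake-quant variant):
```
import torch

qat_qconfig = torch.ao.quantization.get_default_qat_qconfig("fbgemm", version=1)
print(qat_qconfig)
```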
Test Plan:
python test/test_quantization.py TestSerialization.test_default_qat_qconfig
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D30232222
fbshipit-source-id: f3553a1926ab7c663bbeed6d574e30a7e90dfb5b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62607
Removing the quantize handler for elu since it can be covered by DefaultNodeQuantizeHandler
Test Plan:
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: iramazanli
Differential Revision: D30053977
fbshipit-source-id: 426789443e928bb01a88907de616cbda5866f621
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62702
Expose the qconfig to the user to speed up training by leveraging the fused module.
The module currently supports per-tensor/per-channel moving avg observer and fake-quantize.
For details on perf benefits, refer to https://github.com/pytorch/pytorch/pull/61691
Test Plan: Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D30093719
fbshipit-source-id: b78deb7810f5b597474b9b9a0395d361d04eb46a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62863
To make this consistent with other observers, add a reduce_range option that can be used to update quant_min/quant_max.
Test Plan:
python test/test_quantization.py test_fused_mod_reduce_range
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D30146602
fbshipit-source-id: a2015f095766f9c884611e9ab6942528bc9bc972
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62488
Instead of attaching a weight observer/fake_quant to the float linear and conv modules, we can
compute the quantization parameters and attach them as a dictionary to these modules so
that we can reduce the model size and make the reference module clearer.
TODO: the numerics for linear and conv in the reference quantized model are still not correct since
we did not quantize the weight; we may explore things like parametrization to implement this support.
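A rough sketch of the idea (the weight_qparams attribute name here is an assumption for illustration):
```
import torch
import torch.nn as nn
from torch.ao.quantization import MinMaxObserver

linear = nn.Linear(4, 4)
obs = MinMaxObserver(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)
obs(linear.weight)                      # observe once to compute qparams
scale, zero_point = obs.calculate_qparams()

# Attach only the computed parameters instead of keeping the observer around.
linear.weight_qparams = {"scale": scale, "zero_point": zero_point, "dtype": torch.qint8}
```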
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D30053979
fbshipit-source-id: b5f8497cf6cf65eec924df2d8fb10a9e154b8cab
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61916
Functions used to run selective equalization based on the SQNR obtained from running the Numeric Suite. After running the Numeric Suite between the equalized and float model, we will get the SQNR between the two models and construct an equalization_qconfig_dict that specifies to only equalize the layers with the highest quantization errors.
How to run:
```
layer_to_sqnr_dict = get_layer_sqnr_dict(float_model, equalized_model, input)
eq_qconfig_dict = get_equalization_qconfig_dict(layer_to_sqnr_dict, equalized_model, num_layers_to_equalize)
prepared = prepare_fx(float_model, qconfig_dict, eq_qconfig_dict)
...
```
Test Plan:
`python test/test_quantization.py TestEqualizeFx.test_selective_equalization`
Imported from OSS
Reviewed By: supriyar
Differential Revision: D29796950
fbshipit-source-id: 91f0f8427d751beaea32d8ffc2f3b8aa8ef7ea95
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62685
Adds a `ref_node_target_type` field to hold the string type
of the base node. This is needed because in some cases
the previous node does not match ref_node (if we have observers,
or if we are logging inputs), and it is useful to know the type
of ref_node.
Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```
Imported from OSS
Reviewed By: hx89
Differential Revision: D30082947
fbshipit-source-id: 98ded7b25a5d8d5ea820e0ef62c3799b65c3fc77
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62366
In the case of models with branches, we are unable to equalize the branching part in the graph.
For example, given this graph:
```
      conv2
   /         \
x -> conv1 -> add
```
After prepare, we will ignore the branched layers (conv1 and conv2) and will not insert the equalization observers. A warning message is also printed listing the layers that could not be equalized.
```
                        conv2 -> out_quant_obs2
                      /                          \
x -> input_quant_obs -> conv1 -> out_quant_obs1 -> add
```
Test Plan:
`python test/test_quantization.py TestEqualizeFx.test_input_weight_equalization_prepare`
Imported from OSS
Reviewed By: malfet, supriyar
Differential Revision: D29982585
fbshipit-source-id: 706297e7f1861975998dfa83e7ca59af09d80618
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62346
Update the operator code to resize the min/max tensors if per-channel quant is selected. We need to do this because by default the observer creates empty tensors for min/max and scale/zero_point values when per-channel quantization is enabled
Test Plan:
python test/test_quantization.py test_fused_mod_per_channel
Imported from OSS
Reviewed By: HDCharles
Differential Revision: D30003835
fbshipit-source-id: b5ec80261cb50ee543f21191a887e979dcde4667