Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65674
Before this PR, users had to use the eager mode static quantization APIs to quantize Embedding/EmbeddingBag modules.
With this PR they can use either the static or dynamic quantization APIs for Embedding quantization.
The only qconfig supported for embedding quantization is float_qparams_weight_only_qconfig, which is currently enforced in the from_float
method of the quantized Embedding/EmbeddingBag modules.
To combine Embedding quantization with Linear dynamic quantization, users can use the qconfig_dict to specify a different qconfig for each module type.
The prepare/convert APIs can still be used to quantize Embeddings, with the caveat that users need to ensure the inputs to the Embedding ops are FP32.
Addresses Issue #65185
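A minimal sketch of combining the two with the eager-mode dynamic API (the model and variable names below are illustrative, not from this PR):
```
import torch
import torch.nn as nn
from torch.quantization import (
    default_dynamic_qconfig,
    float_qparams_weight_only_qconfig,
    quantize_dynamic,
)

class EmbeddingLinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=12)
        self.fc = nn.Linear(12, 4)

    def forward(self, indices, offsets):
        # the EmbeddingBag sees int64 indices; its FP32 output feeds the Linear
        return self.fc(self.emb(indices, offsets))

float_model = EmbeddingLinear().eval()

# per-module-type qconfigs: Embedding/EmbeddingBag only accept
# float_qparams_weight_only_qconfig, Linear uses the dynamic qconfig
qconfig_spec = {
    nn.EmbeddingBag: float_qparams_weight_only_qconfig,
    nn.Linear: default_dynamic_qconfig,
}
quantized_model = quantize_dynamic(float_model, qconfig_spec=qconfig_spec)
```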
ghstack-source-id: 139935419
Test Plan:
python test/test_quantization.py
Imported from OSS
Reviewed By: gchanan
Differential Revision: D31211199
fbshipit-source-id: 8c747881caee5ccbf8b93c6704b08d132049dea4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64916
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates the quant_type.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually the torch.quantization will be deprecated.
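A small illustration of the transition, assuming `quant_type.py` exposes the `QuantType` enum (both import paths are expected to work while the old location is still supported):
```
# new (preferred) location
from torch.ao.quantization.quant_type import QuantType

# old location keeps working during the transition
from torch.quantization.quant_type import QuantType as OldQuantType

print(QuantType.DYNAMIC, OldQuantType.DYNAMIC)
```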
Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization`
Reviewed By: vkuzo
Differential Revision: D30898422
fbshipit-source-id: 3e6126b49f0565a4136d6928cea9eb25368927ff
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63799
Add a new module that can be used for module swap with the nni.LinearReLU module in the convert function.
Currently supports INT8 only (the FP16 op doesn't have ReLU fusion yet).
Fixes #55393
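A rough sketch of how the fused dynamic module could be exercised; the exact swap target (torch.nn.intrinsic.quantized.dynamic.LinearReLU) and its presence in the dynamic convert mapping are my assumptions, not text from this PR:
```
import torch
import torch.nn as nn
import torch.nn.intrinsic as nni
from torch.quantization import fuse_modules, quantize_dynamic

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.fc(x))

m = M().eval()
# fuse Linear + ReLU into nn.intrinsic.LinearReLU
m = fuse_modules(m, [["fc", "relu"]])
# INT8 dynamic quantization should then swap in the fused dynamic module
# (assumed to live at torch.nn.intrinsic.quantized.dynamic.LinearReLU)
mq = quantize_dynamic(m, {nni.LinearReLU}, dtype=torch.qint8)
print(type(mq.fc))
```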
Test Plan:
python test/test_quantization.py test_dynamic_fusion
Imported from OSS
Reviewed By: heitorschueroff
Differential Revision: D30502812
fbshipit-source-id: 3668e4f001a0626d469e17ac323acf582ee28a51
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62487
checkGraphModeFxOp is our utility test function that quantizes a given model with FX Graph Mode Quantization
and checks whether the resulting model contains the expected ops. Previously it only returned the result of running the
quantized model on the sample data; this PR changes it to return the prepared, quantized, and quantized_reference models
together with the result for the quantized model.
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: iramazanli
Differential Revision: D30053981
fbshipit-source-id: 31fbce48d138261d0b00ba24e1427fd0c6208990
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62277
This PR changes is_reference=True for linear to produce a pattern consisting of dequant - float linear - quant instead of a reference linear module. This is useful for future transformations to custom backends, and it also helps simplify the implementation of
convert in the future.
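A minimal sketch of producing the reference pattern with the FX APIs of this era, assuming convert_fx accepted an is_reference flag at the time (model and qconfig choice are illustrative):
```
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

model = nn.Sequential(nn.Linear(4, 4)).eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}

prepared = prepare_fx(model, qconfig_dict)
prepared(torch.randn(1, 4))  # calibration
reference = convert_fx(prepared, is_reference=True)
# linear now shows up as dequantize -> float linear -> quantize in the graph
print(reference.graph)
```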
Test Plan:
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: ejguan
Differential Revision: D29941079
fbshipit-source-id: 84bdfc0bb872c34fc345875e545c8b323e77c41e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61892
This PR changes is_reference=True for linear to produce a pattern consisting of dequant - float linear - quant instead of a reference linear module. This is useful for future transformations to custom backends, and it also helps simplify the implementation of
convert in the future.
Test Plan:
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D29810657
fbshipit-source-id: 949615bbc017bc454d81c8a6b2bdec53badaab19
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62041
Before this PR, weights of conv and linear modules were extracted
as lists, in order to match the signature of LSTM weights.
After this PR, weight extraction preserves the type of the weights,
so extracted weights of conv and linear have a different type
from LSTM weights. The comparison util functions are updated to
handle the LSTM weight type of `List[tensor]`.
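Not the NS implementation itself, just a sketch of the kind of type dispatch the comparison utilities now need, with hypothetical helper names:
```
from typing import List, Union

import torch

# conv/linear weights are a single Tensor, LSTM weights are a List[Tensor]
ExtractedWeight = Union[torch.Tensor, List[torch.Tensor]]

def compute_sqnr(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return 20 * torch.log10(torch.norm(x) / torch.norm(x - y))

def compare_extracted_weights(w_a: ExtractedWeight, w_b: ExtractedWeight):
    if isinstance(w_a, list):  # LSTM case
        return [compute_sqnr(a, b) for a, b in zip(w_a, w_b)]
    return compute_sqnr(w_a, w_b)
```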
Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D29853626
fbshipit-source-id: 93da5b9b0b174679c61528d02b6b902cb064444e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60476
# Context
Add tests for Lite modules that are quantized using the FX API
Read these posts for details about why we need a test bench for quantized Lite modules:
https://fb.workplace.com/groups/2322282031156145/permalink/4289792691071726/
https://github.com/pytorch/pytorch/pull/60226#discussion_r654615851
moved common code to `caffe2/torch/testing/_internal/common_quantization.py`
ghstack-source-id: 133144292
Test Plan:
```
~/fbsource/fbcode] buck test caffe2/test:fx_quantization_lite
Downloaded 0/2 artifacts, 0.00 bytes, 100.0% cache miss
Building: finished in 8.3 sec (100%) 11892/11892 jobs, 2 updated
Total time: 8.6 sec
More details at https://www.internalfb.com/intern/buck/build/ffb7d517-d85e-4c8f-9531-5e5d9ca1d34c
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: d79a5713-bd29-4bbf-ae76-33a413869a09
Trace available for this run at /tmp/tpx-20210630-105547.675980/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/3096224749578707
✓ ListingSuccess: caffe2/test:fx_quantization_lite - main (9.423)
✓ Pass: caffe2/test:fx_quantization_lite - test_embedding (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (10.630)
✓ Pass: caffe2/test:fx_quantization_lite - test_submodule (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (12.464)
✓ Pass: caffe2/test:fx_quantization_lite - test_conv2d (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (12.728)
Summary
Pass: 3
ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/3096224749578707
```
Reviewed By: iseeyuan
Differential Revision: D29306402
fbshipit-source-id: aa481e0f696b7e9b04b9dcc6516e8a390f7dc1be
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60779
When we do fusion, we replace certain modules (such as Linear + ReLU) with fused versions (such as LinearReLU) by calling `_fuse_fx` in prepare_fx. However when we try to look up using the fused module type in qconfig_dict, we cannot find a match anymore since the qconfig dict contains the original module types. An example is here [N882873](https://fburl.com/anp/azenjx3v).
So we will now update the qconfig_dict to include the fused modules mapping to the qconfigs used for the modules that make up the fused modules. If the modules are not mapped to the same qconfig, then we will raise an error.
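Roughly what the situation looks like with the FX qconfig_dict format of that time (module types under "object_type"); a sketch, not the PR's test:
```
import torch.nn as nn
from torch.quantization import get_default_qconfig

qconfig = get_default_qconfig("fbgemm")

# user-supplied qconfig_dict is keyed on the original module types
qconfig_dict = {
    "object_type": [
        (nn.Linear, qconfig),
        (nn.ReLU, qconfig),
    ]
}

# after _fuse_fx, Linear + ReLU becomes nni.LinearReLU; prepare_fx now also
# derives a qconfig for the fused type from its constituent modules, and
# raises an error if nn.Linear and nn.ReLU were mapped to different qconfigs
```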
Test Plan:
`python test/test_quantization.py TestFuseFx.test_qconfig_fused_module`
Imported from OSS
Reviewed By: supriyar
Differential Revision: D29406941
fbshipit-source-id: 74b5db89f4998aeb02b2bf7c37bf97326580c654
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60378
Created the following unit tests to check that our equalization algorithm behaves as expected:
- Check the equalization scales calculated and stored in the graph are as expected
- Check the scaled weights and biases are as expected
- Check that the min/max values in the quantization observers are as expected
- Check that the graphs with equalization are structured in the same way as graphs without equalization (except that equalized graphs have additional equalization scale and mul nodes) before and after quantization
Test Plan:
`python test/test_quantization.py TestEqualizeFx.test_input_weight_equalization_equalization_scales`
`python test/test_quantization.py TestEqualizeFx.test_input_weight_equalization_weights_bias`
`python test/test_quantization.py TestEqualizeFx.test_input_activation_values`
`python test/test_quantization.py TestEqualizeFx.test_input_weight_equalization_graphs`
Imported from OSS
Reviewed By: supriyar
Differential Revision: D29406942
fbshipit-source-id: 518208546ae5835c1ebb2af217507e90af66fbe4
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45687
The fix changes the input size check for `InstanceNorm*d` to be more restrictive and correctly rejects sizes with only a single spatial element, regardless of batch size, to avoid infinite variance.
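For example, a sketch of the newly rejected case:
```
import torch
import torch.nn as nn

m = nn.InstanceNorm1d(3)
m(torch.randn(2, 3, 5))  # fine: 5 spatial elements per channel
m(torch.randn(2, 3, 1))  # now rejected: a single spatial element gives infinite variance
```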
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56659
Reviewed By: pbelevich
Differential Revision: D27948060
Pulled By: jbschlosser
fbshipit-source-id: 21cfea391a609c0774568b89fd241efea72516bb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54813
Previously we had a cat that takes a list of Tensors with different qparams, dequantizes them,
concatenates them, and requantizes with the output qparams. This adds unnecessary overhead in dequantizing
and quantizing Tensors.
This PR adds an optimization for the cat operator: we make sure the inputs and output of cat
use the same observer/fake_quant and produce a cat that does not do rescaling.
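A toy model exercising the pattern; with this change the inputs and output of `cat` should share a single observer so convert can emit a cat with no rescaling (sketch only, the model is illustrative):
```
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

class CatModel(nn.Module):
    def forward(self, x, y):
        return torch.cat([x, y], dim=1)

m = CatModel().eval()
prepared = prepare_fx(m, {"": get_default_qconfig("fbgemm")})
prepared(torch.randn(1, 2), torch.randn(1, 2))  # calibration
quantized = convert_fx(prepared)
# inputs and output of cat now share qparams, so no dequantize/requantize around cat
print(quantized.graph)
```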
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D27408377
fbshipit-source-id: 6a4bdcfd15e57ea1fe0f7e72d1e1288eb3ece4db
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56194
Enables the NS graph matcher to also match `call_method` nodes.
These are useful for ops such as `sigmoid` when invoked as a tensor method (e.g. `x.sigmoid()`).
Test Plan:
```
python test/test_quantization.py TestFXGraphMatcher.test_methods
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D27805333
fbshipit-source-id: 509ae283db6b245671f11e3eb6b7fcb3a5735ef5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54253
Creates an `NSSubgraph` type for representing a subgraph instance,
and modifies the NS code to use it. This will enable us to add
more information to the subgraph instance definition without
having to change all the callsites.
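A sketch of what such a type could look like; only start_node and end_node are implied by the earlier subgraph definition, anything beyond that is an assumption:
```
from typing import NamedTuple

import torch.fx

class NSSubgraph(NamedTuple):
    # a subgraph instance is identified by its boundary nodes; more fields can be
    # added later without touching every callsite that used a bare tuple
    start_node: torch.fx.Node
    end_node: torch.fx.Node
```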
Test Plan:
```
mypy torch/quantization
python test/test_quantization.py TestFXGraphMatcher
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D27158198
fbshipit-source-id: 548785dd90144e2da256c23af990620c778e7cfe
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53779
Moves the test case for LSTM activation matching to new NS APIs.
This requires adding the ability to log non-Tensor types.
Since we need Loggers to be scriptable and TorchScript does
not support `Union`, we collect statistics in a separate collector
if we have an RNN. Note: this can scale to a small N of
return types, but not to a large N. If the N becomes large in
the future, we will solve it then.
Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```
Imported from OSS
Reviewed By: hx89
Differential Revision: D26967110
fbshipit-source-id: afe60b44fdec28a328813b4f342cf4fe04820baa
Summary:
This PR implements the option to log inputs for FX Numeric Suite. The user facing api looks like
```
def prepare_model_outputs(..., should_log_inputs : bool = False)
def prepare_model_with_stubs(..., should_log_inputs : bool = False)
```
The output data now looks like
```
{
"layer1": {
"node_inputs": {
"model1": [{
"values": ...,
...,
}],
},
"node_outputs": {
...,
}
},
... // other layers
}
```
One key design decision taken here is that an input logger logs the output of previous nodes, instead of logging the input of the current node. This matters for a signature such as `cat([x1, x2, x3])`. We are inserting three input loggers here (for x1, x2, and x3), instead of a single input logger for `[x1, x2, x3]`. This was chosen in order to preserve the structure of the original graph as much as possible and keep flexibility for future optimizations.
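The `cat` case from the design note as a toy model; with `should_log_inputs=True` the intent is one logger per producer output (x1, x2, x3) rather than a single logger on the assembled list (illustration only, not the NS implementation):
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 4)
        self.fc2 = nn.Linear(4, 4)
        self.fc3 = nn.Linear(4, 4)

    def forward(self, x):
        x1, x2, x3 = self.fc1(x), self.fc2(x), self.fc3(x)
        # logging the inputs of this cat means three loggers, one on each of the
        # producer outputs x1, x2, x3, not a single logger on the list [x1, x2, x3]
        return torch.cat([x1, x2, x3], dim=1)
```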
Test Plan:
TODO: fill out
Imported from OSS
Differential Revision: D26931225
Reviewed By: hx89
Pulled By: vkuzo
fbshipit-source-id: dd692bfb5ddaaf5554f80c25e2f40b21762e4fc3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52534
Currently linear_dynamic_fp16 has a signature that's tied to fbgemm/qnnpack.
We'll need to produce a pattern equivalent to linear_dynamic_fp16 to support extensions
to other backends.
Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_linear_dynamic_fp16
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D26557726
fbshipit-source-id: 270c9f781f73c79416a092b7831294cabca84b0c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52779
1. makes the return type of the weight comparison APIs match the return
type of the activation comparison APIs:
```
# before
{layer_name: {model_name: weight_tensor}}
{layer_name: {model_name: [activation_tensor]}}
# after
{layer_name: {model_name: [weight_tensor]}}
{layer_name: {model_name: [activation_tensor]}}
```
2. makes a type alias for the type, so future changes are easier
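A sketch of the kind of alias this refers to, matching the "after" shape above; the alias name is hypothetical:
```
from typing import Dict, List

import torch

# {layer_name: {model_name: [tensor, ...]}} for both weights and activations
NSComparisonResults = Dict[str, Dict[str, List[torch.Tensor]]]
```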
Test Plan:
```
mypy torch/quantization
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```
Imported from OSS
Reviewed By: hx89
Differential Revision: D26652639
fbshipit-source-id: eb1f04d6913cedf88d628f362468875ae9ced928
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52179
Rename debug to reference. We'll use this to produce a reference quantized model
that can be used as a common interface between PyTorch quantized models and backends.
Test Plan:
python test/test_quantization.py TestQuantizeFx
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D26424656
fbshipit-source-id: a0299b023f6ba7d98f5750724c517b0ecb987b35
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52130
We have patterns like (F.linear, F.relu) which need to match
to (toq.linear_relu). So, we need to match subgraphs.
This PR does the following:
* defines a "subgraph" as (start_node, end_node). The current assumption
is that subgraphs are simple, there is always a path from start_node to
end_node, and we can ignore any non-input args/kwargs of these nodes
for the purposes of matching and copying things. An example one node
subgraph is (F.linear, F.linear). An example two node subgraph
is (F.linear, F.relu).
* changes the matching logic to iterate over subgraphs instead of nodes
* changes the NS core APIs to use subgraph pairs instead of node pairs:
1. for weights, we match on the start node
2. for unshadowed activations, we observe the end nodes
3. for shadowed activations, we copy the subgraph of a to graph c
TODO(before review) write up better, not ready for review yet
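To make the subgraph definition concrete, a toy module whose matched subgraph would be (start_node = the F.linear node, end_node = the F.relu node); illustrative only:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(4, 4))
        self.b = nn.Parameter(torch.zeros(4))

    def forward(self, x):
        x = F.linear(x, self.w, self.b)  # start_node of the two node subgraph
        return F.relu(x)                 # end_node of the two node subgraph

# this (F.linear, F.relu) subgraph is what maps to the fused toq.linear_relu op;
# a lone F.linear would be the one node subgraph (F.linear, F.linear)
```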
Test Plan:
TODO before land: better test plan
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D26403092
fbshipit-source-id: e49aaad4b02b8d60589435848bee422b8f41937a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52092
Adds a very simple toy sparsenn model, and enables
its inspection with the new NS APIs.
Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_sparsenn_compare_activations
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_sparsenn_shadow
```
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D26403095
fbshipit-source-id: 3c3650aca47186deb32f2b3f1d87a0716d1ad9d1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52302
Adds the basic functionality for the three Numeric Suite core APIs to work on FX models:
1. comparing weights
2. comparing activations, with same input fed to both models
3. comparing activations, with nodes of A shadowing nodes of B
Note: there are a lot of TODOs in the code, and some/most of the APIs and implementation details may change as we iterate. This is just the first PR.
Test Plan:
We have unit test coverage for all of the APIs, for now this is with toy models:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```
Reviewed By: raghuramank100
Differential Revision: D26463013
Pulled By: vkuzo
fbshipit-source-id: e454115099ad18e4037d3c54986951cdffcab367
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51669
Adds the basic functionality for the three Numeric Suite core APIs to work on FX models:
1. comparing weights
2. comparing activations, with same input fed to both models
3. comparing activations, with nodes of A shadowing nodes of B
Note: there are a lot of TODOs in the code, and some/most of the APIs and implementation details may change as we iterate. This is just the first PR.
Test Plan:
We have unit test coverage for all of the APIs, for now this is with toy models:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```
Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D26403094
fbshipit-source-id: 9752331d4ae0105346d3da309b13c895b593b450
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51588
Early version of utility to match nodes between graph A and graph B, for Numerical Suite for FX graph mode quantization.
The main goal of this utility is to reliably match the nodes of graph A to the nodes of graph B, and throw an easy to read error message. This will be used in future PRs to create the APIs for matching activations. It also could potentially be used to match weights.
Test Plan:
For now, we have bare bones test coverage on some toy models, and a single torchvision model.
```
python test/test_quantization.py TestFXGraphMatcher
```
Future PRs will add more testing.
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D26403093
fbshipit-source-id: 60e318d51e6fefe65265488c4967629d946048ef
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50748
Adds support for Linear + BatchNorm1d fusion to quantization.
This is a redo of dreiss's https://github.com/pytorch/pytorch/pull/37467; it was faster
to copy-paste it than to rebase and deal with conflicts.
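A usage sketch of the new fusion (eval mode only), assuming fuse_modules folds the BatchNorm1d into the Linear and leaves an Identity behind:
```
import torch
import torch.nn as nn
from torch.quantization import fuse_modules

m = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8)).eval()
fused = fuse_modules(m, [["0", "1"]])
# "0" is now a Linear with the BN folded in, "1" is an Identity
x = torch.randn(2, 8)
assert torch.allclose(m(x), fused(x), atol=1e-6)
```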
Test Plan:
```
python test/test_quantization.py TestFusion.test_fusion_linear_bn_eval
```
Imported from OSS
Reviewed By: supriyar
Differential Revision: D25957432
fbshipit-source-id: 24e5b760f70186aa953ef65ab0182770e89495e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49623
(not ready for review)
Ensures that conv bias is not observed in a `F.conv{n}d` call.
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25652856
fbshipit-source-id: 884f87be1948d3e049a557d79bec3c90aec34340
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49428
Previously DeQuantStub would be swapped with nn.quantized.DeQuantize regardless of qconfig.
The reason is that we skipped attaching a qconfig to DeQuantStub to avoid adding a fake quantize module to it,
but the correct fix is to skip it in insert observers. This PR fixes the issue.
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D25569991
fbshipit-source-id: d44a08c6e64c7a49509687dc389b57de1cbb878c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49420
Before: if an output was marked as quantized, it could end up not actually being
quantized if the previous node was not quantized.
After: if an output was marked as quantized, it will be quantized
regardless of the quantization status of the previous node.
Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_quant_output_always_observed
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25566834
fbshipit-source-id: 84755a1605fd3847edd03a7887ab9f635498c05c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48939
Add a numerical test for FX graph mode on the ResNet base model, comparing with eager mode
Test Plan: Imported from OSS
Reviewed By: supriyar
Differential Revision: D25375342
fbshipit-source-id: 08f49b88daede47d44ee2ea96a02999fea246cb2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48069
also renamed float_qparam_dynamic_qconfig to float_qparam_weight_only_qconfig
It's not used in user code yet so we only need to update the tests.
Test Plan: Imported from OSS
Reviewed By: supriyar
Differential Revision: D25010175
fbshipit-source-id: caa3eaa5358a8bc5c808bf5f64e6ebff3e0b61e8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48038
nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.
This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.
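A quick sketch showing the same module handling both dtypes:
```
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.randn(2, 3)
relu(x)  # float path

xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
relu(xq)  # quantized path: the same nn.ReLU works, no nn.quantized.ReLU needed
```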
Test Plan:
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D25000462
fbshipit-source-id: e3609a3ae4a3476a42f61276619033054194a0d2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415
nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.
This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D24747035
fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769