Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46337
We plan to pass the mappings around explicitly instead of using a global registration API,
keeping the mappings local to the transformations the user is performing.
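As a minimal eager-mode sketch of the idea (the FX-side plumbing in this PR is internal; `torch.quantization.convert`'s `mapping` argument is used here purely for illustration), the mapping is supplied per call instead of living in a global registry:
```
import torch
import torch.nn as nn
import torch.nn.quantized as nnq
from torch.quantization import get_default_qconfig, prepare, convert

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.qconfig = get_default_qconfig('fbgemm')
model.eval()
prepared = prepare(model)
prepared(torch.randn(1, 3, 8, 8))  # calibrate

# The mapping is local to this call rather than globally registered.
custom_mapping = {nn.Conv2d: nnq.Conv2d}
quantized = convert(prepared, mapping=custom_mapping)
```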
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D24317436
fbshipit-source-id: 81569b88f05eeeaa9595447e482a12827aeb961f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46292
since it is not needed
Test Plan: Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D24290815
fbshipit-source-id: 5cc24a305dbdfee5de3419dc83a9c3794d949300
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46095
Adds logging for usage of public quantization APIs. This only works in the FB codebase
and is a no-op in OSS.
Test Plan: The test plan is fb-only
Reviewed By: raghuramank100
Differential Revision: D24220817
fbshipit-source-id: a2cc957b5a077a70c318242f4a245426e48f75e5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44835
This is for feature parity with FX graph mode quantization.
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D23745086
fbshipit-source-id: ae2fc86129f9896d5a9039b73006a4da15821307
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43015
Currently, activation_post_process modules are inserted by default in QAT modules, which is not
friendly to automatic quantization tools; this PR removes them.
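A rough eager-mode sketch of the resulting flow, assuming the standard `prepare_qat` entry point: the activation fake quant is attached by the prepare step rather than being hard-wired into the QAT module definition.
```
import torch.nn as nn
from torch.quantization import get_default_qat_qconfig, prepare_qat

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
model.qconfig = get_default_qat_qconfig('fbgemm')
model.train()
prepare_qat(model, inplace=True)

# The QAT module itself only carries the weight fake quant; the activation
# observer/fake quant is inserted by prepare_qat instead of being created
# by the module by default.
print(model)
```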
Test Plan:
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D23105059
fbshipit-source-id: 3439ac39e718ffb0390468163bcbffd384802b57
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42343
Currently, activation_post_process modules are inserted by default in QAT modules, which is not
friendly to automatic quantization tools; this PR removes them.
Test Plan: Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D22856816
fbshipit-source-id: 988a43bce46a992b38fd0d469929f89e5b046131
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42576
Previously we had a qconfig propagation list and only attached a qconfig to modules
in that list, which works when everything being quantized is a module.
Now that we are expanding quantization to functional/torch ops, we need to attach a qconfig
to all modules.
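A small sketch of what this means in eager mode, using the public `propagate_qconfig_` helper: the qconfig set on the parent is now attached to every submodule rather than only those on an allow list.
```
import torch.nn as nn
from torch.quantization import get_default_qconfig, propagate_qconfig_

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Sigmoid())
model.qconfig = get_default_qconfig('fbgemm')

# After this change every submodule gets a qconfig, so functional/torch
# ops called inside arbitrary modules can also pick it up.
propagate_qconfig_(model)
print([hasattr(m, 'qconfig') for m in model.modules()])
```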
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D22939453
fbshipit-source-id: 7d6a1f73ff9bfe461b3afc75aa266fcc8f7db517
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41930
As title
ghstack-source-id: 108517079
Test Plan: CI
Reviewed By: jerryzh168
Differential Revision: D22698386
fbshipit-source-id: 4f748c9bae4a0b615aa69c7cc8d8e451e5d26863
Summary:
Added logic so that if a prehook is passed into the prepare method during quantization, the hook is registered as a forward pre-hook on all leaf modules (and on the modules specified in non_leaf_module_list).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41863
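A hedged sketch of the behavior; the loop below simply mirrors what passing a prehook to prepare would do, by registering a forward pre-hook on every leaf module:
```
import torch
import torch.nn as nn

def logging_prehook(module, inputs):
    # hypothetical stand-in for a Numeric Suite style logger
    shapes = [tuple(t.shape) for t in inputs if isinstance(t, torch.Tensor)]
    print(type(module).__name__, shapes)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

# What prepare(..., prehook=logging_prehook) would do for leaf modules
# (and for anything listed in non_leaf_module_list).
for m in model.modules():
    if len(list(m.children())) == 0:
        m.register_forward_pre_hook(logging_prehook)

model(torch.randn(1, 3, 8, 8))
```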
Test Plan:
Small demo: made a simple module, then called prepare with the prehook parameter set to the Numeric Suite logger, and printed the results to verify it is what we wanted.
Reviewed By: jerryzh168
Differential Revision: D22671288
Pulled By: edmundw314
fbshipit-source-id: ce65a00830ff03360a82c0a075b3b6d8cbc4362e
Summary:
1. When doing convert(), preserve the module's **pre and post forward** hooks.
2. When doing fusion, preserve only the module's **pre forward** hooks (because after fusion the output is no longer the same); see the sketch below.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37233
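A minimal sketch of what is preserved, using the eager-mode prepare/convert flow (the `_forward_pre_hooks`/`_forward_hooks` dicts are internal and inspected here only for illustration):
```
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig, prepare, convert

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model[0].register_forward_pre_hook(lambda mod, inp: print("pre", type(mod).__name__))
model[0].register_forward_hook(lambda mod, inp, out: print("post", type(mod).__name__))

model.qconfig = get_default_qconfig('fbgemm')
model.eval()
prepared = prepare(model)
prepared(torch.randn(1, 3, 8, 8))  # calibrate
quantized = convert(prepared)

# With this change both user hooks survive the swap to the quantized module.
print(len(quantized[0]._forward_pre_hooks), len(quantized[0]._forward_hooks))
```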
Differential Revision: D22425141
Pulled By: jerryzh168
fbshipit-source-id: e69b81821d507dcd110d2ff3594ba94b9593c8da
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39337
In #39031 we made fake quantize respect device affinity of the
original module. However, that PR only handled modules with parameters
or buffers, and did not work properly for `ReLU`.
This PR fixes the logic to also work for `ReLU` by passing the parent module's
device when adding observers.
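A hypothetical helper illustrating the fix (the names are mine, not the actual implementation): a parameter-less module such as `ReLU` falls back to its parent's device when deciding where to create observers.
```
import torch
import torch.nn as nn

def observer_device(module, parent=None):
    # Use the module's own parameters/buffers if it has any; otherwise fall
    # back to the parent's, so a bare ReLU gets its observer on the same
    # device as its siblings.
    for t in list(module.parameters()) + list(module.buffers()):
        return t.device
    if parent is not None:
        for t in list(parent.parameters()) + list(parent.buffers()):
            return t.device
    return torch.device('cpu')

parent = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
# parent.to('cuda:0')  # on a GPU machine the ReLU would then resolve to cuda:0
print(observer_device(parent[1], parent))
```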
Test Plan:
```
python test/test_quantization.py TestDistributed.test_device_affinity
```
Imported from OSS
Differential Revision: D21821243
fbshipit-source-id: cc6abda3694b80ce8ba0440dc6c1b5b58f3c0066
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39031
Makes the eager mode QAT prepare logic respect device affinity.
This fixes the issue where a module is on `cuda:0` but running
the QAT prepare script would add observers on `cpu`. Now it
will add them on the original device.
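A quick sketch of how to observe the behavior (the interesting case needs a CUDA device):
```
import torch
import torch.nn as nn
from torch.quantization import get_default_qat_qconfig, prepare_qat

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.qconfig = get_default_qat_qconfig('fbgemm')
model.train()
if torch.cuda.is_available():
    model.to('cuda:0')

prepare_qat(model, inplace=True)
# With this fix the inserted observer/fake-quant buffers are created on the
# model's device rather than defaulting to cpu.
for name, buf in model.named_buffers():
    print(name, buf.device)
```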
Test Plan:
```
python test/test_quantization.py TestDistributed.test_device_affinity
```
Imported from OSS
Differential Revision: D21729272
fbshipit-source-id: 5537bf3977ddc23412184941978bf0d1cc6fb479
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38283
Adds support for the quantized `Conv1d` modules, along with tests.
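A small usage sketch, assuming the module is exposed as `torch.nn.quantized.Conv1d`:
```
import torch
import torch.nn.quantized as nnq

qconv = nnq.Conv1d(in_channels=4, out_channels=8, kernel_size=3)
x = torch.randn(2, 4, 16)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = qconv(qx)
print(out.shape)  # expected: torch.Size([2, 8, 14])
```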
Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_conv1d_api
Imported from OSS
Differential Revision: D21553665
fbshipit-source-id: 7ea28da024bdf59f87f300d616c266f2b41f0bcd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31972
Since eager mode quantization requires many user modifications, we can't
consistently quantize a given model just by changing `qconfig_dict`, so
the top-level `qconfig_dict` argument is not that useful.
Fixes: https://github.com/pytorch/pytorch/issues/31549
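In eager mode the per-module qconfig attribute remains the way to control what gets quantized; a minimal sketch:
```
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig, prepare, convert

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
# Instead of a top-level qconfig_dict, attach the qconfig to the model
# (or to individual submodules) directly.
model.qconfig = get_default_qconfig('fbgemm')
model.eval()
prepared = prepare(model)
prepared(torch.randn(1, 3, 8, 8))  # calibrate
quantized = convert(prepared)
```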
Test Plan:
Imported from OSS
Differential Revision: D19330691
fbshipit-source-id: 8aee6e5249e0c14e8a363ac1a83836e88887cd7d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850
Many of these are real problems in the documentation (e.g., a link or
bullet point that doesn't display correctly).
Test Plan: - built and viewed the documentation for each change locally.
Differential Revision: D17908123
Pulled By: zou3519
fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26782
At a minimum, we should be consistent across the top-level APIs and prepare/convert/etc.
The logic now defaults to inplace=False, while the top-level APIs take care of doing fewer copies.
This also renames always-inplace methods like add_observer to end with an underscore.
One fix to MinMaxObserver was triggered by deepcopy surfacing that we were accidentally keeping autograd history around.
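A sketch of the resulting call pattern (current `torch.quantization` names assumed):
```
import torch.nn as nn
from torch.quantization import get_default_qconfig, prepare

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.qconfig = get_default_qconfig('fbgemm')
model.eval()

# The default is now inplace=False: the original model is left untouched
# and an observed copy is returned.
prepared = prepare(model)
assert prepared is not model

# Passing inplace=True keeps the previous mutate-in-place behavior.
prepare(model, inplace=True)
```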
Test Plan: Imported from OSS
Differential Revision: D17595956
Pulled By: dzhulgakov
fbshipit-source-id: 801f9f5536b553f24c7a660064dd6fce685edd65