Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70009
Currently we rely on `module.training` to decide whether we'll do a QAT fusion or a PTQ fusion. This is
not ideal, since the training flag has nothing to do with quantization, so this PR introduces an extra flag `is_qat`
to control this.
Note: currently we still have the constraint that when `is_qat` is True the modules must be in training mode; we
can relax this constraint later.
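A minimal sketch of the idea, using simplified helpers rather than the actual fusion internals touched by this PR: the QAT/PTQ decision is driven by an explicit `is_qat` argument, with the training-mode requirement kept as a separate check.
```
import torch.nn as nn
import torch.nn.intrinsic as nni
from torch.nn.utils.fusion import fuse_conv_bn_eval

def fuse_conv_bn(is_qat, conv, bn):
    if is_qat:
        # QAT keeps Conv and BN as a fused intrinsic module so BN statistics
        # keep updating; training mode is still required for now (see note).
        assert conv.training and bn.training, "QAT fusion expects training mode"
        return nni.ConvBn2d(conv, bn)
    # PTQ folds the BN statistics directly into the conv weights and bias.
    return fuse_conv_bn_eval(conv.eval(), bn.eval())
```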
Test Plan:
```
python test/test_quantization.py TestFuseFx
python test/test_quantization.py TestFusion
```
Imported from OSS
Reviewed By: mruberry
Differential Revision: D33178977
fbshipit-source-id: 0c1499c45526971140d9ad58e2994d1edf5ad770
(cherry picked from commit 2d51f9fb28)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70022
Add support for fusing ConvTranspose{1,2,3}d with BatchNorm{1,2,3}d. This reuses the existing fusion logic but adds a "transpose" flag to the fusing function which, when enabled, uses the appropriate reshape for ConvTranspose's transposed weights.
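A hedged sketch of the reshape difference (the function name is illustrative, not the exact code in this PR): BN folding scales each output channel of the weight, and the output-channel axis sits at dim 0 for Conv but at dim 1 for ConvTranspose, so the broadcast shape of the per-channel scale must be transposed.
```
import torch

def fold_bn_scale(weight, bn_scale, transpose=False):
    # bn_scale = gamma / sqrt(running_var + eps), one value per output channel.
    shape = [1] * weight.dim()
    if transpose:
        shape[1] = -1  # ConvTranspose weight: (in_channels, out_channels, *kernel)
    else:
        shape[0] = -1  # Conv weight: (out_channels, in_channels, *kernel)
    return weight * bn_scale.reshape(shape)
```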
Test Plan: `buck test mode/dev //caffe2/test:quantization -- -r quantization.eager.test_fusion.TestFusion`
Reviewed By: jerryzh168
Differential Revision: D33074405
fbshipit-source-id: 5e9eff1a06d8f98d117e7d18e80da8e842e973b7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70006
reland: fixes some mypy errors that were missed before
This PR enables the fuse handler for sequences of three ops and merges all fuse handlers into one.
TODO: we can also move this to the backend_config_dict folder
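For illustration, a three-op pattern in the nested-tuple format used by FX graph mode quantization is written outside-in, so relu(bn(conv(x))) becomes:
```
import torch.nn as nn

# Pattern for relu(bn(conv(x))): the outermost op comes first, and its
# input position holds the nested (bn, conv) sub-pattern.
conv_bn_relu_pattern = (nn.ReLU, (nn.BatchNorm2d, nn.Conv2d))
```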
Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```
Imported from OSS
Reviewed By: supriyar
Differential Revision: D33144606
fbshipit-source-id: ca34f282018a0fb4d04c7e35119eaf2d64258e78
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69658
This PR enables the fuse handler for sequences of three ops and merges all fuse handlers into one.
TODO: we can also move this to the backend_config_dict folder
Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D32974907
fbshipit-source-id: ba205e74b566814145f776257c5f5bb3b24547c1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69335
This PR added support for configuring fusion with:
"pattern", "fuser_method"
This currently only works for simple sequences of 2-op patterns; we will extend this in future PRs.
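A hedged sketch of a fusion entry using the two keys named above; the fuser-method signature and the surrounding config structure are assumptions for illustration, not taken verbatim from this PR.
```
import torch.nn as nn
import torch.nn.intrinsic as nni

def fuse_linear_relu(linear, relu):
    # Custom fuser: wrap the two modules in the intrinsic fused LinearReLU module.
    return nni.LinearReLU(linear, relu)

fusion_config = {
    "pattern": (nn.ReLU, nn.Linear),  # relu(linear(x)), written outside-in
    "fuser_method": fuse_linear_relu,
}
```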
Test Plan:
regression test on linear-relu fusion:
```
python test/fx2trt/test_quant_trt.py TestQuantizeFxTRTOps
```
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D32816164
fbshipit-source-id: f300b7b96b36908cb94a50a8a17e0e15032509eb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68770
The previous fusion only works for a sequence of ops, which is not general enough for fusion patterns
that are defined by a subgraph; this PR refactors it to make it more general.
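To make the distinction concrete, a hypothetical comparison (the wildcard name and the branching syntax are assumptions for illustration, not this PR's code): a purely sequential pattern versus a subgraph pattern with a branch.
```
import torch
import torch.nn as nn

# Sequential pattern: relu(bn(conv(x))), a straight line of ops.
sequence_pattern = (nn.ReLU, (nn.BatchNorm2d, nn.Conv2d))

class MatchAllNode:
    """Wildcard sentinel standing in for 'match any producer' (name assumed)."""

# Subgraph pattern with a branch: add(bn(conv(x)), y). Nested tuples can
# express this because each argument position may itself be a sub-pattern.
branch_pattern = (torch.add, (nn.BatchNorm2d, nn.Conv2d), MatchAllNode)
```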
Test Plan:
```
python test/test_quantization.py TestFuseFx
```
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D32602637
fbshipit-source-id: a7897c62081b9d71c67fb56e78484cf68deaacf6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64919
AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly. This migrates the quantization utilities.
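The user-visible effect is just the import path; a minimal before/after (the old path kept working through a backward-compatibility shim during the migration):
```
# Old location, re-exported via a backward-compatibility shim:
import torch.quantization.utils as quant_utils_old

# New canonical location after this migration:
import torch.ao.quantization.utils as quant_utils
```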
ghstack-source-id: 138303325
Test Plan: `buck test mode/dev //caffe2/test:quantization`
Reviewed By: jerryzh168
Differential Revision: D30899082
fbshipit-source-id: 85eb38c419e417147e71758b682cd095308dd0c9