Commit Graph

12 Commits

Author SHA1 Message Date
Jerry Zhang
ef501e8fed [bc-breaking][quant][be] Refactor fuser_method to include is_qat argument (#70009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70009

Currently we rely on module.training to decide whether we'll do QAT fusion or PTQ fusion. This is
not ideal, since the training flag has nothing to do with quantization, so this PR introduces an extra
flag `is_qat` to control it.

Note: we currently still have the constraint that when `is_qat` is True the modules must be in
training mode; we can relax this constraint later.
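
For illustration, a minimal sketch of what a fuser method looks like once it takes the explicit `is_qat` argument, using the conv-bn case (a sketch under assumptions, not the exact code in the fuser method mappings):
```python
import torch.nn.intrinsic as nni
from torch.nn.utils.fusion import fuse_conv_bn_eval

def fuse_conv_bn(is_qat, conv, bn):
    # Branch on the explicit is_qat flag instead of inspecting module.training.
    if is_qat:
        # QAT fusion keeps BN as a trainable submodule; for now the modules
        # must still be in training mode (see the note above).
        assert conv.training, "QAT fusion currently requires training mode"
        return nni.ConvBn2d(conv, bn)
    else:
        # PTQ fusion folds the BN statistics directly into the conv weights.
        return fuse_conv_bn_eval(conv, bn)
```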

Test Plan:
```
python test/test_quantization.py TestFuseFx
python test/test_quantization.py TestFusion
```

Imported from OSS

Reviewed By: mruberry

Differential Revision: D33178977

fbshipit-source-id: 0c1499c45526971140d9ad58e2994d1edf5ad770
(cherry picked from commit 2d51f9fb28)
2022-01-26 23:33:28 +00:00
Jon Morton
123be0e5b7 [fusion] Add ConvTranspose+BN fusion support (#70022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70022

Add support for fusing ConvTranspose{1,2,3}d with BatchNorm{1,2,3}d. This re-uses the existing fusion logic but adds a "transpose" flag to the fusing function which, when enabled, uses the appropriate reshape for ConvTranspose's transposed weights.
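
A sketch of the reshape the flag selects, assuming eval-mode BN folding (the helper name and `transpose` parameter here are illustrative):
```python
import torch

def fuse_conv_bn_weights(conv_w, conv_b, bn_rm, bn_rv, bn_eps, bn_w, bn_b,
                         transpose=False):
    # Per-output-channel scale that folds BN's normalization into the weight.
    bn_scale = bn_w * torch.rsqrt(bn_rv + bn_eps)
    if transpose:
        # ConvTranspose weights are (in_channels, out_channels, *kernel):
        # broadcast the scale over dim 1.
        shape = [1, -1] + [1] * (conv_w.dim() - 2)
    else:
        # Regular conv weights are (out_channels, in_channels, *kernel):
        # broadcast the scale over dim 0.
        shape = [-1, 1] + [1] * (conv_w.dim() - 2)
    fused_w = conv_w * bn_scale.reshape(shape)
    if conv_b is None:
        conv_b = torch.zeros_like(bn_rm)
    fused_b = (conv_b - bn_rm) * bn_scale + bn_b
    return fused_w, fused_b
```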

Test Plan: `buck test mode/dev //caffe2/test:quantization -- -r quantization.eager.test_fusion.TestFusion`

Reviewed By: jerryzh168

Differential Revision: D33074405

fbshipit-source-id: 5e9eff1a06d8f98d117e7d18e80da8e842e973b7
2021-12-20 18:42:48 -08:00
Vasiliy Kuznetsov
b999f87503 fx quant: move _parent_name to common utils (#69720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69720

This function is also useful for DBR quant, so we are moving it from the FX utils
to the common utils.
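
The helper splits a dotted module path into the parent path and the leaf attribute name; roughly:
```python
def _parent_name(target):
    # "features.0.conv" -> ("features.0", "conv");
    # a top-level name like "conv" -> ("", "conv").
    *parent, name = target.rsplit('.', 1)
    return parent[0] if parent else '', name
```
FX fusion uses this to look up a module's parent so a fused module can be swapped in with setattr.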

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeDBR
```

Reviewed By: jerryzh168

Differential Revision: D33003473

Pulled By: vkuzo

fbshipit-source-id: 20360682c69d614a645c14fc29d3ee023d6b2623
2021-12-17 05:59:46 -08:00
Jerry Zhang
a73c6a45b6 [reland][quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#70006)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70006

reland: fixing some mypy errors that were missed before

This PR enables the fuse handler for sequences of three ops and merges all fuse handlers into one

TODO: we can also move this to backend_config_dict folder
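
In the FX pattern convention sequences nest right-to-left (the last op in the sequence is outermost), so a three-op conv - bn - relu sequence looks like this sketch:
```python
import torch.nn as nn

# conv -> bn -> relu as a nested pattern: the outermost element is the last
# op in the sequence; the inner tuple matches what produced its input.
conv_bn_relu_pattern = (nn.ReLU, (nn.BatchNorm2d, nn.Conv2d))
```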

Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33144606

fbshipit-source-id: ca34f282018a0fb4d04c7e35119eaf2d64258e78
2021-12-16 15:04:16 -08:00
Alban Desmaison
6f9844693f Revert D32974907: [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops
Test Plan: revert-hammer

Differential Revision:
D32974907 (bf089840ac)

Original commit changeset: ba205e74b566

Original Phabricator Diff: D32974907 (bf089840ac)

fbshipit-source-id: e47838f3008ba014d884aef53460df654f0cf731
2021-12-15 05:46:49 -08:00
Jerry Zhang
bf089840ac [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#69658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69658

This PR enables the fuse handler for sequences of three ops and merges all fuse handlers into one

TODO: we can also move this to backend_config_dict folder

Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32974907

fbshipit-source-id: ba205e74b566814145f776257c5f5bb3b24547c1
2021-12-14 19:04:21 -08:00
Jerry Zhang
f575179953 [quant][fx][graphmode] Move more patterns to use ModuleReLU fuse handler (#69644)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69644

This PR cleans up the init of ModuleReLUFuseHandler and moves all `module - relu`
fusion patterns to use this handler.

It also temporarily disables the additional_fuser_method argument; we will re-enable it
after we bring back the simple pattern format.
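
For reference, a sketch of the kind of `module - relu` patterns that can now route to the single handler (the exact set below is illustrative, not the full registered list):
```python
import torch.nn as nn

# All of these (ReLU following op) pairs share one ModuleReLU fuse handler.
module_relu_patterns = [
    (nn.ReLU, nn.Conv1d),
    (nn.ReLU, nn.Conv2d),
    (nn.ReLU, nn.Conv3d),
    (nn.ReLU, nn.Linear),
    (nn.ReLU, nn.BatchNorm2d),
]
```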

Test Plan:
```
python test/test_quantize_fx.py TestFuseFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32974906

fbshipit-source-id: 23483ea4293d569cb3cec6dadfefd4d9f30921a7
2021-12-11 22:00:06 -08:00
Jerry Zhang
05946051f8 [quant][graphmode] initial support for fusion pattern in backend_config_dict (#69335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69335

This PR added support for configuring fusion with:
"pattern", "fuser_method"

This currently only works for simple two-op sequential patterns; we will extend it in future PRs
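
A sketch of what such an entry might look like; the key names follow the "pattern" / "fuser_method" scheme described above, but the surrounding structure is an assumption:
```python
import torch.nn as nn
import torch.nn.intrinsic as nni

def fuse_linear_relu(linear, relu):
    # Illustrative fuser method: wrap the pair in the intrinsic fused module.
    return nni.LinearReLU(linear, relu)

backend_config_dict = {
    "configs": [
        {
            # relu following linear, in the nested FX pattern convention
            "pattern": (nn.ReLU, nn.Linear),
            "fuser_method": fuse_linear_relu,
        },
    ],
}
```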

Test Plan:
regression test on linear-relu fusion:
```
python test/fx2trt/test_quant_trt.py TestQuantizeFxTRTOps
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32816164

fbshipit-source-id: f300b7b96b36908cb94a50a8a17e0e15032509eb
2021-12-07 16:54:42 -08:00
Jerry Zhang
e5a1ee0e5a [quant][graphmode] Refactor fusion to use the new Pattern format (#68770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68770

Previous fusion only worked for a sequence of ops, which is not general enough for fusion patterns
defined by a subgraph; this PR refactors it to be more general.
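
For example, with the generalized format a pattern can describe a small subgraph rather than a straight line; a hedged sketch (the wildcard import path is an assumption):
```python
import torch
import torch.nn as nn
from torch.ao.quantization.utils import MatchAllNode

# Matches add(bn(conv(x)), y): the second input of add can be any node.
pattern = (torch.add, (nn.BatchNorm2d, nn.Conv2d), MatchAllNode)
```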

Test Plan:
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32602637

fbshipit-source-id: a7897c62081b9d71c67fb56e78484cf68deaacf6
2021-12-07 16:12:40 -08:00
Jerry Zhang
10411e3561 [quant][fusion] Fix the additional_fuser_method argument for fuse_fx (#67876)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67876

Previously we missed it when calling obj.convert, so this argument would not impact the fusion.
This PR fixes that and adds a test for it
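
A usage sketch, assuming the `fuse_custom_config_dict` key is spelled `additional_fuser_method_mapping`:
```python
import torch.nn as nn
from torch.ao.quantization.quantize_fx import fuse_fx
from torch.nn.utils.fusion import fuse_conv_bn_eval

def fuse_conv_bn_custom(conv, bn):
    # Hypothetical custom fuser that delegates to the eval-mode BN fold.
    return fuse_conv_bn_eval(conv, bn)

model = nn.Sequential(nn.Conv2d(3, 3, 1), nn.BatchNorm2d(3)).eval()
fused = fuse_fx(
    model,
    fuse_custom_config_dict={
        "additional_fuser_method_mapping": {
            (nn.Conv2d, nn.BatchNorm2d): fuse_conv_bn_custom,
        },
    },
)
```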

Test Plan:
python test/test_quantization.py TestFuseFx

Imported from OSS

Reviewed By: malfet

Differential Revision: D32191364

fbshipit-source-id: 566bd39461010d70a21de71f611bb929976fe01d
2021-11-05 14:51:15 -07:00
Vasiliy Kuznetsov
d549c8de78 fx quant: enable linear-bn1d fusion for PTQ (#66484)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66484

https://github.com/pytorch/pytorch/pull/50748 added linear - bn1d fusion
in Eager mode, for PTQ only. This PR also enables this in FX graph mode.

We reuse the existing conv-bn-relu fusion handler, renaming `conv` to
`conv_or_linear` for readability.

The QAT version is saved for a future PR, for both eager and FX graph.
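
A minimal usage sketch (PTQ only, so the model must be in eval mode before fusing):
```python
import torch
import torch.nn as nn
from torch.ao.quantization.quantize_fx import fuse_fx

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.bn = nn.BatchNorm1d(4)

    def forward(self, x):
        return self.bn(self.linear(x))

# BN1d folds into the preceding linear's weight and bias.
m = fuse_fx(M().eval())
```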

Test Plan:
```
python test/test_quantization.py TestFuseFx.test_fuse_linear_bn_eval
```

Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D31575392

fbshipit-source-id: f69d80ef37c98cbc070099170e335e250bcdf913
2021-10-18 10:14:28 -07:00
Jerry Zhang
508845f2b5 [quant] AO migration of the torch/quantization/quantize_fx.py and torch/quantization/fx/* (#65033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65033

1. Move the file:
```
hg mv caffe2/torch/quantization/fx caffe2/torch/ao/quantization/fx
hg mv caffe2/torch/quantization/quantize_fx.py caffe2/torch/ao/quantization/quantize_fx.py
```
2. Create new files
```
touch caffe2/torch/quantization/quantize_fx.py
touch caffe2/torch/quantization/fx/__init__.py
```
3. import things in the new files
4. add tests to test/quantization/ao_migration/test_quantization_fx.py, since we have
some fx imports in quantize_fx and fx/*.py
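
After the move, both import paths resolve to the same code, since the old location re-exports from the new `torch.ao` namespace:
```python
# New canonical location
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
# Old location, kept for backward compatibility via re-exports
from torch.quantization.quantize_fx import prepare_fx, convert_fx
```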

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: vkuzo, z-a-f

Differential Revision: D30949749

fbshipit-source-id: 9e5d4d039c8a0a0820bc9040e224f0d2c26886d3
2021-09-22 09:29:15 -07:00