Commit Graph

16 Commits

Author SHA1 Message Date
Sergii Dymchenko
591222f5d9 Fix use-dict-literal lint (#83718)
Fix use-dict-literal pylint suggestions by changing `dict()` to `{}`. This PR makes the change in every Python file except test/jit/test_list_dict.py, where I think the intent is to test the constructor.
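A hedged illustration of the change pattern (the keys and values here are made up):
```
# Before: flagged by pylint's use-dict-literal rule
config = dict(lr=0.1, momentum=0.9)

# After: the literal form, which also avoids a global name lookup and a call
config = {"lr": 0.1, "momentum": 0.9}
```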
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83718
Approved by: https://github.com/albanD
2022-08-24 00:26:46 +00:00
joncrall
4618371da5 Integrate xdoctest - Rebased (#82797)
This is a new version of #15648 based on the latest master branch.

Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed and 201 passed tests. The next commits will disable those tests. (Unfortunately, I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
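For illustration, a minimal sketch of the directive in a docstring (the function is invented; the `# xdoctest: +SKIP` directive syntax is xdoctest's):
```
def add(a, b):
    """
    Example:
        >>> # xdoctest: +SKIP("fails on dashboards; fix in a later PR")
        >>> add(1, 2)
        3
    """
    return a + b
```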

Fixes https://github.com/pytorch/pytorch/issues/71105

@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
2022-08-12 02:08:01 +00:00
Andrew Or
0bdf9a9833 [Quant][fx] Decouple prepare_*fx from training/eval modes (#75401)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75401

This commit removes asserts that require prepare_fx to
be run in eval mode and prepare_qat_fx to be run in training mode.
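A sketch of what this enables, assuming the current `torch.ao.quantization.quantize_fx` API (the `example_inputs` argument postdates this commit):
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
model.train()  # previously prepare_fx asserted the model was in eval mode
prepared = prepare_fx(
    model,
    get_default_qconfig_mapping("fbgemm"),
    example_inputs=(torch.randn(1, 4),),
)
```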

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_prepare_mode

Imported from OSS

Reviewed By: vkuzo, jerryzh168

Differential Revision: D35457100

fbshipit-source-id: 13a55b13d9e389991f69c06c6a70bc51cdebba36
(cherry picked from commit fb0685e0873dc8e807da3213be403b51e8b4a687)
2022-04-08 15:34:08 +00:00
Jerry Zhang
16554bec1b [quant][fx][fix] Fix get_module_type for fusion (#72735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72735

We use `get_matched_types` to get the (type) pattern from matched modules. When querying the fuser_method for a pattern, we need to use `MatchAllNode` itself rather than `type(MatchAllNode)`.
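A minimal sketch of the bug class (the table and fuser method are hypothetical; only the `type()` handling mirrors the fix):
```
import torch.nn as nn

class MatchAllNode:
    """Sentinel type that matches any node in a fusion pattern."""

def fuse_relu_any(m1, m2):  # hypothetical fuser method
    return m1

fuser_method_table = {(nn.ReLU, MatchAllNode): fuse_relu_any}

def get_matched_types(ms):
    # Key by type(m) for real modules, but pass the MatchAllNode sentinel
    # through unchanged: type(MatchAllNode) would never match the key above.
    return tuple(type(m) if m is not MatchAllNode else m for m in ms)

assert fuser_method_table[get_matched_types((nn.ReLU(), MatchAllNode))] is fuse_relu_any
```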

Test Plan:
TODO

Imported from OSS

Reviewed By: raghuramank10000

Differential Revision: D34180705

fbshipit-source-id: db9b6e791a9f26b70079fddc95fce033052199ab
(cherry picked from commit 01d38afabcb1bfc207dee7d49ee13df500d32fdf)
2022-02-25 18:37:31 +00:00
Vasiliy Kuznetsov
e73eaffd3b quant: add QAT fused Linear-Bn1d [1/x]: prepared module (#72431)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72431

Adds support for a fused QAT observed module for `Linear` followed by `BatchNorm1d`. In this PR, only support for the prepared module, with fake_quants in the right places, is added.

A future PR will add support for `convert`, and tests for eager and FX
graph mode workflows.

Similar to conv-bn, we rescale the weight before applying the fake
quant, and undo the rescaling after the linear operation.
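A sketch of the rescaling trick described above (not the actual `torch.nn.intrinsic.qat` module code):
```
import torch

def qat_linear_bn_forward(x, linear, bn, weight_fake_quant):
    # Fold gamma / sqrt(running_var + eps) into the weight before fake quant
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    scaled_weight = weight_fake_quant(linear.weight * scale.reshape(-1, 1))
    out = torch.nn.functional.linear(x, scaled_weight)
    # Undo the rescaling so the real BatchNorm1d sees unscaled statistics
    out = out / scale.reshape(1, -1)
    if linear.bias is not None:
        out = out + linear.bias
    return bn(out)
```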

Test Plan:
```
python test/test_quantization.py TestQuantizeEagerQATNumerics.test_linear_bn
```

Imported from OSS

Reviewed By: jerryzh168, raghuramank10000

Differential Revision: D34044427

fbshipit-source-id: 47a519173939ca4824d2c6e6ea7a599764a8ed10
(cherry picked from commit bfc75fe078)
2022-02-18 13:14:56 +00:00
Jerry Zhang
082ff25f37 [reland][bc-breaking][quant][be] Refactor fuser_method to include is_qat argument (#71956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71956

Pull Request resolved: https://github.com/facebookresearch/mobile-vision/pull/59

Original commit changeset: f3912e210e8c

Original Phabricator Diff: D33178977 (ef501e8fed)

Test Plan:
Please see original diff for test plans

Reviewed By: andrewor14

Differential Revision: D33833203

fbshipit-source-id: 74a8f22730b00aafa6a173b208e635c1d696959e
(cherry picked from commit fb88772b18)
2022-01-31 23:02:22 +00:00
Nikita Shulga
56511f859a Revert D33178977: [bc-breaking][quant][be] Refactor fuser_method to include is_qat argument
Test Plan: revert-hammer

Differential Revision: D33178977 (ef501e8fed)

Original commit changeset: 0c1499c45526

Original Phabricator Diff: D33178977 (ef501e8fed)

fbshipit-source-id: f3912e210e8c588fdbdc9c3c5f4acf2aa8fe6678
(cherry picked from commit cd62183414)
2022-01-27 03:29:40 +00:00
Jerry Zhang
ef501e8fed [bc-breaking][quant][be] Refactor fuser_method to include is_qat argument (#70009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70009

Currently we rely on `module.training` to decide whether we'll do a QAT fusion or a PTQ fusion. This is not ideal, since the training flag has nothing to do with quantization, so this PR introduces an extra flag `is_qat` to control it.

Note: currently we still have the constraint that when `is_qat` is True, the modules must be in training mode; we can relax this constraint later.
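A hedged sketch of the new fuser_method shape (the QAT helper is hypothetical; the PTQ branch uses a real `torch.nn.utils.fusion` helper):
```
import torch

def fuse_linear_bn(is_qat, linear, bn):
    if is_qat:
        # QAT fusion: per the note above, modules must be in training mode
        assert linear.training, "QAT fusion currently requires training mode"
        return make_qat_linear_bn(linear, bn)  # hypothetical QAT helper
    # PTQ fusion: fold the BN statistics into the linear weights
    return torch.nn.utils.fusion.fuse_linear_bn_eval(linear, bn)
```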

Test Plan:
```
python test/test_quantization.py TestFuseFx
python test/test_quantization.py TestFusion
```

Imported from OSS

Reviewed By: mruberry

Differential Revision: D33178977

fbshipit-source-id: 0c1499c45526971140d9ad58e2994d1edf5ad770
(cherry picked from commit 2d51f9fb28)
2022-01-26 23:33:28 +00:00
Jon Morton
123be0e5b7 [fusion] Add ConvTranspose+BN fusion support (#70022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70022

Add support for fusing ConvTranspose{1,2,3}d with BatchNorm{1,2,3}d. This re-uses the existing fusion logic but adds a "transpose" flag to the fusing function which, when enabled, uses the appropriate reshape for ConvTranspose's transposed weights.
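The reshape difference in miniature (illustrative, not the exact diff):
```
import torch

def scale_conv_weight(conv_w, bn_scale, transpose=False):
    # Conv{n}d weight is (out_ch, in_ch, ...): scale broadcasts over dim 0.
    # ConvTranspose{n}d weight is (in_ch, out_ch, ...): broadcast over dim 1.
    shape = [1] * conv_w.dim()
    shape[1 if transpose else 0] = -1
    return conv_w * bn_scale.reshape(shape)
```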

Test Plan: `buck test mode/dev //caffe2/test:quantization -- -r quantization.eager.test_fusion.TestFusion`

Reviewed By: jerryzh168

Differential Revision: D33074405

fbshipit-source-id: 5e9eff1a06d8f98d117e7d18e80da8e842e973b7
2021-12-20 18:42:48 -08:00
Jerry Zhang
a73c6a45b6 [reland][quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#70006)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70006

reland: fixing some mypy errors that were missed before

This PR enables the fuse handler for sequences of three ops and merges all fuse handlers into one.

TODO: we can also move this to backend_config_dict folder
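For context, a minimal sketch of exercising a three-op fusion through the public `fuse_fx` entry point (a usage assumption, not this PR's test code):
```
import torch
from torch.ao.quantization.quantize_fx import fuse_fx

m = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
).eval()
fused = fuse_fx(m)  # Conv2d + BatchNorm2d + ReLU -> ConvReLU2d
```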

Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33144606

fbshipit-source-id: ca34f282018a0fb4d04c7e35119eaf2d64258e78
2021-12-16 15:04:16 -08:00
Alban Desmaison
6f9844693f Revert D32974907: [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops
Test Plan: revert-hammer

Differential Revision: D32974907 (bf089840ac)

Original commit changeset: ba205e74b566

Original Phabricator Diff: D32974907 (bf089840ac)

fbshipit-source-id: e47838f3008ba014d884aef53460df654f0cf731
2021-12-15 05:46:49 -08:00
Jerry Zhang
bf089840ac [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#69658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69658

This PR enables the fuse handler for sequences of three ops and merges all fuse handlers into one.

TODO: we can also move this to backend_config_dict folder

Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32974907

fbshipit-source-id: ba205e74b566814145f776257c5f5bb3b24547c1
2021-12-14 19:04:21 -08:00
Jerry Zhang
05946051f8 [quant][graphmode] initial support for fusion pattern in backend_config_dict (#69335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69335

This PR added support for configuring fusion with "pattern" and "fuser_method".

This currently only works for simple two-op sequence patterns; future PRs will extend it.
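A hedged sketch of such a configuration (key names follow this summary; the top-level layout and the fuser method are assumptions):
```
import torch.nn as nn

def fuse_linear_relu(linear, relu):  # hypothetical fuser method
    return nn.intrinsic.LinearReLU(linear, relu)

backend_config_dict = {
    "configs": [
        {
            # patterns are written in reverse order: relu(linear(x))
            "pattern": (nn.ReLU, nn.Linear),
            "fuser_method": fuse_linear_relu,
        },
    ],
}
```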

Test Plan:
regression test on linear-relu fusion:
```
python test/fx2trt/test_quant_trt.py TestQuantizeFxTRTOps
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32816164

fbshipit-source-id: f300b7b96b36908cb94a50a8a17e0e15032509eb
2021-12-07 16:54:42 -08:00
Jerry Zhang
e5a1ee0e5a [quant][graphmode] Refactor fusion to use the new Pattern format (#68770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68770

Previous fusion only works for a sequence of ops, which is not general enough for fusion patterns that are defined by a subgraph; this PR refactors it to be more general.
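Illustrative examples of the nested Pattern format (reverse order of execution; an argument position may itself be a pattern):
```
import torch.nn as nn

single_op = nn.Linear                              # linear(x)
two_op = (nn.ReLU, nn.Linear)                      # relu(linear(x))
three_op = (nn.ReLU, (nn.BatchNorm2d, nn.Conv2d))  # relu(bn(conv(x)))
```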

Test Plan:
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32602637

fbshipit-source-id: a7897c62081b9d71c67fb56e78484cf68deaacf6
2021-12-07 16:12:40 -08:00
Zafar Takhirov
02dec91212 [quant] AO migration of the torch/quantization/utils.py (phase 1) (#64919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64919

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly. This migrates the quantization utilities.
ghstack-source-id: 138303325
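A sketch of the backward compatibility being preserved (the specific utility named here is an assumption about the file's contents):
```
# Both import paths should resolve to the same object after the migration
from torch.ao.quantization.utils import get_qconfig_dtypes          # new home
from torch.quantization.utils import get_qconfig_dtypes as legacy   # old path
assert get_qconfig_dtypes is legacy
```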

Test Plan: `buck test mode/dev //caffe2/test:quantization`

Reviewed By: jerryzh168

Differential Revision: D30899082

fbshipit-source-id: 85eb38c419e417147e71758b682cd095308dd0c9
2021-09-16 21:30:18 -07:00
Charles David Hernandez
8a094e3270 [quant] AO migration for quantization mappings and fuser method mappings (hg mv) (#64985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64985

Moving quantization_mappings.py and fuser_method_mappings.py to the ao folder while retaining backwards compatibility.

Also added a dict test.

ghstack-source-id: 138215312

Test Plan:
buck test mode/dev //caffe2/test:quantization

https://www.internalfb.com/intern/testinfra/testrun/7036874471986444

buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization

https://www.internalfb.com/intern/testinfra/testrun/5348024625792701

Reviewed By: z-a-f

Differential Revision: D30982551

fbshipit-source-id: 00f53bd44009d6012a7de852000aad6885131edb
2021-09-16 12:59:20 -07:00