Commit Graph

26 Commits

Author SHA1 Message Date
WeiChunyu-star
6ac8fe46dd Enable UFMT on all of test/quantization/ao_migration & bc (#123994)
Partially addresses #123062
Ran lintrunner on:
- test/quantization/ao_migration
- test/quantization/bc

Detail:
```
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```

@ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123994
Approved by: https://github.com/ezyang
2024-04-13 06:36:10 +00:00
Vasiliy Kuznetsov
216f88d084 ao migration: remove package test as this behavior is tested by other things (#94422)
Summary:

We have tests testing package level migration correctness for torch AO migration.
After reading the code, I noticed that these tests are not testing anything
additional on top of the function level tests we already have.

An upcoming user warning PR will break this test, and it doesn't seem worth fixing.
As long as the function level tests pass, 100% of user functionality will
be tested.  Removing this in a separate PR to keep PRs small.

Test plan:

```
python test/test_quantization.py -k AOMigration
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94422
Approved by: https://github.com/jcaip
2023-02-13 16:33:40 +00:00
Alex Settle
f8a07ca422 Reland 2nd attempt "Add hierarchical module names to torchFX graph.node" (#91721)
Fixes #87659

Reland of PR #87742 and PR #90205

PR #90205 was reverted due to BC issues

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91721
Approved by: https://github.com/jerryzh168
2023-01-18 23:00:36 +00:00
HDCharles
f286cbebce [ao][fx] fixing public v private graph_module.py (#88395)
Summary: made _is_observed_module, _is_observed_standalone_module
private

Test Plan: python test/test_public_bindings.py

Differential Revision: [D41015545](https://our.internmc.facebook.com/intern/diff/D41015545)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88395
Approved by: https://github.com/jcaip
2022-12-15 02:15:04 +00:00
HDCharles
258860fa3a [ao][fx] fixing public v private for pattern_utils.py (#88397)
Summary: made _DEFAULT_FUSION_PATTERNS,
_register_fusion_pattern,
_DEFAULT_QUANTIZATION_PATTERNS,
_DEFAULT_OUTPUT_FAKE_QUANTIZE_MAP,
_DEFAULT_OUTPUT_OBSERVER_MAP,
_register_quant_pattern,
_sorted_patterns_dict private

Test Plan: python test/test_public_bindings.py

Differential Revision: [D41015537](https://our.internmc.facebook.com/intern/diff/D41015537)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88397
Approved by: https://github.com/jcaip
2022-12-14 03:40:02 +00:00
HDCharles
79156c11c3 [ao][fx] fixing public v private match_utils.py (#88396)
Summary: made _is_match, _find_matches, and _MatchResult private; also added
__all__ to lower_to_qnnpack.py
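A toy version of the kind of check that test_public_bindings.py performs might look like this (module and symbol names here are illustrative only):

```python
# Toy sketch of a public-bindings check: any name not exported via
# __all__ should be underscore-prefixed, i.e. explicitly private.
def find_leaked_names(module_names, exported):
    return [n for n in module_names
            if not n.startswith("_") and n not in exported]

# After this PR the match_utils helpers are underscore-private,
# so nothing leaks past __all__:
names = ["lower_to_qnnpack", "_is_match", "_find_matches", "_MatchResult"]
print(find_leaked_names(names, exported=["lower_to_qnnpack"]))  # []
```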

Test Plan: python test/test_public_bindings.py

Differential Revision: [D41015540](https://our.internmc.facebook.com/intern/diff/D41015540)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88396
Approved by: https://github.com/jcaip
2022-12-13 20:16:55 +00:00
PyTorch MergeBot
1119d2fa54 Revert "Reland "Add hierarchical module names to torchFX graph.node" (#90205)"
This reverts commit 6b7efac3c9.

Reverted https://github.com/pytorch/pytorch/pull/90205 on behalf of https://github.com/seemethere due to Reverting since this caused failures in internal systems, see https://fb.workplace.com/groups/802176577445480/posts/894284641568006 for discussion
2022-12-13 17:47:07 +00:00
Alex Settle
6b7efac3c9 Reland "Add hierarchical module names to torchFX graph.node" (#90205)
Fixes #87659

Reland of PR #87742

Resolves errors that caused the changes to be backed out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90205
Approved by: https://github.com/jerryzh168
2022-12-09 06:20:31 +00:00
andrewor14
13fcc412be [Quant][fx][bc-breaking] Remove unused functions in fx/utils.py (#90025)
Summary and BC-breaking notes: This commit removes the following
unused functions from both the `torch.quantization` and the
`torch.ao.quantization` namespaces:

```
graph_pretty_str
get_per_tensor_qparams
quantize_node
get_qconv_op
create_qparam_nodes
node_return_type_is_int
is_get_tensor_info_node
```

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestAOMigrationQuantizationFx

Reviewers: jerryzh168, vkuzo

Subscribers: jerryzh168, vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90025
Approved by: https://github.com/HDCharles
2022-12-07 01:31:28 +00:00
Jongsoo Park
2bca280a31 Revert D41683102: Multisect successfully blamed D41683102 for test or build failures (#90117)
Summary:
This diff is reverting D41683102
D41683102 has been identified to be causing the following test or build failures:
Tests affected:
- https://www.internalfb.com/intern/test/281475051072735/

Here's the Multisect link:
https://www.internalfb.com/intern/testinfra/multisect/1444960
Here are the tasks that are relevant to this breakage:
T124964606: 41 tests started failing for oncall ads_trainer_release in the last 2 weeks
We're generating a revert to back out the changes in this diff; please note the backout may land if someone accepts it.

Test Plan: NA

Reviewed By: jspark1105

Differential Revision: D41710842

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90117
Approved by: https://github.com/soumith
2022-12-03 19:54:04 +00:00
alexmsettle
b703e4b3c2 Add hierarchical module names to torchFX graph.node #87659 (#87742)
Fixes #87659

Pass down the module hierarchy from module.named_modules() to the name field of graph.node.
This makes it so the name of each node contains descriptive information about the network architecture.
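The naming idea can be sketched as follows (a hypothetical illustration, not the actual torch.fx code): qualified paths from `module.named_modules()` contain dots, which are not valid in flat node identifiers, so they map to underscore-separated, architecture-descriptive node names.

```python
# Hypothetical sketch: map a qualified module path from
# module.named_modules() (e.g. "encoder.layer1.conv") to a flat,
# descriptive FX node name. The real logic lives inside torch.fx.
def node_name_from_module_path(qualified_name: str) -> str:
    # Dots are not valid in node names, so replace them.
    return qualified_name.replace(".", "_")

print(node_name_from_module_path("encoder.layer1.conv"))  # encoder_layer1_conv
```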

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87742
Approved by: https://github.com/jerryzh168
2022-12-02 05:58:06 +00:00
andrewor14
d80056312a [Quant][fx][bc-breaking] Rename fx/*patterns.py (#89872)
Summary: This commit renames fx/quantization_patterns.py
to fx/quantize_handler.py, and fx/fusion_patterns.py to
fx/fuse_handler.py. This is because these files contain
only QuantizeHandler and FuseHandler respectively, so the
new names are more descriptive. A future commit will
further break BC by removing all the empty *QuantizeHandler
classes.

BC-breaking notes:

The following classes under the
`torch.ao.quantization.fx.quantization_patterns` namespace
are migrated to the `torch.ao.quantization.fx.quantize_handler`
namespace:
```
QuantizeHandler
BinaryOpQuantizeHandler
CatQuantizeHandler
ConvReluQuantizeHandler
LinearReLUQuantizeHandler
BatchNormQuantizeHandler
EmbeddingQuantizeHandler
RNNDynamicQuantizeHandler
DefaultNodeQuantizeHandler
FixedQParamsOpQuantizeHandler
CopyNodeQuantizeHandler
GeneralTensorShapeOpQuantizeHandler
CustomModuleQuantizeHandler
StandaloneModuleQuantizeHandler
```

The following classes under the
`torch.ao.quantization.fx.fusion_patterns` namespace are
migrated to the `torch.ao.quantization.fx.fuse_handler`
namespace:
```
DefaultFuseHandler
FuseHandler
```

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Reviewers: jerryzh168, vkuzo

Subscribers: jerryzh168, vkuzo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89872
Approved by: https://github.com/jerryzh168
2022-12-01 17:37:07 +00:00
HDCharles
25476f2e4b [ao] fixing public v private for quantization_types (#86031)
Summary: the main problem with this was that the different objects
defined simply as 'Any' should theoretically be public, but making them
public either (A) results in an error about the module being 'typing'
rather than whatever module it should be, or (B) requires setting the
module manually, thereby changing the module for the original 'Any'
class.

Note: QuantizeHandler has a similar issue, since it is simply defined
as 'Any'.

Pattern was defined in multiple places, which was causing issues, so I
moved it to a single place, given the note at the top of
quantization_types.py indicating these definitions should be moved to
utils at some point anyway.

Finally, I changed all references to these objects to point at the
correct locations. Note: I didn't see any fb internal references to
NodePattern or QuantizerCls that would cause issues.
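The 'typing' problem described above is easy to reproduce (NodePattern here is a hypothetical stand-in for the aliases in this PR):

```python
from typing import Any

# An alias defined simply as 'Any' is the very same object as
# typing.Any, so its reported module is 'typing' rather than the
# module that defines the alias, and reassigning __module__ would
# mutate Any itself for everyone.
NodePattern = Any

print(NodePattern is Any)          # True
print(NodePattern.__module__)      # typing
```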

Test Plan: python test/test_public_bindings.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86031
Approved by: https://github.com/jerryzh168
2022-10-12 20:06:30 +00:00
Vasiliy Kuznetsov
7b4e92acef fx quant: refactor qconfig setting out of find_matches
Summary:

Refactors `find_matches` function to only find subgraph
matches and not assign qconfigs to them. Moves the qconfig assignment
outside of the function. No logic change.

This will be useful for prototyping future tools for quantizing
parts of the model. These tools will need to know the matches
and will reuse the `find_matches` function,
but they will assign their own qconfigs to them using a different
strategy.
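A rough sketch of that separation, with all names and data structures simplified and hypothetical (the real FX matcher is far more involved):

```python
# Illustrative sketch (not the real FX code): the matcher only finds
# (node, pattern) pairs; qconfig assignment is a separate step with a
# caller-supplied strategy.
def find_matches(nodes, patterns):
    # Pure subgraph matching: no qconfig logic in here anymore.
    return [(n, p) for n in nodes for p in patterns if n["op"] == p]

def assign_qconfigs(matches, qconfig_for):
    # Callers plug in their own strategy for mapping matches to qconfigs.
    return {n["name"]: qconfig_for(n, p) for n, p in matches}

nodes = [{"name": "fc1", "op": "linear"}, {"name": "act", "op": "relu"}]
matches = find_matches(nodes, ["linear"])
print(assign_qconfigs(matches, lambda n, p: "default_qconfig"))
```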

Test plan:

```
python test/test_quantization.py -k Fx
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79713

Approved by: https://github.com/jerryzh168
2022-06-17 18:52:00 +00:00
Jerry Zhang
74454bdb46 [quant][fx] Move backend_config folder to torch.ao.quantization
Summary:
Following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md we implemented
the backend configuration for the fbgemm/qnnpack backend. It currently
lives under the fx folder, but we'd like to use it for all the different
workflows, including eager, fx graph, and define-by-run quantization, so
this PR moves it to the torch.ao.quantization namespace where it can be
shared. It also moves some fx-specific utility functions to
fx/backend_config_utils.py; some files are kept in the fx folder
(quantize_handler.py and fuse_handler.py).

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels
python test/test_quantization.py TestAOMigrationQuantization
python test/test_quantization.py TestAOMigrationQuantizationFx

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75823

Approved by: https://github.com/vkuzo
2022-04-19 15:38:57 +00:00
Jerry Zhang
975c9f15bd [quant] Rename _convert_do_not_use.py to convert.py (#74322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74322

As titled; also changed all references to _convert_do_not_use

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestAOMigrationQuantizationFx

Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34936430

fbshipit-source-id: c96fb887847383bf47f0ec4219127e96e2b63b2d
(cherry picked from commit 8ad5a9e031e6ca4ede2656d9b2f7906a82b57c1c)
2022-03-17 18:57:08 +00:00
Jerry Zhang
a6bed4deaa [quant][fx] Remove convert.py since it is not used now (#74276)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74276

Removing convert.py since we have rerouted the traffic to _convert_do_not_use; we'll do a rename in a follow-up PR

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D34914261

fbshipit-source-id: 09ad520d95fa91c525222a69474930efb3571088
(cherry picked from commit 8aeb33206f3572132356fe78395aa3ce6aff11cd)
2022-03-17 18:57:08 +00:00
Charles David Hernandez
c1d070d0f0 [ao] Fixing obs insertion through dtype propagation (#73274)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73274

As noticed in https://discuss.pytorch.org/t/calibration-of-model-in-post-training-static-quantization-using-fx-api/143661/6
and related to https://github.com/pytorch/pytorch/issues/72698: when
using fx quantization, if an op like view was used in a model and the
index parameters were passed to the op via a variable rather than
hard-coded, fx would mistakenly insert observers for them, leading to an
error when the observer tried to do tensor-only operations on a
non-tensor. To fix this, an API was added to specify non-tensor
arguments for various ops, enabling better dtype propagation.
NON_TENSOR_ARG_DICT is a nested dict whose outer keys are named tuples
containing matching parameters for ops with non-tensor args; the inner
dict's keys are dtypes, and its values are lists of the arg indices that
use such dtypes. Alternatively, instead of a list, the inner dict value
can be a function that takes the node as an argument and returns the
list of arg indices.
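The layout described above might be sketched like this (NON_TENSOR_ARG_DICT is the real name from the PR, but all op names, indices, and helper names here are illustrative):

```python
from collections import namedtuple

# Hypothetical key type; the real matching parameters differ.
NonTensorArgPattern = namedtuple("NonTensorArgPattern", ["op"])

def _all_trailing_int_args(node):
    # Inner-dict values may also be a function of the node that
    # computes the arg indices dynamically.
    return list(range(1, len(node["args"])))

# Outer key: a named tuple matching an op; inner keys: dtypes; inner
# values: a list of non-tensor arg indices, or a function returning one.
NON_TENSOR_ARG_DICT = {
    NonTensorArgPattern(op="reshape"): {int: [1, 2]},
    NonTensorArgPattern(op="view"): {int: _all_trailing_int_args},
}

def non_tensor_arg_indices(pattern, dtype, node):
    entry = NON_TENSOR_ARG_DICT.get(pattern, {}).get(dtype, [])
    return entry(node) if callable(entry) else entry

print(non_tensor_arg_indices(NonTensorArgPattern("view"), int,
                             {"args": ["x", 2, 3]}))  # [1, 2]
```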

Theoretically this API can support arbitrary functions, but the current
implementation is limited to simpler ones, given that the particular
issue this fixes seems to be rare.

Note: although torch.unsqueeze and torch.transpose are listed in
quantization_patterns.py, those ops appear to be untraceable by fx.
I've included tests for their cases, but fixing this issue is beyond
the scope of this PR.

Test Plan:
python test/test_quantization.py test_non_reference_size
...
python test/test_quantization.py test_non_reference_<op>

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D34410122

fbshipit-source-id: fc09949ca8a2d6473876a4b6c214eb91e9a9dae2
(cherry picked from commit 3a1375d677b7c98d62b1f5c839645698c39b32b9)
2022-03-16 01:41:17 +00:00
Jerry Zhang
d39ad0543a [quant][fx] Remove Fuser class in fusion implementation (#73470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73470

As titled; this does not affect user APIs, since we only expose fuse_fx as a public API

Test Plan:
python test/test_quantization.py TestFuseFx

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D34495260

fbshipit-source-id: 3aa253bc7190e50acc7229186f210901ebc5481b
(cherry picked from commit a88517ff6feff7abbece2234d82fd53e33702237)
2022-03-01 09:29:21 +00:00
Vasiliy Kuznetsov
b999f87503 fx quant: move _parent_name to common utils (#69720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69720

This function is also useful for DBR quant, moving it from FX utils
to common utils.

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeDBR
```

Reviewed By: jerryzh168

Differential Revision: D33003473

Pulled By: vkuzo

fbshipit-source-id: 20360682c69d614a645c14fc29d3ee023d6b2623
2021-12-17 05:59:46 -08:00
Jerry Zhang
a73c6a45b6 [reland][quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#70006)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70006

reland: fixing some mypy errors that were missed before

This PR enables fuse handler for sequence of three ops, and merges all fuse handlers into one

TODO: we can also move this to backend_config_dict folder

Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33144606

fbshipit-source-id: ca34f282018a0fb4d04c7e35119eaf2d64258e78
2021-12-16 15:04:16 -08:00
Alban Desmaison
6f9844693f Revert D32974907: [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops
Test Plan: revert-hammer

Differential Revision:
D32974907 (bf089840ac)

Original commit changeset: ba205e74b566

Original Phabricator Diff: D32974907 (bf089840ac)

fbshipit-source-id: e47838f3008ba014d884aef53460df654f0cf731
2021-12-15 05:46:49 -08:00
Jerry Zhang
bf089840ac [quant][graphmode][fx] Enable fuse handler for sequence of 3 ops (#69658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69658

This PR enables fuse handler for sequence of three ops, and merges all fuse handlers into one

TODO: we can also move this to backend_config_dict folder

Test Plan:
regression fusion test
```
python test/test_quantization.py TestFuseFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D32974907

fbshipit-source-id: ba205e74b566814145f776257c5f5bb3b24547c1
2021-12-14 19:04:21 -08:00
Jane Xu
6a224b3370 Set test owners for quantization tests (#66832)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

cc jerryzh168 jianyuh raghuramank100 jamesr66a vkuzo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66832

Reviewed By: saketh-are

Differential Revision: D31842880

Pulled By: janeyx99

fbshipit-source-id: 8aee760e4203045c12e7548a21ed5b71c557e3ee
2021-10-21 16:04:41 -07:00
Vasiliy Kuznetsov
d549c8de78 fx quant: enable linear-bn1d fusion for PTQ (#66484)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66484

https://github.com/pytorch/pytorch/pull/50748 added linear-bn1d fusion
in Eager mode, for PTQ only. This PR also enables this in FX graph mode.

We reuse the existing conv-bn-relu fusion handler, renaming `conv` to
`conv_or_linear` for readability.

The QAT version is saved for a future PR, for both eager and FX graph.
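For eval-mode (PTQ) fusion, the batchnorm statistics are folded into the linear weights per output channel: W'ᵢ = Wᵢ · γᵢ/√(σᵢ² + ε) and b'ᵢ = (bᵢ − μᵢ) · γᵢ/√(σᵢ² + ε) + βᵢ. A minimal sketch of that folding in plain Python (illustrative only, not the actual torch implementation):

```python
import math

# Minimal per-channel sketch of eval-mode Linear + BatchNorm1d folding.
def fold_linear_bn(w_rows, bias, gamma, beta, mean, var, eps=1e-5):
    # Each output channel i is rescaled by gamma[i] / sqrt(var[i] + eps).
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    w_fused = [[wij * s for wij in row] for row, s in zip(w_rows, scale)]
    b_fused = [(b - m) * s + bt
               for b, m, s, bt in zip(bias, mean, scale, beta)]
    return w_fused, b_fused
```

With gamma = 2, var = 3, and eps = 1, the channel scale is 2/√4 = 1, so the weights pass through unchanged and only the bias shifts.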

Test Plan:
```
python test/test_quantization.py TestFuseFx.test_fuse_linear_bn_eval
```

Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D31575392

fbshipit-source-id: f69d80ef37c98cbc070099170e335e250bcdf913
2021-10-18 10:14:28 -07:00
Jerry Zhang
508845f2b5 [quant] AO migration of the torch/quantization/quantize_fx.py and torch/quantization/fx/* (#65033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65033

1. Move the file:
```
hg mv caffe2/torch/quantization/fx caffe2/torch/ao/quantization/fx
hg mv caffe2/torch/quantization/quantize_fx.py caffe2/torch/ao/quantization/quantize_fx.py
```
2. Create new files
```
touch caffe2/torch/quantization/quantize_fx.py
touch caffe2/torch/quantization/fx/__init__.py
```
3. import things in the new files
4. add tests to test/quantization/ao_migration/test_quantization_fx.py;
this is because we have some fx imports in quantize_fx and fx/*.py
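The shim pattern in steps 2-3 can be sketched as follows, using stand-in package names (`oldpkg`, `newpkg`) rather than the real torch modules:

```python
import sys
import types

# Sketch of the re-export shim: the file recreated at the old
# location simply re-imports from the new location.
new_mod = types.ModuleType("newpkg.quantize_fx")
new_mod.prepare_fx = lambda model: f"prepared({model})"  # stand-in API
sys.modules["newpkg.quantize_fx"] = new_mod

old_mod = types.ModuleType("oldpkg.quantize_fx")
old_mod.prepare_fx = new_mod.prepare_fx  # re-export for back-compat
sys.modules["oldpkg.quantize_fx"] = old_mod

# Old import paths keep working, so existing user code is unbroken.
from oldpkg.quantize_fx import prepare_fx
print(prepare_fx("model"))  # prepared(model)
```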

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: vkuzo, z-a-f

Differential Revision: D30949749

fbshipit-source-id: 9e5d4d039c8a0a0820bc9040e224f0d2c26886d3
2021-09-22 09:29:15 -07:00