Commit Graph

34 Commits

Author SHA1 Message Date
Maggie Moss
5f18f240de Add initial suppressions for pyrefly (#164177)
Adds suppressions so that pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283

Test plan:
`python3 scripts/lintrunner.py`
`pyrefly check`

---

Pyrefly check before: https://gist.github.com/maggiemoss/3a0aa0b6cdda0e449cd5743d5fce2c60
After:

```
 INFO Checking project configured at `/Users/maggiemoss/python_projects/pytorch/pyrefly.toml`
 INFO 0 errors (1,063 ignored)
```
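For context, a suppression here is just an inline comment on the offending line. A minimal illustration is below; the exact `pyrefly: ignore` comment syntax is assumed rather than copied from the diff.

```
# Hypothetical example of an inline suppression; real suppressions in the PR
# target specific type errors reported by `pyrefly check`.
legacy_count: int = "0"  # pyrefly: ignore
```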

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164177
Approved by: https://github.com/Lucaskabela
2025-10-02 20:57:41 +00:00
Xuehai Pan
5cedc5a0ff [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144552
Approved by: https://github.com/ezyang
2025-08-07 00:09:56 +00:00
Edward Z. Yang
3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
HDCharles
6fe4ccc7cb [ao] qconfig.py fix public v private (#87515)
Summary: made is_reuse_input_qconfig, _activation_is_memoryless,
_partial_wrapper_equals, _obs_or_fq_ctr_equals,
_add_module_to_qconfig_obs_ctr, and _assert_valid_qconfig private.
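A minimal sketch of the convention being applied, assuming nothing beyond standard Python visibility rules: a leading underscore plus omission from `__all__` is what checks like test/test_public_bindings.py treat as private.

```
# Illustrative module, not the torch diff.
__all__ = ["get_default_qconfig"]  # only the public surface is exported

def get_default_qconfig(backend="fbgemm"):
    return f"default qconfig for {backend}"

def _assert_valid_qconfig(qconfig):
    # private helper: underscore-prefixed and absent from __all__
    assert qconfig is not None
```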

Test Plan: python test/test_public_bindings.py

Differential Revision: [D40709280](https://our.internmc.facebook.com/intern/diff/D40709280)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87515
Approved by: https://github.com/jcaip
2022-11-09 22:30:03 +00:00
Charles David Hernandez
f309f8fbd4 [quant] ao migration of observer and qconfig (#64982)
Summary:
(Had to recreate this diff so it wasn't dependent on the stack)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64982

migration of qconfig.py and observer.py to torch/ao/quantization using new test format
ghstack-source-id: 138215256

Test Plan:
buck test mode/opt //caffe2/test:quantization

https://www.internalfb.com/intern/testinfra/testconsole/testrun/8444249354294701/

buck test mode/dev //caffe2/test:quantization -- TestAOMigrationQuantization

https://www.internalfb.com/intern/testinfra/testrun/3940649742829796

Reviewed By: z-a-f

Differential Revision: D30982534

fbshipit-source-id: 48d08969b1984311ceb036eac0877c811cd6add9
2021-09-16 10:33:16 -07:00
Charles David Hernandez
37bcefa248 [quant] Removing hardcoded "torch.quantization.observer" for migration (#64981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64981

This would have caused errors when observer.py was moved to ao.

see: D30391189
ghstack-source-id: 138118430

Test Plan:
buck test mode/opt //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_dynamic_quant_multi_uses (quantization.jit.test_quantize_jit.TestQuantizeDynamicJitPasses)'

buck test mode/opt //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_save_load_state_dict_script (quantization.core.test_workflow_module.TestObserver)'

Reviewed By: supriyar

Differential Revision: D30432008

fbshipit-source-id: 754727a89c78f6ceada6f8ff92c304f3953f38fc
2021-09-15 15:22:19 -07:00
Vasiliy Kuznetsov
6101cbcedb torch.ao migration: fake_quantize.py, phase 1 (#64814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64814

1. move the file
```
hg mv caffe2/torch/quantization/fake_quantize.py caffe2/torch/ao/quantization/
```

2. create a new file in the old location and copy the imports (see the shim sketch below)
3. fix all callsites inside `torch`
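A sketch of the step-2 shim, with the layout assumed from the description above rather than copied from the diff:

```
# torch/quantization/fake_quantize.py -- compatibility shim left at the old path
# so that existing imports keep working after the move to torch.ao.
from torch.ao.quantization.fake_quantize import *  # noqa: F401,F403
```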

Test Plan:
```
buck test mode/dev //caffe2/test:quantization
```

Reviewed By: z-a-f

Differential Revision: D30866792

fbshipit-source-id: 7a221cb46c0ab01f1c5de9be061f09ecc83ce23e
2021-09-13 15:22:28 -07:00
Charles David Hernandez
6c3ebccc00 Updating the names of these functions (#63513)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63513

Updating these names per Jerry's nits in the previous PR.

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D30406710

fbshipit-source-id: a9f1577a2b8c4a93f5005e0f6278b7d7348d8b66
2021-08-19 13:34:34 -07:00
Charles David Hernandez
877e6f2be3 Bugfix for fuse qconfig comparison (#63384)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63384

In some cases, changes to the qconfig on a module would cause
fusions to fail. This bugfix solves that problem by adding a
qconfig_function_comparison that compares the functions within the
qconfig rather than the modules the qconfigs are on. The comparison
looks at the partial object within QConfig.activation/weight.p and
compares args, keywords, and func. This has to be done manually
because partial doesn't implement __eq__, so == falls back to is.
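A self-contained sketch of the comparison idea (not the exact torch helper): compare the partials by `func`, `args`, and `keywords` instead of relying on `==`.

```
from functools import partial

def partials_equal(p1, p2):
    # partial has no __eq__, so == falls back to identity; compare contents instead
    return (
        p1.func is p2.func
        and p1.args == p2.args
        and p1.keywords == p2.keywords
    )

make_obs_a = partial(dict, qscheme="per_tensor_affine")
make_obs_b = partial(dict, qscheme="per_tensor_affine")
assert make_obs_a != make_obs_b                # identity comparison: distinct objects
assert partials_equal(make_obs_a, make_obs_b)  # contents match
```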

Test Plan:
python test/test_quantization.py
TestFuseFx.test_problematic_fuse_example

Imported from OSS

Reviewed By: supriyar, ejguan

Differential Revision: D30386264

fbshipit-source-id: 51e358c021c39d6f48dc12ad2a82b2838677b9de
2021-08-18 13:31:56 -07:00
Supriya Rao
d5a7579597 [quant] Make version 1 the default for get_default_qat_qconfig (#63043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63043

In version 1 we use the fused module/operator during QAT. Making this the default for all QAT runs going forward.

Older models saved after prepare_qat_fx can still load their state_dict into a model prepared using version 1.
The state_dict will still have the same attribute for the observer/fake_quant modules.

There may be some numerical differences between the old observer code in observer.py and the new fused module that was
re-written in C++/CUDA to perform observe + fake_quantize.

This PR also updates the test to check for the new module instead of the default FakeQuantize module.
Note: there are also some changes to make the operator work for multi-dim per-channel quantization, and the test was updated accordingly.
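A usage sketch, with the signature assumed from the description above (a backend string plus a `version` argument):

```
import torch.quantization as tq

qat_v1 = tq.get_default_qat_qconfig("fbgemm", version=1)  # fused observer + fake-quant
qat_v0 = tq.get_default_qat_qconfig("fbgemm", version=0)  # older, separate modules
```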

Test Plan:
python test/test_quantization.py TestSerialization.test_default_qat_qconfig

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D30232222

fbshipit-source-id: f3553a1926ab7c663bbeed6d574e30a7e90dfb5b
2021-08-11 22:06:44 -07:00
Supriya Rao
aa89d5f7f6 [quant] Update get_default_qat_qconfig to return the fused observer+fake_quant module (#62702)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62702

Expose the qconfig to the user to speed up training by leveraging the fused module.
The module currently supports per-tensor/per-channel moving avg observer and fake-quantize.

For details on perf benefits, refer to https://github.com/pytorch/pytorch/pull/61691

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D30093719

fbshipit-source-id: b78deb7810f5b597474b9b9a0395d361d04eb46a
2021-08-10 09:28:49 -07:00
Charles David Hernandez
32d0c3e8ee Support for reference convert_fx working on gpu
Summary:
This PR enables GPU-only quantization, best used with is_reference since
there are not many GPU kernels for these ops as of now.

This PR mainly changes how qconfigs and their observer constructors operate once they
are attached to a module's qconfig. The function add_module_to_qconfig_obs_ctr takes the observer constructors on the original
qconfig and configures them so that, when invoked, the created observer will
be on whatever device the module occupies. (Once observers are created,
module.to(device) is already set up so that it moves any observers.) To do this,
a new method and a few small changes were added to the _PartialWrapper class that
our observers already use to create constructors (without changing the
existing functionality). These changes work in
concert with changes to the prepare flow so that when the qconfigs are
propagated to the modules (in quantize.py and qconfig_utils.py) they are configured using add_module_to_qconfig_obs_ctr.

Ideally this would also work on other models, but the is_reference support for
many modules isn't there yet; those tests should be added in a
future PR.
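A simplified sketch of the idea (not the torch internals): wrap an observer constructor so the created observer lands on the owning module's device.

```
import torch
import torch.nn as nn
from torch.quantization import MinMaxObserver

def attach_device_to_obs_ctr(obs_ctr, module):
    # return a constructor whose observers are created on `module`'s device
    try:
        device = next(module.parameters()).device
    except StopIteration:
        device = torch.device("cpu")

    def ctr(*args, **kwargs):
        return obs_ctr(*args, **kwargs).to(device)

    return ctr

linear = nn.Linear(4, 4).to("cuda" if torch.cuda.is_available() else "cpu")
make_obs = attach_device_to_obs_ctr(MinMaxObserver, linear)
obs = make_obs()  # observer lives on the same device as `linear`
```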

Test Plan:
python test/test_quantization.py TestQuantizeFxModels.test_static_gpu_convert_basic

python test/test_quantization.py TestQuantizeFxModels.test_switch_device_prepare_convert

python test/test_quantization.py TestQuantizeFxModels.test_prepare_serialize_switch_device_convert

python test/test_quantization.py TestQuantizeFx.test_qconfig_precedence

Reviewed By: vkuzo

Differential Revision: D29684114

fbshipit-source-id: 19fefb8e1998eaf212723e836276ccf39467f2e7
2021-07-23 10:30:38 -07:00
Sam Estep
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).
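The two patterns in play, shown on hypothetical torch imports (not the actual diff):

```
# Prefer an explicit import list where possible...
from torch.quantization.observer import HistogramObserver, MinMaxObserver

# ...and keep an explicit opt-out where a wildcard re-export is intentional.
from torch.quantization.qconfig import *  # noqa: F403
```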

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
Jerry Zhang
b685864f50 [quant][graphmode][fx] Add reference option support for linear_static_fp16 (#52650)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52650

linear_dynamic_fp16 has the following dtypes for (activation, weight, bias, output):
(fp32, fp16, fp32, fp32)

linear_static_fp16 has the following dtypes:
(fp16, fp16, fp16, fp16)
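The same dtype layouts written out as (activation, weight, bias, output) tuples; the variable names are illustrative only.

```
import torch

linear_dynamic_fp16_dtypes = (torch.float32, torch.float16, torch.float32, torch.float32)
linear_static_fp16_dtypes = (torch.float16, torch.float16, torch.float16, torch.float16)
```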

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D26599803

fbshipit-source-id: b4a8345d355125070be718a227288cc848cc8bbc
2021-02-27 08:25:44 -08:00
Jerry Zhang
177694681e [quant][graphmode][fx] Add reference option support for linear_dynamic_fp16 (#52534)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52534

Currently, linear_dynamic_fp16 has a signature that's tied to fbgemm/qnnpack.
We'll need to produce a pattern equivalent to linear_dynamic_fp16 to support extensions
to other backends.

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_linear_dynamic_fp16

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D26557726

fbshipit-source-id: 270c9f781f73c79416a092b7831294cabca84b0c
2021-02-26 21:12:22 -08:00
Vasiliy Kuznetsov
19a8ada8d5 quant: fix conv transpose with qconfig == None (#52844)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52844

Fixes a crash in qconfig checking that happened if a model had a conv transpose
module with its qconfig set to None.
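A usage sketch of the scenario being fixed, assuming the eager-mode workflow of the time:

```
import torch.nn as nn
import torch.quantization as tq

model = nn.Sequential(nn.ConvTranspose2d(3, 3, 2), nn.ReLU())
model.qconfig = tq.get_default_qconfig("fbgemm")
model[0].qconfig = None        # leave the conv transpose unquantized
prepared = tq.prepare(model)   # the qconfig check on this path used to crash
```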

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_convtranspose_per_channel_qconfig_none
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D26666043

fbshipit-source-id: e1b62840b4e3c67acbb4dbdcd32514b374efce1e
2021-02-25 11:52:30 -08:00
Vasiliy Kuznetsov
44c17b28c6 quant: nice error message on convtranspose with per-channel weight (#49899)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49899

A per-channel weight observer on conv transpose is not supported yet. This adds an
error message that fails instantly instead of making the user wait until after
calibration/training finishes.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_convtranspose_per_channel_fails_early
python test/test_quantization.py TestQuantizeFx.test_convtranspose_per_channel_fails_early
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25717151

fbshipit-source-id: 093e5979030ec185e3e0d56c45d7ce7338bf94b6
2021-01-05 09:38:57 -08:00
Jerry Zhang
576fa09157 [quant][fix] Fix quant type classification for float_qparam qconfig (#48069)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48069

Also renamed float_qparam_dynamic_qconfig to float_qparam_weight_only_qconfig.
It's not used in user code yet, so we only need to update the tests.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D25010175

fbshipit-source-id: caa3eaa5358a8bc5c808bf5f64e6ebff3e0b61e8
2020-11-18 18:22:08 -08:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Supriya Rao
3293fdfa80 [quant] Enable from_float for quantized Embedding_Bag (#43176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43176

Converts a floating point nn.EmbeddingBag module to an
nn.quantized.dynamic.EmbeddingBag module.

Test Plan:
python test/test_quantization.py TestDynamicQuantizedModule.test_embedding_bag_api
python test/test_quantization.py TestPostTrainingDynamic.test_embedding_quantization

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23200196

fbshipit-source-id: 090f47dbf7aceab9c719cbf282fad20fe3e5a983
2020-08-21 11:46:03 -07:00
Mike Ruberry
b7a9bc0802 Revert D22217029: Add fake quantize operator that works in backward pass
Test Plan: revert-hammer

Differential Revision:
D22217029 (48e978ba18)

Original commit changeset: 7055a2cdafcf

fbshipit-source-id: f57a27be412c6fbfd5a5b07a26f758ac36be3b67
2020-08-07 23:04:40 -07:00
Presley Graham
48e978ba18 Add fake quantize operator that works in backward pass (#40532)
Summary:
This diff adds FakeQuantizeWithBackward. This works the same way as the regular FakeQuantize module, allowing QAT to occur in the forward pass, except it has an additional quantize_backward parameter. When quantize_backward is enabled, the gradients are fake quantized as well (dynamically, using hard-coded values). This allows the user to see whether there would be a significant loss of accuracy if the gradients were quantized in their model.
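A conceptual sketch of fake-quantizing gradients in the backward pass (a custom autograd Function, not the reverted torch module itself):

```
import torch

class FakeQuantGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x  # identity here; the usual QAT fake-quant acts on the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        # dynamically fake-quantize the incoming gradient to int8-like levels
        scale = grad_out.abs().max() / 127.0 + 1e-12
        q = torch.clamp(torch.round(grad_out / scale), -128, 127)
        return q * scale
```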

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40532

Test Plan: The relevant test for this can be run using `python test/test_quantization.py TestQATBackward.test_forward_and_backward`

Reviewed By: supriyar

Differential Revision: D22217029

Pulled By: durumu

fbshipit-source-id: 7055a2cdafcf022f1ea11c3442721ae146d2b3f2
2020-08-07 17:47:01 -07:00
Supriya Rao
38bf5be24f [quant] Use PlaceholderObserver instead of Fp16Observer and NoopObserver (#42348)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42348

Use the dtype info in PlaceholderObserver to decide what ops to insert in the graph.
In the next PR we can delete NoopObserver.
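A usage sketch, assuming the observer API of this era (`with_args` plus a `dtype` argument):

```
import torch
from torch.quantization.observer import PlaceholderObserver

# carries only dtype information; collects no statistics
fp16_weight_obs_ctr = PlaceholderObserver.with_args(dtype=torch.float16)
obs = fp16_weight_obs_ctr()
```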

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D22859457

fbshipit-source-id: a5c618f22315534ebd9a2df77b14a0aece196989
2020-07-31 12:33:56 -07:00
Supriya Rao
6bd46b583e [quant][graph] Add support for FP16 dynamic quant (#42222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42222

This change adds the necessary passes to perform FP16 dynamic quantization.
We skip inserting observers for activations based on the dtype (torch.float16) and only insert the Fp16Observer for weights.

Test Plan:
python test/test_quantization.py TestQuantizeJitOps

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D22849220

fbshipit-source-id: 2c53594ecd2485e9e3dd0b380eceaf7c5ab5fc50
2020-07-31 12:33:53 -07:00
Edward Leardi
733b8c23c4 Fix several quantization documentation typos (#40567)
Summary:
This PR fixes several typos I noticed in the docs here: https://pytorch.org/docs/master/quantization.html. In one case there was a misspelled module [torch.nn.instrinsic.qat](https://pytorch.org/docs/master/quantization.html#torch-nn-instrinsic-qat) which I corrected and am including screenshots of below just in case.

<img width="1094" alt="before" src="https://user-images.githubusercontent.com/54918401/85766765-5cdd6280-b6e5-11ea-93e6-4944cf820b71.png">

<img width="1093" alt="after" src="https://user-images.githubusercontent.com/54918401/85766769-5d75f900-b6e5-11ea-8850-0d1f5ed67b16.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40567

Differential Revision: D22311291

Pulled By: ezyang

fbshipit-source-id: 65d1f3dd043357e38a584d9e30f31634a5b0995c
2020-07-07 09:45:23 -07:00
Supriya Rao
6aebd2c412 [quant][graphmode] Add FP16 quant support - Insert Noop Observers (#40708)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40708

Inserts NoopObservers for activation and weight tensors for FP16 quantization.

Test Plan:
python test/test_quantization.py test_prepare_dynamic

Imported from OSS

Differential Revision: D22335976

fbshipit-source-id: b19e8035c7db3b0b065ec09c9ad6d913eb434f3e
2020-07-01 14:13:31 -07:00
Supriya Rao
727e77a809 [quant] Enable reduce_range for graphmode (#39874)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39874

When the fbgemm backend is set, we make sure reduce_range is set to true to avoid overflow in the operator.
Also adds a test for per-channel quant with graph mode and compares numerics with eager mode.
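A sketch of what reduce_range means for an activation observer (effectively a 7-bit range to leave headroom for fbgemm's accumulation):

```
import torch
from torch.quantization.observer import MinMaxObserver

obs = MinMaxObserver(dtype=torch.quint8, reduce_range=True)
obs(torch.randn(8, 8))
scale, zero_point = obs.calculate_qparams()
```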

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D22011205

fbshipit-source-id: 1c7c9b7ab0d84200e3d8d85c34978554c30c0169
2020-06-12 16:25:58 -07:00
Supriya Rao
cbff959bd7 [quant] Return default qconfig when backend is 'none' (#38407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38407

We can still run some quantized tests even when fbgemm/qnnpack isn't enabled

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21554257

fbshipit-source-id: e4fa8f61f6a6717881c00620ed7938c01ffbf958
2020-05-14 09:53:50 -07:00
Supriya Rao
daba68c601 [quant][graph] Add a new observer type for dynamic quantization (#35455)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35455

In graph mode we need to observe the activation tensor for dynamic quantization. This observer should behave the same way as the quantization functions called in the dynamic operator.
Currently, for qlinear_dynamic we call quant_utils::ChooseQuantizationParams, which has its own logic for calculating scale and zero_point.
We mimic those calculations in the new observer.
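A simplified sketch of the scale/zero_point math the new observer mimics (an affine mapping of the observed range, which must include zero, onto quint8):

```
import torch

def choose_qparams(x, qmin=0, qmax=255):
    min_val = min(x.min().item(), 0.0)   # the quantized range must cover zero
    max_val = max(x.max().item(), 0.0)
    scale = max((max_val - min_val) / (qmax - qmin), 1e-8)
    zero_point = int(round(qmin - min_val / scale))
    return scale, min(max(zero_point, qmin), qmax)
```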

Test Plan:
python test/test_quantization.py ObserverTest

Imported from OSS

Differential Revision: D20664586

fbshipit-source-id: e987ea71fff777c21e00c498504e6586e92568a2
2020-03-26 17:38:21 -07:00
Supriya Rao
b4b8b3c0ca Revert D20630988: [quant][graph] Add a new observer type for dynamic quantization
Test Plan: revert-hammer

Differential Revision:
D20630988

Original commit changeset: 7e7aca77590f

fbshipit-source-id: 6bc67ca322c1703004e0053f8eba9b8f6a3a5f67
2020-03-25 18:52:21 -07:00
Supriya Rao
7e24ab8c4a [quant][graph] Add a new observer type for dynamic quantization (#35265)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35265

In graph mode we need to observe the activation tensor for dynamic quantization. This observer should behave the same way as the quantization functions called in the dynamic operator.
Currently, for qlinear_dynamic we call quant_utils::ChooseQuantizationParams, which has its own logic for calculating scale and zero_point.
We mimic those calculations in the new observer.

Test Plan:
python test/test_quantization.py ObserverTest

Imported from OSS

Differential Revision: D20630988

fbshipit-source-id: 7e7aca77590f965dcb423a705e68d030aaf98550
2020-03-25 16:50:05 -07:00
Chris Gottbrath
7c4b9042ab Updates to quantization documentation (#30288)
Summary:
This pull request includes fixes for six quantization doc bugs.

https://github.com/pytorch/pytorch/issues/30283 - Rendering issue on QConfig
https://github.com/pytorch/pytorch/issues/26305 - Minor doc issue on fuse_modules()
https://github.com/pytorch/pytorch/issues/27451 - Issues with ConvReLU2d, ConvReLU3d, and LinearReLU doc issues
https://github.com/pytorch/pytorch/issues/26899 - Missing docstrings in torch.nn.intrinsic fused functions
https://github.com/pytorch/pytorch/issues/29735 - add discussion of QNNPack to quantization doc page
https://github.com/pytorch/pytorch/issues/27938 - some of the quantized functions lack documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30288

Differential Revision: D18653368

Pulled By: gottbrath

fbshipit-source-id: 410b3dd81ff10909a7f1a7736ca42d7cabf0beb1
2019-11-23 09:29:30 -08:00
Lingyi Liu
7d3afc4186 enable the per channel dynamic quantization (#30122)
Summary:
This PR enables per-channel (row-wise) dynamic quantization for the linear operator. Given that we have seen some accuracy drop due to per-tensor quantization, we expect per-channel quantization to help improve accuracy.
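A usage sketch, with the qconfig name assumed from this era of the API:

```
import torch.nn as nn
from torch.quantization import quantize_dynamic, per_channel_dynamic_qconfig

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
qmodel = quantize_dynamic(model, {nn.Linear: per_channel_dynamic_qconfig})
```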
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30122

Differential Revision: D18630541

Pulled By: lly-zero-one

fbshipit-source-id: d52685deec5e7de46cd686ae649a8c8765b9cacf
2019-11-21 10:12:05 -08:00
Zafar Takhirov
dc8785a022 Refactoring names for consistency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27670

Test Plan: Imported from OSS

Differential Revision: D17846269

Pulled By: z-a-f

fbshipit-source-id: ed3c7441c185bf11b2e62879aa3ecbc654aa2d4e
2019-10-16 12:18:26 -07:00