Commit Graph

14 Commits

Author SHA1 Message Date
Nikita Shulga
e6fd28fb05 Revert D34126542: [Quant] Add ConvTranspose reference module
Test Plan: revert-hammer

Differential Revision:
D34126542 (7a031ec17f)

Original commit changeset: 7da167695a1f

Original Phabricator Diff: D34126542 (7a031ec17f)

fbshipit-source-id: 14e40884807b9908017ae30af83a8dea23ff1f0f
(cherry picked from commit f99a7f5a69)
2022-02-16 22:24:15 +00:00
Terry Chen
7a031ec17f [Quant] Add ConvTranspose reference module (#72473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72473

Add ConvTranspose reference module
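For context, a minimal eager-mode sketch of the flow the test below exercises; the toy model, the qconfig choice, and the is_reference flag are illustrative assumptions, not code from this PR:

```
# Hedged sketch: quantize a model containing ConvTranspose2d in eager mode.
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.deconv = nn.ConvTranspose2d(3, 3, kernel_size=3)
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.deconv(self.quant(x)))

m = M().eval()
# ConvTranspose does not support per-channel weight quantization,
# so use the per-tensor qnnpack default qconfig (assumption)
m.qconfig = tq.get_default_qconfig("qnnpack")
tq.prepare(m, inplace=True)
m(torch.randn(1, 3, 8, 8))                      # calibrate
tq.convert(m, is_reference=True, inplace=True)  # produce the reference module
```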

Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_conv_transpose_op

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D34126542

fbshipit-source-id: 7da167695a1fd9c141059bce14cce4f0608b086c
(cherry picked from commit dee22dcf48)
2022-02-16 01:56:28 +00:00
Terry Chen
ce3215db70 Fix nnq.dropout in vision mobilenetv3 pretrained model (#71438)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71438

Fixes https://github.com/pytorch/vision/issues/5198: skip observer insertion for nn.Dropout so that the pretrained quantized model can be loaded.
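A repro sketch for the linked issue, expanding the one-liner in the test plan (assumes a torchvision build that ships quantized MobileNetV3 weights):

```
# Loading the quantized pretrained model previously failed because an
# observer was attached to nn.Dropout during eager-mode quantization.
import torch
import torchvision

model = torchvision.models.quantization.mobilenet_v3_large(
    pretrained=True, quantize=True
)
model.eval()
out = model(torch.randn(1, 3, 224, 224))
```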

Test Plan:
python -c "import torchvision; torchvision.models.quantization.mobilenet_v3_large(pretrained=True, quantize=True)"

Imported from OSS

Reviewed By: HDCharles

Differential Revision: D33641707

fbshipit-source-id: 14ea26557c4ff3b942cf46bf06610db0b8f06b05
(cherry picked from commit 0b8b178d26)
2022-01-22 00:02:48 +00:00
Charles David Hernandez
e47771cca0 [ao] Removing unused allow list arguments from propagate_qconfig and helper (#71104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71104

This shouldn't change any functionality, since those variables were not used. Note that a similar variable is used in add_observer, which is why it wasn't removed from there.
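For reference, a sketch of the simplified call (public torch.ao.quantization surface; the example model is illustrative):

```
# propagate_qconfig_ pushes a parent qconfig down to child modules;
# after this change there is no allow-list filtering at this stage.
import torch.nn as nn
from torch.ao.quantization import propagate_qconfig_, get_default_qconfig

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
model.qconfig = get_default_qconfig("fbgemm")
propagate_qconfig_(model)
assert model[0].qconfig is model.qconfig  # children inherit the qconfig
```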
ghstack-source-id: 146940043

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33510352

fbshipit-source-id: c66ed72c2b71a6e1822f9311467adaa1f4b730d0
2022-01-13 16:07:29 -08:00
Jerry Zhang
b7259b8660 [quant][be] Add a check in prepare_qat to make sure the model is in training mode (#69879)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69879

As titled: add a check in prepare_qat to ensure the model is in training mode before modules are swapped.
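A minimal sketch of the new behavior (public eager-mode API; the toy model is illustrative):

```
# prepare_qat now checks that the model is in training mode, since QAT
# observer/fake-quant statistics are only meaningful while training.
import torch.nn as nn
import torch.ao.quantization as tq

model = nn.Sequential(nn.Conv2d(3, 3, 1), nn.ReLU())
model.qconfig = tq.get_default_qat_qconfig("fbgemm")

tq.prepare_qat(model.train(), inplace=True)  # ok: model is in training mode
# tq.prepare_qat(model.eval())               # now rejected by the new check
```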

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33080989

fbshipit-source-id: 55a631284365ec9dfd6bd7469688490ab1891d41
2021-12-22 11:00:00 -08:00
Jerry Zhang
5db711f9d3 [quant][be] Replace QConfigDynamic with QConfig in code (#69864)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69864

As titled; a follow-up PR will remove QConfigDynamic from the API.
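For context, a sketch of the replacement pattern (observer names are the public ones; exact defaults may differ from the in-tree definitions):

```
# A QConfigDynamic(activation=..., weight=...) is expressed as a plain
# QConfig whose activation observer is a placeholder, which signals
# dynamic (at-runtime) activation quantization.
import torch
from torch.ao.quantization import QConfig, PlaceholderObserver, default_weight_observer

dynamic_qconfig = QConfig(
    activation=PlaceholderObserver.with_args(dtype=torch.float, compute_dtype=torch.quint8),
    weight=default_weight_observer,
)
```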

Test Plan:
regression tests
```
python test/test_quantization.py TestPostTrainingStatic
python test/test_quantization.py TestPostTrainingDynamic
python test/test_quantization.py TestQuantizeFx
```

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33073235

fbshipit-source-id: 6c1a1647032453803c55cdad7c04154502f085db
2021-12-17 22:30:57 -08:00
andrewor
4a8f27445d [Quant] Add dynamic QAT Linear module (#67325)
Summary:
**Summary:** This commit adds the `torch.nn.qat.dynamic.modules.Linear`
module, the dynamic counterpart to `torch.nn.qat.modules.Linear`.
Functionally these are very similar, except the dynamic version
expects a memoryless observer and is converted into a dynamically
quantized module before inference.
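A hedged usage sketch (module path per this commit; the qconfig below is an illustrative guess at a "memoryless" configuration, not necessarily the PR's default):

```
import torch
from torch.nn.qat.dynamic import Linear as DynamicQATLinear
from torch.ao.quantization import (
    QConfig, FakeQuantize, MovingAverageMinMaxObserver, default_weight_fake_quant,
)

qconfig = QConfig(
    # memoryless: averaging_constant=1 keeps only the latest batch's
    # min/max, matching how dynamic quantization computes qparams per call
    activation=FakeQuantize.with_args(
        observer=MovingAverageMinMaxObserver,
        averaging_constant=1,
        dtype=torch.quint8,
    ),
    weight=default_weight_fake_quant,
)

qat_linear = DynamicQATLinear(16, 8, qconfig=qconfig)
out = qat_linear(torch.randn(2, 16))  # trains with fake-quantized weights
```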

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67325

Test Plan:
`python3 test/test_quantization.py TestQuantizationAwareTraining.test_dynamic_qat_linear`

**Reviewers:** Charles David Hernandez, Jerry Zhang

**Subscribers:** Charles David Hernandez, Supriya Rao, Yining Lu

**Tasks:** 99696812

**Tags:** pytorch

Reviewed By: malfet, jerryzh168

Differential Revision: D32178739

Pulled By: andrewor14

fbshipit-source-id: 5051bdd7e06071a011e4e7d9cc7769db8d38fd73
2021-11-08 10:24:25 -08:00
Supriya Rao
8a974a482c [quant] Add support for quantization of Embedding{Bag} in dynamic quant APIs (#65674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65674

Before this PR, users had to use the eager mode static quantization APIs to quantize Embedding/EmbeddingBag modules.
With this PR they can use either the static or dynamic quantization APIs for Embedding quantization.

The only qconfig supported for embedding quantization is float_qparams_weight_only_qconfig, which is currently enforced in the from_float
method of the quantized Embedding/EmbeddingBag modules.

To combine embedding quantization with dynamic Linear quantization, users can use the qconfig_dict to specify a different qconfig for each module type (see the sketch below).

The prepare/convert APIs can still be used to quantize Embeddings, with the caveat that users need to ensure inputs to Embedding ops are FP32.
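A sketch of the combined setup (quantize_dynamic's parameter for this is called qconfig_spec; the toy model is illustrative):

```
import torch.nn as nn
from torch.ao.quantization import (
    quantize_dynamic,
    float_qparams_weight_only_qconfig,
    default_dynamic_qconfig,
)

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(1000, 64)
        self.fc = nn.Linear(64, 10)

    def forward(self, indices, offsets):
        return self.fc(self.emb(indices, offsets))

qmodel = quantize_dynamic(
    Model(),
    qconfig_spec={
        nn.EmbeddingBag: float_qparams_weight_only_qconfig,  # only qconfig supported for embeddings
        nn.Linear: default_dynamic_qconfig,
    },
)
```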

Addresses Issue #65185
ghstack-source-id: 139935419

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: gchanan

Differential Revision: D31211199

fbshipit-source-id: 8c747881caee5ccbf8b93c6704b08d132049dea4
2021-10-06 23:19:38 -07:00
Zafar
0d020effab [quant] Fix the parts that were missing after initial migration (#66058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66058

After the initial migration from `torch.quantization` to `torch.ao.quantization`, some of the files did not change.
This happened because the migration was done in parallel, and some of the files were landed while the others were still in the original location.
This is the last fix in the AO migration phase 1, which completely enables the ao.quantization namespace.

Test Plan: `python test/test_quantization.py`

Reviewed By: vkuzo

Differential Revision: D31366066

Pulled By: z-a-f

fbshipit-source-id: bf4a74885be89d098df2d87e685795a2a64026c5
2021-10-05 11:45:37 -07:00
Yuan Shangguan (June)
3f5f721ab3 Pass through allow-list from prepare_qat into propagate_qconfig_ to allow custom mapping and custom QAT module (#65119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65119

PyTorch quantization: allow prepare_qat to include custom modules by passing an allow_list into prepare_qat.

When implementing a custom module and custom mapping for Quantization Aware Training (QAT), we need to add the custom module both to the mappings and to the allow_list used during prepare_qat. The allow_list needs to be surfaced to propagate_qconfig_, as in the sketch below.
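A hypothetical sketch of the workflow this enables; the allow_list parameter follows this commit's description (a later commit in this log removes the unused allow-list plumbing from propagate_qconfig_), and the custom classes are illustrative stand-ins:

```
import copy
import torch
import torch.nn as nn
import torch.nn.qat
import torch.ao.quantization as tq
from torch.ao.quantization.quantization_mappings import get_default_qat_module_mappings

class MyFloatLinear(nn.Linear):          # custom float module (illustrative)
    pass

class MyQATLinear(torch.nn.qat.Linear):  # custom QAT counterpart (illustrative)
    _FLOAT_MODULE = MyFloatLinear        # so from_float accepts the custom type

mapping = copy.deepcopy(get_default_qat_module_mappings())
mapping[MyFloatLinear] = MyQATLinear

model = nn.Sequential(MyFloatLinear(4, 4)).train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
# allow_list is surfaced from prepare_qat down to propagate_qconfig_
model = tq.prepare_qat(model, mapping=mapping, allow_list=set(mapping.keys()))
```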

Test Plan: relying on general unit test

Reviewed By: supriyar

Differential Revision: D30982060

fbshipit-source-id: 1114115b6a3b853238d33d72b5cbaafc60f463e0
2021-09-21 17:15:25 -07:00
Supriya Rao
9d52651d4e torch.ao migration: stubs.py phase 1 (#64861)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64861

1. Move the file:
   ```
   hg mv caffe2/torch/quantization/stubs.py caffe2/torch/ao/quantization/
   ```
2. Create a new file in the old location and copy the imports.
3. Fix all call sites inside `torch`.
ghstack-source-id: 137885365

Test Plan: buck test mode/dev //caffe2/test:quantization

Reviewed By: jerryzh168

Differential Revision: D30879678

fbshipit-source-id: a2d24f25d01064212aca15e94e8c78240ba48953
2021-09-13 08:40:29 -07:00
Zafar Takhirov
9cc44aad21 [quant] AO migration of the quantize.py (resubmission) (#64445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64445

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.
This migrates quantize.py from torch.quantization to torch.ao.quantization.
At this point both locations will be supported. Eventually, torch.quantization will be deprecated.
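A quick way to see the transitional dual-namespace behavior (sketch; assumes the old module is a thin re-export shim, as described above):

```
# Both import paths resolve to the same function objects during the
# migration window.
from torch.quantization import quantize_dynamic as via_old_path
from torch.ao.quantization import quantize_dynamic as via_new_path

assert via_old_path is via_new_path
```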

Test Plan: `buck test mode/dev //caffe2/test:quantization`

Reviewed By: HDCharles

Differential Revision: D30734870

fbshipit-source-id: dc204f3cc46bff2cc81c95159eab9d333b43bb4b
2021-09-08 04:58:47 -07:00
Zafar Takhirov
046ed57a4d Revert D30055886: [quant] AO migration of the quantize.py
Test Plan: revert-hammer

Differential Revision:
D30055886 (44e3ed88c9)

Original commit changeset: 8ef7470f9fa6

fbshipit-source-id: c5bd3ead43a2d44b9e56872ec5bd7a195bdac725
2021-09-02 16:59:59 -07:00
Zafar Takhirov
44e3ed88c9 [quant] AO migration of the quantize.py (#64086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64086

AO Team is migrating the existing torch.quantization into torch.ao.quantization. We are doing it one file at a time to make sure that the internal callsites are updated properly.

This migrates `quantize.py` from `torch.quantization` to `torch.ao.quantization`.

At this point both locations will be supported. Eventually, torch.quantization will be deprecated.

Test Plan: `buck test mode/opt //caffe2/test:quantization`

Reviewed By: jerryzh168, raghuramank100

Differential Revision: D30055886

fbshipit-source-id: 8ef7470f9fa640c0042bef5bb843e7a05ecd0b9f
2021-08-29 20:30:01 -07:00