Commit Graph

63 Commits

Zafar
521d1071f8 [quant] Subpackage import in nn.quantized (#84141)
Some of the subpackages were not included in `torch.nn.quantized`,
which caused some specific cases to fail.
For example, `from torch.nn.quantized import dynamic` would work,
but `import torch; torch.nn.quantized.dynamic` would fail.
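
A minimal sketch of the failure mode and the conventional fix (the exact `__init__.py` contents here are an assumption):

```
# Accessing a subpackage as an attribute only works if the parent package
# imports it somewhere; `import torch` alone does not pull in every subpackage.
import torch
from torch.nn.quantized import dynamic  # always worked: direct import

torch.nn.quantized.dynamic  # raised AttributeError before this fix

# Conventional fix inside torch/nn/quantized/__init__.py (assumed form):
#     from torch.nn.quantized import dynamic  # noqa: F401
# Importing the submodule binds it as an attribute of the parent package.
```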

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84141
Approved by: https://github.com/andrewor14
2022-09-01 11:35:03 +00:00
zaf
2f04ba2c7c [quant][ao_migration] torch.nn.qat → torch.ao.nn.qat (#78716)
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [X] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [X] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [X] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [X] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [X] [Current PR] `torch.nn.qat` → `torch.ao.nn.qat`
    - [X] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [X] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location.
However, the following files needed to be double-checked:

- None
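
A hedged sketch of the usual backward-compatibility pattern for such a migration; the actual shim files may differ:

```
# Hypothetical contents of the old module, e.g. torch/nn/qat/modules/linear.py,
# after the migration: re-export everything from the new location so that old
# import paths keep working.
from torch.ao.nn.qat.modules.linear import Linear  # noqa: F401

# Both spellings then resolve to the same class:
#     torch.nn.qat.Linear is torch.ao.nn.qat.Linear  -> True
```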

Differential Revision: [D36861197](https://our.internmc.facebook.com/intern/diff/D36861197/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36861197/)!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78716
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:38 +00:00
PyTorch MergeBot
4cbb1986fe Revert "[quant][ao_migration] torch.nn.qat → torch.ao.nn.qat (#78716)"
This reverts commit 7cd2fa1d38.

Reverted https://github.com/pytorch/pytorch/pull/78716 on behalf of https://github.com/janeyx99 due to sorry, reverting so https://github.com/pytorch/pytorch/pull/78713 could be cleanly reverted
2022-08-22 07:23:24 +00:00
zaf
7cd2fa1d38 [quant][ao_migration] torch.nn.qat → torch.ao.nn.qat (#78716)
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [X] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [X] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [X] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [X] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [X] [Current PR] `torch.nn.qat` → `torch.ao.nn.qat`
    - [X] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [X] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location.
However, the following files needed to be double-checked:

- None

Differential Revision: [D36861197](https://our.internmc.facebook.com/intern/diff/D36861197/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36861197/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78716
Approved by: https://github.com/jerryzh168
2022-08-22 05:33:23 +00:00
anjali411
f68f77610a Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80376
Approved by: https://github.com/albanD
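
A small sketch of the convention being applied; the names listed are illustrative, not the actual contents of any of these modules:

```
# An explicit __all__ in each submodule declares the public API and makes
# `from module import *` well defined.
__all__ = [
    "Linear",  # illustrative entries only
    "Conv2d",
]
```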
2022-06-27 21:36:27 +00:00
Jerry Zhang
f83d047338 [quant][fx] Use native backend_config_dict in prepare
Summary:
Previously we were still relying on the registration mechanism to get the default quantize handlers that are registered;
now that we have moved all registration to backend_config_dict, we can get all quant patterns directly from backend_config_dict.

This PR enables using the native backend_config_dict everywhere in prepare when backend_config_dict is None; we'll
make similar changes in convert as well.
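
A self-contained sketch of the "None means native config" behavior described above; the helper names are illustrative, not the real internal API:

```
# Illustrative stand-in for the native (default) backend configuration.
_NATIVE_BACKEND_CONFIG_DICT = {"configs": []}

def _get_quant_patterns(backend_config_dict):
    # Derive quantize-handler patterns from the config, not from a registry.
    return {c["pattern"]: c["handler"] for c in backend_config_dict["configs"]}

def prepare(model, backend_config_dict=None):
    # The change described above: fall back to the native config when the
    # caller passes None, so the old registration mechanism is never consulted.
    if backend_config_dict is None:
        backend_config_dict = _NATIVE_BACKEND_CONFIG_DICT
    patterns = _get_quant_patterns(backend_config_dict)
    return model, patterns
```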

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75469

Approved by: https://github.com/vkuzo
2022-04-12 17:05:31 +00:00
HDCharles
25ee52570e [ao][sparsity] composability for sparsity and QAT convert
Summary: The primary issue with enabling sparsity to work with QAT
convert (unlike normal quantization convert) is that when the
parametrized module undergoes the QAT convert, the parametrizations need
to be maintained. If the parametrizations don't
get transferred during the convert, the sparsifier loses its
connection to the model. In practice this is handled using the
transfer_parametrizations_and_params function to move the weight and
bias and any associated parametrizations to the new module. This PR also adds
tests for transfer_parametrizations_and_params and type_before_parametrizations
to test_nn.py, and adds comments to the test code for
composability.
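
A small runnable sketch of the mechanism the fix relies on, using the public `torch.nn.utils.parametrize` API:

```
import torch.nn as nn
from torch.nn.utils import parametrize

class NoOp(nn.Module):
    # Stand-in for a sparsity parametrization (e.g. a pruning mask).
    def forward(self, x):
        return x

src = nn.Linear(4, 4)
parametrize.register_parametrization(src, "weight", NoOp())

# During convert, the new module receives the weight, the bias, and the
# parametrizations attached to them, so the sparsifier stays connected.
dst = nn.Linear(4, 4)
parametrize.transfer_parametrizations_and_params(src, dst)
assert parametrize.is_parametrized(dst, "weight")
```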

Test Plan: python test/test_ao_sparsity.py TestComposability
python test/test_nn.py TestNN

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74848

Approved by: https://github.com/vkuzo, https://github.com/Lezcano
2022-04-11 16:32:08 +00:00
Jerry Zhang
e9776fe58c [quant][fx] Support conv1d and its fusion variants in QAT (#74506)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74506

This PR supports the QAT modules Conv1d, ConvBn1d, ConvBnReLU1d, and ConvReLU1d in FX Graph Mode Quantization
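
A hedged usage sketch; the FX API shown is the current one (`example_inputs` was added to `prepare_qat_fx` later) and may differ from the API at the time of this commit:

```
import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx

model = torch.nn.Sequential(
    torch.nn.Conv1d(3, 8, 3),
    torch.nn.BatchNorm1d(8),
    torch.nn.ReLU(),
).train()

example_inputs = (torch.randn(1, 3, 16),)
prepared = prepare_qat_fx(model, get_default_qat_qconfig_mapping("fbgemm"),
                          example_inputs)
# Conv1d + BatchNorm1d + ReLU should now fuse into a ConvBnReLU1d QAT module.
```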

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_conv_bn_relu

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35032995

fbshipit-source-id: 645da33f0d893aa44f35ee1384fd1539a9c788e7
(cherry picked from commit 6b583baa74c5a4fd2f50270d633f277e2fc94716)
2022-03-23 18:43:53 +00:00
Jerry Zhang
98207aabf6 [quant][core] Refactor qat conv implementation to use the same _ConvNd as base class (#74505)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74505

As titled; this is to make supporting Conv1d easier in a future PR

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_conv_bn_relu

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D35031523

fbshipit-source-id: b630eeef49cd25e939a9891535d05c62cbad0114
(cherry picked from commit 563c7897de6755af81c71216fa58b624cc59b31d)
2022-03-23 15:32:38 +00:00
Charles David Hernandez
39605a5632 [ao] Removing memoryless observer args for MovingAverage (#73947)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73947

The original implementation of memoryless observers used MinMaxObservers and
a memoryless argument to manipulate the behavior of the observer such that it wouldn't
keep track of previously observed min and max's. It was later pointed
out that this was equivalent to a movingaverageobserver with averaging_constant=1
which is requires less overhead and no 1 off args (memoryless) so this PR refactors
the memoryless arg and uses MovingAverage observers instead, although the memoryless
adjective is still used, a complete definintion was also added to clarify error
messages given these changes.
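
A runnable sketch of the equivalence: with averaging_constant=1 every update fully overwrites the tracked range, since new = old + 1.0 * (x - old) = x:

```
import torch
from torch.ao.quantization.observer import MovingAverageMinMaxObserver

obs = MovingAverageMinMaxObserver(averaging_constant=1)
obs(torch.tensor([-10.0, 10.0]))
obs(torch.tensor([-1.0, 1.0]))   # the earlier range is forgotten
print(obs.min_val, obs.max_val)  # tensor(-1.) tensor(1.) -- memoryless
```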

Test Plan:
python test/test_quantization.py TestQuantizeEagerQAT
python test/test_quantization.py TestObserver

Test Plan: Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34732080

Pulled By: HDCharles

fbshipit-source-id: 227a1ab29d18adae55093a684ea35ac34523d07a
(cherry picked from commit 5238e70e8f90f3219c36f9c64b647951dcf64b5a)
2022-03-11 00:21:49 +00:00
Jerry Zhang
f5c7e5406b [quant][fx] Add lowering support for qat and fused convs (#73527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73527

This includes:
```
torch.nn.qat.Conv2d,
torch.nn.qat.Conv3d,
torch.nn.intrinsic.qat.ConvBn1d,
torch.nn.intrinsic.qat.ConvBn2d,
torch.nn.intrinsic.qat.ConvBn3d,
torch.nn.intrinsic.qat.ConvBnReLU1d,
torch.nn.intrinsic.qat.ConvBnReLU2d,
torch.nn.intrinsic.qat.ConvBnReLU3d,
torch.nn.intrinsic.qat.ConvReLU2d,
torch.nn.intrinsic.qat.ConvReLU3d
torch.nn.intrinsic.ConvReLU1d,
torch.nn.intrinsic.ConvReLU2d,
torch.nn.intrinsic.ConvReLU3d,
```
We first produce the reference pattern and then lower it to quantized modules.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34583206

fbshipit-source-id: d298114d1906ea44c071b0eee52730dadf67fd3e
(cherry picked from commit 6498af35b5aa6104cadb68ca48dff4e443bee7d6)
2022-03-04 06:29:03 +00:00
Jerry Zhang
2ab9702955 [quant][core] Add Embedding and EmbeddingBag reference module (#73436)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73436

This PR adds reference module support for Embedding and EmbeddingBag, following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md

* the reference module inherits from the corresponding float module (e.g. nn.Embedding) and from ReferenceQuantizedModule (which defines some utility functions to store qparams for a single weight)
* in forward, we first quantize and then dequantize the weight (to generate the pattern) and then feed the weight to the original fp32 op (see the sketch below)

We'll connect this with FX graph mode quantization later, in the final PR that deprecates the current convert implementation. Since the current convert doesn't
support emitting quantize_per_tensor_dynamic ops, we don't want to implement that and immediately throw the code away, so it might be better to implement this
in the final flow.
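
A minimal sketch of the reference-module forward described above; the class name and the fixed qparams are illustrative:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceEmbedding(nn.Embedding):
    def get_quantized_weight(self):
        # Quantize then immediately dequantize the weight: this produces the
        # reference pattern while keeping the numerics in fp32.
        q = torch.quantize_per_tensor(self.weight.detach(), scale=0.1,
                                      zero_point=0, dtype=torch.quint8)
        return q.dequantize()

    def forward(self, input):
        # Feed the (de)quantized weight to the original fp32 op.
        return F.embedding(input, self.get_quantized_weight())

emb = ReferenceEmbedding(10, 4)
out = emb(torch.tensor([1, 2, 3]))
```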

Test Plan:
Will be tested later, in the final PR that deprecates the current convert implementation

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D34480325

fbshipit-source-id: bc353f3be035a364e013fa9132d0422f19120ac3
(cherry picked from commit 1722ec2f8d82e9763ef252fed5796fd09d120e34)
2022-03-02 23:32:54 +00:00
Ben Koopman
6c9cf5e6ea [quant][embedding qat] eager mode QAT for Embeddings (#66429)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66429

Test Plan: Imported from OSS

Reviewed By: HDCharles, supriyar

Differential Revision: D31618284

Pulled By: b-koopman

fbshipit-source-id: 0c0e2e86b98da9f29e9b2fc2a35c59424f94cbba
2021-11-18 05:57:11 -08:00
andrewor
4a8f27445d [Quant] Add dynamic QAT Linear module (#67325)
Summary:
**Summary:** This commit adds the `torch.nn.qat.dynamic.modules.Linear`
module, the dynamic counterpart to `torch.nn.qat.modules.Linear`.
Functionally these are very similar, except the dynamic version
expects a memoryless observer and is converted into a dynamically
quantized module before inference.
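
A hedged usage sketch, written against the module's current location under `torch.ao.nn` (at the time of this commit it lived under `torch.nn.qat.dynamic`); the qconfig name is an assumption:

```
import torch
from torch.ao.nn.qat.dynamic import Linear
from torch.ao.quantization.qconfig import default_dynamic_qat_qconfig  # assumed name

lin = Linear(4, 4, qconfig=default_dynamic_qat_qconfig)
out = lin(torch.randn(2, 4))
# The weight is fake-quantized during training; convert() later swaps the
# module for a dynamically quantized Linear used at inference.
```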

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67325

Test Plan:
`python3 test/test_quantization.py TestQuantizationAwareTraining.test_dynamic_qat_linear`

**Reviewers:** Charles David Hernandez, Jerry Zhang

**Subscribers:** Charles David Hernandez, Supriya Rao, Yining Lu

**Tasks:** 99696812

**Tags:** pytorch

Reviewed By: malfet, jerryzh168

Differential Revision: D32178739

Pulled By: andrewor14

fbshipit-source-id: 5051bdd7e06071a011e4e7d9cc7769db8d38fd73
2021-11-08 10:24:25 -08:00
Ben Koopman
3aadff651c [quant][embedding qat][bugfix] Fix and test QAT EmbeddingBag from_float error message (#66989)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66989

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31961773

Pulled By: b-koopman

fbshipit-source-id: 0d28728c87751ffc696ac221c3e8e75ac923cc57
2021-10-28 06:29:20 -07:00
Ben Koopman
0036e41143 [quant][embedding qat] Add eager QAT test for EmbeddingBag+Linear model (#66334)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66334

Test Plan: Imported from OSS

Reviewed By: HDCharles

Differential Revision: D31618283

Pulled By: b-koopman

fbshipit-source-id: bb824a341f1aa9d7e83f8e66d320a9dfd348a1d7
2021-10-19 07:03:36 -07:00
Zafar Takhirov
2daae532bd [ao_migration] torch/nn/qat: torch.quantization -> torch.ao.quantization (#65902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65902

This changes the imports in `caffe2/torch/nn/qat` to use the new import locations.

```
codemod -d torch/nn/qat --extensions py 'torch.quantization' 'torch.ao.quantization'
```

Test Plan: `python test/run_test.py`

Reviewed By: jerryzh168

Differential Revision: D31301196

fbshipit-source-id: ff237790d74cd3b3b5be642a997810f4f439a1d8
2021-10-08 16:21:21 -07:00
Ben Koopman
a58ff186e8 [quant][embedding qat] Add basic EmbeddingBag QAT fakeQuant workflow (#65443)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65443

Test Plan: Imported from OSS

Reviewed By: dagitses, supriyar

Differential Revision: D31456445

Pulled By: b-koopman

fbshipit-source-id: 0edda6e272d9005fce65f2ba6a5e6abc831836de
2021-10-07 20:19:29 -07:00
Jerry Zhang
f4baa83eae [bc-breaking] reference option for conv produce a pattern instead of reference conv module (#61942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61942

This PR changes is_reference=True for conv to produce a pattern consisting of dequant - float conv - quant instead of a reference conv module.
This is useful for future transformations to custom backends and also helps simplify the convert implementation in the future.
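
A hedged sketch of what the produced reference pattern computes; the qparams and dtype are illustrative:

```
import torch
import torch.nn as nn

def reference_conv(x_int8, conv: nn.Conv2d, out_scale: float,
                   out_zero_point: int) -> torch.Tensor:
    x_fp32 = x_int8.dequantize()              # dequant
    y_fp32 = conv(x_fp32)                     # float conv
    return torch.quantize_per_tensor(         # quant
        y_fp32, out_scale, out_zero_point, torch.quint8)
```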

Test Plan:
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D29810656

fbshipit-source-id: 549237a62bfda4341a2a7474c124f5e33350e267
2021-07-28 09:13:40 -07:00
Jerry Zhang
7507aeded5 [reland][bc-breaking] reference option for linear produce a pattern instead of reference linear module (#61892) (#62277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62277

This PR changes is_reference=True for linear to produce a pattern consisting of dequant - float linear - quant instead of a reference linear module.
This is useful for future transformations to custom backends and also helps simplify the convert implementation in the future.

Test Plan:
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: ejguan

Differential Revision: D29941079

fbshipit-source-id: 84bdfc0bb872c34fc345875e545c8b323e77c41e
2021-07-27 15:46:44 -07:00
Erjia Guan
8cdf16d1de Revert D29810657: [bc-breaking] reference option for linear produce a pattern instead of reference linear module
Test Plan: revert-hammer

Differential Revision:
D29810657 (9df605133e)

Original commit changeset: 949615bbc017

fbshipit-source-id: 54597d1f9636b0f94ae01c66018ff2592e5c39fc
2021-07-27 10:10:13 -07:00
Jerry Zhang
9df605133e [bc-breaking] reference option for linear produce a pattern instead of reference linear module (#61892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61892

This PR changes is_reference=True for linear to produce a pattern consisting of dequant - float linear - quant instead of a reference linear module.
This is useful for future transformations to custom backends and also helps simplify the convert implementation in the future.

Test Plan:
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D29810657

fbshipit-source-id: 949615bbc017bc454d81c8a6b2bdec53badaab19
2021-07-27 09:49:20 -07:00
Joel Schlosser
febff45900 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144
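
In practice "factory kwargs" means module constructors accept `device=` and `dtype=` and forward them to parameter creation. A quick example:

```
import torch

# The factory kwargs are forwarded to the parameter/buffer factory calls,
# so the module is created directly with the requested device and dtype.
lin = torch.nn.Linear(4, 4, device="cpu", dtype=torch.float64)
print(lin.weight.dtype)  # torch.float64
```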

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: albanD

Differential Revision: D27939544

Pulled By: jbschlosser

fbshipit-source-id: 4bf517e5f74f093e27ca38a85e732da65e44d805
2021-04-22 16:16:53 -07:00
Joel Schlosser
12b2bc94d7 Revert D27909732: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision:
D27909732 (5a09def9b0)

Original commit changeset: d8684b2403ab

fbshipit-source-id: d00d69fae4fa4ed58d9e97e70b27a06a0dcb39e4
2021-04-21 13:44:03 -07:00
Joel Schlosser
5a09def9b0 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: malfet

Differential Revision: D27909732

Pulled By: jbschlosser

fbshipit-source-id: d8684b2403ab7eb336371d118799146a2520bd76
2021-04-21 13:20:11 -07:00
Natalia Gimelshein
92d24e3060 Revert D27855386: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision:
D27855386 (40483acc51)

Original commit changeset: dabd505d2a04

fbshipit-source-id: f5bf3120d87861b30a8e1bf11977ad7d27cd8500
2021-04-19 20:07:20 -07:00
Joel Schlosser
40483acc51 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: bdhirsh

Differential Revision: D27855386

Pulled By: jbschlosser

fbshipit-source-id: dabd505d2a04208e74b158570fb2859c736eea2c
2021-04-19 12:24:58 -07:00
Sam Estep
d05e7c163f Revert D27600457: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision:
D27600457 (1077f87269)

Original commit changeset: b58bfee61c39

fbshipit-source-id: 19d5bfc5133a3880383731d0332503ca1f3bce0c
2021-04-19 07:47:24 -07:00
Joel Schlosser
1077f87269 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: mrshenli

Differential Revision: D27600457

Pulled By: jbschlosser

fbshipit-source-id: b58bfee61c3917524b4622f63ef216c27a588eb1
2021-04-19 06:58:40 -07:00
Sam Estep
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).
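
A runnable sketch of the two kinds of fixes (the re-export module path is hypothetical):

```
# Before: a wildcard import (flags F403) hides which names are used.
#     from os.path import *
# After, where possible: an explicit list of imported items.
from os.path import dirname, join

# Intentional re-exports in __init__.py keep the wildcard with a suppression:
#     from .modules import *  # noqa: F403

print(join(dirname("/a/b/c.py"), "d.py"))  # /a/b/d.py
```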

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
hyperfraise
f9185973d1 [quantization] Add some support for 3d operations (#50003)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50002

The last commit adds tests for 3d conv with the `SubModelFusion` and `SubModelWithoutFusion` classes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50003

Reviewed By: mrshenli

Differential Revision: D26325953

Pulled By: jerryzh168

fbshipit-source-id: 7406dd2721c0c4df477044d1b54a6c5e128a9034
2021-03-10 16:40:35 -08:00
Jerry Zhang
52f0af03f8 [reland][quant][fix] Add bias once in conv_fused (#48593) (#48661)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48661

Previously _conv_forward would add self.bias to the result, so bias was added twice in the QAT ConvBn module.
This PR adds a bias argument to _conv_forward, and _conv_forward is called with a zero bias
in the ConvBn module.
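
A minimal sketch of the double-add bug and the fix; the shapes and the simplified forward are illustrative (the real ConvBn forward also folds batch-norm statistics):

```
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
weight = torch.randn(8, 3, 3, 3)
bias = torch.randn(8)

def _conv_forward(input, weight, bias):
    # The PR adds this explicit bias argument.
    return F.conv2d(input, weight, bias, stride=1, padding=1)

# Bug: _conv_forward already applied self.bias, and the module then added it
# again afterwards, so bias was counted twice.
# Fix: call _conv_forward with a zero bias and add the real bias exactly once.
zero_bias = torch.zeros_like(bias)
y = _conv_forward(x, weight, zero_bias) + bias.reshape(1, -1, 1, 1)
```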

fixes: https://github.com/pytorch/pytorch/issues/48514

Test Plan:
Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25249175

fbshipit-source-id: 4536c7545d3dcd7e8ea254368ffb7cf15118d78c
2020-12-02 10:17:43 -08:00
Nikita Shulga
c81f2d9a2f Revert D25222215: [quant][fix] Add bias once in conv_fused
Test Plan: revert-hammer

Differential Revision:
D25222215 (d2e429864c)

Original commit changeset: 90c0ab79835b

fbshipit-source-id: 5c8eee107309cfa99cefdf439a62de0b388f9cfb
2020-12-01 07:17:45 -08:00
Jerry Zhang
d2e429864c [quant][fix] Add bias once in conv_fused (#48593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48593

Previously _conv_forward would add self.bias to the result, so bias was added twice in the QAT ConvBn module.
This PR adds a bias argument to _conv_forward, and _conv_forward is called with a zero bias
in the ConvBn module.

fixes: https://github.com/pytorch/pytorch/issues/48514

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D25222215

fbshipit-source-id: 90c0ab79835b6d09622dcfec9de4139881a60746
2020-11-30 19:26:17 -08:00
Jerry Zhang
65e5bd23d8 [quant] Add _FusedModule type to capture all fused modules for quantization (#47484)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47484
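
A sketch of the marker-type pattern this introduces, simplified from the real definitions in torch.nn.intrinsic:

```
import torch.nn as nn

class _FusedModule(nn.Sequential):
    # Common base class: quantization passes can detect any fused module
    # with a single isinstance() check instead of listing every variant.
    pass

class ConvReLU2d(_FusedModule):
    def __init__(self, conv: nn.Conv2d, relu: nn.ReLU):
        super().__init__(conv, relu)

fused = ConvReLU2d(nn.Conv2d(3, 8, 3), nn.ReLU())
assert isinstance(fused, _FusedModule)
```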

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24774703

fbshipit-source-id: f0efc5d77035b9854ec3e31a1d34f05d5680bc22
2020-11-09 10:28:45 -08:00
Jerry Zhang
bc3151dee0 [quant] Remove unused qconfig argument in qat linear module (#45307)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45307

fixes: https://github.com/pytorch/pytorch/issues/35634

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23917339

fbshipit-source-id: 65f8844b98198bbf93547b3d71408c2a54605218
2020-09-24 22:15:16 -07:00
Gao, Xiang
37658b144b Remove useless py2 compatibility import __future__, part 1 (#43808)
Summary:
To avoid conflicts, this PR does not remove all imports. More are coming in further PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43808

Reviewed By: wanchaol

Differential Revision: D23436675

Pulled By: ailzhang

fbshipit-source-id: ccc21a1955c244f0804277e9e47e54bfd23455cd
2020-09-02 19:15:11 -07:00
Jerry Zhang
a55b7e2a6d [reland][quant][fix] Remove activation_post_process in qat modules (#42343) (#43015)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43015

Currently activation_post_process modules are inserted by default in QAT modules, which is not
friendly to automatic quantization tools; this PR removes them.
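
A hedged illustration using the current module locations (at the time of this commit these lived under torch.nn.qat and torch.quantization):

```
import torch.ao.nn.qat as nnqat
from torch.ao.quantization import default_qat_qconfig

m = nnqat.Conv2d(3, 8, 3, qconfig=default_qat_qconfig)
# After this change the QAT module only fake-quantizes its weight; output
# observation (activation_post_process) is inserted by the prepare step
# rather than living inside the module itself.
assert hasattr(m, "weight_fake_quant")
assert not hasattr(m, "activation_post_process")
```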

Test Plan:
Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23105059

fbshipit-source-id: 3439ac39e718ffb0390468163bcbffd384802b57
2020-08-13 20:44:14 -07:00
Richard Zou
607e49cc83 Revert D22856816: [quant][fix] Remove activation_post_process in qat modules
Test Plan: revert-hammer

Differential Revision:
D22856816 (8cb42fce17)

Original commit changeset: 988a43bce46a

fbshipit-source-id: eff5b9abdfc15b21c02c61eefbda38d349173436
2020-08-13 07:22:20 -07:00
Jerry Zhang
8cb42fce17 [quant][fix] Remove activation_post_process in qat modules (#42343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42343

Currently activation_post_process modules are inserted by default in QAT modules, which is not
friendly to automatic quantization tools; this PR removes them.

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D22856816

fbshipit-source-id: 988a43bce46a992b38fd0d469929f89e5b046131
2020-08-12 20:14:23 -07:00
Vasiliy Kuznetsov
b02c932fb6 qat eager: remove unneeded modules (#40396)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40396

Removes activation and normalization modules from eager mode QAT.
These were incorrectly added, but we don't actually need them.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining
```

Imported from OSS

Differential Revision: D22169768

fbshipit-source-id: b5bd753dafe92e90e226fb773eb18c6aae179703
2020-06-22 17:45:51 -07:00
Vasiliy Kuznetsov
cd0afe2b8e quantized elu: eager mode QAT handling (#40104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40104

Adds eager mode QAT handling for quantized ELU.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_activations
```

Imported from OSS

Differential Revision: D22075082

fbshipit-source-id: 90eb06e4c52ec542fda97d7ee108a38465d3e845
2020-06-21 09:40:46 -07:00
Vasiliy Kuznetsov
952deba828 layernorm: eager mode qat support (#39094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39094

Adds eager mode QAT handling for LayerNorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885260

fbshipit-source-id: 4f4c84a8bb8ba15dd78494f92569ed3a30d89169
2020-06-07 13:38:16 -07:00
Vasiliy Kuznetsov
b530176d10 instancenorm: eager mode QAT support (#39093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39093

Adds eager mode QAT support for instancenorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885264

fbshipit-source-id: 7753995eed895bad26f713a857c6b0d194ea99d9
2020-06-07 13:38:10 -07:00
Vasiliy Kuznetsov
202625ba9e groupnorm: eager mode QAT support (#39092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39092

Adds eager mode QAT support for GroupNorm.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885261

fbshipit-source-id: 0352e6a830e6384e7ad747067f8bf8ad64ab7fa8
2020-06-07 13:38:05 -07:00
Vasiliy Kuznetsov
91f1d79d1b hardswish: enable for QAT (#36604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36604

Adds the logic to wrap the HardSwish module in FakeQuant
to support QAT.

Test Plan:
Added test to cover that this happens properly.

Imported from OSS

Differential Revision: D21045322

fbshipit-source-id: 8c46559ade58a5d5c56442285842627a3143eb0f
2020-04-15 18:04:11 -07:00
Lisa Roach
2b068d10b0 Removing references to PYTHON3COMPATIMPORTS. (#35384)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35384

Removing references to PYTHON3COMPATIMPORTS, mostly suppressions, but
also removed one instance of usage in a bash script.

Fixed errors that arc lint uncovered.

Test Plan:
arc lint
Sandcastle tests

Reviewed By: zertosh

Differential Revision: D20635401

fbshipit-source-id: 74c6b5edb85a78a44f96b96f72ee75a9c2d029f1
2020-04-01 10:34:04 -07:00
Jerry Zhang
0b71e7e1fd Refactor QAT Conv module for better extensibility (#30362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30362

Right now the QAT modules (qat.ConvBn2d, qat.ConvBnReLU2d, qat.Conv2d)
do not conveniently support other dimensions of Conv; this PR refactors
these modules so that we can support Conv1d/Conv3d better

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18691152

fbshipit-source-id: 5b561e6b054eadd31b98cabdf1ac67a61ee9b805
2019-11-26 06:53:12 -08:00
なるみ
d83389d327 Ignore F401 in all __init__.py without putting noqa (#25823)
Summary:
By adding `per-file-ignores = __init__.py: F401` to `.flake8` (with `flake8>=3.7`), we can ignore F401 in all `__init__.py` files without adding `# noqa: F401` line by line.

http://flake8.pycqa.org/en/latest/user/options.html?highlight=per-file-ignores#cmdoption-flake8-per-file-ignores
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25823

Differential Revision: D17252182

Pulled By: soumith

fbshipit-source-id: 87b174075b79e4078953a7521bd1a8f82405646b
2019-10-23 15:28:13 -07:00
Zafar Takhirov
a5ac7f6387 Changing observer name
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27779

Test Plan: Imported from OSS

Differential Revision: D17886605

Pulled By: z-a-f

fbshipit-source-id: 68c50b482e65015336ff27171fd730da493525b6
2019-10-17 11:36:03 -07:00