Commit Graph

25 Commits

Author SHA1 Message Date
Jerry Zhang
7ddf212f33 [quant][fx] Fully align convert with the reference model design and simplify the implementation (#73863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73863

This PR fully aligns the convert function with the design: https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
and simplifies the implementation of the convert function by always producing a reference quantized model (with reference patterns) first,
and then lowering the model to a quantized model that is runnable with the PyTorch native backends (fbgemm/qnnpack).

This PR makes convert.py much easier to understand than the previous implementation, and it allows us to remove the majority of the code
in quantization_patterns.py as well (in follow-up PRs).
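The two-step flow described above (always produce a reference quantized model first, then lower recognized reference patterns to native quantized ops) can be sketched on a toy op list. All names below (`to_reference`, `lower`, the lowering table entries) are illustrative stand-ins, not the actual convert.py API:

```python
# Toy sketch of the two-step convert flow: first rewrite float ops into
# reference patterns (dequantize -> float op -> quantize), then lower any
# recognized reference pattern to a single native quantized op.
# All names here are illustrative, not the real torch.ao.quantization API.

def to_reference(ops):
    """Wrap each float op in a dequantize/quantize pair (reference pattern)."""
    ref = []
    for op in ops:
        ref += ["dequantize", op, "quantize"]
    return ref

# Patterns a native backend (e.g. fbgemm/qnnpack) can execute directly.
LOWERING_TABLE = {
    ("dequantize", "linear", "quantize"): "quantized_linear",
    ("dequantize", "conv2d", "quantize"): "quantized_conv2d",
}

def lower(ref_ops):
    """Replace each known 3-op reference pattern with its fused quantized op."""
    out, i = [], 0
    while i < len(ref_ops):
        window = tuple(ref_ops[i:i + 3])
        if window in LOWERING_TABLE:
            out.append(LOWERING_TABLE[window])
            i += 3
        else:
            out.append(ref_ops[i])
            i += 1
    return out

model = ["linear", "conv2d"]
reference = to_reference(model)
lowered = lower(reference)
print(lowered)  # ['quantized_linear', 'quantized_conv2d']
```

Unrecognized patterns simply fall through unchanged, which is what makes the reference model a useful common starting point for different backends.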

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```
and other internal/OSS regression tests

Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34778506

fbshipit-source-id: 0678b66addf736039a8749b352f6f569caca962b
(cherry picked from commit 33ec9caf23f3ab373d827117efbd9db0668b2437)
2022-03-11 17:11:30 +00:00
Jerry Zhang
3d377fb4a3 [quant][fx][improvement] Add lowering support for BatchNormQuantizeHandler (#72490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72490

This is an effort to move the current implementation towards the reference quantized model design:
https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
so that we use the reference model in the default fbgemm/qnnpack path.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps.test_qbatch_norm

Imported from OSS

Reviewed By: vkuzo, andrewor14

Differential Revision: D34062365

fbshipit-source-id: ed015c61f5b969554a6477f92cf6be2358cb558c
(cherry picked from commit 9498421ddd)
2022-02-15 21:34:17 +00:00
Jerry Zhang
41782a4542 [quant][core][devs] Refactor the implementation for quantized batchnorm module (#72489)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72489

To reduce duplicated code.

Test Plan:
python test/test_quantization.py TestStaticQuantizedModule
python test/test_quantization.py TestQuantizeFxOps.test_qbatch_norm

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D34062367

fbshipit-source-id: cee14051bbe5dd2597e0eb6bf2d38993be9e51b3
(cherry picked from commit d9ca5cdbb1)
2022-02-15 18:09:05 +00:00
Vasiliy Kuznetsov
4916a21f10 quantization: fix scale+zp serialization of quantized BatchNorm{2|3}d (#70432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70432

Scale and zero_point need to be buffers for serialization to work
on them properly, so this PR moves them to buffers.  This is BC-breaking,
but the "before" state was completely broken (scale and zero_point were not
serialized at all), so there is no value in trying to preserve compatibility with it.
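The buffer-vs-attribute distinction matters because only registered buffers (and parameters) end up in a module's state_dict. A minimal pure-Python sketch of that behavior — this toy `Module` only mimics `torch.nn.Module`, it is not torch code:

```python
# Sketch of why scale/zero_point must be buffers: only registered buffers
# make it into state_dict(); plain Python attributes are silently skipped
# on save/load. This toy Module mimics torch.nn.Module's behavior.

class Module:
    def __init__(self):
        self._buffers = {}

    def register_buffer(self, name, value):
        self._buffers[name] = value

    def state_dict(self):
        return dict(self._buffers)

class QuantizedBatchNorm(Module):
    def __init__(self, scale, zero_point, as_buffers):
        super().__init__()
        if as_buffers:
            # after the fix: serialized with the model
            self.register_buffer("scale", scale)
            self.register_buffer("zero_point", zero_point)
        else:
            # before the fix: lost on save/load
            self.scale = scale
            self.zero_point = zero_point

broken = QuantizedBatchNorm(0.25, 128, as_buffers=False)
fixed = QuantizedBatchNorm(0.25, 128, as_buffers=True)
print(broken.state_dict())  # {} -- scale/zero_point silently dropped
print(fixed.state_dict())   # {'scale': 0.25, 'zero_point': 128}
```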

Test Plan:
```
python test/test_quantization.py TestStaticQuantizedModule.test_batch_norm2d_serialization
python test/test_quantization.py TestStaticQuantizedModule.test_batch_norm3d_serialization
```

Imported from OSS

Differential Revision: D33330022

Reviewed By: jerryzh168

Pulled By: vkuzo

fbshipit-source-id: 673c61f1a9f8f949fd9e6d09a4dbd9e5c9d5fd04
2022-01-06 08:26:20 -08:00
Joel Schlosser
febff45900 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: albanD

Differential Revision: D27939544

Pulled By: jbschlosser

fbshipit-source-id: 4bf517e5f74f093e27ca38a85e732da65e44d805
2021-04-22 16:16:53 -07:00
Joel Schlosser
12b2bc94d7 Revert D27909732: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision:
D27909732 (5a09def9b0)

Original commit changeset: d8684b2403ab

fbshipit-source-id: d00d69fae4fa4ed58d9e97e70b27a06a0dcb39e4
2021-04-21 13:44:03 -07:00
Joel Schlosser
5a09def9b0 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: malfet

Differential Revision: D27909732

Pulled By: jbschlosser

fbshipit-source-id: d8684b2403ab7eb336371d118799146a2520bd76
2021-04-21 13:20:11 -07:00
Natalia Gimelshein
92d24e3060 Revert D27855386: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision:
D27855386 (40483acc51)

Original commit changeset: dabd505d2a04

fbshipit-source-id: f5bf3120d87861b30a8e1bf11977ad7d27cd8500
2021-04-19 20:07:20 -07:00
Joel Schlosser
40483acc51 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: bdhirsh

Differential Revision: D27855386

Pulled By: jbschlosser

fbshipit-source-id: dabd505d2a04208e74b158570fb2859c736eea2c
2021-04-19 12:24:58 -07:00
Sam Estep
d05e7c163f Revert D27600457: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer

Differential Revision:
D27600457 (1077f87269)

Original commit changeset: b58bfee61c39

fbshipit-source-id: 19d5bfc5133a3880383731d0332503ca1f3bce0c
2021-04-19 07:47:24 -07:00
Joel Schlosser
1077f87269 Support factory kwargs in torch.nn modules (#54508)
Summary:
Continuation of https://github.com/pytorch/pytorch/pull/53144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508

Reviewed By: mrshenli

Differential Revision: D27600457

Pulled By: jbschlosser

fbshipit-source-id: b58bfee61c3917524b4622f63ef216c27a588eb1
2021-04-19 06:58:40 -07:00
Jerry Zhang
8aaca4b46a [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48038

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.
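One way to see why a single ReLU can serve both representations: for an affine-quantized value q = round(x / scale) + zero_point, clamping the real value at 0.0 corresponds exactly to clamping the integer value at zero_point. A small sketch of that equivalence (illustrative scalar math, not the actual quantized kernel):

```python
# Sketch: ReLU commutes with affine quantization, so one module can serve
# both float and quantized inputs. Clamping the real value at 0.0 is the
# same as clamping the integer representation at zero_point.

def quantize(x, scale, zero_point):
    return round(x / scale) + zero_point

def relu_float(x):
    return max(x, 0.0)

def relu_quantized(q, zero_point):
    # operates directly on the integer representation
    return max(q, zero_point)

scale, zp = 0.1, 10
for x in (-1.2, -0.05, 0.0, 0.7):
    q = quantize(x, scale, zp)
    # relu-then-quantize equals quantize-then-integer-relu
    assert quantize(relu_float(x), scale, zp) == relu_quantized(q, zp)
```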

Test Plan:
Imported from OSS

Reviewed By: vkuzo

Differential Revision: D25000462

fbshipit-source-id: e3609a3ae4a3476a42f61276619033054194a0d2
2020-11-17 09:52:21 -08:00
Vasiliy Kuznetsov
4779553921 Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47949

This reverts commit 1478e5ec2a.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24966363

Pulled By: vkuzo

fbshipit-source-id: ca1126f699eef84027a15df35962728296c8a790
2020-11-14 08:40:30 -08:00
Jerry Zhang
1478e5ec2a [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24747035

fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769
2020-11-12 10:56:30 -08:00
Gao, Xiang
37658b144b Remove useless py2 compatibility import __future__, part 1 (#43808)
Summary:
To avoid conflicts, this PR does not remove all imports. More are coming in further PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43808

Reviewed By: wanchaol

Differential Revision: D23436675

Pulled By: ailzhang

fbshipit-source-id: ccc21a1955c244f0804277e9e47e54bfd23455cd
2020-09-02 19:15:11 -07:00
Jerry Zhang
109ea59afc [quant][graphmode][fx] Add support for batchnorm relu (#43335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43335

Porting op tests from test_quantize_jit.py

Test Plan:
TestQuantizeFxOps

Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23243563

fbshipit-source-id: 3c562f519b90e0157761a00c89eca63af8b909f2
2020-08-21 14:32:51 -07:00
Vasiliy Kuznetsov
d71ec51c0e quant docs: add and clean up BatchNorm{n}d (#40346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40346

Cleans up docstrings for quantized BatchNorm and adds to quantization docs

Test Plan: * build on Mac OS and inspect

Differential Revision: D22152633

Pulled By: vkuzo

fbshipit-source-id: e0bf02194158231e0205b5b2df7f6f1ffc3c4d65
2020-06-23 09:02:41 -07:00
Edward Yang
4fef3763dd Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
Summary:
Original PR: https://github.com/pytorch/pytorch/pull/37419

cc mattip suo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37778

Differential Revision: D21385774

Pulled By: ezyang

fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be
2020-05-04 14:32:35 -07:00
Michael Suo
20f7e62b1d Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings
Test Plan: revert-hammer

Differential Revision:
D21337640

Original commit changeset: d4ad198780c3

fbshipit-source-id: fa9ba6ac542173a50bdb45bfa12f3fec0ed704fb
2020-05-04 10:57:55 -07:00
mattip
f10fbcc820 Split up documentation into subpages and clean up some warnings (#37419)
Summary:
xref gh-32838, gh-34032

This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which will build out `autofunction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents for the actual single-class or single-function documentation pages.

Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is adding names to `__all__` when adding them to `globals()` in `torch.__init__.py`.
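The `autosummary` setup described above typically amounts to enabling the extension and automatic stub generation in the docs' `conf.py`. A minimal generic sketch, assuming a standard Sphinx project rather than PyTorch's actual configuration:

```python
# conf.py -- minimal Sphinx setup for autosummary-generated stub pages.
# This is a generic sketch, not PyTorch's actual docs configuration.
extensions = [
    "sphinx.ext.autodoc",      # pulls docstrings into the docs
    "sphinx.ext.autosummary",  # builds summary tables + per-object stub pages
]
# Generate autofunction/autoclass stub files automatically at build time,
# so module pages act as tables of contents linking to per-object pages.
autosummary_generate = True
```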

I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419

Differential Revision: D21337640

Pulled By: ezyang

fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
2020-05-04 09:39:22 -07:00
Supriya Rao
4ebb1278e0 [quant] Update qbatch_norm name to qbatch_norm2d (#36494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36494

Make the name consistent with the op, since we have both batch_norm2d and batch_norm3d ops.

Test Plan:
python test/quantization/test_quantized.py test_batch_norm2d

Imported from OSS

Differential Revision: D21008831

fbshipit-source-id: f81ca71a331d5620fd6a3f6175020a30f2e2566b
2020-04-14 10:04:27 -07:00
Lingyi Liu
fddcd72a31 Add more fusion support (conv3d and batchnorm) in the pytorch quantization flow (#33540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33540
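The conv+batchnorm fusion referenced in the title folds the BN affine transform into the conv weights and bias. A minimal scalar sketch of the standard folding formula (illustrative math only, not the fusion code in this PR):

```python
import math

# Scalar sketch of conv+batchnorm folding: for y = bn(conv(x)), where
# bn(z) = gamma * (z - mean) / sqrt(var + eps) + beta, the BN can be
# absorbed into the conv as w' = w * k and b' = (b - mean) * k + beta,
# with k = gamma / sqrt(var + eps).

def bn(z, gamma, beta, mean, var, eps=1e-5):
    return gamma * (z - mean) / math.sqrt(var + eps) + beta

def fold(w, b, gamma, beta, mean, var, eps=1e-5):
    k = gamma / math.sqrt(var + eps)
    return w * k, (b - mean) * k + beta

w, b = 0.5, 0.1                      # conv weight/bias (scalar stand-ins)
gamma, beta, mean, var = 1.2, -0.3, 0.05, 0.9
x = 2.0

unfused = bn(w * x + b, gamma, beta, mean, var)
w_f, b_f = fold(w, b, gamma, beta, mean, var)
fused = w_f * x + b_f
assert abs(unfused - fused) < 1e-9   # folded conv matches conv-then-bn
```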

Differential Revision: D19994498

Pulled By: lly-zero-one

fbshipit-source-id: e5e13eab6924bd2ce1b57b16b672844b8b9638f5
2020-03-23 20:36:03 -07:00
Lingyi Liu
68758b2fa0 Add the quantized batch_norm3d and also batch_norm3d fused with relu operators (#34702)
Summary:
As titled; this is for bringing up the quantized video model. Will add the batch_norm_relu test in another PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34702

Differential Revision: D20436092

Pulled By: lly-zero-one

fbshipit-source-id: 116bd306f7880bfd763d8575654fbd6c92818338
2020-03-13 20:30:28 -07:00
Lingyi Liu
f70945b1c3 fix the quantized batchnorm2d (#34579)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34579

Differential Revision: D20382783

Pulled By: lly-zero-one

fbshipit-source-id: dadfc4974cb4c808f1eedf8cc4ec52ec8d3ea1b0
2020-03-12 00:48:40 -07:00
Supriya Rao
2e88d3d703 [quant] Add Quantized BatchNorm2d module (#33109)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33109

Test Plan:
python test/test_quantized_nn_mods.py ModuleAPITest.test_batch_norm

Imported from OSS

Differential Revision: D19861926

fbshipit-source-id: 67315e49b4b3577b965d422ca707d927d977feeb
2020-02-13 12:15:43 -08:00