Commit Graph

32 Commits

Author SHA1 Message Date
Vasiliy Kuznetsov
952deba828 layernorm: eager mode QAT support (#39094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39094

Adds eager mode QAT handling for LayerNorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885260

fbshipit-source-id: 4f4c84a8bb8ba15dd78494f92569ed3a30d89169
2020-06-07 13:38:16 -07:00
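For readers following along, a minimal sketch of the eager mode QAT flow this commit enables for LayerNorm (the toy model, shapes, and data below are made up for illustration; the API calls are the standard eager mode ones):
```
import torch
import torch.nn as nn

# Hypothetical toy model; QuantStub/DeQuantStub mark the quantized region.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.ln = nn.LayerNorm(8)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.ln(self.quant(x)))

m = M().train()
m.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(m, inplace=True)
m(torch.randn(2, 8))  # training steps run with fake quant inserted
torch.quantization.convert(m.eval(), inplace=True)  # swap in quantized LayerNorm
```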
Vasiliy Kuznetsov
b530176d10 instancenorm: eager mode QAT support (#39093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39093

Adds eager mode QAT support for instancenorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885264

fbshipit-source-id: 7753995eed895bad26f713a857c6b0d194ea99d9
2020-06-07 13:38:10 -07:00
Vasiliy Kuznetsov
202625ba9e groupnorm: eager mode QAT support (#39092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39092

Adds eager mode QAT support for GroupNorm.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885261

fbshipit-source-id: 0352e6a830e6384e7ad747067f8bf8ad64ab7fa8
2020-06-07 13:38:05 -07:00
Vasiliy Kuznetsov
2140874228 instancenorm: eager static quant support (#39091)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39091

Adds eager mode static quant support for instancenorm.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm
```

Imported from OSS

Differential Revision: D21885265

fbshipit-source-id: 277506faf108f3561867cd8449a2390b7a44c462
2020-06-07 13:37:59 -07:00
Vasiliy Kuznetsov
f9b675f7b6 groupnorm: eager static quant support (#39090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39090

Makes quantized GroupNorm work in eager mode post-training static quantization.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
```

Imported from OSS

Differential Revision: D21885262

fbshipit-source-id: 58b0ffb59c601fcb4c79f711c7c98a667ffc6170
2020-06-07 13:37:53 -07:00
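Together with the instancenorm commit above, this rounds out eager mode post-training static quant for the normalization modules. A minimal sketch of the flow, assuming a made-up model and calibration data:
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.gn = nn.GroupNorm(2, 4)  # affine by default, as the quantized module needs
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.gn(self.quant(x)))

m = M().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 4, 8, 8))  # calibration pass records activation ranges
torch.quantization.convert(m, inplace=True)  # swaps in the quantized GroupNorm
```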
Supriya Rao
530d48e93a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38749

Test Plan: python test/test_quantization.py TestFused

Differential Revision: D21654659

Pulled By: supriyar

fbshipit-source-id: 301be24083e794f4e71ff1d6d842e1aaefa640f0
2020-05-19 22:48:05 -07:00
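A sketch of what this adds, assuming a hypothetical model: in train mode, Conv1d + BatchNorm1d (+ ReLU) can now be fused into the intrinsic ConvBn1d/ConvBnReLU1d modules and taken through the usual QAT flow:
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(4, 4, 3)
        self.bn = nn.BatchNorm1d(4)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = M().train()
# In train mode this should produce the fused ConvBnReLU1d module, which then
# goes through prepare_qat/convert like its 2d counterpart.
fused = torch.quantization.fuse_modules(m, [['conv', 'bn', 'relu']])
```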
Natalia Gimelshein
b995540a01 Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
Test Plan: revert-hammer

Differential Revision: D21632878

Original commit changeset: 0d73398b95d7

fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
2020-05-19 15:22:16 -07:00
Supriya Rao
7d38db0f9a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38452

Test Plan:
python test/test_quantization.py TestFused

Imported from OSS

Differential Revision: D21632878

fbshipit-source-id: 0d73398b95d72a0a23b42ef36f3ede1bfcc35eda
2020-05-19 09:53:56 -07:00
Jerry Zhang
6232481cab [quant][graphmode] Add RemoveReduantDequantize pass (#38434)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38434

We insert a dequantize node for each use in order to produce quantization patterns that will
later be fused; after that, we should also remove the extra dequantize nodes produced by this operation.

Test Plan: Imported from OSS

Differential Revision: D21597834

fbshipit-source-id: 18dfb2760bbb08932aa4e1d06f96cfc5fb37ed88
2020-05-15 15:01:40 -07:00
Supriya Rao
f4605ae5c3 [quant] Fusion support for conv1d + ReLU (#38438)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38438

Fusion for the PTQ flow in eager mode; graph mode support to follow.

Test Plan:
python test/test_quantization.py TestFusion

Imported from OSS

Differential Revision: D21575920

fbshipit-source-id: 5bac6602520f42ae3f4957d1a55e6a863daa0257
2020-05-14 16:08:11 -07:00
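A short sketch of the eager mode PTQ side, with a made-up two-layer model: in eval mode, Conv1d + ReLU now fuses into a single intrinsic ConvReLU1d module:
```
import torch
import torch.nn as nn

m = nn.Sequential(nn.Conv1d(4, 4, 3), nn.ReLU()).eval()
fused = torch.quantization.fuse_modules(m, [['0', '1']])  # -> intrinsic ConvReLU1d
```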
Nikita Shulga
3e9b4332d2 Fix @skipIfNoFBGEMM for types (#38432)
Summary:
Return the type unmodified from the decorator if FBGEMM is present.

Fixes `Tried to trace <__torch__.torch.classes.rnn.CellParamsBase object at 0x55f504c56b40> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced` thrown from `TestPostTrainingDynamic.test_quantized_rnn` by preserving modules in the returned qRNNBase (i.e., by partially reverting https://github.com/pytorch/pytorch/pull/38134 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38432

Differential Revision: D21567333

Pulled By: malfet

fbshipit-source-id: 364fa2c8fc6e400b4f2e425b922a977756aec1d8
2020-05-14 08:27:29 -07:00
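A sketch of the fix described above, not the exact implementation: return the decorated object unmodified when FBGEMM is available, so class attributes and registered submodules are preserved, and only wrap with a skip when it is not:
```
import unittest
import torch

def skipIfNoFBGEMM(obj):
    # Only wrap when FBGEMM is genuinely unavailable; otherwise return the
    # type untouched so nothing about it (e.g. traced submodules) changes.
    if 'fbgemm' not in torch.backends.quantized.supported_engines:
        return unittest.skip("FBGEMM is not available")(obj)
    return obj
```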
Nikita Shulga
376c9a40dc Fix dummy typo in skipIfNoFBGEMM (#38058)
Summary:
I picked the wrong revision when landing the diff; it should have had an actual check rather than `if True`:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38058

Differential Revision: D21466152

Pulled By: malfet

fbshipit-source-id: 03fdc510562fab44b7d64a42284d4c3c1f8e940a
2020-05-07 18:03:48 -07:00
Nikita Shulga
2b41b9bceb [BE] Add @skipIfNoFBGEMM decorator (Reland) (#37894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37894

Differential Revision: D21449993

Pulled By: malfet

fbshipit-source-id: d9d355d360384cbb158f62b40dc885527f22ee05
2020-05-07 09:43:53 -07:00
Edward Yang
b8d48d3680 Revert D21406034: [pytorch][PR] [BE] Add @skipIfNoFBGEMM decorator
Test Plan: revert-hammer

Differential Revision: D21406034

Original commit changeset: 9583a8a726c2

fbshipit-source-id: ec891e5d00c78310b320f4901a261fc99fc5399b
2020-05-05 16:48:40 -07:00
Nikita Shulga
06e1b68843 [BE] Add @skipIfNoFBGEMM decorator (#37810)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37810

Differential Revision: D21406034

Pulled By: malfet

fbshipit-source-id: 9583a8a726c2e59e5173e114604e4edd979330c0
2020-05-05 14:00:52 -07:00
Supriya Rao
b33b46a950 [quant] Enable qnnpack tests for test_quantize and test_numeric_suite (#37351)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37351

Test Plan:
python test/test_quantization.py PostTrainingStaticQuant

Imported from OSS

Differential Revision: D21293704

fbshipit-source-id: 621f3ac60315b61f99b9b41da691ac3473e974cc
2020-04-29 19:28:22 -07:00
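Roughly, enabling these tests under QNNPACK comes down to switching the quantized backend before running the same quantize/convert flow:
```
import torch

if 'qnnpack' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'qnnpack'  # run the same flow against QNNPACK
```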
Jerry Zhang
facdd15cc6 [quant] Finishing refactor for quantization test files (#37366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37366

- fake quant module and observer module tests can both live in test_workflow_module.py
- added test_quantized_functional.py
- moved the tests in test_numerics.py to test_quantize.py and removed test_numerics.py

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21282198

fbshipit-source-id: 60107cee7d1ed2cd14a45650e91ec28b8a262c52
2020-04-28 21:40:57 -07:00
Haixin Liu
ca39f99d48 [Pytorch Numeric Suite] Add module level comparison (#37242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37242

Add module level comparison API.
ghstack-source-id: 102853727

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub'

Reviewed By: raghuramank100

Differential Revision: D21232277

fbshipit-source-id: de707eea101a66a37869129460274c56e4e07db2
2020-04-25 16:46:10 -07:00
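A hedged usage sketch of the module level comparison API, with the call shape inferred from the test name (the toy model and swap list are made up; the exact signature may differ). The idea is to shadow selected modules in the quantized model with their float counterparts and log both outputs for comparison:
```
import torch
import torch.nn as nn
from torch.quantization import _numeric_suite as ns

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(8, 8)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

float_model = M().eval()
float_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(float_model)
data = torch.randn(4, 8)
prepared(data)  # calibrate
q_model = torch.quantization.convert(prepared)
# Assumed call shape: shadow each quantized Linear with the float original.
ob_dict = ns.compare_model_stub(float_model, q_model, [nn.Linear], data)
```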
Alban Desmaison
35b9c89dc1 Revert D21045393: [PyTorch Numeric Suite] Add module level comparison
Test Plan: revert-hammer

Differential Revision: D21045393

Original commit changeset: 4303805f732c

fbshipit-source-id: 06d8a234eda800eb14bc3aa58ff14b0d3cf86d86
2020-04-24 07:03:04 -07:00
Haixin Liu
fba9b9a023 [PyTorch Numeric Suite] Add module level comparison (#36669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36669

Add module level comparison API.
ghstack-source-id: 102802362

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub'

Differential Revision: D21045393

fbshipit-source-id: 4303805f732cc8c8fc67ce40d9594b664507bf82
2020-04-24 00:17:22 -07:00
Nikita Shulga
3b832ee2bf Use Python3 super() throughout torch.testing. (#37024)
Summary:
Hat tip to ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37024

Differential Revision: D21173244

Pulled By: malfet

fbshipit-source-id: 7079703e28777d873f69bf9fd4dcbad8d53a2682
2020-04-22 09:00:28 -07:00
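The change in a nutshell: Python 3's zero-argument super() replaces the explicit two-argument form throughout the test utilities:
```
from unittest import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        super().setUp()  # instead of: super(MyTestCase, self).setUp()
```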
Supriya Rao
ee2a9ac56e [quant][graph] Support for quantized::mul and quantized::mul_scalar (#36818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36818

Test Plan:
python test_quantize_script.py test_quantized_mul
python test_quantize_script.py test_quantized_mul_scalar

Imported from OSS

Differential Revision: D21134438

fbshipit-source-id: 9ed5e852c5c0c6899a11e3ed36e12b5045608ea4
2020-04-20 15:40:32 -07:00
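A small sketch exercising the underlying ops these graph patterns lower to (the scales and zero points below are arbitrary):
```
import torch

a = torch.quantize_per_tensor(torch.randn(4), 0.1, 128, torch.quint8)
b = torch.quantize_per_tensor(torch.randn(4), 0.1, 128, torch.quint8)
out = torch.ops.quantized.mul(a, b, 0.2, 128)   # quantized::mul with output scale/zp
out_s = torch.ops.quantized.mul_scalar(a, 2.0)  # quantized::mul_scalar
```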
Supriya Rao
dcfc121fd7 Enable jit trace check_trace for quantized inputs (#36740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36740

Issue #23986

Test Plan:
python test/quantization/test_quantized_nn_mods.py

Imported from OSS

Differential Revision: D21077551

fbshipit-source-id: fdd15db3284975c99b3e250a568fa94c617d21eb
2020-04-16 19:06:55 -07:00
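What this enables, roughly: tracing a module on quantized example inputs without having to pass check_trace=False. A minimal sketch, assuming a bare quantized Linear:
```
import torch

qm = torch.nn.quantized.Linear(8, 8)
qx = torch.quantize_per_tensor(torch.randn(2, 8), 0.1, 128, torch.quint8)
traced = torch.jit.trace(qm, qx, check_trace=True)  # check now handles quantized inputs
```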
Vasiliy Kuznetsov
2c558dba3d quantized layer norm: add to static quant (#36690)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36690

Adds the static quantization hook for LayerNorm

Test Plan:
```
python test/quantization/test_quantized_nn_mods.py ModuleAPITest.test_layer_norm
python test/quantization/test_quantization.py EagerModePostTrainingQuantTest.test_normalization
```

Imported from OSS

Differential Revision: D21055401

fbshipit-source-id: 188329f35359576d50ed0db5fb675ce68c28bf7d
2020-04-16 18:18:02 -07:00
Vasiliy Kuznetsov
91f1d79d1b hardswish: enable for QAT (#36604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36604

Adds the logic to wrap the HardSwish module in FakeQuant
to support QAT.

Test Plan:
Added test to cover that this happens properly.

Imported from OSS

Differential Revision: D21045322

fbshipit-source-id: 8c46559ade58a5d5c56442285842627a3143eb0f
2020-04-15 18:04:11 -07:00
Vasiliy Kuznetsov
65df8b3886 hardswish: make it work in static quantization (#36545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36545

* adds a quantized nn.Module for Hardswish so we can observe activation values
* modifies the hardswish op to allow specifying scale + zero_point
* makes the Hardswish module get swapped properly in static quantization

Test Plan:
added tests and they pass for:
* the new _out flavor of hardswish
* QNNPACK changes
* static quant e2e

Imported from OSS

Differential Revision: D21045320

fbshipit-source-id: ab7e52f0f54a7d5923ab6f58197022cc28c12354
2020-04-15 18:02:35 -07:00
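A small sketch of the op level change: the quantized hardswish op takes an output scale and zero_point, so the values recorded by an observer can be applied at convert time (the numbers below are arbitrary):
```
import torch

qx = torch.quantize_per_tensor(torch.randn(4), 0.1, 128, torch.quint8)
qy = torch.ops.quantized.hardswish(qx, 0.05, 128)  # explicit output scale + zero_point
```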
Supriya Rao
6972c27d94 [quant] Enable fusion for conv modules with bias (#36173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36173

Previously we were ignoring the conv bias during training if it existed.
This PR adds the bias from the conv op during the conv+bn fusion process.

Test Plan:
python test/quantization/test_quantization.py

Imported from OSS

Differential Revision: D20921613

fbshipit-source-id: eacb2ccf9107f413ac4ef23163ba914af9b90924
2020-04-08 15:53:32 -07:00
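The arithmetic behind including the conv bias in the fold (shown eval-style as a sketch; the QAT ConvBn modules keep the terms separate but rely on the same identity): with s = gamma / sqrt(var + eps), the fused weight is w * s per output channel and the fused bias is beta + (b - mu) * s:
```
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3, bias=True)
bn = nn.BatchNorm2d(8).eval()
bn.running_mean.uniform_(-1, 1)   # make the check non-trivial
bn.running_var.uniform_(0.5, 1.5)

s = bn.weight / torch.sqrt(bn.running_var + bn.eps)
fused = nn.Conv2d(3, 8, 3, bias=True)
fused.weight.data.copy_(conv.weight * s.reshape(-1, 1, 1, 1))
fused.bias.data.copy_(bn.bias + (conv.bias - bn.running_mean) * s)

x = torch.randn(1, 3, 8, 8)
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)  # bias is accounted for
```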
Jerry Zhang
6fc2403951 [quant][graphmode] qconfig_dict support None (#35336)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35336

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D20655302

fbshipit-source-id: b453f3240ac487aa29629953b4d71274dbbc25fc
2020-03-29 12:47:47 -07:00
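The eager mode analogue of what this enables for the graph mode qconfig_dict: mapping a (sub)module to a qconfig of None marks it to be left unquantized. A sketch with a made-up model:
```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8)).eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model[1].qconfig = None  # explicitly skip the second Linear
prepared = torch.quantization.prepare(model)  # no observers attached to model[1]
```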
Lingyi Liu
fddcd72a31 Add more fusion (conv3d and batchnorm) support in the PyTorch quantization flow (#33540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33540

Differential Revision: D19994498

Pulled By: lly-zero-one

fbshipit-source-id: e5e13eab6924bd2ce1b57b16b672844b8b9638f5
2020-03-23 20:36:03 -07:00
Raghuraman Krishnamoorthi
243cc20451 Enable inplace relu fusion for training (#33105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33105

Support inplace relu for Conv+BN+Relu fusion during training.
ghstack-source-id: 97944659

Test Plan: buck test caffe2/test:quantization --  'test_fuse_module_train \(test_quantization\.FusionTest\)' --print-passing-details

Differential Revision: D19795221

fbshipit-source-id: 056dc06050d145750c4d0044c0fc1c3febcfdafc
2020-02-14 12:15:58 -08:00
James Reed
812b1ad869 [quantization] FP16 dynamic quantized Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32331

Test Plan: Imported from OSS

Differential Revision: D19441158

Pulled By: jamesr66a

fbshipit-source-id: c04247ffe707be68718c486c31bc6c6040f7dc11
2020-01-27 15:45:32 -08:00
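The user-facing entry point for this, roughly: dynamic quantization of Linear with fp16 weights instead of int8:
```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 16)).eval()
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.float16)
out = qmodel(torch.randn(2, 16))
```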
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe2/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00