Commit Graph

54 Commits

Author SHA1 Message Date
Vasiliy Kuznetsov
b02c932fb6 qat eager: remove unneeded modules (#40396)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40396

Removes activation and normalization modules from eager mode QAT.
These were added by mistake; we don't actually need them.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining
```

Imported from OSS

Differential Revision: D22169768

fbshipit-source-id: b5bd753dafe92e90e226fb773eb18c6aae179703
2020-06-22 17:45:51 -07:00
Raghuraman Krishnamoorthi
d7d75e37bb Add state dict for LSTM and RNNCell and helper functions for accessing weights and bias (#40333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40333

Add state_dict support for dynamic quantized LSTM/GRU/RNNCell.

Add helper functions get_weight and get_bias for LSTM and RNN cells.
ghstack-source-id: 106364749

(Note: this ignores all push blocking failures!)
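
For illustration, a minimal sketch of the workflow this enables (model, sizes, and file path are assumptions, not from this PR):

```
import torch
import torch.nn as nn

# Dynamically quantize an LSTM, then round-trip it through state_dict.
model = nn.Sequential(nn.LSTM(4, 8))  # wrapped so the child module gets swapped
qmodel = torch.quantization.quantize_dynamic(model, {nn.LSTM}, dtype=torch.qint8)

torch.save(qmodel.state_dict(), "qlstm.pt")

qmodel2 = torch.quantization.quantize_dynamic(
    nn.Sequential(nn.LSTM(4, 8)), {nn.LSTM}, dtype=torch.qint8)
qmodel2.load_state_dict(torch.load("qlstm.pt"))
```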

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_cell_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

Differential Revision: D22151020

fbshipit-source-id: 2eb54062f6c6a35ffe4dbe8e8cfbf7ede0e92ba1
2020-06-22 17:41:07 -07:00
Vasiliy Kuznetsov
ab8a99bd36 graph mode: add hardswish inplace handling (#40284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40284

Adds graph mode handling for inplace hardswish, and test coverage for functional hardswish.
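
As a sketch of the patterns now covered (the module is hypothetical, not from this PR):

```
import torch
import torch.nn.functional as F

class M(torch.nn.Module):
    def forward(self, x):
        x = F.hardswish(x, inplace=True)  # inplace variant, now handled in graph mode
        return F.hardswish(x)             # functional variant, now under test
```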

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_hardswish
```

Imported from OSS

Differential Revision: D22140628

fbshipit-source-id: 55a514f7dc1130d510f69ee4e611d7cb5e08d02e
2020-06-21 09:40:50 -07:00
Vasiliy Kuznetsov
cd0afe2b8e quantized elu: eager mode QAT handling (#40104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40104

Adds eager mode QAT handling for quantized ELU.
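
A minimal eager-mode QAT sketch (the model is an assumption, not from this PR):

```
import torch
import torch.nn as nn

model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Conv2d(3, 3, 1),
    nn.ELU(),  # now wrapped for fake quantization during QAT
    torch.quantization.DeQuantStub(),
)
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model.train()
torch.quantization.prepare_qat(model, inplace=True)
# ... training loop runs here with fake-quantized activations ...
qmodel = torch.quantization.convert(model.eval())
```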

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_activations
```

Imported from OSS

Differential Revision: D22075082

fbshipit-source-id: 90eb06e4c52ec542fda97d7ee108a38465d3e845
2020-06-21 09:40:46 -07:00
Vasiliy Kuznetsov
03ed802a90 quantized elu: eager mode static handling (#40103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40103

Add eager mode static quantization handling for quantized ELU.

Test Plan:
```
python test/test_quantization.py TestStaticQuantizedModule.test_elu
python test/test_quantization.py TestPostTrainingStatic.test_activations
```

Imported from OSS

Differential Revision: D22075081

fbshipit-source-id: 8a3df428be135a0565472ebd0f55fa801689bcc5
2020-06-21 09:40:44 -07:00
Haixin Liu
4cbf87dc92 [PyTorch Numeric Suite] Add support for dynamic LSTM (#40065)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40065

Add support for dynamic LSTM to all three Numeric Suite APIs: compare_weights(), compare_model_stub(), and compare_model_outputs().
ghstack-source-id: 106291782
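
A sketch of two of these APIs on a dynamically quantized LSTM (assuming the Numeric Suite lives at torch.quantization._numeric_suite; model and shapes are placeholders):

```
import torch
import torch.nn as nn
from torch.quantization import _numeric_suite as ns

float_model = nn.Sequential(nn.LSTM(4, 8))
q_model = torch.quantization.quantize_dynamic(
    float_model, {nn.LSTM}, dtype=torch.qint8)

# Compare the float weights against the dynamically quantized ones.
wt_cmp = ns.compare_weights(float_model.state_dict(), q_model.state_dict())

# Compare activations recorded at matching points during a forward pass.
out_cmp = ns.compare_model_outputs(float_model, q_model, torch.randn(5, 1, 4))
```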

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_lstm_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_lstm_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_lstm_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_submodule_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_dynamic'

Differential Revision: D22058275

fbshipit-source-id: 76cb42ce16b6b02b0b90f7582252756582660921
2020-06-20 07:00:13 -07:00
Zafar
9da277c635 [quant][graphmodel] linear_relu (#40021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40021

This replaces #36889 due to significant merge conflicts.

Test Plan: Imported from OSS

Differential Revision: D22087061

Pulled By: z-a-f

fbshipit-source-id: 6a65cdd3c0c0c957968a9d017902fb6d03b58150
2020-06-19 23:32:54 -07:00
Jerry Zhang
b2f489dc57 [quant][graphmode] Rename graph mode quantization API to quantize_jit (#40212)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40212
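
A hedged sketch of the renamed entry point (model, calibration data, and exact signature are assumptions):

```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc(x)

def calibrate(model, data):
    for x in data:
        model(x)

qconfig = torch.quantization.get_default_qconfig('fbgemm')
scripted = torch.jit.script(M().eval())
quantized = torch.quantization.quantize_jit(
    scripted, {'': qconfig}, calibrate, [[torch.randn(2, 4)]])
```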

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D22144745

fbshipit-source-id: 38a19b5afdddbbce262eea8ddf5b68458e6017b3
2020-06-19 18:13:37 -07:00
Haixin Liu
d9c804ce22 [PyTorch Numeric Suite] Add support for dynamic quantization of linear module (#39024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39024

Add support for dynamic quantization of the linear module.
ghstack-source-id: 106205450
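
A sketch of the module-level comparison this enables (model and data are placeholders):

```
import torch
import torch.nn as nn
from torch.quantization import _numeric_suite as ns

float_model = nn.Sequential(nn.Linear(4, 4))
q_model = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8)

# Shadow each dynamically quantized Linear with its float counterpart and
# record both outputs for comparison.
ob_dict = ns.compare_model_stub(float_model, q_model, [nn.Linear], torch.randn(2, 4))
```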

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_submodule_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_dynamic'

Differential Revision: D21675971

fbshipit-source-id: c9562744dc59b61cf47f2787a934e6a5a53e12fd
2020-06-19 10:58:56 -07:00
Vasiliy Kuznetsov
4ad8ebe738 quant layer/group/instance norm: make weights and biases optional (#39203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39203

Adds logic and test coverage for optional weights and biases for the quantized normalization operators. This was broken before this PR because the `TORCH_LIBRARY` registration had these as required parameters; this PR removes that requirement and cleans up the callsites.

Note: the registrations are consolidated in `native_functions.yaml` as opposed to `library.cpp`, after a discussion with ezyang.
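
A sketch of what now works (the model is an assumption, not from this PR):

```
import torch
import torch.nn as nn

# A norm layer built with affine=False has weight and bias set to None,
# which the quantized ops now accept.
model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.GroupNorm(1, 4, affine=False),
    torch.quantization.DeQuantStub(),
)
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
model(torch.randn(2, 4, 8, 8))  # calibration pass
qmodel = torch.quantization.convert(model)
```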

Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qlayer_norm
python test/test_quantization.py TestQuantizedOps.test_group_norm
python test/test_quantization.py TestQuantizedOps.test_instance_norm
python test/test_quantization.py TestStaticQuantizedModule.test_layer_norm
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_layer_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_group_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_instance_norm
```

Imported from OSS

Differential Revision: D21885259

fbshipit-source-id: 978c7b8bd6c11a03e9e5fdb68f154cb80cc43599
2020-06-18 10:19:39 -07:00
Supriya Rao
c252dddcdd [quant][graphmode] Test JIT tracing for dynamic quant cases (#40128)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40128

Reland PR

Test Plan:
python test/test_quantization.py TestQuantizeDynamicScriptJitPasses

Imported from OSS

Differential Revision: D22081258

fbshipit-source-id: a3f7e26ea02ff8946f356afa7203129c6b3d658b
2020-06-17 13:41:56 -07:00
Raghuraman Krishnamoorthi
3258cb61b1 Dynamic quantization support for LSTMCell, RNNCell and GRUCell [Remove randomness in weights] (#40102)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40102

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105997236

(Note: this ignores all push blocking failures!)
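
A sketch (sizes assumed) of quantizing all three cell types in one call:

```
import torch
import torch.nn as nn

model = nn.ModuleDict({
    'lstm_cell': nn.LSTMCell(4, 8),
    'gru_cell': nn.GRUCell(4, 8),
    'rnn_cell': nn.RNNCell(4, 8),
})
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.LSTMCell, nn.GRUCell, nn.RNNCell}, dtype=torch.qint8)
```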

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D22071017

fbshipit-source-id: 3fe1eac39db9c1e0566838eb8b969bbb1fa983c9
2020-06-16 21:29:50 -07:00
Raghuraman Krishnamoorthi
15758bca55 Refactor LSTM tests, [Remove randomness in weights] (#40101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40101

Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the LSTM module and compares it with the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability, and serialization of quantized LSTM.
ghstack-source-id: 105997268

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details

Differential Revision: D22070826

fbshipit-source-id: 46c333e19b9eab8fa5cab6f132e89b80a635791a
2020-06-16 17:24:07 -07:00
Supriya Rao
cb1a1942ee Revert D22071277: [quant][graphmode] Test JIT tracing for dynamic quant cases
Test Plan: revert-hammer

Differential Revision:
D22071277

Original commit changeset: e8aa8637e636

fbshipit-source-id: e89c3e03a7d695e1d4f5ff8d8c5172633db83984
2020-06-16 14:59:09 -07:00
Supriya Rao
fa4244d783 [quant][graphmode] Test JIT tracing for dynamic quant cases (#40040)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40040

Test Plan:
python test/test_quantization.py TestQuantizeDynamicScriptJitPasses

Imported from OSS

Differential Revision: D22071277

fbshipit-source-id: e8aa8637e6364092b6ff1c3a48dfc4551eb645ec
2020-06-16 13:16:42 -07:00
Raghuraman Krishnamoorthi
5add2e861c Revert D21628596: Refactor LSTM tests
Test Plan: revert-hammer

Differential Revision:
D21628596

Original commit changeset: 4aeda899f2e5

fbshipit-source-id: ab6544b87404863e054172aa9ec7ada51fad8e5e
2020-06-16 10:14:15 -07:00
Raghuraman Krishnamoorthi
e55e0cb1a9 Revert D20978736: Dynamic quantization support for LSTMCell, RNNCell and GRUCell
Test Plan: revert-hammer

Differential Revision:
D20978736

Original commit changeset: 8f303ba1d7f8

fbshipit-source-id: bcd300819616d6536f582fcd3c90decd543c4657
2020-06-16 10:11:32 -07:00
Raghuraman Krishnamoorthi
48db06e39a Dynamic quantization support for LSTMCell, RNNCell and GRUCell (#37159)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37159

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105946183

(Note: this ignores all push blocking failures!)

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D20978736

fbshipit-source-id: 8f303ba1d7f8e0c646ac73e862d2c1e735b7ff61
2020-06-16 09:14:59 -07:00
Jerry Zhang
144e8dc5a3 [quant][graphmode] Use quantized::batch_norm in graph mode (#39911)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39911

Test Plan: Imported from OSS

Differential Revision: D22012282

fbshipit-source-id: 98af55172cbeaa7080865d6533df21647a7cedfa
2020-06-16 00:58:11 -07:00
Raghuraman Krishnamoorthi
655f1ea176 Refactor LSTM tests (#38851)
Summary:
Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the LSTM module and compares it with the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability, and serialization of quantized LSTM.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38851

ghstack-source-id: 105945574

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details

Differential Revision: D21628596

fbshipit-source-id: 4aeda899f2e5f14bfbe3d82096cb4ce89c725fa1
2020-06-16 00:41:24 -07:00
Jerry Zhang
246d7bb41d [quant][graphmode] Quantizing traced modules (#39826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39826

Expanding operator test coverage to traced modules

Test Plan: Imported from OSS

Differential Revision: D21991266

fbshipit-source-id: 73b1d94caa6ad41bb0d6cbde7ba0de343da3e7ff
2020-06-12 00:55:11 -07:00
Supriya Rao
e1392922f2 [quant] Enable per-channel quantization for LSTM Modules (#39666)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39666
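
A sketch of selecting per-channel dynamic quantization for LSTM (model and sizes assumed):

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.LSTM(4, 8))
qmodel = torch.quantization.quantize_dynamic(
    model,
    {nn.LSTM: torch.quantization.per_channel_dynamic_qconfig},
    dtype=torch.qint8)
```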

Test Plan:
python test/test_quantization.py TestPostTrainingDynamic.test_per_channel_lstm_quantize

Imported from OSS

Differential Revision: D21977601

fbshipit-source-id: 1333259e75782e54864ab444e05397b86cd9b9aa
2020-06-10 23:19:08 -07:00
Vasiliy Kuznetsov
952deba828 layernorm: eager mode qat support (#39094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39094

Adds eager mode QAT handling for LayerNorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885260

fbshipit-source-id: 4f4c84a8bb8ba15dd78494f92569ed3a30d89169
2020-06-07 13:38:16 -07:00
Vasiliy Kuznetsov
b530176d10 instancenorm: eager mode QAT support (#39093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39093

Adds eager mode QAT support for instancenorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885264

fbshipit-source-id: 7753995eed895bad26f713a857c6b0d194ea99d9
2020-06-07 13:38:10 -07:00
Vasiliy Kuznetsov
202625ba9e groupnorm: eager mode QAT support (#39092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39092

Adds eager mode QAT support for GroupNorm.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885261

fbshipit-source-id: 0352e6a830e6384e7ad747067f8bf8ad64ab7fa8
2020-06-07 13:38:05 -07:00
Vasiliy Kuznetsov
2140874228 instancenorm: eager static quant support (#39091)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39091

Adds eager mode static quant support for instancenorm.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm
```

Imported from OSS

Differential Revision: D21885265

fbshipit-source-id: 277506faf108f3561867cd8449a2390b7a44c462
2020-06-07 13:37:59 -07:00
Vasiliy Kuznetsov
f9b675f7b6 groupnorm: eager static quant support (#39090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39090

Makes quantized GroupNorm work in eager mode post-training static quantization.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
```

Imported from OSS

Differential Revision: D21885262

fbshipit-source-id: 58b0ffb59c601fcb4c79f711c7c98a667ffc6170
2020-06-07 13:37:53 -07:00
Supriya Rao
530d48e93a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38749
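
A sketch (model assumed) of producing the new fused modules via fuse_modules:

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv1d(3, 3, 1), nn.BatchNorm1d(3), nn.ReLU())
model.train()
# Fusing Conv1d + BatchNorm1d + ReLU in train mode yields a fused
# ConvBnReLU1d-style module suitable for QAT.
fused = torch.quantization.fuse_modules(model, [['0', '1', '2']])
```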

Test Plan: python test/test_quantization.py TestFused

Differential Revision: D21654659

Pulled By: supriyar

fbshipit-source-id: 301be24083e794f4e71ff1d6d842e1aaefa640f0
2020-05-19 22:48:05 -07:00
Natalia Gimelshein
b995540a01 Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
Test Plan: revert-hammer

Differential Revision:
D21632878

Original commit changeset: 0d73398b95d7

fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
2020-05-19 15:22:16 -07:00
Supriya Rao
7d38db0f9a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38452

Test Plan:
python test/test_quantization.py TestFused

Imported from OSS

Differential Revision: D21632878

fbshipit-source-id: 0d73398b95d72a0a23b42ef36f3ede1bfcc35eda
2020-05-19 09:53:56 -07:00
Jerry Zhang
6232481cab [quant][graphmode] Add RemoveRedundantDequantize pass (#38434)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38434

We insert a dequantize node for each use in order to produce quantization patterns that will later be fused; after fusion, we should also remove the extra dequantize nodes produced by this operation.

Test Plan: Imported from OSS

Differential Revision: D21597834

fbshipit-source-id: 18dfb2760bbb08932aa4e1d06f96cfc5fb37ed88
2020-05-15 15:01:40 -07:00
Supriya Rao
f4605ae5c3 [quant] Fusion support for conv1d + ReLU (#38438)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38438

Fusion for the PTQ flow in eager mode. Graph mode support to follow.

Test Plan:
python test/test_quantization.py TestFusion

Imported from OSS

Differential Revision: D21575920

fbshipit-source-id: 5bac6602520f42ae3f4957d1a55e6a863daa0257
2020-05-14 16:08:11 -07:00
Nikita Shulga
3e9b4332d2 Fix @skipIfNoFBGEMM for types (#38432)
Summary:
Return the unmodified type from the decorator if FBGEMM is present.

Fix `Tried to trace <__torch__.torch.classes.rnn.CellParamsBase object at 0x55f504c56b40> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced` thrown from `TestPostTrainingDynamic.test_quantized_rnn` by preserving modules in the returned qRNNBase (i.e., by partially reverting https://github.com/pytorch/pytorch/pull/38134).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38432
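
A hedged sketch of the idea (not the literal implementation):

```
import unittest
import torch

def skipIfNoFBGEMM(obj):
    # Return the decorated class/function unmodified when FBGEMM is
    # available, so its type identity and submodules are preserved;
    # otherwise skip the test.
    if 'fbgemm' not in torch.backends.quantized.supported_engines:
        return unittest.skip("Quantized operations require FBGEMM.")(obj)
    return obj
```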

Differential Revision: D21567333

Pulled By: malfet

fbshipit-source-id: 364fa2c8fc6e400b4f2e425b922a977756aec1d8
2020-05-14 08:27:29 -07:00
Nikita Shulga
376c9a40dc Fix dummy typo in skipIfNoFBGEMM (#38058)
Summary:
I picked the wrong revision when landing the diff; it should have had an actual check rather than `if True`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38058

Differential Revision: D21466152

Pulled By: malfet

fbshipit-source-id: 03fdc510562fab44b7d64a42284d4c3c1f8e940a
2020-05-07 18:03:48 -07:00
Nikita Shulga
2b41b9bceb [BE] Add @skipIfNoFBGEMM decorator (Reland) (#37894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37894

Differential Revision: D21449993

Pulled By: malfet

fbshipit-source-id: d9d355d360384cbb158f62b40dc885527f22ee05
2020-05-07 09:43:53 -07:00
Edward Yang
b8d48d3680 Revert D21406034: [pytorch][PR] [BE] Add @skipIfNoFBGEMM decorator
Test Plan: revert-hammer

Differential Revision:
D21406034

Original commit changeset: 9583a8a726c2

fbshipit-source-id: ec891e5d00c78310b320f4901a261fc99fc5399b
2020-05-05 16:48:40 -07:00
Nikita Shulga
06e1b68843 [BE] Add @skipIfNoFBGEMM decorator (#37810)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37810

Differential Revision: D21406034

Pulled By: malfet

fbshipit-source-id: 9583a8a726c2e59e5173e114604e4edd979330c0
2020-05-05 14:00:52 -07:00
Supriya Rao
b33b46a950 [quant] Enable qnnpack tests for test_quantize and test_numeric_suite (#37351)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37351

Test Plan:
python test/test_quantization.py PostTrainingStaticQuant

Imported from OSS

Differential Revision: D21293704

fbshipit-source-id: 621f3ac60315b61f99b9b41da691ac3473e974cc
2020-04-29 19:28:22 -07:00
Jerry Zhang
facdd15cc6 [quant] Finishing refactor for quantization test files (#37366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37366

- we can now put both fake quant module and observer module tests in test_workflow_module.py
- added test_quantized_functional.py
- moved the tests in test_numerics.py to test_quantize.py and removed test_numerics.py

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21282198

fbshipit-source-id: 60107cee7d1ed2cd14a45650e91ec28b8a262c52
2020-04-28 21:40:57 -07:00
Haixin Liu
ca39f99d48 [Pytorch Numeric Suite] Add module level comparison (#37242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37242

Add module level comparison API.
ghstack-source-id: 102853727

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub'

Reviewed By: raghuramank100

Differential Revision: D21232277

fbshipit-source-id: de707eea101a66a37869129460274c56e4e07db2
2020-04-25 16:46:10 -07:00
Alban Desmaison
35b9c89dc1 Revert D21045393: [PyTorch Numeric Suite] Add module level comparison
Test Plan: revert-hammer

Differential Revision:
D21045393

Original commit changeset: 4303805f732c

fbshipit-source-id: 06d8a234eda800eb14bc3aa58ff14b0d3cf86d86
2020-04-24 07:03:04 -07:00
Haixin Liu
fba9b9a023 [PyTorch Numeric Suite] Add module level comparison (#36669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36669

Add module level comparison API.
ghstack-source-id: 102802362

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub'

Differential Revision: D21045393

fbshipit-source-id: 4303805f732cc8c8fc67ce40d9594b664507bf82
2020-04-24 00:17:22 -07:00
Nikita Shulga
3b832ee2bf Use Python3 super() throughout torch.testing. (#37024)
Summary:
Hat tip to ezyang.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37024

Differential Revision: D21173244

Pulled By: malfet

fbshipit-source-id: 7079703e28777d873f69bf9fd4dcbad8d53a2682
2020-04-22 09:00:28 -07:00
Supriya Rao
ee2a9ac56e [quant][graph] Support for quantized::mul and quantized::mul_scalar (#36818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36818
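
A sketch of the new ops at the op level (output scale/zero_point chosen arbitrarily; exact signatures assumed):

```
import torch

a = torch.quantize_per_tensor(torch.randn(4), 0.1, 0, torch.quint8)
b = torch.quantize_per_tensor(torch.randn(4), 0.1, 0, torch.quint8)
out = torch.ops.quantized.mul(a, b, 0.2, 0)     # quantized::mul
out_s = torch.ops.quantized.mul_scalar(a, 2.0)  # quantized::mul_scalar
```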

Test Plan:
python test_quantize_script.py test_quantized_mul
python test_quantize_script.py test_quantized_mul_scalar

Imported from OSS

Differential Revision: D21134438

fbshipit-source-id: 9ed5e852c5c0c6899a11e3ed36e12b5045608ea4
2020-04-20 15:40:32 -07:00
Supriya Rao
dcfc121fd7 Enable jit trace check_trace for quantized inputs (#36740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36740

Issue #23986
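
A sketch of what this enables (the module is a stand-in): tracing with a quantized input no longer requires check_trace=False.

```
import torch
import torch.nn.quantized as nnq

qmodel = nnq.DeQuantize()  # toy module that accepts a quantized tensor
x = torch.quantize_per_tensor(
    torch.randn(1, 3, 4, 4), scale=0.1, zero_point=0, dtype=torch.quint8)
traced = torch.jit.trace(qmodel, x, check_trace=True)
```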

Test Plan:
python test/quantization/test_quantized_nn_mods.py

Imported from OSS

Differential Revision: D21077551

fbshipit-source-id: fdd15db3284975c99b3e250a568fa94c617d21eb
2020-04-16 19:06:55 -07:00
Vasiliy Kuznetsov
2c558dba3d quantized layer norm: add to static quant (#36690)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36690

Adds the static quantization hook for LayerNorm

Test Plan:
```
python test/quantization/test_quantized_nn_mods.py ModuleAPITest.test_layer_norm
python test/quantization/test_quantization.py EagerModePostTrainingQuantTest.test_normalization
```

Imported from OSS

Differential Revision: D21055401

fbshipit-source-id: 188329f35359576d50ed0db5fb675ce68c28bf7d
2020-04-16 18:18:02 -07:00
Vasiliy Kuznetsov
91f1d79d1b hardswish: enable for QAT (#36604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36604

Adds the logic to wrap the HardSwish module in FakeQuant to support QAT.

Test Plan:
Added test to cover that this happens properly.

Imported from OSS

Differential Revision: D21045322

fbshipit-source-id: 8c46559ade58a5d5c56442285842627a3143eb0f
2020-04-15 18:04:11 -07:00
Vasiliy Kuznetsov
65df8b3886 hardswish: make it work in static quantization (#36545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36545

* adds a quantized nn.Module for Hardswish so we can observe activation values
* modifies the hardswish op to allow specifying scale + zero_point
* makes the Hardswish module get properly swapped during static quantization

Test Plan:
added tests and they pass for:
* the new _out flavor of hardswish
* QNNPACK changes
* static quant e2e

Imported from OSS

Differential Revision: D21045320

fbshipit-source-id: ab7e52f0f54a7d5923ab6f58197022cc28c12354
2020-04-15 18:02:35 -07:00
Supriya Rao
6972c27d94 [quant] Enable fusion for conv modules with bias (#36173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36173

Previously, we were ignoring the conv bias during training if it existed. This PR adds the bias from the conv op during the conv+bn fusion process.
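
As a sketch of the folding arithmetic involved (names are illustrative, not the PR's code):

```
import torch

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / torch.sqrt(var + eps)
    w_fused = w * scale.reshape(-1, 1, 1, 1)  # scale each output channel
    b_fused = (b - mean) * scale + beta       # the conv bias is no longer dropped
    return w_fused, b_fused
```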

Test Plan:
python test/quantization/test_quantization.py

Imported from OSS

Differential Revision: D20921613

fbshipit-source-id: eacb2ccf9107f413ac4ef23163ba914af9b90924
2020-04-08 15:53:32 -07:00
Jerry Zhang
6fc2403951 [quant][graphmode] qconfig_dict support None (#35336)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35336
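
A sketch of the resulting usage (details assumed): mapping a submodule name to None leaves it unquantized.

```
import torch

qconfig = torch.quantization.get_default_qconfig('fbgemm')
qconfig_dict = {
    '': qconfig,  # default: quantize everything
    'sub': None,  # but skip the submodule named 'sub'
}
```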

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D20655302

fbshipit-source-id: b453f3240ac487aa29629953b4d71274dbbc25fc
2020-03-29 12:47:47 -07:00