Commit Graph

44 Commits

Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
emil
0c77bd7c0b Quantization: preserving pre and post forward hooks (#37233)
Summary:
1. During convert(), preserve the module's **pre and post forward** hooks
2. During fusion, preserve only the module's **pre forward** hooks (because after fusion the output is no longer the same); a sketch follows this list
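A minimal sketch (hypothetical toy model and hook names, not code from this PR) of the eager-mode flow this change affects: hooks registered on a float module are expected to still fire after convert().

```
import torch
import torch.nn as nn
import torch.quantization as tq

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 3, 1)
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

def pre_hook(module, inputs):           # runs before the module's forward
    print("pre-forward hook fired")

def post_hook(module, inputs, output):  # runs after the module's forward
    print("post-forward hook fired")

m = M().eval()
m.qconfig = tq.default_qconfig
m.conv.register_forward_pre_hook(pre_hook)
m.conv.register_forward_hook(post_hook)

tq.prepare(m, inplace=True)
m(torch.randn(1, 3, 4, 4))   # calibration pass
tq.convert(m, inplace=True)  # per this PR, both hooks should survive convert()
m(torch.randn(1, 3, 4, 4))
```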

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37233

Differential Revision: D22425141

Pulled By: jerryzh168

fbshipit-source-id: e69b81821d507dcd110d2ff3594ba94b9593c8da
2020-07-13 12:41:24 -07:00
Jerry Zhang
c1fa74b2d7 [quant][refactor] test_only_eval_fn (#41078)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41078

Test Plan: Imported from OSS

Differential Revision: D22420699

fbshipit-source-id: cf105cd41d83036df65c6bb3147cc14aaf755897
2020-07-09 12:34:05 -07:00
Vasiliy Kuznetsov
b02c932fb6 qat eager: remove unneeded modules (#40396)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40396

Removes activation and normalization modules from eager mode QAT.
These were added incorrectly; we don't actually need them.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining
```

Imported from OSS

Differential Revision: D22169768

fbshipit-source-id: b5bd753dafe92e90e226fb773eb18c6aae179703
2020-06-22 17:45:51 -07:00
Jerry Zhang
9f9e7c1d71 [quant][refactor] Tests for torch.jit.quantized (#40330)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40330

Test Plan: Imported from OSS

Differential Revision: D22149707

fbshipit-source-id: 44e7545bf9277d9245b5e9c2d9461f664fff0426
2020-06-22 10:41:31 -07:00
Vasiliy Kuznetsov
cd0afe2b8e quantized elu: eager mode QAT handling (#40104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40104

Adds eager mode QAT handling for quantized ELU.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_activations
```

Imported from OSS

Differential Revision: D22075082

fbshipit-source-id: 90eb06e4c52ec542fda97d7ee108a38465d3e845
2020-06-21 09:40:46 -07:00
Vasiliy Kuznetsov
03ed802a90 quantized elu: eager mode static handling (#40103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40103

Add eager mode static quantization handling for quantized ELU.

Test Plan:
```
python test/test_quantization.py TestStaticQuantizedModule.test_elu
python test/test_quantization.py TestPostTrainingStatic.test_activations
```

Imported from OSS

Differential Revision: D22075081

fbshipit-source-id: 8a3df428be135a0565472ebd0f55fa801689bcc5
2020-06-21 09:40:44 -07:00
Zafar
9da277c635 [quant][graphmodel] linear_relu (#40021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40021

This replaces #36889 due to significant merge conflicts

Test Plan: Imported from OSS

Differential Revision: D22087061

Pulled By: z-a-f

fbshipit-source-id: 6a65cdd3c0c0c957968a9d017902fb6d03b58150
2020-06-19 23:32:54 -07:00
Edmund Williams Jr
465138ec39 refactoring TestQuantizeScript (#39677)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39677

Test Plan:
Moved a test class suite between files; the intent was identical functionality (a simple code refactor), so I verified that the test output was the same before and after the refactor.
The image below shows the output of TestGraphModePostTrainingStatic before the refactor:

{F239676498}

This image shows the output of TestQuantizeScript (the renamed version, which lives in test_quantize_script.py instead of test_quantize.py):

{F239676509}

Differential Revision: D21940638

Pulled By: edmundw314

fbshipit-source-id: 54160a5151aadf3a34bdac2bcaeb52904e6653ed
2020-06-19 11:47:00 -07:00
Raghuraman Krishnamoorthi
3258cb61b1 Dynamic quantization support for LSTMCell, RNNCell and GRUCell [Remove randomness in weights] (#40102)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40102

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105997236

(Note: this ignores all push blocking failures!)
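A minimal sketch (hypothetical model, not code from this PR) of dynamically quantizing these cell types with the eager-mode API:

```
import torch
import torch.nn as nn

class CellModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm_cell = nn.LSTMCell(16, 32)
        self.gru_cell = nn.GRUCell(16, 32)
        self.rnn_cell = nn.RNNCell(16, 32)

    def forward(self, x, hc, h_gru, h_rnn):
        h, c = self.lstm_cell(x, hc)
        return (h, c), self.gru_cell(x, h_gru), self.rnn_cell(x, h_rnn)

model = CellModel().eval()
# Swap the float cells for dynamically quantized versions
# (int8 weights, activations quantized on the fly).
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.LSTMCell, nn.GRUCell, nn.RNNCell}, dtype=torch.qint8
)
x = torch.randn(4, 16)
zeros = torch.zeros(4, 32)
out = qmodel(x, (zeros, zeros), zeros, zeros)
```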

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D22071017

fbshipit-source-id: 3fe1eac39db9c1e0566838eb8b969bbb1fa983c9
2020-06-16 21:29:50 -07:00
Raghuraman Krishnamoorthi
15758bca55 Refactor LSTM tests, [Remove randomness in weights] (#40101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40101

Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the LSTM module and compares it with the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability, and serialization of the quantized LSTM.
ghstack-source-id: 105997268

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details

Differential Revision: D22070826

fbshipit-source-id: 46c333e19b9eab8fa5cab6f132e89b80a635791a
2020-06-16 17:24:07 -07:00
Raghuraman Krishnamoorthi
5add2e861c Revert D21628596: Refactor LSTM tests
Test Plan: revert-hammer

Differential Revision:
D21628596

Original commit changeset: 4aeda899f2e5

fbshipit-source-id: ab6544b87404863e054172aa9ec7ada51fad8e5e
2020-06-16 10:14:15 -07:00
Raghuraman Krishnamoorthi
e55e0cb1a9 Revert D20978736: Dynamic quantization support for LSTMCell, RNNCell and GRUCell
Test Plan: revert-hammer

Differential Revision:
D20978736

Original commit changeset: 8f303ba1d7f8

fbshipit-source-id: bcd300819616d6536f582fcd3c90decd543c4657
2020-06-16 10:11:32 -07:00
Raghuraman Krishnamoorthi
48db06e39a Dynamic quantization support for LSTMCell, RNNCell and GRUCell (#37159)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37159

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105946183

(Note: this ignores all push blocking failures!)

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D20978736

fbshipit-source-id: 8f303ba1d7f8e0c646ac73e862d2c1e735b7ff61
2020-06-16 09:14:59 -07:00
Raghuraman Krishnamoorthi
655f1ea176 Refactor LSTM tests (#38851)
Summary:
Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the LSTM module and compares it with the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability, and serialization of the quantized LSTM.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38851

ghstack-source-id: 105945574

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details

Differential Revision: D21628596

fbshipit-source-id: 4aeda899f2e5f14bfbe3d82096cb4ce89c725fa1
2020-06-16 00:41:24 -07:00
Supriya Rao
727e77a809 [quant] Enable reduce_range for graphmode (#39874)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39874

When the fbgemm backend is set, we make sure reduce_range is set to true to avoid overflow in the operator.
Also adds a test for per-channel quantization with graph mode and compares numerics with eager mode.
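A small sketch (illustrative only; the observer and engine names are the standard eager-mode ones, not code from this PR) of what reduce_range means for an observer when the fbgemm engine is active:

```
import torch
from torch.quantization.observer import MinMaxObserver

if 'fbgemm' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'fbgemm'

# reduce_range=True restricts the quantized range to 7 bits, which avoids
# potential overflow in fbgemm's int16 accumulation path.
obs = MinMaxObserver(reduce_range=True)
obs(torch.randn(1000))
print(obs.calculate_qparams())   # (scale, zero_point)
```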

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D22011205

fbshipit-source-id: 1c7c9b7ab0d84200e3d8d85c34978554c30c0169
2020-06-12 16:25:58 -07:00
Supriya Rao
e1392922f2 [quant] Enable per-channel quantization for LSTM Modules (#39666)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39666
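A hedged sketch of the API surface this enables (hypothetical toy model; assumes the per_channel_dynamic_qconfig exported by torch.quantization):

```
import torch
import torch.nn as nn
from torch.quantization import quantize_dynamic, per_channel_dynamic_qconfig

class SeqModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(16, 32)

    def forward(self, x, hidden=None):
        return self.lstm(x, hidden)

model = SeqModel().eval()
# Map nn.LSTM to a dynamic qconfig whose weight observer is per-channel.
qmodel = quantize_dynamic(model, qconfig_spec={nn.LSTM: per_channel_dynamic_qconfig})
x = torch.randn(5, 3, 16)   # (seq_len, batch, input_size)
out, (h, c) = qmodel(x)
```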

Test Plan:
python test/test_quantization.py TestPostTrainingDynamic.test_per_channel_lstm_quantize

Imported from OSS

Differential Revision: D21977601

fbshipit-source-id: 1333259e75782e54864ab444e05397b86cd9b9aa
2020-06-10 23:19:08 -07:00
Vasiliy Kuznetsov
952deba828 layernorm: eager mode qat support (#39094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39094

Adds eager mode QAT handling for LayerNorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885260

fbshipit-source-id: 4f4c84a8bb8ba15dd78494f92569ed3a30d89169
2020-06-07 13:38:16 -07:00
Vasiliy Kuznetsov
b530176d10 instancenorm: eager mode QAT support (#39093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39093

Adds eager mode QAT support for instancenorm

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885264

fbshipit-source-id: 7753995eed895bad26f713a857c6b0d194ea99d9
2020-06-07 13:38:10 -07:00
Vasiliy Kuznetsov
202625ba9e groupnorm: eager mode QAT support (#39092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39092

Adds eager mode QAT support for GroupNorm.

Test Plan:
```
python test/test_quantization.py TestQuantizationAwareTraining.test_normalization
```

Imported from OSS

Differential Revision: D21885261

fbshipit-source-id: 0352e6a830e6384e7ad747067f8bf8ad64ab7fa8
2020-06-07 13:38:05 -07:00
Vasiliy Kuznetsov
2140874228 instancenorm: eager static quant support (#39091)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39091

Adds eager mode static quant support for instancenorm.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm
```

Imported from OSS

Differential Revision: D21885265

fbshipit-source-id: 277506faf108f3561867cd8449a2390b7a44c462
2020-06-07 13:37:59 -07:00
Vasiliy Kuznetsov
f9b675f7b6 groupnorm: eager static quant support (#39090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39090

Makes quantized GroupNorm work in eager mode post training static quant.

Test Plan:
```
python test/test_quantization.py TestPostTrainingStatic.test_normalization
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
```

Imported from OSS

Differential Revision: D21885262

fbshipit-source-id: 58b0ffb59c601fcb4c79f711c7c98a667ffc6170
2020-06-07 13:37:53 -07:00
Supriya Rao
67115b226a [quant][graphmode] Dynamic Quant Do not depend on input shapes (#39412)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39412

This PR introduces changes to enable running the weight observer standalone in the graph.
It extracts the nodes from the graph that correspond to the observed weight value and adds all the related nodes to a new subgraph.
The subgraph is then executed using GraphFunction.

Test Plan:
python test/test_quantization.py TestGraphMostPostTrainingStatic
python test/test_quantization.py TestQuantizeDynamicScript

Imported from OSS

Differential Revision: D21872940

fbshipit-source-id: 55f1dcc2caef193531e2b807c8e56288b9794520
2020-06-07 11:09:44 -07:00
Jerry Zhang
1d0ec50a02 [quant][graphmode] Rename _quantize_script.py to quantize_script.py (#39122)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39122

Test Plan: Imported from OSS

Differential Revision: D21757619

fbshipit-source-id: 603c020aaaf6f467e63f15b4f271fe946d9fb949
2020-05-29 12:33:40 -07:00
Mike Ruberry
13120bf677 Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to require that either both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
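A hedged sketch (hypothetical test, written against the PyTorch TestCase in torch.testing._internal.common_utils) of the calling convention after this change:

```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestTolerances(TestCase):
    def test_close(self):
        a = torch.tensor([1.0, 2.0])
        b = a + 1e-6
        # atol and rtol must now be passed together (or both omitted) ...
        self.assertEqual(a, b, atol=1e-4, rtol=1e-4, msg="tensors should match")
        # ... and omitting both falls back to default tolerances.
        self.assertEqual(a, b)

if __name__ == '__main__':
    run_tests()
```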
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21740237

Pulled By: mruberry

fbshipit-source-id: acbc027aa1d7877a49664d94db9a5fff91a07042
2020-05-27 06:31:07 -07:00
Rohan Varma
63e545e0fe Revert D21717199: [pytorch][PR] Updates assertEqual to require atol and rtol, removes positional atol
Test Plan: revert-hammer

Differential Revision:
D21717199

Original commit changeset: 9feb856f94ee

fbshipit-source-id: bfde9c39a5ce99f0ca6183a7dde703c65b7c8259
2020-05-26 18:23:59 -07:00
Mike Ruberry
6ddca30b2d Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to require that either both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21717199

Pulled By: mruberry

fbshipit-source-id: 9feb856f94eee911b44f6c7140a1d07c1b026d3a
2020-05-26 08:30:23 -07:00
Supriya Rao
530d48e93a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38749

Test Plan: python test/test_quantization.py TestFused

Differential Revision: D21654659

Pulled By: supriyar

fbshipit-source-id: 301be24083e794f4e71ff1d6d842e1aaefa640f0
2020-05-19 22:48:05 -07:00
Natalia Gimelshein
b995540a01 Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
Test Plan: revert-hammer

Differential Revision:
D21632878

Original commit changeset: 0d73398b95d7

fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
2020-05-19 15:22:16 -07:00
Supriya Rao
7d38db0f9a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38452

Test Plan:
python test/test_quantization.py TestFused

Imported from OSS

Differential Revision: D21632878

fbshipit-source-id: 0d73398b95d72a0a23b42ef36f3ede1bfcc35eda
2020-05-19 09:53:56 -07:00
Supriya Rao
f4605ae5c3 [quant] Fusion support for conv1d + ReLU (#38438)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38438

Fusion for PTQ flow in eager mode. Graph mode to follow
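A minimal sketch (illustrative toy module, not code from this PR) of the eager-mode fusion this enables:

```
import torch
import torch.nn as nn
import torch.quantization as tq

m = nn.Sequential(nn.Conv1d(3, 3, 3), nn.ReLU()).eval()
# Fuse the Conv1d + ReLU pair into a single intrinsic module
# ahead of post-training quantization.
fused = tq.fuse_modules(m, [['0', '1']])
print(fused)
```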

Test Plan:
python test/test_quantization.py TestFusion

Imported from OSS

Differential Revision: D21575920

fbshipit-source-id: 5bac6602520f42ae3f4957d1a55e6a863daa0257
2020-05-14 16:08:11 -07:00
Nikita Shulga
2b41b9bceb [BE] Add @skipIfNoFBGEMM decorator (Reland) (#37894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37894
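A hedged usage sketch of the decorator added here (the import path is assumed to be torch.testing._internal.common_quantization, where similar quantization test helpers live):

```
from torch.testing._internal.common_quantization import skipIfNoFBGEMM
from torch.testing._internal.common_utils import TestCase, run_tests

class TestFBGEMMOnly(TestCase):
    @skipIfNoFBGEMM   # skips the test when the fbgemm backend is unavailable
    def test_needs_fbgemm(self):
        pass

if __name__ == '__main__':
    run_tests()
```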

Differential Revision: D21449993

Pulled By: malfet

fbshipit-source-id: d9d355d360384cbb158f62b40dc885527f22ee05
2020-05-07 09:43:53 -07:00
Edward Yang
b8d48d3680 Revert D21406034: [pytorch][PR] [BE] Add @skipIfNoFBGEMM decorator
Test Plan: revert-hammer

Differential Revision:
D21406034

Original commit changeset: 9583a8a726c2

fbshipit-source-id: ec891e5d00c78310b320f4901a261fc99fc5399b
2020-05-05 16:48:40 -07:00
Nikita Shulga
06e1b68843 [BE] Add @skipIfNoFBGEMM decorator (#37810)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37810

Differential Revision: D21406034

Pulled By: malfet

fbshipit-source-id: 9583a8a726c2e59e5173e114604e4edd979330c0
2020-05-05 14:00:52 -07:00
Supriya Rao
ac5403f22e [quant] Check qengine for TestNormalization (#37562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37562

The model has a Linear layer, which needs fbgemm. Fixes the failing Windows test.

Test Plan:
python test/test_quantization.py TestPostTrainingStatic

Imported from OSS

Differential Revision: D21321032

fbshipit-source-id: 1671fdef5d0a1b43e2a4e703a8852d522af32288
2020-04-29 22:16:43 -07:00
Supriya Rao
b33b46a950 [quant] Enable qnnpack tests for test_quantize and test_numeric_suite (#37351)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37351

Test Plan:
python test/test_quantization.py PostTrainingStaticQuant

Imported from OSS

Differential Revision: D21293704

fbshipit-source-id: 621f3ac60315b61f99b9b41da691ac3473e974cc
2020-04-29 19:28:22 -07:00
Jerry Zhang
facdd15cc6 [quant] Finishing refactor for quantization test files (#37366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37366

- we can put both fake quant module and observer module tests in test_workflow_module.py
- added test_quantized_functional.py
- moved tests from test_numerics.py to test_quantize.py and removed test_numerics.py

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21282198

fbshipit-source-id: 60107cee7d1ed2cd14a45650e91ec28b8a262c52
2020-04-28 21:40:57 -07:00
Raghuraman Krishnamoorthi
024f663fc1 Resubmit "Fix NaN error in dynamic quantization in qLinear, re-enable test_quantized_rnn"" (#37458)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37458

Original commit changeset: 0204c360ef2c
ghstack-source-id: 103069356

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantization\.PostTrainingDynamicQuantTest\)' --print-passing-details --run-disabled

Reviewed By: jamesr66a

Differential Revision: D21287904

fbshipit-source-id: a60deb8bdcf6af4258d1c1b199fa2a5a90318528
2020-04-28 16:18:15 -07:00
Zafar Takhirov
fae87908d9 Back out "Fix NaN error in dynamic quantization in qLinear, re-enable test_quantized_rnn"
Summary:
Original commit changeset: 948a888f5516

(Note: this ignores all push blocking failures!)

Test Plan: Revert

Reviewed By: jamesr66a

Differential Revision: D21269147

fbshipit-source-id: 0204c360ef2c3f28c2b2fbe367eb4cfd77f717c4
2020-04-27 18:01:44 -07:00
James Reed
c4401ea9ab Make test_quantize runnable (#37357)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37357

Test Plan: Imported from OSS

Differential Revision: D21262896

Pulled By: jamesr66a

fbshipit-source-id: b39d5f751678a6f2f8c40b65fd2cdb96c58f7eaf
2020-04-27 13:46:52 -07:00
Raghuraman Krishnamoorthi
34284c1279 Fix NaN error in dynamic quantization in qLinear, re-enable test_quantized_rnn (#36009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36009

When the scale is very small (less than float eps but greater than the minimum double-precision value), computing the reciprocal of the scale in floating-point precision within FBGEMM returns inf, while QuantUtils does not. Changed the computation in QuantUtils to also use floating-point precision so the tests can be re-enabled.
ghstack-source-id: 102896302
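An illustrative numeric sketch (not code from the PR) of why the float-precision reciprocal overflows for very small scales:

```
import torch

scale = 1e-39  # smaller than float32's minimum normal value (~1.18e-38)
# In double precision the reciprocal (~1e39) is representable:
print(1.0 / torch.tensor(scale, dtype=torch.float64))
# In single precision it exceeds float32 max (~3.4e38) and becomes inf:
print(1.0 / torch.tensor(scale, dtype=torch.float32))
```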

Test Plan:
buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantization\.PostTrainingDynamicQuantTest\)' --print-passing-details --run-disabled
Summary (total time 59.91s):
  PASS: 1
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0

Differential Revision: D20853000

fbshipit-source-id: 948a888f5516b3ba9c6efb7de31ef2cc9d431991
2020-04-25 14:52:56 -07:00
Raghuraman Krishnamoorthi
904949382e Ensure that histogram observers have zero-point of zero for post ReLU activations (#37107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37107

Currently, histogram observers relax both the min and max values of the activations for performance reasons. This causes an issue for Glow, where there is a slowdown if the zero-point is not zero for post-ReLU activations.
ghstack-source-id: 102768017
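A minimal sketch (illustrative only, not code from the PR) of the one-sided case this change targets:

```
import torch
from torch.quantization.observer import HistogramObserver

obs = HistogramObserver()
obs(torch.relu(torch.randn(1000)))   # post-ReLU activations are non-negative
scale, zero_point = obs.calculate_qparams()
print(zero_point)  # with this change, expected to be exactly 0 for one-sided data
```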

Test Plan: buck test caffe2/test:quantization -- 'test_histogram_observer_one_sided \(quantization\.test_quantization\.RecordHistogramObserverTest\)' --print-passing-details

Differential Revision: D21187636

fbshipit-source-id: 8d616b9e9caf2979a26a215e99434f71025e3d8b
2020-04-24 20:57:34 -07:00
Haixin Liu
8254a63802 Speed up calculate Qparams for per-channel observers (#30485)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30485

Use vectorization to speed up calculate_qparams for per-channel observers. The new implementation is about 1000 times faster.

Task:
https://github.com/pytorch/pytorch/issues/30348#event-2824868602
ghstack-source-id: 102808561

Test Plan:
```
import torch
import time
import numpy as np
from torch.quantization.observer import PerChannelMinMaxObserver

obs = PerChannelMinMaxObserver()
acc_time = 0
X = torch.randn(1000, 10)
obs(X)  # record per-channel min/max from one batch of random data
# Time 100 calls to calculate_qparams and accumulate the total wall-clock time.
for i in range(100):
    start = time.time()
    obs.calculate_qparams()
    acc_time = acc_time + time.time() - start
print(acc_time)
```
Before change:
20.3

After change:
0.017

Differential Revision: D18711905

fbshipit-source-id: 3ed20a6734c9950773350957aaf0fc5d14827640
2020-04-24 07:32:36 -07:00
Jerry Zhang
230b68168b [quant] Refactor test files (#36964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36964

Rename and restructure the quantization-related tests; see https://github.com/pytorch/pytorch/issues/31625.

Test Plan:
.

Imported from OSS

Differential Revision: D21192509

fbshipit-source-id: 148c93e86e0ea68ab18a067fe74a8035a29a1e4e
2020-04-23 10:28:56 -07:00