Commit Graph

149 Commits

Supriya Rao
950e67fa43 [quant][refactor tests] Move test_qat_module into test_quantize_eager_qat (#58928)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58928

Test Plan:
python test/test_quantization.py TestConvBNQATModule

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D28683925

fbshipit-source-id: 59d240d521c8067a344c9bdf4bec94e82f52e76f
2021-05-26 07:49:59 -07:00
Supriya Rao
cc07825a21 [quant][refactor tests] Split test_quantize into test_quantize_eager_ptq, test_quantize_eager_qat and test_fusion (#58927)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58927

Part of a larger refactor of the quantization tests to make it clearer which test belongs where.

proposed folder structure
```
test/quantization
         - bc/
            - test_backward_compatibility.py
         - core/
            - test_quantized_kernels.py
            - test_quantized_workflow_ops.py
            - test_quantized_tensor.py
            - test_workflow_module.py
         - eager/
            - test_quantize_eager_ptq.py
            - test_quantize_eager_qat.py
            - test_fusion.py
         - equalization/
            - test_equalize_eager.py
            - test_bias_correction_eager.py
         - fx/
           - test_quantize_fx.py
         - jit/
            - test_quantize_jit.py
            - test_fusion_passes.py
         - numeric_suite/
            - test_numeric_suite_fx.py
            - test_numeric_suite_eager.py
```

Test Plan:
python test/test_quantization.py

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D28683926

fbshipit-source-id: f84a4271c77c418ce9751196241933ea8cc14913
2021-05-26 07:48:28 -07:00
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
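
To summarize the distinction the lint enforces, here is a minimal illustration (the import lines are arbitrary examples, not code from this PR):

```
# qualified: suppresses only the "multiple imports on one line" error (E401)
import os, sys  # noqa: E401

# unqualified: suppresses every flake8 error on this line (the antipattern this lint forbids)
import os, sys  # noqa

# missing the colon: "E401" is silently ignored, so this behaves like a bare noqa
import os, sys  # noqa E401
```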

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
Vasiliy Kuznetsov
9e8e744efe ns for fx: move shadow lstm test to new API (#53828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53828

Moves the LSTM shadow activations test to the new API. To enable
this, adds support for passing two args instead of one when copying
a subgraph from A to B.

Since this was the last test of the old API, deletes
the old test case.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels.test_compare_shadow_activations_lstm_dynamic
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D26982733

fbshipit-source-id: 03f580688dd37f3ccd688d9f444e9e79cfa84734
2021-03-25 22:35:31 -07:00
Vasiliy Kuznetsov
3978ffb37a NS for FX: add test for a simple sparsenn model (#52092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52092

Adds a very simple toy sparsenn model, and enables
its inspection with the new NS APIs.

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_sparsenn_compare_activations
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_sparsenn_shadow
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D26403095

fbshipit-source-id: 3c3650aca47186deb32f2b3f1d87a0716d1ad9d1
2021-02-18 08:17:57 -08:00
Vasiliy Kuznetsov
bfc7e28188 reland - ns for fx - stubs of the three APIs (compare weights, activations, activations with shadow) (#52302)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52302

Adds the basic functionality for the three Numeric Suite core APIs to work on FX models:
1. comparing weights
2. comparing activations, with same input fed to both models
3. comparing activations, with nodes of A shadowing nodes of B

Note: there are a lot of TODOs in the code, and some/most of the APIs and implementation details may change as we iterate.  This is just the first PR.

Test Plan:
We have unit test coverage for all of the APIs, for now this is with toy models:

```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Reviewed By: raghuramank100

Differential Revision: D26463013

Pulled By: vkuzo

fbshipit-source-id: e454115099ad18e4037d3c54986951cdffcab367
2021-02-16 19:59:32 -08:00
Natalia Gimelshein
eaddadd4f7 Revert D26403094: ns for fx - stubs of the three APIs (compare weights, activations, activations with shadow)
Test Plan: revert-hammer

Differential Revision:
D26403094 (37622db76a)

Original commit changeset: 9752331d4ae0

fbshipit-source-id: f0a32d443a29b25af33d90420dfd1bada40c917c
2021-02-14 15:09:16 -08:00
Vasiliy Kuznetsov
37622db76a ns for fx - stubs of the three APIs (compare weights, activations, activations with shadow) (#51669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51669

Adds the basic functionality for the three Numeric Suite core APIs to work on FX models:
1. comparing weights
2. comparing activations, with same input fed to both models
3. comparing activations, with nodes of A shadowing nodes of B

Note: there are a lot of TODOs in the code, and some/most of the APIs and implementation details may change as we iterate.  This is just the first PR.

Test Plan:
We have unit test coverage for all of the APIs, for now this is with toy models:

```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
```

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D26403094

fbshipit-source-id: 9752331d4ae0105346d3da309b13c895b593b450
2021-02-12 17:52:21 -08:00
Vasiliy Kuznetsov
bfe6e23209 Early version of fx graph matcher for NS (#51588)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51588

Early version of a utility to match nodes between graph A and graph B, for the Numeric Suite for FX graph mode quantization.

The main goal of this utility is to reliably match the nodes of graph A to the nodes of graph B, and to throw an easy-to-read error message when it cannot.  This will be used in future PRs to create the APIs for matching activations.  It could also potentially be used to match weights.

Test Plan:
For now, we have bare bones test coverage on some toy models, and a single torchvision model.

```
python test/test_quantization.py TestFXGraphMatcher
```

Future PRs will add more testing.

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D26403093

fbshipit-source-id: 60e318d51e6fefe65265488c4967629d946048ef
2021-02-12 17:50:13 -08:00
yanli924
ada916675f update HistogramObserver to be scriptable (#51081)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51081

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51001

fix tests in TestQuantizeJitOps
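
A small illustration of what "scriptable" means here (a sketch for context, not taken from this PR's tests):

```
import torch
from torch.quantization import HistogramObserver

obs = HistogramObserver()
obs(torch.randn(4, 4))            # record activation statistics
scripted = torch.jit.script(obs)  # should now compile cleanly after this change
```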

Test Plan:
Imported from OSS
python test/test_quantization.py

Reviewed By: raghuramank100

Differential Revision: D26038759

Pulled By: lyoka

fbshipit-source-id: 0977ba7b8b26a9f654f20f5c698a7a20ec078c35
2021-01-27 07:27:03 -08:00
Zafar
04a8412b86 [quant] Quantizable LSTM (#49671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49671

- Introduces the `torch.nn.quantizable` namespace
- Adds the `torch.nn.quantizable.LSTM` module

The point of the `quantizable` namespace is to separate the purely quantized modules from the modules that could be quantized through a normal quantization flow but are not using the quantized kernels explicitly.
That means the quantizable modules are functionally and numerically equivalent to the FP ones and can be used instead of the FP ones without any loss.

The main difference between `torch.nn.LSTM` and `torch.nn.quantizable.LSTM` is that the former does not support observation of the linear layers, because all of its computation is internal to the `aten` namespace.
`torch.nn.quantizable.LSTM`, however, uses explicit linear layers that can be observed for further quantization.
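
As a rough usage sketch (constructing the module with `torch.nn.LSTM`-style arguments is an assumption based on the drop-in claim above, not something this commit message spells out):

```
import torch

# drop-in counterpart of nn.LSTM, built from explicit Linear layers that observers can see
lstm = torch.nn.quantizable.LSTM(input_size=8, hidden_size=16, num_layers=1)

x = torch.randn(5, 3, 8)   # (seq_len, batch, input_size)
out, (h, c) = lstm(x)      # same call signature and results as torch.nn.LSTM
```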

Test Plan: Imported from OSS

Differential Revision: D25663870

Reviewed By: vkuzo

Pulled By: z-a-f

fbshipit-source-id: 70ff5463bd759b9a7922571a5712d3409dfdfa06
2020-12-30 15:21:38 -08:00
Raghuraman Krishnamoorthi
f7a085af98 Dynamic GRU quantization support (#49448)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49448

ghstack-source-id: 118982171

Test Plan:
buck test caffe2/test:quantization --  'test_qlstmGRU \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details
buck test caffe2/test:quantization --  'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)' --print-passing-details
buck test caffe2/test:quantization --  'test_qrnncell \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --run-disabled --print-passing-details
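
The buck commands above exercise the new dynamic RNN op paths. As a rough sketch of what dynamic GRU quantization looks like from Python (assuming the standard `torch.quantization.quantize_dynamic` entry point, which this commit message does not name):

```
import torch
import torch.nn as nn

class GRUModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(input_size=16, hidden_size=32, num_layers=1)

    def forward(self, x):
        out, h = self.gru(x)
        return out

model = GRUModel().eval()
# GRU weights are quantized to int8 ahead of time; activations are quantized dynamically at runtime
qmodel = torch.quantization.quantize_dynamic(model, {nn.GRU}, dtype=torch.qint8)
out = qmodel(torch.randn(5, 3, 16))
```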

Reviewed By: vkuzo

Differential Revision: D25579815

fbshipit-source-id: 413cc8888eb8058230b94c9576d2fa54b0ed1416
2020-12-21 12:36:59 -08:00
Xin Guan
f8722825b5 Compare Weights FX Implementation (#48056)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48056

PyTorch FX Quantization API:  Compare weights
ghstack-source-id: 117255311

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_remove_qconfig_observer_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static_fx'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static_fx'

Reviewed By: hx89

Differential Revision: D24940516

fbshipit-source-id: 301c1958c0e64ead9072e0fd002e4b21e8cb5b79
2020-11-20 17:17:19 -08:00
Jerry Zhang
085193c291 [quant][graphmode][fx][fusion] Add test for fuse_fx (#47085)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47085

Both in train and eval mode
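
For context, a minimal sketch of the kind of module `fuse_fx` targets in eval mode (the entry point name is from the FX quantization API; the exact import path at this commit may differ):

```
import torch
import torch.nn as nn
from torch.quantization.quantize_fx import fuse_fx

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3)
        self.bn = nn.BatchNorm2d(3)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

# eval-mode fusion: Conv2d + BatchNorm2d + ReLU are folded into a single fused module
fused = fuse_fx(M().eval())
print(fused)
```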

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24632457

fbshipit-source-id: 486aee4e073fb87e9da46a344e8dc77e848a60cf
2020-10-30 12:25:54 -07:00
James Reed
9bc8f071a3 [WIP] Move torch.fx into its own target (#46658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46658

ghstack-source-id: 115213192

Test Plan: waitforsadcastle

Reviewed By: zdevito, vkuzo

Differential Revision: D24374723

fbshipit-source-id: 2b5708001f5df2ffb21ea5e586e26030653ccdcf
2020-10-29 17:03:08 -07:00
Jerry Zhang
6b50ccc41c [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738) (#46871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46871

Test Plan:
Imported from OSS

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24547180

fbshipit-source-id: d2eb9aa74c6e5436204376b1a2ebcc6188d3562f
2020-10-26 23:52:07 -07:00
Alban Desmaison
25db74bf5e Revert D24486972: [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat
Test Plan: revert-hammer

Differential Revision:
D24486972 (e927b62e73)

Original commit changeset: c9f139bfdd54

fbshipit-source-id: 2a75f5ec93d55a62b40d1cdd49adcf65436058f7
2020-10-26 12:47:05 -07:00
Jerry Zhang
e927b62e73 [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46738

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24486972

fbshipit-source-id: c9f139bfdd54973da1a93a45e32937595dbe67fc
2020-10-26 12:04:42 -07:00
Jerry Zhang
13decddae2 [reland][quant] Add FixedQParamsFakeQuantize module (#45538) (#46657)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46657

This is used to simulate fake quantize operation for ops with fixed quantization parameters
e.g. hardsigmoid
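
As a hedged sketch of the idea (the constructor arguments are assumptions about the module's interface; the scale/zero_point shown are the natural fixed parameters for a sigmoid-style output in [0, 1)):

```
import torch
from torch.quantization.fake_quantize import FixedQParamsFakeQuantize

# hypothetical construction: an op like sigmoid/hardsigmoid has a known output range,
# so its quantization parameters can be fixed instead of derived from observers
fq = FixedQParamsFakeQuantize(scale=1.0 / 256.0, zero_point=0,
                              dtype=torch.quint8, quant_min=0, quant_max=255)
y = fq(torch.sigmoid(torch.randn(4)))  # fake-quantized with the fixed scale/zero_point
```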

Test Plan:
Imported from OSS

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24451406

fbshipit-source-id: 26cc140c00f12bdec9a8f9dc880f4c425f4d4074
2020-10-21 16:47:11 -07:00
Ashkan Aliabadi
2181449068 Revert D24004795: [quant] Add FixedQParamsFakeQuantize module
Test Plan: revert-hammer

Differential Revision:
D24004795 (253918ec55)

Original commit changeset: fc4797f80842

fbshipit-source-id: 663169e90a2f58e5a89e4d382291ae41c24d0fee
2020-10-20 19:40:21 -07:00
Jerry Zhang
253918ec55 [quant] Add FixedQParamsFakeQuantize module (#45538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45538

This is used to simulate fake quantize operation for ops with fixed quantization parameters
e.g. hardsigmoid

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24004795

fbshipit-source-id: fc4797f80842daacd3b3584c5b72035774634edd
2020-10-20 17:43:25 -07:00
Jerry Zhang
0da6730f02 [quant][graphmode][fx][eagermode] Add leaky relu support in quantization workflows (#45712)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45712

Eager mode will still be able to use the functional leaky relu, but it will be less accurate than
the LeakyReLU module.
FX graph mode will support both the functional and module variants of leaky relu.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24069961

fbshipit-source-id: 8d91c3c50c0bcd068ba3072378ebb4da9549be3b
2020-10-06 12:16:04 -07:00
Supriya Rao
6013a29fc0 [quant] Support quantization of embedding lookup operators (#44207)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44207

Use the existing embedding_bag operator but set offsets to [0, 1, ..., len(indices) - 1] so that each bag contains exactly one index.
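
A quick functional sketch of the equivalence this relies on (plain fp32 tensors, just to show the offsets trick rather than the quantized kernel itself):

```
import torch
import torch.nn.functional as F

weight = torch.randn(10, 4)
indices = torch.tensor([1, 5, 3])

# an embedding lookup is an embedding_bag where every bag holds exactly one index
offsets = torch.arange(indices.numel())
lookup = F.embedding(indices, weight)
bagged = F.embedding_bag(indices, weight, offsets, mode='sum')
assert torch.allclose(lookup, bagged)
```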

Test Plan:
python test/test_quantization.py TestEmbeddingOps.test_embedding_byte

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23547385

fbshipit-source-id: ccce348bc192c6a4a65a8eca4c8b90f99f40f1b1
2020-09-08 19:03:59 -07:00
Jerry Zhang
5a1aa0e21e [reland][quant][graphmode][fx] Add e2e test on torchvision (#43587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43587

Add tests for graph mode quantization on torchvision and make sure it matches
current eager mode quantization

Test Plan:
Imported from OSS

Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23331253

fbshipit-source-id: 0445a44145d99837a2c975684cd0a0b7d965c8f9
2020-08-27 10:12:07 -07:00
Mikhail Zolotukhin
be637fd5f6 Revert D23306683: [quant][graphmode][fx] Testing torchvision
Test Plan: revert-hammer

Differential Revision:
D23306683 (62dcd253e3)

Original commit changeset: 30d27e225d45

fbshipit-source-id: e661334d187d3d6756facd36f2ebdb3ab2cd2e26
2020-08-25 15:24:02 -07:00
Jerry Zhang
62dcd253e3 [quant][graphmode][fx] Testing torchvision (#43526)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43526

Add tests for graph mode quantization on torchvision and make sure it matches
current eager mode quantization

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23306683

fbshipit-source-id: 30d27e225d4557bfc1d9aa462086e416aa9a9c0e
2020-08-25 13:02:14 -07:00
Edmund Williams Jr
17f9edda42 Bias Correction Implementation (#41845)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41845

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D22661503

Pulled By: edmundw314

fbshipit-source-id: a88c349c6cc15b1c66aa6dee7593ef3df588eb85
2020-08-20 21:40:33 -07:00
Jerry Zhang
b0ec336477 [quant][graphmode][fx][test] Add per op test for graph mode quant on fx (#43229)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43229

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D23201692

fbshipit-source-id: 37fa54dcf0a9d5029f1101e11bfd4ca45b422641
2020-08-20 17:32:02 -07:00
Jerry Zhang
dae2973fae [quant][graphmode][fx] Add graph mode quantization on fx (#43175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43175

This PR added graph mode quantization on fx: https://github.com/pytorch/pytorch/pull/42741
Currently it matches eager mode quantization for torchvision with static/dynamic/qat.
The ddp/syncbn test is still WIP.
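
As a rough end-user sketch of the workflow this enables (the `prepare_fx`/`convert_fx` entry points and the qconfig dict format are taken from the later public API and may not match this exact commit):

```
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}

prepared = prepare_fx(model, qconfig_dict)   # insert observers via FX graph rewriting
prepared(torch.randn(1, 3, 32, 32))          # calibrate on representative data
quantized = convert_fx(prepared)             # swap observed ops for quantized kernels
```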

Test Plan:
python test/test_quantization.py TestQuantizeFx

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23178602

fbshipit-source-id: 8e7e0322846fbda2cfa79ad188abd7235326f879
2020-08-20 14:50:09 -07:00
Mike Ruberry
b7a9bc0802 Revert D22217029: Add fake quantize operator that works in backward pass
Test Plan: revert-hammer

Differential Revision:
D22217029 (48e978ba18)

Original commit changeset: 7055a2cdafcf

fbshipit-source-id: f57a27be412c6fbfd5a5b07a26f758ac36be3b67
2020-08-07 23:04:40 -07:00
Presley Graham
48e978ba18 Add fake quantize operator that works in backward pass (#40532)
Summary:
This diff adds FakeQuantizeWithBackward. This works the same way as the regular FakeQuantize module, allowing QAT to occur in the forward pass, except it has an additional quantize_backward parameter. When quantize_backward is enabled, the gradients are fake quantized as well (dynamically, using hard-coded values). This allows the user to see whether there would be a significant loss of accuracy if the gradients were quantized in their model.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40532

Test Plan: The relevant test for this can be run using `python test/test_quantization.py TestQATBackward.test_forward_and_backward`

Reviewed By: supriyar

Differential Revision: D22217029

Pulled By: durumu

fbshipit-source-id: 7055a2cdafcf022f1ea11c3442721ae146d2b3f2
2020-08-07 17:47:01 -07:00
Edmund Williams Jr
fd62847eb2 cross_layer_equalization (#41685)
Summary:
The goal is to implement cross layer equalization as described in section 4.1 of this paper: https://arxiv.org/pdf/1906.04721.pdf
Given two adjacent submodules A, B in a trained model, quantization might hurt one of the submodules more than the other. The paper poses the idea that a loss in accuracy from quantizing can be due to a difference in the channel ranges between the two submodules (the output channel range of A can be small, while the input channel range of B can be large). To minimize this source of error, we want to scale the tensors of A and B so that their channel ranges are equal, eliminating the range mismatch.
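
A toy sketch of the per-channel rescaling described above for a pair of Linear layers (the helper is illustrative, not the PyTorch implementation; the rescaling is exactly function-preserving only when the nonlinearity between the layers is positively homogeneous, e.g. ReLU):

```
import torch

def equalize_pair(lin_a, lin_b):
    # range of each output channel of A and of each input channel of B
    r_a = lin_a.weight.detach().abs().max(dim=1).values  # (out_features_a,)
    r_b = lin_b.weight.detach().abs().max(dim=0).values  # (in_features_b,) == (out_features_a,)
    s = torch.sqrt(r_a * r_b) / r_b                      # per-channel scales from the paper
    with torch.no_grad():
        lin_a.weight.div_(s.unsqueeze(1))                # shrink A's output channels ...
        if lin_a.bias is not None:
            lin_a.bias.div_(s)
        lin_b.weight.mul_(s.unsqueeze(0))                # ... and grow B's input channels to compensate
    return lin_a, lin_b

a, b = torch.nn.Linear(4, 6), torch.nn.Linear(6, 3)
equalize_pair(a, b)  # output-channel ranges of a now equal input-channel ranges of b
```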

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41685

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D22630219

Pulled By: edmundw314

fbshipit-source-id: ccc91ba12c10b652d7275222da8b85455b8a7cd5
2020-07-22 08:39:23 -07:00
Radhakrishnan Venkataramani
f41173b975 [PyPer][quant] Add quantized embedding operators to OSS. (#40076)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40076

Pull Request resolved: https://github.com/pytorch/glow/pull/4606

[PyPer][quant] Add quantized embedding operators to OSS.

This is the first step in supporting Graph Mode Quantization for EmbeddingBag.

At a high level, the next steps would be
a) Implementation of Embedding prepack/unpack operators,
b) Implementation of torch.nn.quantized.dynamic.EmbeddingBag Module,
c) Implementation of torch.nn.quantized.EmbeddingBag Module,
d) Implementation (modification) of IR passes to support graph quantization of EmbeddingBag module.

More in-depth details regarding each step will be in the follow-up diffs. Consider this an initial diff that moves the operators to their respective places, which is required for us to proceed.

Test Plan: ```buck test mode/no-gpu caffe2/test:quantization -- --stress-runs 100  test_embedding_bag```

Reviewed By: supriyar

Differential Revision: D21949828

fbshipit-source-id: cad5ed0a855db7583bddb1d93e2da398c128024a
2020-06-25 12:01:49 -07:00
Jerry Zhang
9f9e7c1d71 [quant][refactor] Tests for torch.jit.quantized (#40330)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40330

Test Plan: Imported from OSS

Differential Revision: D22149707

fbshipit-source-id: 44e7545bf9277d9245b5e9c2d9461f664fff0426
2020-06-22 10:41:31 -07:00
Jerry Zhang
b2f489dc57 [quant][graphmode] Rename graph mode quantization API to quantize_jit (#40212)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40212

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D22144745

fbshipit-source-id: 38a19b5afdddbbce262eea8ddf5b68458e6017b3
2020-06-19 18:13:37 -07:00
Edmund Williams Jr
465138ec39 refactoring TestQuantizeScript (#39677)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39677

Test Plan:
Moved a test class suite between files; since this is a simple code refactor with no functional change, verified that the test output was the same before and after the refactor.
The image below shows the output of TestGraphModePostTrainingStatic before the refactor:

{F239676498}

This image shows the output of TestQuantizeScript (the renamed version, which lives in test_quantize_script.py instead of test_quantize.py):

{F239676509}

Differential Revision: D21940638

Pulled By: edmundw314

fbshipit-source-id: 54160a5151aadf3a34bdac2bcaeb52904e6653ed
2020-06-19 11:47:00 -07:00
Wanchao Liang
442ec1dd4e [test] split remaining quantization tests out of test_jit (#40144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40144

As the title says, split the remaining quantization tests out of test_jit to reduce
the size of test_jit.

Test Plan: Imported from OSS

Differential Revision: D22085034

Pulled By: wanchaol

fbshipit-source-id: 0c8639da01ffc3e6a72e6f470837786c73a6b3f0
2020-06-18 13:39:13 -07:00
Supriya Rao
f6739ec8e8 [quant][graphmode] Refactor dynamic quant tests (#40127)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40127

Reland PR.
Similar to static quant, break it up into op level tests and tests for jit passes

Test Plan:
python test/test_quantization.py TestQuantizeScriptPTDQOps
python test/test_quantization.py TestDynamicQuantizeScriptJitPasses

Imported from OSS

Differential Revision: D22081259

fbshipit-source-id: cef8f78f89ef8789683b52508379ae1b9ad00700
2020-06-17 13:40:19 -07:00
Supriya Rao
b5d54db6f4 Revert D22071278: [quant][graphmode] Refactor dynamic quant tests
Test Plan: revert-hammer

Differential Revision:
D22071278

Original commit changeset: 54292addcfbc

fbshipit-source-id: 20ffbea0fd05e974b31381437c61040b5b24c993
2020-06-16 15:01:05 -07:00
Supriya Rao
ddeaa74382 [quant][graphmode] Refactor dynamic quant tests (#40039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40039

Similar to static quant, break it up into op level tests and tests for jit passes

Test Plan:
python test/test_quantization.py TestQuantizeScriptPTDQOps
python test/test_quantization.py TestDynamicQuantizeScriptJitPasses

Imported from OSS

Differential Revision: D22071278

fbshipit-source-id: 54292addcfbc00f7af960fb333921db2ff9fda04
2020-06-16 13:14:48 -07:00
Kimish Patel
bb12e4dca0 Add JIT fusion pass to fuse quantized add and relu. (#38897)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38897

Quantized ops support add_relu. This pass finds the quantized add + relu
pattern and fuses it into add_relu.
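
An eager-mode illustration of the equivalence the fusion relies on (shown outside the JIT pass for clarity, using the existing quantized add and add_relu kernels):

```
import torch

x = torch.quantize_per_tensor(torch.randn(4), scale=0.1, zero_point=0, dtype=torch.quint8)
y = torch.quantize_per_tensor(torch.randn(4), scale=0.1, zero_point=0, dtype=torch.quint8)

# the two-op pattern the JIT pass looks for ...
unfused = torch.relu(torch.ops.quantized.add(x, y, 0.1, 0))
# ... and the single fused kernel it rewrites the pattern into
fused = torch.ops.quantized.add_relu(x, y, 0.1, 0)
assert torch.equal(unfused.int_repr(), fused.int_repr())
```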

Test Plan: buck run caffe2/test:quantization -- test_quantization.TestFusionPasses

Reviewed By: jerryzh168

Differential Revision: D21690909

fbshipit-source-id: 607cf72dde535df15eb7638841543ab2156af464
2020-05-27 14:16:57 -07:00
Vasiliy Kuznetsov
b57c8b720e [wip] Make quantization modules work with DataParallel (#37032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37032

DataParallel requires all params and buffers of child modules to be updated
in place because of how it implements model replication during the
forward pass (see https://github.com/pytorch/pytorch/pull/12671 for
context). Any params or buffers not updated in place are lost and not
propagated back to the master.
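
A minimal sketch of the in-place pattern this requires, using a hypothetical observer-style module (not code from this PR):

```
import torch

class MinObserver(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('min_val', torch.tensor(float('inf')))

    def forward(self, x):
        # wrong under DataParallel: `self.min_val = x.min()` rebinds the attribute on the
        # replica, so the update is lost and never propagated back to the master copy.
        # right: mutate the registered buffer in place
        self.min_val.copy_(torch.min(x.min(), self.min_val))
        return x
```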

This diff updates some quantized modules (TBD: all quantized modules? determine a good
cut point) to do their parameter updates in place. This will enable static quant and QAT
to work correctly with DataParallel.

TODO: https://github.com/pytorch/pytorch/pull/32684 needs to land before we can fix the graph mode test failures on this PR.

Test Plan:
script failed before and passes after the diff:
https://gist.github.com/vkuzo/78b06c01f23f98ee2aaaeb37e55f8d40

TODO before land: add integration testing

Imported from OSS

Differential Revision: D21206454

fbshipit-source-id: df6b4b04d0ae0f7ef582c82d81418163019e96f7
2020-05-05 13:06:43 -07:00
Zafar Takhirov
a09cb5f2f5 [quant] quantized reflection_pad1d (#37452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37452

Test Plan: Imported from OSS

Differential Revision: D21286659

Pulled By: z-a-f

fbshipit-source-id: f9f4de497a790b296149313562d09f8ead5facee
2020-04-30 18:45:38 -07:00
Zafar
297cc5512e [quant] Enable convolution tests (#37494)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37494

Test Plan: Imported from OSS

Differential Revision: D21299442

Pulled By: z-a-f

fbshipit-source-id: 68513b52aaef852278f28031866f85123b016486
2020-04-29 12:24:45 -07:00
Jerry Zhang
facdd15cc6 [quant] Finishing refactor for quantization test files (#37366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37366

- we can put both fake quant module and observer module tests in the test_workflow_module.py
- added test_quantized_functional.py
- moved tests in test_numerics.py to test_quantize.py and removed test_numerics.py

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21282198

fbshipit-source-id: 60107cee7d1ed2cd14a45650e91ec28b8a262c52
2020-04-28 21:40:57 -07:00
Jerry Zhang
230b68168b [quant] Refactor test files (#36964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36964

Rename and restructure quantization related tests
https://github.com/pytorch/pytorch/issues/31625

Test Plan:
.

Imported from OSS

Differential Revision: D21192509

fbshipit-source-id: 148c93e86e0ea68ab18a067fe74a8035a29a1e4e
2020-04-23 10:28:56 -07:00
Jerry Zhang
ab26dfb44e [quant] Move quantization tests into test/quantization (#35812)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35812

Test Plan:
.

Imported from OSS

Differential Revision: D20795329

fbshipit-source-id: 42cc905c44ce7b86720aeef512d747ff6788d7a2
2020-04-01 12:44:19 -07:00
Michael Suo
319aee1afb Revert D20771828: [quant] Move quantization tests into test/quantization
Test Plan: revert-hammer

Differential Revision:
D20771828

Original commit changeset: 5f1df5e86c29

fbshipit-source-id: d14f915f291ae8a90026c5b65624459211495f47
2020-03-31 23:01:00 -07:00
Jerry Zhang
fef6c617d4 [quant] Move quantization tests into test/quantization (#35688)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35688

Test Plan:
.

Imported from OSS

Differential Revision: D20771828

fbshipit-source-id: 5f1df5e86c29f7bdfbdc6563450e909b3bfdc07a
2020-03-31 20:30:57 -07:00
Supriya Rao
a090de380c [quant][graph] Add quant fusion for dynamic quantization (#35586)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35586

This pass fuses the choose_qparams-quant-dequant sequence.
Fusion for the weight tensor is the same as in static quant.

Test Plan:
python test/test_quantize_script.py

Imported from OSS

Differential Revision: D20755680

fbshipit-source-id: b7443770642b6e6fa0fa9da8a44637e9b2d4df70
2020-03-30 23:34:56 -07:00