Commit Graph

26 Commits

Raghu Krishnamoorthi
a4b4848c48 Update on "Serialization for nn.quantized.functional modules"
Add load_from_state_dict and save_to_state_dict for quantized functional modules.
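A minimal sketch of what such overrides could look like for a quantized module that must persist its scale and zero_point (`_save_to_state_dict`/`_load_from_state_dict` are the standard `nn.Module` hooks; the module and attribute names here are assumptions, not this diff's code):
```
import torch

class QuantizedReLU(torch.nn.Module):
    # Hypothetical functional-style quantized module carrying qparams.
    def __init__(self, scale=1.0, zero_point=0):
        super().__init__()
        self.scale = scale
        self.zero_point = zero_point

    def _save_to_state_dict(self, destination, prefix, keep_vars):
        super()._save_to_state_dict(destination, prefix, keep_vars)
        # Persist quantization parameters alongside regular entries.
        destination[prefix + 'scale'] = torch.tensor(self.scale)
        destination[prefix + 'zero_point'] = torch.tensor(self.zero_point)

    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
                              missing_keys, unexpected_keys, error_msgs):
        self.scale = float(state_dict.pop(prefix + 'scale'))
        self.zero_point = int(state_dict.pop(prefix + 'zero_point'))
        super()._load_from_state_dict(state_dict, prefix, local_metadata, strict,
                                      missing_keys, unexpected_keys, error_msgs)
```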

Differential Revision: [D16923651](https://our.internmc.facebook.com/intern/diff/D16923651/)
2019-08-22 17:05:35 -07:00
Raghu Krishnamoorthi
5c6c2cf876 Update on "Update mapping dictionary to support functional modules and pooling operations"
Differential Revision: [D16879132](https://our.internmc.facebook.com/intern/diff/D16879132/)
2019-08-22 17:05:32 -07:00
Raghu Krishnamoorthi
7da92dc45b Update on "Serialization for nn.quantized.functional modules"
Add load_from_state_dict and save_to_state_dict for quantized functional modules.

Differential Revision: [D16923651](https://our.internmc.facebook.com/intern/diff/D16923651/)
2019-08-22 16:59:17 -07:00
Raghu Krishnamoorthi
7987827b49 Update on "Update mapping dictionary to support functional modules and pooling operations"
Differential Revision: [D16879132](https://our.internmc.facebook.com/intern/diff/D16879132/)
2019-08-22 16:59:13 -07:00
Raghuraman Krishnamoorthi
696cabae9b Baseline observer module, ensuring that (min,max) range includes zero. (#24297)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24297
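The point of including zero: the real value 0.0 (zero padding, ReLU cutoffs) must map exactly onto an integer value, the zero_point. A minimal sketch of the clamping under assumed quint8 limits (not the exact observer code):
```
def calc_qparams(min_val, max_val, qmin=0, qmax=255):
    # Clamp the observed range so it always contains zero; otherwise the
    # real value 0.0 would have no exact integer representation.
    min_val = min(min_val, 0.0)
    max_val = max(max_val, 0.0)
    if max_val == min_val:
        return 1.0, 0
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = int(round(qmin - min_val / scale))
    return scale, max(qmin, min(qmax, zero_point))
```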

ghstack-source-id: 88252409

Differential Revision: D16635637

fbshipit-source-id: fcef20b9c88b2c3bd97e311514e5b2d0339ff28a
2019-08-15 15:25:23 -07:00
Jerry Zhang
761ae8e9b6 Add intrinsic module mappings (#23753)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23753

Add intrinsic (fused) module mappings in quantize.py to enable mapping fused modules in both QAT and post-training quantization (PTQ).
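The mapping might look roughly like the following, written with today's module paths (the commit predates the rename of torch.nn._intrinsic to torch.nn.intrinsic):
```
import torch.nn.intrinsic as nni
import torch.nn.intrinsic.qat as nniqat
import torch.nn.intrinsic.quantized as nniq

# Post-training: fused float module -> quantized fused module
PTQ_FUSED_MAPPING = {
    nni.ConvReLU2d: nniq.ConvReLU2d,
    nni.LinearReLU: nniq.LinearReLU,
}

# QAT: fused float module -> fake-quantized QAT module
QAT_FUSED_MAPPING = {
    nni.ConvBn2d: nniqat.ConvBn2d,
    nni.ConvBnReLU2d: nniqat.ConvBnReLU2d,
}
```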

Differential Revision: D16820749

fbshipit-source-id: 07de76a4f09b44bde8b193c103eac02c22b875b6
2019-08-15 09:37:24 -07:00
Jianyu Huang
f66c90469b Fix Lint (#24381)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24381

As pointed out in https://github.com/pytorch/pytorch/pull/24299#issuecomment-521471089, the previous PR broke lint.
ghstack-source-id: 88339887

Reviewed By: jamesr66a

Differential Revision: D16822443

fbshipit-source-id: 3aed5b9404b0f0fcf453c05b59189974243b0df2
2019-08-14 19:22:09 -07:00
Jianyu Huang
0f64043b49 Remove the activation observer for default_qconfig (#24299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24299

As suggested in https://github.com/pytorch/pytorch/pull/24232, we will remove the activation observer for the dynamic quantization path.
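For context: dynamic quantization computes activation qparams from each batch inside the kernel at run time, so only the weights need ahead-of-time observation. With the one-call API as it later stabilized (used here purely as an illustration):
```
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
# Weights are quantized ahead of time; activations are quantized per batch
# at run time, so no activation observer is inserted into the model.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)
```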
ghstack-source-id: 88287094

Differential Revision: D16798590

fbshipit-source-id: 07a245d5584b5b15c6895d9b09deef4a0605073a
2019-08-14 17:21:50 -07:00
Jianyu Huang
584c6986fd Add the type matching rule for qconfig_dict (#23212)
Summary:
We want to use the module type as the key in qconfig_dict for module replacement during quantization.

Before this Diff, to dynamically quantize the BERT model, we had to specify each layer:
```
qconfig_dict = {
    'encoder.layer.0.attention.self.query': default_qconfig,
    'encoder.layer.0.attention.self.key': default_qconfig,
    'encoder.layer.0.attention.self.value': default_qconfig,
    'encoder.layer.0.attention.output.dense': default_qconfig,
    'encoder.layer.0.intermediate.dense': default_qconfig,
    'encoder.layer.0.output.dense': default_qconfig,
    'encoder.layer.1.attention.self.query': default_qconfig,
    'encoder.layer.1.attention.self.key': default_qconfig,
    'encoder.layer.1.attention.self.value': default_qconfig,
    'encoder.layer.1.attention.output.dense': default_qconfig,
    'encoder.layer.1.intermediate.dense': default_qconfig,
    'encoder.layer.1.output.dense': default_qconfig,
   ...
}
```
After this Diff, we only need the following:
```
qconfig_dict = {
     torch.nn.Linear : default_qconfig
}
```
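A minimal sketch of how type matching could be propagated to submodules (the helper name is hypothetical; name-based keys take precedence over type-based ones here):
```
def propagate_qconfig(model, qconfig_dict):
    # Attach a qconfig to every submodule whose fully qualified name
    # or type appears as a key in qconfig_dict.
    for name, module in model.named_modules():
        qconfig = qconfig_dict.get(name, qconfig_dict.get(type(module)))
        if qconfig is not None:
            module.qconfig = qconfig
```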

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23212
ghstack-source-id: 88287091

Reviewed By: zafartahirov

Differential Revision: D16436542

fbshipit-source-id: 11fbe68ee460560c1a7cdded63581eb7a00e5a89
2019-08-14 13:07:36 -07:00
Jianyu Huang
e94ba742b0 Dynamic Quantized Linear Module (#23128)
Summary:
- ~~Add a unit test for the Dynamic Quantized Linear operator (```torch.fbgemm_linear_quantize_weight```, ```torch.fbgemm_pack_quantized_matrix```, and ```torch.fbgemm_linear_int8_weight```) in ```test_quantized.py```.~~ Moved to D16404027 for a separate review.
- Add the Dynamic Quantized Linear module in ```torch/nn/quantized/modules/linear.py``` (see the usage sketch after this list). ~~This is in a rudimentary stage. Will add more functions later~~.
- Add the torch.quantize logic (prepare, eval, convert) for dynamic quantization.
- Add a unit test for the Dynamic Quantized Linear module in ```test_nn_quantized.py```.
- Add a unit test for the model-level quantization API.
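A usage sketch against the API as it later stabilized (the module path and from_float behavior may differ from this exact commit):
```
import torch
import torch.nn.quantized.dynamic as nnqd

float_linear = torch.nn.Linear(32, 16)
float_linear.qconfig = torch.quantization.default_dynamic_qconfig
# from_float quantizes and packs the weight for the FBGEMM kernel;
# activations are quantized dynamically inside forward.
dq_linear = nnqd.Linear.from_float(float_linear)

x = torch.randn(4, 32)
# Outputs agree with the float module up to quantization error.
assert torch.allclose(float_linear(x), dq_linear(x), atol=0.1)
```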

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23128
ghstack-source-id: 88257232

Differential Revision: D16258664

fbshipit-source-id: 4be3ac39ee27c088b341c741d3f09f51d5a23ef0
2019-08-13 21:01:23 -07:00
Zafar Takhirov
4cc16782f3 Removing the make_module script. (#23635)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23635

It appears that adding new modules via a base class is no more complex than using a generation script.

Test Plan: Imported from OSS

Differential Revision: D16593364

Pulled By: zafartahirov

fbshipit-source-id: 852dcf41f3dfa2a89152042b8e61d0b6defa8feb
2019-08-13 09:58:28 -07:00
Daya Khudia
f510409281 Enable FBGEMM tests under UBSAN as well (#23570)
Summary:
Enabling tests under UBSAN as well
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23570

Test Plan:
buck test mode/dev caffe2/test:quantized
```
Running 29 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/3940649677415136
      ✓ caffe2/test:quantized - test_qtensor (test_quantized_tensor.TestQuantizedTensor) 0.536 1/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_per_channel_affine (test_quantized_tensor.TestQuantizedTensor) 0.453 2/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_reshape (test_quantized_tensor.TestQuantizedTensor) 0.302 3/29 (passed)
      ✓ caffe2/test:quantized - test_qadd_relu_same_qparams (test_quantized.TestQuantizedOps) 0.332 4/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_view (test_quantized_tensor.TestQuantizedTensor) 0.351 5/29 (passed)
      ✓ caffe2/test:quantized - test_qadd_relu_different_qparams (test_quantized.TestQuantizedOps) 0.348 6/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_dequantize_linear (test_quantized_tensor.TestQuantizedTensor) 0.338 7/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_copy (test_quantized_tensor.TestQuantizedTensor) 0.267 8/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_clone (test_quantized_tensor.TestQuantizedTensor) 0.330 9/29 (passed)
      ✓ caffe2/test:quantized - test_qrelu (test_quantized.TestQuantizedOps) 1.774 10/29 (passed)
      ✓ caffe2/test:quantized - test_pool_api (test_nn_quantized.ModuleAPITest) 0.418 11/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_load_save (test_quantized_tensor.TestQuantizedTensor) 0.724 12/29 (passed)
      ✓ caffe2/test:quantized - test_relu_api (test_nn_quantized.FunctionalAPITest) 1.013 13/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_quant_dequant (test_quantized_tensor.TestQuantizedTensor) 1.055 14/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_permute (test_quantized_tensor.TestQuantizedTensor) 0.696 15/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_dtypes (test_quantized_tensor.TestQuantizedTensor) 0.841 16/29 (passed)
      ✓ caffe2/test:quantized - test_quant_dequant_api (test_nn_quantized.ModuleAPITest) 0.616 17/29 (passed)
      ✓ caffe2/test:quantized - test_qtensor_creation (test_quantized_tensor.TestQuantizedTensor) 0.698 18/29 (passed)
      ✓ caffe2/test:quantized - test_qconv (test_quantized.TestQuantizedConv) 4.743 19/29 (passed)
      ✓ caffe2/test:quantized - test_cat (test_quantized.TestQuantizedOps) 6.992 20/29 (passed)
      ✓ caffe2/test:quantized - test_linear_api (test_nn_quantized.ModuleAPITest) 8.970 21/29 (passed)
      ✓ caffe2/test:quantized - test_conv_api (test_quantized_conv.QuantizedConvTest) 9.403 22/29 (passed)
      ↷ caffe2/test:quantized - test_qnnpack_linear (test_quantized.TestQNNPackOps) 0.000 23/29 (skipped)
Test output:
> Skipped: QNNPACK does not play well with UBSAN at the moment, so we skip the test if we are in a UBSAN environment.
> test_qnnpack_linear (test_quantized.TestQNNPackOps) ... skipped 'QNNPACK does not play well with UBSAN at the moment, so we skip the test if we are in a UBSAN environment.'
>
> ----------------------------------------------------------------------
> Ran 1 test in 0.000s
>
> OK (skipped=1)
      ↷ caffe2/test:quantized - test_qnnpack_relu (test_quantized.TestQNNPackOps) 0.000 24/29 (skipped)
Test output:
> Skipped: QNNPACK does not play well with UBSAN at the moment, so we skip the test if we are in a UBSAN environment.
> test_qnnpack_relu (test_quantized.TestQNNPackOps) ... skipped 'QNNPACK does not play well with UBSAN at the moment, so we skip the test if we are in a UBSAN environment.'
>
> ----------------------------------------------------------------------
> Ran 1 test in 0.000s
>
> OK (skipped=1)
      ✓ caffe2/test:quantized - test_max_pool2d (test_quantized.TestQuantizedOps) 8.453 25/29 (passed)
      ✓ caffe2/test:quantized - test_qlinear_unpack (test_quantized.TestQuantizedLinear) 0.664 26/29 (passed)
      ✓ caffe2/test:quantized - test_qconv_unpack (test_quantized.TestQuantizedConv) 2.965 27/29 (passed)
      ✓ caffe2/test:quantized - test_qlinear (test_quantized.TestQuantizedLinear) 1.915 28/29 (passed)
      ✓ caffe2/test:quantized - test_conv_api (test_nn_quantized.ModuleAPITest) 60.804 29/29 (passed)
      ✓ caffe2/test:quantized - main 0.000 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/3940649677415136
Summary (total time 68.66s):
  PASS: 28
  FAIL: 0
  SKIP: 2
    caffe2/test:quantized - test_qnnpack_linear (test_quantized.TestQNNPackOps)
    caffe2/test:quantized - test_qnnpack_relu (test_quantized.TestQNNPackOps)
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

Reviewed By: jianyuh

Differential Revision: D16569166

Pulled By: dskhudia

fbshipit-source-id: 53522b4162eb1ebb35b408a1503d9664305c85b0
2019-08-12 17:59:22 -07:00
James Reed
a35d2902ef jit.script() testing and fixes (#23891)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23891

This adds initial test coverage for quantization, checking that the modules can be scripted. Testing for tracing and serialization is forthcoming.
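A sketch of the kind of check being added, using the later one-call dynamic API for brevity:
```
import torch

float_model = torch.nn.Sequential(torch.nn.Linear(8, 8))
qmodel = torch.quantization.quantize_dynamic(float_model, {torch.nn.Linear})
# A quantized module must survive scripting and give the same answer.
scripted = torch.jit.script(qmodel)
x = torch.randn(2, 8)
assert torch.allclose(qmodel(x), scripted(x))
```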

Test Plan: Imported from OSS

Differential Revision: D16698045

Pulled By: jamesr66a

fbshipit-source-id: 96d80d938b816220af72359165a7b96d998a30c9
2019-08-08 12:06:18 -07:00
James Reed
40f0b1c844 Enable OSS quantization tests (#23858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23858

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23718

Changes:

- Enable tests for quantization test files in `run_tests.py`
- Remove `__future__` imports from `torch/nn/qat/modules/__init__.py`, since `unicode_literals` breaks imports on Python 2: the elements in `__all__` become `unicode` rather than `str` (illustrated after this list)
- Skip PostTrainingQuantTests if the build doesn't have FBGEMM (only a small subset of targets in tests) or if testing under UBSAN (the suppression file doesn't seem to work)
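A minimal illustration of the Python 2 pitfall (hypothetical package; this only misbehaves on Python 2):
```
# package/__init__.py
from __future__ import unicode_literals

__all__ = ['Linear']  # under unicode_literals these entries are unicode, not str
# On Python 2, `from package import *` then rejects the non-str names.
```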

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D16639467

Pulled By: jamesr66a

fbshipit-source-id: 532766797c216976dd7e07d751f768ff8e0fc207
2019-08-06 11:20:30 -07:00
Jerry Zhang
89956374c3 Remove qconfig_dict from API (#23465)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23465

We decided not to allow users to use qconfig_dict for quantization,
since that API is not robust.

Differential Revision: D16611504

fbshipit-source-id: b0d1d311b32c990a165c480f50e9ce3d68b785b5
2019-08-02 10:28:48 -07:00
Zafar Takhirov
058645acb1 Fusion and _intrinsic modules (#23003)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23003

Adds torch.quantization.fuse_module and the torch.nn._intrinsic ConvReLU and LinearReLU modules.

A fusion function to combine specific module sequences: (conv, bn) and (conv, bn, relu).
In all cases, modules are replaced in place: the first module is replaced with the _intrinsic fused module and the remaining modules are replaced by nn.Identity.
Supports both training and eval. For training, the modules are "fused" into a sequential container, to allow further module swaps for quantization-aware training.
Also adds torch.nn._intrinsic modules for ConvReLU and LinearReLU.

TODO: Add tests for _intrinsic modules.

Conv/BN fusion code is based on DsKhudia's implementation; the folding arithmetic is sketched below.
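For reference, a sketch of the standard per-channel Conv/BN folding arithmetic (not necessarily this commit's exact code):
```
import torch

def fuse_conv_bn_weights(conv_w, conv_b, bn_rm, bn_rv, bn_eps, bn_w, bn_b):
    # y = bn_w * (conv(x) - running_mean) / sqrt(running_var + eps) + bn_b
    # folds into one conv with rescaled weights and a shifted bias.
    scale = bn_w * torch.rsqrt(bn_rv + bn_eps)      # one factor per out-channel
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)
    conv_b = conv_b if conv_b is not None else torch.zeros_like(bn_rm)
    fused_b = (conv_b - bn_rm) * scale + bn_b
    return fused_w, fused_b
```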

Differential Revision: D16199720

fbshipit-source-id: 95fb9ffe72b361d280313b2ec57de2acd4f9dda2
2019-07-23 14:54:19 -07:00
Jerry Zhang
77353636de Conv module (#23084)
Summary:
Added Conv module for QAT.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23084
ghstack-source-id: 86862445

Differential Revision: D16379417

fbshipit-source-id: 742cc8b8e0f132070ca4943a1c2e3db60c2b5bdc
2019-07-19 18:49:52 -07:00
Jerry Zhang
7cc029cb75 Quantization aware training in eager mode (#23082)
Summary:
Add support for quantization aware training in eager mode

Modifications to Post training flow:
## Prepare
* Fusion: e.g. (Conv, Bn) → ConvBn (float)
* Swapping: To insert fake_quant for the weight, we need to swap the float modules that have weights with the corresponding QAT modules, e.g. Conv → torch.nn.qat.Conv, ConvBn → torch.nn._intrinsic.qat.ConvBn
* Previously we were considering modifying the weight in a forward_pre hook and changing it back in a forward hook:
```
def forward_pre_hook(self, input):
    self.float_weight = self.weight
    self.weight = self.fake_quantize(self.float_weight)

def forward_hook(self, input):
    self.weight = self.float_weight
```

* Assignments to self.weight are needed because we can’t change the forward function, and the forward function uses self.weight.
* But we would need to keep two copies of the weight in this case, so it’s probably better to just swap the module.
* So we want to just swap Conv to torch.nn.qat.Conv and Linear to torch.nn.qat.Linear.
* QAT modules have fake_quant for the output and weights inserted in the forward function (sketched below).
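A sketch of roughly what a swapped QAT module looks like, assuming a qconfig carrying `weight` and `activation` fake-quant factories (names follow the shape torch.nn.qat.Linear eventually took):
```
import torch.nn as nn
import torch.nn.functional as F

class QATLinear(nn.Linear):
    # Only one copy of the weight is kept: the float weight stays the
    # parameter, and fake quantization happens on the fly in forward.
    def __init__(self, in_features, out_features, qconfig):
        super().__init__(in_features, out_features)
        self.weight_fake_quant = qconfig.weight()
        self.activation_fake_quant = qconfig.activation()

    def forward(self, x):
        out = F.linear(x, self.weight_fake_quant(self.weight), self.bias)
        return self.activation_fake_quant(out)
```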

## Convert
* The flow should be identical to PTQ, but the swapping dictionary is slightly different since modules are changed in the prepare step.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23082
ghstack-source-id: 86824650

Differential Revision: D16379374

fbshipit-source-id: 7d16d1acd87025065a24942ff92abf18e9fc8070
2019-07-19 14:57:25 -07:00
Soumith Chintala
84c2c89e2c Revert D16199356: [qat] Quantization aware training in eager mode
Differential Revision: D16199356

Original commit changeset: 62aeaf47c12c

fbshipit-source-id: d06a96b0a617ae38029ffb246173ec065454b666
2019-07-19 03:18:48 -07:00
Soumith Chintala
f19aa12ae5 Revert D16274792: [qat] Conv module
Differential Revision: D16274792

Original commit changeset: 1da10194123b

fbshipit-source-id: 71b34774b463f2350289bd39b8cfd798e095ffa5
2019-07-19 03:18:45 -07:00
Jerry Zhang
12d9d768b8 Conv module (#22899)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22899

Added Conv module for QAT.

Reviewed By: zafartahirov

Differential Revision: D16274792

fbshipit-source-id: 1da10194123b2759a6a35c60d1c2d2c0b569ccdc
2019-07-18 18:58:07 -07:00
Jerry Zhang
65ef671d11 Quantization aware training in eager mode (#22732)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22732

Add support for quantization aware training in eager mode

Modifications to Post training flow:
## Prepare
* Fusion: e.g. (Conv, Bn) → ConvBn (float)
* Swapping: To insert fake_quant for the weight, we need to swap the float modules that have weights with the corresponding QAT modules, e.g. Conv → torch.nn.qat.Conv, ConvBn → torch.nn._intrinsic.qat.ConvBn
* Previously we were considering modifying the weight in a forward_pre hook and changing it back in a forward hook:
```
def forward_pre_hook(self, input):
    self.float_weight = self.weight
    self.weight = self.fake_quantize(self.float_weight)

def forward_hook(self, input):
    self.weight = self.float_weight
```

* Assignments to self.weight are needed because we can’t change the forward function, and the forward function uses self.weight.
* But we would need to keep two copies of the weight in this case, so it’s probably better to just swap the module.
* So we want to just swap Conv to torch.nn.qat.Conv and Linear to torch.nn.qat.Linear.
* QAT modules have fake_quant for the output and weights inserted in the forward function.

## Convert
* The flow should be identical to PTQ, but the swapping dictionary is slightly different since modules are changed in the prepare step.

Reviewed By: zafartahirov

Differential Revision: D16199356

fbshipit-source-id: 62aeaf47c12c62a87d9cac208f25f7592e245d6c
2019-07-18 18:58:03 -07:00
Lucas Kabela
86fc417147 Move Quantization Models to common_quantization (#22706)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22706

Moved the models used for quantization tests from test_quantization.py to common_quantization.py.

Reviewed By: jerryzh168

Differential Revision: D16189865

fbshipit-source-id: 409b43454b6b3fe278ac16b1affb9085d6ed6835
2019-07-10 15:05:49 -07:00
Lucas Kabela
3e3e6ee335 Add common_quantized test case utilities (#22694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22694

Move quantization and quantized utility functions for testing to common_quantized.py and common_quantization.py. Additionally, add a quantized test case base class which contains common methods for checking the results of quantization on modules. As a consequence of the move, fixed the imports at the top of test_quantized.py and test_quantization.py to use the new utilities.

Reviewed By: jerryzh168

Differential Revision: D16172012

fbshipit-source-id: 329166af5555fc829f26bf1383d682c25c01a7d9
2019-07-10 12:23:36 -07:00
Jerry Zhang
164388150a fix lint (#22654)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22654

As titled.

Reviewed By: bddppq

Differential Revision: D16168219

fbshipit-source-id: db1a5e2161e7be70b2f6e6b4beaa27ea91f853f2
2019-07-09 16:41:26 -07:00
Jerry Zhang
5040d52a5a torch.quantization conversion utilities, observers for eager mode quantization (#22010)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22010

torch.quantization module with observers and conversion routines
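A sketch of the eager-mode post-training flow these utilities enable, as the API later stabilized:
```
import torch

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.qconfig = torch.quantization.default_qconfig
# prepare() inserts observers, a calibration pass records (min, max)
# ranges, and convert() swaps modules for quantized counterparts.
torch.quantization.prepare(model, inplace=True)
with torch.no_grad():
    model(torch.randn(1, 3, 32, 32))
torch.quantization.convert(model, inplace=True)
```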

Reviewed By: zafartahirov

Differential Revision: D15554183

fbshipit-source-id: 05a3fabe28dd701978b8ecebf5bfc3a4c044ba5c
2019-07-09 10:51:38 -07:00