Commit Graph

16 Commits

Author SHA1 Message Date
James Reed
a35d2902ef jit.script() testing and fixes (#23891)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23891

This adds an initial set of test coverage for quantization that checks whether the modules can be scripted. Testing for tracing and serialization is forthcoming.
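
For illustration only (not part of this commit's test plan), a minimal sketch of the scripting check, using today's torch.nn.quantized and quantize_per_tensor names (spellings have shifted since this commit):
```
import torch
import torch.nn.quantized as nnq

qlinear = nnq.Linear(5, 10)           # quantized Linear with default qparams
scripted = torch.jit.script(qlinear)  # should compile without error
xq = torch.quantize_per_tensor(torch.randn(4, 5), scale=1.0, zero_point=0, dtype=torch.quint8)
out = scripted(xq)                    # the scripted module runs on quantized input
```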

Test Plan: Imported from OSS

Differential Revision: D16698045

Pulled By: jamesr66a

fbshipit-source-id: 96d80d938b816220af72359165a7b96d998a30c9
2019-08-08 12:06:18 -07:00
Daya Khudia
4104e80eae qconv+relu and qlinear+relu modules (#23410)
Summary:
Adds qconv+relu and qlinear+relu modules in nn/_intrinsic/quantized.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23410
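
A hedged usage sketch of the fused modules (the nn._intrinsic namespace was later renamed to nn.intrinsic; the paths below reflect the current spelling, not necessarily the one in this commit):
```
import torch
import torch.nn.intrinsic.quantized as nniq

linear_relu = nniq.LinearReLU(5, 10)  # fused quantized Linear followed by ReLU
xq = torch.quantize_per_tensor(torch.randn(2, 5), scale=0.1, zero_point=0, dtype=torch.quint8)
yq = linear_relu(xq)                  # output is quantized and already clamped at zero
```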

Test Plan:
Extended tests to test these new modules as well

buck test mode/dev caffe2/test:quantized -- 'test_linear_api'  --print-passing-details
```
Running 1 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/2251799820197379
      ✓ caffe2/test:quantized - test_linear_api (test_nn_quantized.ModuleAPITest) 4.055 1/1 (passed)
Test output:
> test_linear_api (test_nn_quantized.ModuleAPITest)
> test API functionality for nn.quantized.linear and nn._intrinsic.quantized.linear_relu ... ok
>
> ----------------------------------------------------------------------
> Ran 1 test in 4.056s
>
> OK
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/2251799820197379
Summary (total time 10.66s):
  PASS: 1
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

buck test mode/dev caffe2/test:quantized -- 'test_conv_api'  --print-passing-details
```
Running 2 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/4785074607089664
      ✓ caffe2/test:quantized - test_conv_api (test_quantized_conv.QuantizedConvTest) 5.195 1/2 (passed)
Test output:
> test_conv_api (test_quantized_conv.QuantizedConvTest)
> Tests the correctness of the conv functional. ... ok
>
> ----------------------------------------------------------------------
> Ran 1 test in 5.195s
>
> OK
      ✓ caffe2/test:quantized - test_conv_api (test_nn_quantized.ModuleAPITest) 10.616 2/2 (passed)
Test output:
> test_conv_api (test_nn_quantized.ModuleAPITest)
> Tests the correctness of the conv module. ... ok
>
> ----------------------------------------------------------------------
> Ran 1 test in 10.616s
>
> OK
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/4785074607089664
Summary (total time 17.31s):
  PASS: 2
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

Differential Revision: D16505333

Pulled By: dskhudia

fbshipit-source-id: 04f45cd0e76dc55f4694d558b913ab2958b7d727
2019-07-26 08:50:36 -07:00
Zafar Takhirov
94711d7471 Quantized conv avoid functional usage (#22733)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22733

This refactor changes the conv module to avoid using the functional ops.
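
A minimal sketch of using the module after this refactor (assuming the present-day torch.nn.quantized names and an available quantized engine such as fbgemm):
```
import torch
import torch.nn.quantized as nnq

conv = nnq.Conv2d(3, 8, kernel_size=3, padding=1)  # quantized conv module
xq = torch.quantize_per_tensor(torch.randn(1, 3, 16, 16), scale=0.1, zero_point=0, dtype=torch.quint8)
yq = conv(xq)  # forward calls the quantized conv op directly rather than a functional wrapper
```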

Reviewed By: jerryzh168

Differential Revision: D15835572

fbshipit-source-id: f2294cd708fbe8372eb3a15cc60d83777d4f7029
2019-07-24 11:43:12 -07:00
Jerry Zhang
d7448c7812 quantized conv module (#23178)
Summary:
att

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23178
ghstack-source-id: 86973164

Differential Revision: D16426871

fbshipit-source-id: a2ebb38997acfeb61b7dfd6b11dd8ee9b3a7a8ed
2019-07-22 20:47:40 -07:00
Jerry Zhang
1c574458b0 nn_quantized test (#23169)
Summary:
- scale/zero_point in quantized modules should be Tensors
- fix the conv module permutation API

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23169
ghstack-source-id: 86956383

Reviewed By: zafartahirov

Differential Revision: D16423570

fbshipit-source-id: d29498e07bdd8f71a33b4e16e089f80847bbca6d
2019-07-22 15:53:36 -07:00
Zafar Takhirov
963707c5ea MaxPool2d in the torch (#22765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22765

The pooling signature is the same as the non-quantized one, so it is added to native_functions.yaml.
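
A small sketch of what "same signature" means in practice (assumed usage, not the commit's test): the float nn.MaxPool2d module can be applied to a quantized tensor directly.
```
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)
xq = torch.quantize_per_tensor(torch.randn(1, 3, 8, 8), scale=0.1, zero_point=0, dtype=torch.quint8)
yq = pool(xq)  # output stays quantized with the same scale/zero_point
```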

Reviewed By: jerryzh168

Differential Revision: D16102608

fbshipit-source-id: 7627ad8f02a231f488b74d1a245b853f89d9c419
2019-07-20 21:41:09 -07:00
Jianyu Huang
8ec712da30 Add the support of handle Bias being nullptr for torch.ops.quantized.fbgemm_linear (#22403)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22403

- C10 Operator Registration (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/core/op_registration/op_registration.cpp) supports the None type.

- ATen has None Tensor support, e.g., https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml#L1078
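
A hedged sketch of what accepting a null bias enables at the module level (using today's nn.quantized.Linear; the bias_ keyword spelling is an assumption about the current API, and the underlying op has since been renamed from fbgemm_linear):
```
import torch
import torch.nn.quantized as nnq

m = nnq.Linear(4, 8, bias_=False)  # no bias tensor is packed alongside the weight
xq = torch.quantize_per_tensor(torch.randn(2, 4), scale=0.1, zero_point=0, dtype=torch.quint8)
yq = m(xq)
```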

Reviewed By: zafartahirov

Differential Revision: D16069522

fbshipit-source-id: 3acaec783fc138ff36b14ffc0582d0764be4ad34
2019-07-11 17:33:08 -07:00
Junjie Bai
3fabb9f105 Fix lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22737

Differential Revision: D16200090

Pulled By: bddppq

fbshipit-source-id: 3819716a9b01f073966fc8b420c6a0b8d13232ac
2019-07-11 11:09:24 -07:00
Zafar Takhirov
d21e476dcd Quantized Conv2d Module (#21323)
Summary:
Stack:
      https://github.com/pytorch/pytorch/issues/21808 Quantized conv avoid functional usage  [💛](https://our.intern.facebook.com/intern/diff/D15835572/)
      **https://github.com/pytorch/pytorch/issues/21323 Quantized Conv2d Module**  [💛](https://our.intern.facebook.com/intern/diff/D15551835/)

Quantized Conv2d Module
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21323

Test Plan:
Tests are split into two parts: functional and API.

`buck test mode/dev caffe2/test:quantized -- test_conv_api` : https://our.intern.facebook.com/intern/testinfra/testrun/4785074605318491

```
Parsing buck files: finished in 1.4 sec
Building: finished in 4.6 sec (100%) 7136/7136 jobs, 2 updated
  Total time: 6.1 sec
Trace available for this run at /tmp/testpilot.20190703-153023.392592.log
TestPilot test runner for Facebook. See https://fburl.com/testpilot for details.
Testpilot build revision 7149de230b9e1cdc7a872bb31fe099f0616dee09 fbpkg e59e6ab0fe8e47a496f915d34555c3ad at Fri Jun 28 12:20:54 2019 by twsvcscm from /usr/local/fbprojects/packages/testinfra.testpilot/647/t.par
Discovering tests
Running 2 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/4785074605318491
      ✓ caffe2/test:quantized - test_conv_api (test_nn_quantized.ModuleAPITest) 0.044 1/2 (passed)
      ✓ caffe2/test:quantized - test_conv_api (test_quantized_conv.FunctionalAPITest) 5.109 2/2 (passed)
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/4785074605318491
Summary (total time 9.08s):
  PASS: 2
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```

Differential Revision: D15551835

Pulled By: zafartahirov

fbshipit-source-id: 481a7df4b8a88e485437e1596eefb08d5e6766fa
2019-07-10 21:31:24 -07:00
Jerry Zhang
5e77111486 nn.quantized.Relu and nn.quantize.Quantize/DeQuantize modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21930
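
Illustrative sketch of the Quantize/DeQuantize half of this commit (module paths per today's torch.nn.quantized namespace; the ReLU module has moved around since, so it is omitted here):
```
import torch
import torch.nn.quantized as nnq

quant = nnq.Quantize(scale=0.1, zero_point=0, dtype=torch.quint8)
dequant = nnq.DeQuantize()

x = torch.randn(2, 3)
xq = quant(x)          # float tensor -> quantized tensor
x_back = dequant(xq)   # quantized tensor -> float tensor
```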

Differential Revision: D15554224

fbshipit-source-id: 1de9ac7412468106be60e53852c23318ead37bc6
2019-06-27 16:15:17 -07:00
Jerry Zhang
2832e33a94 Add serialization for nn.quantized.Linear module (#21925)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21925

att
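
For illustration (assumed usage, not this commit's test plan): the quantized Linear round-trips through its state_dict, which is what this serialization support enables.
```
import torch
import torch.nn.quantized as nnq

m = nnq.Linear(4, 8)
state = m.state_dict()     # packed weight plus output scale/zero_point

m2 = nnq.Linear(4, 8)
m2.load_state_dict(state)  # restores the quantized parameters
```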

Differential Revision: D15483071

fbshipit-source-id: 3a218dad5b653b38a0885339889ff70c75a13bef
2019-06-27 14:57:22 -07:00
Jerry Zhang
5c46e701fc Implementation of nn.quantized.linear module (#21921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21921

Calls FBGEMM kernels to implement the quantized linear operator. This operator is used only for inference.
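
A minimal inference sketch (assuming the current torch.nn.quantized.Linear API and an x86 build with FBGEMM available):
```
import torch
import torch.nn.quantized as nnq

torch.backends.quantized.engine = "fbgemm"  # select the FBGEMM kernels
m = nnq.Linear(16, 32)
xq = torch.quantize_per_tensor(torch.randn(8, 16), scale=0.05, zero_point=0, dtype=torch.quint8)
yq = m(xq)                                  # inference-only quantized linear
```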

Differential Revision: D15375695

fbshipit-source-id: b9ca6c156fd60481fea83e55603b2897f7bfc3eb
2019-06-27 14:09:48 -07:00
Jerry Zhang
fd19d06db4 remaining use of t.quantize_linear (#21219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21219

att

Differential Revision: D15583802

fbshipit-source-id: 742e8b799d67485b2d48b1458839f3f3b000f200
2019-05-31 16:05:44 -07:00
Jerry Zhang
85fad0597c Add qint8 type (int8_t) (#19984)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19984

Adds qint8 for QTensor, with an underlying type of int8_t.
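
A quick sketch of the new dtype (shown with quantize_per_tensor, today's name for the quantize_linear call used at the time):
```
import torch

x = torch.randn(3, 3)
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(xq.dtype)       # torch.qint8
print(xq.int_repr())  # the underlying int8_t values
```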

Reviewed By: jianyuh

Differential Revision: D15150715

fbshipit-source-id: 57580f599d46f9323af5ce462dbbc464b25e40d7
2019-05-17 20:35:05 -07:00
Jerry Zhang
abb3698976 Add QInt32 ScalarType and qint32 data type (#19816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19816

We need this for quantizing the bias.
Adds a third ScalarType argument to `quantize_linear`.
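
A hedged sketch of quantizing a bias with the new qint32 dtype (shown with quantize_per_tensor, the later name of quantize_linear; the dtype argument is the one this commit adds):
```
import torch

bias = torch.randn(8)
bias_q = torch.quantize_per_tensor(bias, scale=0.01, zero_point=0, dtype=torch.qint32)
print(bias_q.dtype)  # torch.qint32
```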

Differential Revision: D15094174

fbshipit-source-id: f19ec8f4716cf5fe0aa21b38d45af6d27c9ab377
2019-05-15 18:50:18 -07:00
Jerry Zhang
8ca10d35e5 Add torch.nn.quantized.functional namespace (#20042)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20042

Exposing torch.ops.quantized as torch.nn.quantized.functional
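
A small usage sketch of the functional namespace (assuming the signature of today's torch.nn.quantized.functional.max_pool2d):
```
import torch
import torch.nn.quantized.functional as qF

xq = torch.quantize_per_tensor(torch.randn(1, 3, 8, 8), scale=0.1, zero_point=0, dtype=torch.quint8)
yq = qF.max_pool2d(xq, kernel_size=2)  # thin wrapper over the quantized pooling op
```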

Differential Revision: D15178099

fbshipit-source-id: 8d65134bd727296f2750bbd2b54df0b99fc84b33
2019-05-06 18:49:58 -07:00