Commit Graph

111 Commits

Author SHA1 Message Date
Supriya Rao
b5d54db6f4 Revert D22071278: [quant][graphmode] Refactor dynamic quant tests
Test Plan: revert-hammer

Differential Revision:
D22071278

Original commit changeset: 54292addcfbc

fbshipit-source-id: 20ffbea0fd05e974b31381437c61040b5b24c993
2020-06-16 15:01:05 -07:00
Supriya Rao
ddeaa74382 [quant][graphmode] Refactor dynamic quant tests (#40039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40039

Similar to static quant, break the dynamic quant tests up into op-level tests and tests for the JIT passes

Test Plan:
python test/test_quantization.py TestQuantizeScriptPTDQOps
python test/test_quantization.py TestDynamicQuantizeScriptJitPasses

Imported from OSS

Differential Revision: D22071278

fbshipit-source-id: 54292addcfbc00f7af960fb333921db2ff9fda04
2020-06-16 13:14:48 -07:00
Kimish Patel
bb12e4dca0 Add JIT fusion pass to fuse quantized add and relu. (#38897)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38897

Quantized ops support add_relu. This pass finds the quantized add + relu
pattern and fuses it into add_relu.
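
For illustration, a minimal eager-mode sketch (not the JIT pass itself) of the equivalence the fused op provides, assuming the quantized::add / quantized::add_relu signatures:

```
import torch

# The pass rewrites quantized::add followed by aten::relu into quantized::add_relu.
scale, zero_point = 0.1, 0
qa = torch.quantize_per_tensor(torch.randn(4), scale, zero_point, torch.quint8)
qb = torch.quantize_per_tensor(torch.randn(4), scale, zero_point, torch.quint8)

unfused = torch.relu(torch.ops.quantized.add(qa, qb, scale, zero_point))
fused = torch.ops.quantized.add_relu(qa, qb, scale, zero_point)
assert torch.equal(unfused.int_repr(), fused.int_repr())
```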

Test Plan: buck run caffe2/test:quantization -- test_quantization.TestFusionPasses

Reviewed By: jerryzh168

Differential Revision: D21690909

fbshipit-source-id: 607cf72dde535df15eb7638841543ab2156af464
2020-05-27 14:16:57 -07:00
Vasiliy Kuznetsov
b57c8b720e [wip] Make quantization modules work with DataParallel (#37032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37032

DataParallel requires all params and buffers of child modules to be updated
in place because of how it implements model replication during the
forward pass (see https://github.com/pytorch/pytorch/pull/12671 for
context). Any params or buffers not updated in place are lost and not
propagated back to the master.

This diff updates some quantized modules (TBD: all quantized modules? determine a good cut
point) to do their parameter updates in-place. This will enable static
quant and QAT to work correctly with DataParallel.
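
As a hedged illustration (not the actual observer code), "updating in place" here means mutating registered buffers instead of rebinding attributes:

```
import torch

class ToyMinMaxObserver(torch.nn.Module):
    """Illustrative only: the kind of in-place buffer update DataParallel needs."""
    def __init__(self):
        super().__init__()
        self.register_buffer("min_val", torch.tensor(float("inf")))
        self.register_buffer("max_val", torch.tensor(float("-inf")))

    def forward(self, x):
        # Rebinding (self.min_val = x.min()) would create a new tensor on the
        # replica and the updated statistics would be lost after forward.
        self.min_val.copy_(torch.min(self.min_val, x.min()))
        self.max_val.copy_(torch.max(self.max_val, x.max()))
        return x
```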

TODO: https://github.com/pytorch/pytorch/pull/32684 needs to land before we can fix the graph mode test failures on this PR.

Test Plan:
script failed before and passes after the diff:
https://gist.github.com/vkuzo/78b06c01f23f98ee2aaaeb37e55f8d40

TODO before land: add integration testing

Imported from OSS

Differential Revision: D21206454

fbshipit-source-id: df6b4b04d0ae0f7ef582c82d81418163019e96f7
2020-05-05 13:06:43 -07:00
Zafar Takhirov
a09cb5f2f5 [quant] quantized reflection_pad1d (#37452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37452

Test Plan: Imported from OSS

Differential Revision: D21286659

Pulled By: z-a-f

fbshipit-source-id: f9f4de497a790b296149313562d09f8ead5facee
2020-04-30 18:45:38 -07:00
Zafar
297cc5512e [quant] Enable convolution tests (#37494)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37494

Test Plan: Imported from OSS

Differential Revision: D21299442

Pulled By: z-a-f

fbshipit-source-id: 68513b52aaef852278f28031866f85123b016486
2020-04-29 12:24:45 -07:00
Jerry Zhang
facdd15cc6 [quant] Finishing refactor for quantization test files (#37366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37366

- put both fake quant module and observer module tests in test_workflow_module.py
- added test_quantized_functional.py
- moved tests from test_numerics.py to test_quantize.py and removed test_numerics.py

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21282198

fbshipit-source-id: 60107cee7d1ed2cd14a45650e91ec28b8a262c52
2020-04-28 21:40:57 -07:00
Jerry Zhang
230b68168b [quant] Refactor test files (#36964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36964

Rename and restructure quantization-related tests:
https://github.com/pytorch/pytorch/issues/31625

Test Plan:
.

Imported from OSS

Differential Revision: D21192509

fbshipit-source-id: 148c93e86e0ea68ab18a067fe74a8035a29a1e4e
2020-04-23 10:28:56 -07:00
Jerry Zhang
ab26dfb44e [quant] Move quantization tests into test/quantization (#35812)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35812

Test Plan:
.

Imported from OSS

Differential Revision: D20795329

fbshipit-source-id: 42cc905c44ce7b86720aeef512d747ff6788d7a2
2020-04-01 12:44:19 -07:00
Michael Suo
319aee1afb Revert D20771828: [quant] Move quantization tests into test/quantization
Test Plan: revert-hammer

Differential Revision:
D20771828

Original commit changeset: 5f1df5e86c29

fbshipit-source-id: d14f915f291ae8a90026c5b65624459211495f47
2020-03-31 23:01:00 -07:00
Jerry Zhang
fef6c617d4 [quant] Move quantization tests into test/quantization (#35688)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35688

Test Plan:
.

Imported from OSS

Differential Revision: D20771828

fbshipit-source-id: 5f1df5e86c29f7bdfbdc6563450e909b3bfdc07a
2020-03-31 20:30:57 -07:00
Supriya Rao
a090de380c [quant][graph] Add quant fusion for dynamic quantization (#35586)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35586

This pass fuses the choose_qparams-quant-dequant sequence.
Fusion for the weight tensor is the same as in static quant.
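
A hedged sketch of the activation sequence being matched (the choose_qparams callable stands in for the graph's choose_qparams node; names are illustrative):

```
import torch

def activation_pattern(x, choose_qparams):
    # choose_qparams computes (scale, zero_point) per batch at runtime; the tensor
    # is quantized and immediately dequantized before the fp32 op. Fusion removes
    # this sequence and calls the dynamic quantized op directly.
    scale, zero_point = choose_qparams(x)
    qx = torch.quantize_per_tensor(x, scale, zero_point, torch.quint8)
    return qx.dequantize()
```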

Test Plan:
python test/test_quantize_script.py

Imported from OSS

Differential Revision: D20755680

fbshipit-source-id: b7443770642b6e6fa0fa9da8a44637e9b2d4df70
2020-03-30 23:34:56 -07:00
Jerry Zhang
6fc2403951 [quant][graphmode] qconfig_dict support None (#35336)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35336

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D20655302

fbshipit-source-id: b453f3240ac487aa29629953b4d71274dbbc25fc
2020-03-29 12:47:47 -07:00
Supriya Rao
daba68c601 [quant][graph] Add a new observer type for dynamic quantization (#35455)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35455

In graph mode we need to observe the activation tensor for dynamic quantization. This observer should behave the same way as the quantization functions called in the dynamic operator.
Currently, for qlinear_dynamic we call quant_utils::ChooseQuantizationParams, which has its own logic for calculating scale and zero_point.
We mimic those calculations in the new observer.
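
A simplified sketch of the scale/zero_point math such an observer mimics for quint8 (the real ChooseQuantizationParams handles more corner cases):

```
import torch

def choose_qparams_sketch(x, qmin=0, qmax=255):
    # Extend the observed range to include zero so zero is exactly representable.
    min_val = min(x.min().item(), 0.0)
    max_val = max(x.max().item(), 0.0)
    scale = max((max_val - min_val) / float(qmax - qmin), torch.finfo(torch.float32).eps)
    zero_point = int(round(qmin - min_val / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    return scale, zero_point
```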

Test Plan:
python test/test_quantization.py ObserverTest

Imported from OSS

Differential Revision: D20664586

fbshipit-source-id: e987ea71fff777c21e00c498504e6586e92568a2
2020-03-26 17:38:21 -07:00
Supriya Rao
b4b8b3c0ca Revert D20630988: [quant][graph] Add a new observer type for dynamic quantization
Test Plan: revert-hammer

Differential Revision:
D20630988

Original commit changeset: 7e7aca77590f

fbshipit-source-id: 6bc67ca322c1703004e0053f8eba9b8f6a3a5f67
2020-03-25 18:52:21 -07:00
Supriya Rao
7e24ab8c4a [quant][graph] Add a new observer type for dynamic quantization (#35265)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35265

In graph mode we need to observe the activation tensor for dynamic quantization. This observer should behave the same way as the quantization functions called in the dynamic operator.
Currently, for qlinear_dynamic we call quant_utils::ChooseQuantizationParams, which has its own logic for calculating scale and zero_point.
We mimic those calculations in the new observer.

Test Plan:
python test/test_quantization.py ObserverTest

Imported from OSS

Differential Revision: D20630988

fbshipit-source-id: 7e7aca77590f965dcb423a705e68d030aaf98550
2020-03-25 16:50:05 -07:00
Lingyi Liu
fddcd72a31 Add more fusion (conv3d and batchnorm) support in the PyTorch quantization flow (#33540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33540
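
A hedged usage sketch of the resulting eager-mode fusion (the module layout here is illustrative):

```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(3, 8, kernel_size=3)
        self.bn = nn.BatchNorm3d(8)

    def forward(self, x):
        return self.bn(self.conv(x))

m = M().eval()
# Folds the BatchNorm3d statistics into the Conv3d weight and bias.
fused = torch.quantization.fuse_modules(m, [["conv", "bn"]])
```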

Differential Revision: D19994498

Pulled By: lly-zero-one

fbshipit-source-id: e5e13eab6924bd2ce1b57b16b672844b8b9638f5
2020-03-23 20:36:03 -07:00
Jerry Zhang
90ca7a1feb [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33927

Test Plan:
test will be added in later PRs

Imported from OSS

Differential Revision: D20354879

fbshipit-source-id: 03976f4b86c46dbdc4e45764a1e72f1a3855a404
2020-03-12 14:52:58 -07:00
James Reed
8a17dc65af [quantization] Make FP16 RNN use new prepack op (#34339)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34339

Test Plan: Imported from OSS

Differential Revision: D20297194

Pulled By: jamesr66a

fbshipit-source-id: 8bf6d0f2cb047e90bbdd184aaad337b143040d10
2020-03-07 10:04:01 -08:00
Supriya Rao
e236e15934 [quant] Run weight_post_process for QAT (#33852)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33852

This fixes an issue for QAT models. During eval, if we call `prepare_qat` and `convert` before calling `load_state_dict`, it throws an error because the weight info (num channels) is not updated in the observer module.
It is not an issue for the per-tensor case.

Fixes issue #33830
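
A hedged sketch of the eval workflow this fixes (the model class and checkpoint name are hypothetical):

```
import torch

model = MyQATModel()  # hypothetical float model
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)
# Before this fix, converting without running any data through the model could fail
# for per-channel weight quantization because the weight observer had never seen the
# weight (num channels unknown); the fix runs weight_post_process during convert.
model = torch.quantization.convert(model.eval())
model.load_state_dict(torch.load("qat_checkpoint.pt"))
```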

Test Plan:
python test/test_quantization.py EagerModePostTrainingQuantTest.test_eval_after_train
python test/test_quantization.py EagerModeQuantizationAwareTrainingTest.test_eval_after_train

Imported from OSS

Differential Revision: D20212996

fbshipit-source-id: a04af8fe4df2e555270ae4d6693f5777d86f8a46
2020-03-04 14:01:32 -08:00
Dmytro Dzhulgakov
a8fc3d8c2a Fix HistogramObserver to not do detach on input (#34114)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/33545, added a unittest
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34114

Differential Revision: D20224719

Pulled By: dzhulgakov

fbshipit-source-id: 053d3b3b0c86340027ba1b95b5f3c247aa151aee
2020-03-03 13:15:22 -08:00
Jianyu Huang
5ef1c2c5d2 Back out "[pt][quant] RNN debug test" (#33750)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33750

Original commit changeset: 8c38d8f067e5
ghstack-source-id: 98911215

Test Plan: CI

Differential Revision: D20090521

fbshipit-source-id: 73df43ad60574e44e80b36ebf6392030c3efb66e
2020-02-25 09:28:00 -08:00
Jianyu Huang
5b031d961d [pt][quant] RNN debug test (#33621)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33621

ghstack-source-id: 98746093

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_quantized_rnn \(test_quantization\.PostTrainingDynamicQuantTest\)'  --print-passing-details

Differential Revision: D20036968

fbshipit-source-id: 7cbb027a6afbe28bc250fc663089c6a9406e880b
2020-02-24 16:15:17 -08:00
Supriya Rao
c2d736cefb Add support for Dynamic LSTM quantization on Mobile (#32757)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32757

This PR updates the main quantize_dynamic API to use the QNNPACK backend on mobile
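
A hedged usage sketch, assuming a build where the QNNPACK engine is available:

```
import torch
import torch.nn as nn

class LSTMModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=1)

    def forward(self, x, hidden=None):
        return self.lstm(x, hidden)

torch.backends.quantized.engine = "qnnpack"  # select the mobile backend
qmodel = torch.quantization.quantize_dynamic(LSTMModel(), {nn.LSTM}, dtype=torch.qint8)
```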

Test Plan:
python test/test_quantization.py PostTrainingDynamicQuantTest.test_quantized_rnn

Imported from OSS

Differential Revision: D19632220

fbshipit-source-id: b4c51485c281d088524101b97c84dd806438b597
2020-01-29 20:55:48 -08:00
Jianyu Huang
6f7d5bb3e1 Temporarily disable the test_quantized_rnn test (#32742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32742

As the title says (see https://github.com/pytorch/pytorch/issues/32644).
ghstack-source-id: 97352793

Test Plan: CI

Differential Revision: D19611029

fbshipit-source-id: 9f4a155c909f419e41c1d7078eb2796dd17cedd2
2020-01-28 16:50:59 -08:00
James Reed
812b1ad869 [quantization] FP16 dynamic quantized Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32331
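
A hedged usage sketch of FP16 dynamic quantization of Linear layers:

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
# Weights are stored in float16 and the dynamic fp16 Linear kernel is used at runtime.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.float16)
```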

Test Plan: Imported from OSS

Differential Revision: D19441158

Pulled By: jamesr66a

fbshipit-source-id: c04247ffe707be68718c486c31bc6c6040f7dc11
2020-01-27 15:45:32 -08:00
Jerry Zhang
4cd6b5cda6 [quant] Re-enable test_nested that has different qconfig for shared ClassType (#32206)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32206

att

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D19508028

fbshipit-source-id: 5de3c2ef17de146feca03d7135a7e04f393de398
2020-01-23 15:32:57 -08:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe2/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Jerry Zhang
4314620ba0 [jit] Module clone work with shared ClassType (#31970)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31970

Now that a ClassType can be shared among different module instances, we preserve
the sharing in clone as well: if the original module has a ClassType that is
shared, we clone this ClassType once and share it among the cloned module
instances as well.

Test Plan:
build/test/test_jit

Imported from OSS

Differential Revision: D19406251

fbshipit-source-id: 2881c695f6e718e5432040a3817cf187a62017bf
2020-01-15 11:24:53 -08:00
James Reed
a3cdb7eca3 Fix default instantation of dynamic quantized LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31433

Test Plan: Imported from OSS

Differential Revision: D19164539

Pulled By: jamesr66a

fbshipit-source-id: 7045817ab3dfb530c4480a10523c4c6bcdbfc7eb
2019-12-18 16:59:00 -08:00
Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
James Reed
4fd20c0816 Kill hypothesis deadline testing (#30890)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30890

We've received way too many complaints about this functionality making tests flaky, and it's not providing value to us anyway. Let's cut the shit and kill deadline testing

Test Plan: Imported from OSS

Differential Revision: D18857597

Pulled By: jamesr66a

fbshipit-source-id: 67e3412795ef2fb7b7ee896169651084e434d2f6
2019-12-06 13:36:14 -08:00
Jerry Zhang
58cdf1429c Add tests for quantizing traced models (#30476)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30476

att

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18795724

fbshipit-source-id: 9253e102bf458d9185f68848071a4e4eff9f9b08
2019-12-05 23:03:45 -08:00
Jerry Zhang
1fa4908ac0 Refactor test_quantization.py and enable test_nested (#30475)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30475

att

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18795727

fbshipit-source-id: c9942c5361e0a34e91a08b8fc27405799db7ff4f
2019-12-05 21:56:03 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Lingyi Liu
59ca9b7430 Graph-mode quantization for convolution from traced model (#30245)
Summary:
In this PR, we enhance graph-mode quantization for aten::_convolution, which can be generated from the tracing path.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30245

Differential Revision: D18671597

Pulled By: lly-zero-one

fbshipit-source-id: 78a2470fbb0fe0def55d63c6bda7cbb5c89f7848
2019-11-23 01:24:50 -08:00
Lingyi Liu
7d3afc4186 enable the per channel dynamic quantization (#30122)
Summary:
This PR enables per-channel (row-wise) dynamic quantization for the linear operator. Given that we have seen some accuracy drop due to per-tensor quantization, we expect per-channel quantization to help improve accuracy.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30122
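
A hedged illustration of row-wise (per-channel along dim 0) weight quantization, the scheme being enabled here:

```
import torch

w = torch.randn(4, 8)                                         # linear weight: out_features x in_features
scales = (w.abs().max(dim=1).values / 127.0).clamp_min(1e-8)  # one scale per output row
zero_points = torch.zeros(4, dtype=torch.long)                # symmetric qint8
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)
```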

Differential Revision: D18630541

Pulled By: lly-zero-one

fbshipit-source-id: d52685deec5e7de46cd686ae649a8c8765b9cacf
2019-11-21 10:12:05 -08:00
Jerry Zhang
b2291d4600 Make PerChannelMinMaxObserver scriptable using torch.jit.ignore (#29416)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29416

att
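
A hedged sketch (not the actual observer) of the technique: methods marked with @torch.jit.ignore are skipped by the script compiler and remain eager-only:

```
import torch

class ToyObserver(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("min_val", torch.tensor(float("inf")))

    @torch.jit.ignore
    def _debug_summary(self) -> str:
        # Not compiled by TorchScript; callable only from eager mode.
        return "min_val={}".format(self.min_val.item())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.min_val.copy_(torch.min(self.min_val, x.min()))
        return x

scripted = torch.jit.script(ToyObserver())
```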

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18580906

fbshipit-source-id: 5370300b89e26c2b4662b17e51284e8708cb5843
2019-11-19 19:12:55 -08:00
James Reed
20fb8a814c PackedSequence support for quantized LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29585

Test Plan: Imported from OSS

Differential Revision: D18436569

Pulled By: jamesr66a

fbshipit-source-id: 0f32c0fcc897894e30d8e7ff203392c1a961ce60
2019-11-12 20:13:38 -08:00
Jerry Zhang
4bcf4796aa Make HistogramObserver scriptable with @torch.jit.ignore (#27950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27950

att

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18360139

fbshipit-source-id: 5459ae49c087886e4990de136198773a75b1c572
2019-11-07 18:02:44 -08:00
James Reed
821f8bfc2f Fix tracing for dynamic quantized LSTM (#29331)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29331

Closes #27954

This fixes the hard-coding of packed parameter values for the dynamic quantized LSTM by orchestrating the following dance:

1) Each variadic parameter on the module has its own Module. That Module defines the `__getstate__` and `__setstate__` methods s.t. packed weights are properly re-packed on model load.
2) Each of these modules is wrapped into a `torch.nn.ModuleList`, s.t. the parameters appear as attributes in the hierarchy. Then, `gatherParametersAndBuffers` (9c43b16df9/torch/csrc/jit/tracer.cpp (L285)) can see these parameters and create a `Value*` for them in the traced graph.
3) In forward, we need to convert from ModuleList -> Module -> Parameter to a simple TensorList of the parameters. We just use a loop here. In tracing, we simply record a `ListConstruct` with each of the proper parameter values. In scripting, the `ModuleList` is const, so it can be unrolled into the graph and a subsequent `ListConstruct` does its business.

The `forward` of the traced LSTM before and after this change are as follows:

Before
```
def forward(self,
    input: Tensor,
    argument_2: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
  hx, hx0, = argument_2
  _0, _1, _2 = torch.quantized_lstm(input, [hx, hx0], [CONSTANTS.c0, CONSTANTS.c1], True, 1, 0., True, False, False, dtype=12, use_dynamic=True)
  return (_0, (_1, _2))
```

After

```
def forward(self,
    input: Tensor,
    argument_2: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
  _0 = self.cell._all_weight_values
  _1 = getattr(_0, "0").param
  _2 = getattr(_0, "1").param
  hx, hx0, = argument_2
  _3, _4, _5 = torch.quantized_lstm(input, [hx, hx0], [_1, _2], True, 1, 0., True, False, False, dtype=12, use_dynamic=True)
  return (_3, (_4, _5))

```
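
A hedged Python sketch of steps 1-3 above (class and attribute names are illustrative, not the exact implementation):

```
import torch

class PackedParameter(torch.nn.Module):
    # Step 1: one Module per packed parameter.
    def __init__(self, param):
        super().__init__()
        self.param = param  # packed weight, e.g. from torch.ops.quantized.linear_prepack(w, b)

    @torch.jit.export
    def __getstate__(self):
        # Store the unpacked tensors so the packed weight can be rebuilt on load.
        return (torch.ops.quantized.linear_unpack(self.param), self.training)

    @torch.jit.export
    def __setstate__(self, state):
        self.param = torch.ops.quantized.linear_prepack(*state[0])
        self.training = state[1]

# Step 2: hold the wrappers in a ModuleList so the tracer sees them as attributes;
# step 3 rebuilds a plain list of packed params inside forward:
#   self.cell._all_weight_values = torch.nn.ModuleList(PackedParameter(p) for p in packed)
#   params = [m.param for m in self.cell._all_weight_values]
```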

Test Plan: Imported from OSS

Differential Revision: D18374904

Pulled By: jamesr66a

fbshipit-source-id: f1a9b58998bc365b9baad38c21fd4bb510dd639c
2019-11-07 13:45:39 -08:00
Mike Ruberry
84a6583ba1 Revert D18359880: Fix tracing for dynamic quantized LSTM
Test Plan: revert-hammer

Differential Revision:
D18359880

Original commit changeset: 0ff2cad294a1

fbshipit-source-id: 834cd43b39fb754f90c8b18b8ab9b837f2b511ab
2019-11-06 21:10:33 -08:00
James Reed
f17e02fd94 Fix tracing for dynamic quantized LSTM (#29331)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29331

Closes #27954

This fixes the hard-coding of packed parameter values for the dynamic quantized LSTM by orchestrating the following dance:

1) Each variadic parameter on the module has its own Module. That Module defines the `__getstate__` and `__setstate__` methods s.t. packed weights are properly re-packed on model load.
2) Each of these modules is wrapped into a `torch.nn.ModuleList`, s.t. the parameters appear as attributes in the hierarchy. Then, `gatherParametersAndBuffers` (9c43b16df9/torch/csrc/jit/tracer.cpp (L285)) can see these parameters and create a `Value*` for them in the traced graph.
3) In forward, we need to convert from ModuleList -> Module -> Parameter to a simple TensorList of the parameters. We just use a loop here. In tracing, we simply record a `ListConstruct` with each of the proper parameter values. In scripting, the `ModuleList` is const, so it can be unrolled into the graph and a subsequent `ListConstruct` does its business.

The `forward` of the traced LSTM before and after this change are as follows:

Before
```
def forward(self,
    input: Tensor,
    argument_2: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
  hx, hx0, = argument_2
  _0, _1, _2 = torch.quantized_lstm(input, [hx, hx0], [CONSTANTS.c0, CONSTANTS.c1], True, 1, 0., True, False, False, dtype=12, use_dynamic=True)
  return (_0, (_1, _2))
```

After

```
def forward(self,
    input: Tensor,
    argument_2: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
  _0 = self.cell._all_weight_values
  _1 = getattr(_0, "0").param
  _2 = getattr(_0, "1").param
  hx, hx0, = argument_2
  _3, _4, _5 = torch.quantized_lstm(input, [hx, hx0], [_1, _2], True, 1, 0., True, False, False, dtype=12, use_dynamic=True)
  return (_3, (_4, _5))

```

Test Plan: Imported from OSS

Differential Revision: D18359880

Pulled By: jamesr66a

fbshipit-source-id: 0ff2cad294a1871123015dfc704eaf73a7ac1d9e
2019-11-06 17:02:12 -08:00
Xiang Gao
25e261d6d5 assertEquals is deprecated, use assertEqual instead
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28335

Differential Revision: D18263456

Pulled By: ngimel

fbshipit-source-id: c0f79071feaa5a4c3c4b20505013bf7c4b5455d5
2019-11-05 09:52:21 -08:00
Jerry Zhang
d690521cf6 Add e2e test for conv+bn (#27348)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27348

att

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18182920

fbshipit-source-id: 40edc4d85903f979cd4755d6785d2842faa4d566
2019-11-01 11:28:47 -07:00
Jerry Zhang
59c5de4d0e Don't permute in quantized::conv2d pattern (#27347)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27347

It's already done in the op, so we don't need to permute again.

Test Plan:
test_jit.py
we'll test in e2e tests

Imported from OSS

Differential Revision: D18182919

fbshipit-source-id: 04dd2a19a719828fbc7b62e451b81752187e0fcb
2019-10-31 15:58:28 -07:00
Jerry Zhang
1c436ded44 Remove test_quantizer.py and reuse one of its tests in test_quantization.py (#27269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27269

Remove `test_quantizer.py`; one of its tests is rewritten and added to
`test_quantization.py`.
The conv test is removed for now since the conv pattern is still broken; we'll add
another test later.
ghstack-source-id: 92869823

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18182916

fbshipit-source-id: 325b5d8e877228d6a513e3ddf52c974479250d42
2019-10-29 19:04:21 -07:00
Zafar Takhirov
a5ac7f6387 Changing observer name
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27779

Test Plan: Imported from OSS

Differential Revision: D17886605

Pulled By: z-a-f

fbshipit-source-id: 68c50b482e65015336ff27171fd730da493525b6
2019-10-17 11:36:03 -07:00
Raghuraman Krishnamoorthi
ac0f18437f MovingAverage Observer (#27396)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27396

Observer that estimates moving averages of the min and max values per batch; better suited for quantization-aware training than MinMax observers, which track extremal values across batches.
ghstack-source-id: 91369018
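
A hedged sketch of the moving-average update rule (the averaging constant and names are illustrative):

```
import torch

class MovingAverageMinMaxSketch:
    def __init__(self, averaging_constant=0.01):
        self.c = averaging_constant
        self.min_val = None
        self.max_val = None

    def observe(self, x):
        batch_min, batch_max = x.min(), x.max()
        if self.min_val is None:
            self.min_val, self.max_val = batch_min, batch_max
        else:
            # Exponential moving average of per-batch extrema, rather than the
            # running min/max over all batches that a plain MinMax observer keeps.
            self.min_val = self.min_val + self.c * (batch_min - self.min_val)
            self.max_val = self.max_val + self.c * (batch_max - self.max_val)
```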

Test Plan:
buck test caffe2/test:quantization -- 'test_per_tensor_observers \(test_quantization\.ObserverTest\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_per_channel_observers \(test_quantization\.ObserverTest\)' --print-passing-details

Differential Revision: D17727213

fbshipit-source-id: 024a890bf3dd0bf269d8bfe61f19871d027326f0
2019-10-04 16:28:59 -07:00
Zafar Takhirov
6bb7433ad5 Replacing the skip_list with white_list in the qconfig propagation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27183

Test Plan: Imported from OSS

Differential Revision: D17700548

Pulled By: zafartahirov

fbshipit-source-id: 18e6ffbda496b14ac1da1783f928ad539cdb1d16
2019-10-03 20:40:17 -07:00