Commit Graph

85 Commits

Jerry Zhang
f9446cb15a [quant][refactor] Remove register api and rename get_*_mapping to get_default_*_mapping (#46337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46337

We plan to pass around the mappings instead of using the global registration API, to keep
the mappings local to the transformations the user is performing
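
For illustration, a minimal sketch of the intended usage, assuming the renamed helper get_default_static_quant_module_mappings is importable from torch.quantization; the model here is made up:

```
import torch
import torch.nn as nn
from torch.quantization import (
    prepare, convert, default_qconfig,
    get_default_static_quant_module_mappings,  # assumed post-rename name
)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.qconfig = default_qconfig
prepare(model, inplace=True)
model(torch.randn(1, 3, 16, 16))  # calibrate the observers

# The mapping is fetched locally and passed to this one convert() call;
# nothing is registered in a process-wide table.
mapping = get_default_static_quant_module_mappings()
quantized = convert(model, mapping=mapping)
```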

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317436

fbshipit-source-id: 81569b88f05eeeaa9595447e482a12827aeb961f
2020-10-20 15:53:47 -07:00
Jerry Zhang
30d687522d [reland][quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293) (#46364)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46364

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24322747

fbshipit-source-id: 4801ba1835fc805bf767fe9810b9edfa2ceeefb4
2020-10-19 15:21:00 -07:00
Mike Ruberry
ff0af7242b Revert D24290811: [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict
Test Plan: revert-hammer

Differential Revision:
D24290811 (3ad797c937)

Original commit changeset: 7d2aee98e194

fbshipit-source-id: 24013e92044f2a1b36b1a9f475bbaa6f17bdaa11
2020-10-14 16:42:55 -07:00
Jerry Zhang
3ad797c937 [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46293

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24290811

fbshipit-source-id: 7d2aee98e1946c2a4268efb94443f1e5daaa793e
2020-10-14 12:10:37 -07:00
Jerry Zhang
53316e8b97 [quant] Remove prehook option (#46292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46292

since it is not needed

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24290815

fbshipit-source-id: 5cc24a305dbdfee5de3419dc83a9c3794d949300
2020-10-14 11:08:38 -07:00
Vasiliy Kuznetsov
7094c09ff7 quantization: add API usage logging (#46095)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46095

Adds logging of usage of public quantization APIs. This only works in the FB codebase
and is a no-op in OSS.

Test Plan: The test plan is fb-only

Reviewed By: raghuramank100

Differential Revision: D24220817

fbshipit-source-id: a2cc957b5a077a70c318242f4a245426e48f75e5
2020-10-09 16:51:27 -07:00
Jerry Zhang
f93ead6d37 [quant][eagermode] Custom module support (#44835)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44835

This is for feature parity with fx graph mode quantization

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23745086

fbshipit-source-id: ae2fc86129f9896d5a9039b73006a4da15821307
2020-09-23 15:39:40 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Jerry Zhang
0c58a017bd [quant][eagermode][refactor] Add set/get method for quantization and fusion mappings (#43990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43990

Allow user to register custom quantization and fusion patterns

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23485344

fbshipit-source-id: 4f0174ee6d8000d83de0f73cb370e9a1941d54aa
2020-09-10 21:29:39 -07:00
Vinod Kumar S
2a1fc56694 replace the white list from default mappings (#41802)
Summary:
Replaced "whitelist" from default_mappings.py
Fixes https://github.com/pytorch/pytorch/issues/41756

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41802

Reviewed By: ngimel

Differential Revision: D23521452

Pulled By: malfet

fbshipit-source-id: 019a2d5c06dc59dc53d6c48b70fb35b216299cf4
2020-09-04 10:04:28 -07:00
maxosen64
1f7434d1ea Fix 'module' to 'model' in quantize_dynamic doc (#43693)
Summary:
Fixes issue https://github.com/pytorch/pytorch/issues/43503
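
For reference, a small usage sketch of quantize_dynamic showing that its first argument is the model; the wrapping Sequential and the qconfig_spec set are illustrative only:

```
import torch
import torch.nn as nn
from torch.quantization import quantize_dynamic

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
# The whole model is passed in; only the listed module types are replaced.
qmodel = quantize_dynamic(model, qconfig_spec={nn.Linear}, dtype=torch.qint8)
out = qmodel(torch.randn(2, 32))
print(qmodel)
```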

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43693

Reviewed By: malfet

Differential Revision: D23397641

Pulled By: mrshenli

fbshipit-source-id: bc216cea4f0a30c035e84a6cfebabd3755ef1305
2020-08-28 10:44:43 -07:00
Supriya Rao
284ff04792 [quant] Support set API for EmbeddingBag quantization (#43433)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43433

Add support for torch.quint8 dtype

Test Plan: Imported from OSS

Reviewed By: radkris-git

Differential Revision: D23277002

fbshipit-source-id: 4204bc62f124b4fd481aaa6aa47b9437978c43ee
2020-08-24 14:33:35 -07:00
Supriya Rao
3293fdfa80 [quant] Enable from_float for quantized Embedding_Bag (#43176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43176

Convert floating point nn.EmbeddingBag module to
nn.quantized.dynamic.EmbeddingBag module

Test Plan:
python test/test_quantization.py TestDynamicQuantizedModule.test_embedding_bag_api
python test/test_quantization.py TestPostTrainingDynamic.test_embedding_quantization

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23200196

fbshipit-source-id: 090f47dbf7aceab9c719cbf282fad20fe3e5a983
2020-08-21 11:46:03 -07:00
Jerry Zhang
a55b7e2a6d [reland][quant][fix] Remove activation_post_process in qat modules (#42343) (#43015)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43015

Currently, activation_post_process modules are inserted by default in QAT modules, which is not
friendly to automatic quantization tools; this PR removes them.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23105059

fbshipit-source-id: 3439ac39e718ffb0390468163bcbffd384802b57
2020-08-13 20:44:14 -07:00
Richard Zou
607e49cc83 Revert D22856816: [quant][fix] Remove activation_post_process in qat modules
Test Plan: revert-hammer

Differential Revision:
D22856816 (8cb42fce17)

Original commit changeset: 988a43bce46a

fbshipit-source-id: eff5b9abdfc15b21c02c61eefbda38d349173436
2020-08-13 07:22:20 -07:00
Jerry Zhang
8cb42fce17 [quant][fix] Remove activation_post_process in qat modules (#42343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42343

Currently, activation_post_process modules are inserted by default in QAT modules, which is not
friendly to automatic quantization tools; this PR removes them.

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D22856816

fbshipit-source-id: 988a43bce46a992b38fd0d469929f89e5b046131
2020-08-12 20:14:23 -07:00
Jerry Zhang
ac93d45906 [quant] Attach qconfig to all modules (#42576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42576

Previously we had a qconfig propagation list and only attached qconfig to the modules
in the list. This works when everything is quantized in the form of modules,
but now that we are expanding quantization to functional/torch ops, we need to attach qconfig
to all modules.
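
A short sketch of the resulting eager-mode flow, assuming the standard prepare() entry point; the model is made up:

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(model)

# After this change every submodule carries a qconfig attribute,
# not only the module types on a propagate list.
for name, mod in prepared.named_modules():
    print(name or '<root>', hasattr(mod, 'qconfig'))
```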

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D22939453

fbshipit-source-id: 7d6a1f73ff9bfe461b3afc75aa266fcc8f7db517
2020-08-11 20:34:34 -07:00
Jerry Zhang
c3236b6649 [quant] Expose register activation post process hook function to user (#42342)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42342

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D22856711

fbshipit-source-id: d6ad080c82b744ae1147a656c321c448ac5e7f10
2020-08-03 12:28:42 -07:00
Haixin Liu
c5b4f60fc2 Move qconfig removal into convert() (#41930)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41930

As title
ghstack-source-id: 108517079

Test Plan: CI

Reviewed By: jerryzh168

Differential Revision: D22698386

fbshipit-source-id: 4f748c9bae4a0b615aa69c7cc8d8e451e5d26863
2020-07-25 13:27:13 -07:00
Edmund Williams Jr
e9e6cc8c83 Added Prehook option to prepare method (#41863)
Summary:
Added logic so that if a prehook is passed into the prepare method during quantization, the hook will be added as a prehook to all leaf nodes (and to modules specified in the non_leaf_module_list).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41863

Test Plan:
Small demo: made a simple module, then called prepare with the prehook parameter set to the Numeric Suite logger, and printed the results to verify it's what we wanted.
{F245156246}

Reviewed By: jerryzh168

Differential Revision: D22671288

Pulled By: edmundw314

fbshipit-source-id: ce65a00830ff03360a82c0a075b3b6d8cbc4362e
2020-07-24 10:26:39 -07:00
emil
0c77bd7c0b Quantization: preserving pre and post forward hooks (#37233)
Summary:
1. When doing convert(), preserve the module's **pre and post forward** hooks
2. When doing fusion, preserve only the module's **pre forward** hooks (because after fusion the output is no longer the same)
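
As an illustration, a small sketch of the kind of hooks that now survive convert(); the hook bodies and model are made up, and the registration calls are the standard nn.Module APIs:

```
import torch
import torch.nn as nn

def pre_hook(module, inputs):
    print('pre-forward:', inputs[0].shape)

def post_hook(module, inputs, output):
    print('post-forward:', output.shape)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model[0].register_forward_pre_hook(pre_hook)
model[0].register_forward_hook(post_hook)

model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 16, 16))            # calibration; both hooks fire here
qmodel = torch.quantization.convert(model)  # per this change, both hooks are carried over
```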

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37233

Differential Revision: D22425141

Pulled By: jerryzh168

fbshipit-source-id: e69b81821d507dcd110d2ff3594ba94b9593c8da
2020-07-13 12:41:24 -07:00
Edward Leardi
733b8c23c4 Fix several quantization documentation typos (#40567)
Summary:
This PR fixes several typos I noticed in the docs here: https://pytorch.org/docs/master/quantization.html. In one case there was a misspelled module [torch.nn.instrinsic.qat](https://pytorch.org/docs/master/quantization.html#torch-nn-instrinsic-qat) which I corrected and am including screenshots of below just in case.

<img width="1094" alt="before" src="https://user-images.githubusercontent.com/54918401/85766765-5cdd6280-b6e5-11ea-93e6-4944cf820b71.png">

<img width="1093" alt="after" src="https://user-images.githubusercontent.com/54918401/85766769-5d75f900-b6e5-11ea-8850-0d1f5ed67b16.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40567

Differential Revision: D22311291

Pulled By: ezyang

fbshipit-source-id: 65d1f3dd043357e38a584d9e30f31634a5b0995c
2020-07-07 09:45:23 -07:00
Jerry Zhang
59ca1d31ca [quant][graphmode] docstrings for top level APIs (#40328)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40328

Test Plan: Imported from OSS

Differential Revision: D22149708

fbshipit-source-id: 63a1cd229d9e4668fba0ef3977e894cb8984318b
2020-06-19 22:20:23 -07:00
Haixin Liu
d9c804ce22 [PyTorch Numeric Suite] Add support for dynamic quantization of linear module (#39024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39024

Add support for dynamic quantization of linear module.
ghstack-source-id: 106205450
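
A hedged sketch of what the new support looks like from the user side; the torch.quantization._numeric_suite path and the compare_weights return format are assumed from the Numeric Suite of this era:

```
import torch
import torch.nn as nn
import torch.quantization._numeric_suite as ns
from torch.quantization import quantize_dynamic

float_model = nn.Sequential(nn.Linear(16, 8))
qmodel = quantize_dynamic(float_model, {nn.Linear}, dtype=torch.qint8)

# compare_weights pairs up float and (dynamically) quantized weights by state_dict key.
wt_compare = ns.compare_weights(float_model.state_dict(), qmodel.state_dict())
for key, pair in wt_compare.items():
    print(key, type(pair['float']), type(pair['quantized']))
```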

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_submodule_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_dynamic'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_functional_static'

buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_dynamic'

Differential Revision: D21675971

fbshipit-source-id: c9562744dc59b61cf47f2787a934e6a5a53e12fd
2020-06-19 10:58:56 -07:00
Raghuraman Krishnamoorthi
3258cb61b1 Dynamic quantization support for LSTMCell, RNNCell and GRUCell [Remove randomness in weights] (#40102)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40102

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105997236
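
A minimal sketch of the enabled feature, using the public quantize_dynamic API; the wrapping module is made up:

```
import torch
import torch.nn as nn
from torch.quantization import quantize_dynamic

class Cells(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm_cell = nn.LSTMCell(8, 16)
        self.gru_cell = nn.GRUCell(8, 16)
        self.rnn_cell = nn.RNNCell(8, 16)

    def forward(self, x):
        h, c = self.lstm_cell(x)
        h = self.gru_cell(x, h)
        return self.rnn_cell(x, h)

model = Cells()
# All three cell types are dynamically quantized after this change.
qmodel = quantize_dynamic(model, {nn.LSTMCell, nn.GRUCell, nn.RNNCell}, dtype=torch.qint8)
out = qmodel(torch.randn(4, 8))
```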

(Note: this ignores all push blocking failures!)

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D22071017

fbshipit-source-id: 3fe1eac39db9c1e0566838eb8b969bbb1fa983c9
2020-06-16 21:29:50 -07:00
Raghuraman Krishnamoorthi
e55e0cb1a9 Revert D20978736: Dynamic quantization support for LSTMCell, RNNCell and GRUCell
Test Plan: revert-hammer

Differential Revision:
D20978736

Original commit changeset: 8f303ba1d7f8

fbshipit-source-id: bcd300819616d6536f582fcd3c90decd543c4657
2020-06-16 10:11:32 -07:00
Raghuraman Krishnamoorthi
48db06e39a Dynamic quantization support for LSTMCell, RNNCell and GRUCell (#37159)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37159

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105946183

(Note: this ignores all push blocking failures!)

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D20978736

fbshipit-source-id: 8f303ba1d7f8e0c646ac73e862d2c1e735b7ff61
2020-06-16 09:14:59 -07:00
Vasiliy Kuznetsov
6a60a8c1da add_observer: respect device affinity for ReLU (#39337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39337

In #39031 we made fake quantize respect device affinity of the
original module. However, that PR only handled modules with parameters
or buffers, and did not work properly for `ReLU`.

Fixing the logic to also work for `ReLU` by passing the parent's
device when adding observers.
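
A brief sketch of the behavior being fixed (assumes a CUDA device is available; the model is made up):

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).to('cuda:0')
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)

# With this fix, observers attached to the ReLU end up on cuda:0 as well,
# matching the device of the parent module.
for name, buf in model.named_buffers():
    if 'activation_post_process' in name:
        print(name, buf.device)
```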

Test Plan:
```
python test/test_quantization.py TestDistributed.test_device_affinity
```

Imported from OSS

Differential Revision: D21821243

fbshipit-source-id: cc6abda3694b80ce8ba0440dc6c1b5b58f3c0066
2020-06-03 09:31:36 -07:00
Vasiliy Kuznetsov
c193bd41f5 fake_quantize: respect device affinity (#39031)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39031

Makes the eager mode QAT prepare logic respect device affinity.
This fixes the issue where a module is on `cuda:0`, and running
the QAT prepare script would add observers on `cpu`.  Now it
will add them on the original device.

Test Plan:
```
python test/test_quantization.py TestDistributed.test_device_affinity
```

Imported from OSS

Differential Revision: D21729272

fbshipit-source-id: 5537bf3977ddc23412184941978bf0d1cc6fb479
2020-06-01 08:55:14 -07:00
Supriya Rao
530d48e93a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452) (#38749)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38749

Test Plan: python test/test_quantization.py TestFused

Differential Revision: D21654659

Pulled By: supriyar

fbshipit-source-id: 301be24083e794f4e71ff1d6d842e1aaefa640f0
2020-05-19 22:48:05 -07:00
Natalia Gimelshein
b995540a01 Revert D21632878: [quant] Support for fused ConvBn1d and ConvBnRelu1d modules
Test Plan: revert-hammer

Differential Revision:
D21632878

Original commit changeset: 0d73398b95d7

fbshipit-source-id: c4dd18a4220d175237f31f741a782f2596228009
2020-05-19 15:22:16 -07:00
Supriya Rao
7d38db0f9a [quant] Support for fused ConvBn1d and ConvBnRelu1d modules (#38452)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38452

Test Plan:
python test/test_quantization.py TestFused

Imported from OSS

Differential Revision: D21632878

fbshipit-source-id: 0d73398b95d72a0a23b42ef36f3ede1bfcc35eda
2020-05-19 09:53:56 -07:00
Supriya Rao
f6626aaf43 [quant] Add support for Quantized Conv1d and ConvRELU1d (#38283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38283

Adds support for the modules and tests
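
A small usage sketch of the added module, assuming the torch.nn.quantized.Conv1d path; the scale/zero_point values are arbitrary:

```
import torch
import torch.nn.quantized as nnq

qconv = nnq.Conv1d(in_channels=4, out_channels=8, kernel_size=3)
x = torch.quantize_per_tensor(torch.randn(1, 4, 16), scale=0.1, zero_point=0, dtype=torch.quint8)
out = qconv(x)
print(out.shape, out.dtype)  # expected: torch.Size([1, 8, 14]) torch.quint8
```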

Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_conv1d_api

Imported from OSS

Differential Revision: D21553665

fbshipit-source-id: 7ea28da024bdf59f87f300d616c266f2b41f0bcd
2020-05-13 16:59:13 -07:00
Haixin Liu
cc0f1b22a2 [PyTorch Numeric Suite] Add module output comparison (#36701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36701

Add module output comparison API.
ghstack-source-id: 103368194

Test Plan: buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs'

Differential Revision: D21053197

fbshipit-source-id: cabcafbeeac1b604db069833a0f17ebce506ba65
2020-05-03 00:04:35 -07:00
Lingyi Liu
fddcd72a31 Add more fusion (conv3d and batchnorm) support in the PyTorch quantization flow (#33540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33540
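
For illustration, a hedged sketch of the fusion this adds, done through the standard fuse_modules helper (the eval-mode requirement and submodule names are assumptions, not taken from the diff):

```
import torch
import torch.nn as nn
from torch.quantization import fuse_modules

model = nn.Sequential(nn.Conv3d(3, 8, 3), nn.BatchNorm3d(8), nn.ReLU())
model.eval()  # conv+bn folding in eager mode requires eval()

# Fuse the Conv3d/BatchNorm3d pair; '0' and '1' are the Sequential submodule names.
fused = fuse_modules(model, [['0', '1']])
print(fused)
```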

Differential Revision: D19994498

Pulled By: lly-zero-one

fbshipit-source-id: e5e13eab6924bd2ce1b57b16b672844b8b9638f5
2020-03-23 20:36:03 -07:00
James Reed
812b1ad869 [quantization] FP16 dynamic quantized Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32331
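
A minimal sketch of the feature: dynamic quantization of Linear with FP16 weights; the wrapping model is made up:

```
import torch
import torch.nn as nn
from torch.quantization import quantize_dynamic

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
# dtype=torch.float16 stores the Linear weights in half precision;
# activations stay in float32.
qmodel = quantize_dynamic(model, {nn.Linear}, dtype=torch.float16)
out = qmodel(torch.randn(2, 32))
print(qmodel)
```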

Test Plan: Imported from OSS

Differential Revision: D19441158

Pulled By: jamesr66a

fbshipit-source-id: c04247ffe707be68718c486c31bc6c6040f7dc11
2020-01-27 15:45:32 -08:00
Jerry Zhang
f995ec2076 Remove qconfig_dict in top level eager mode quantization API (#31972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31972

Since eager mode quantization requires many user modifications, we can't
consistently quantize a given model by just changing qconfig_dict; therefore
the top-level `qconfig_dict` is not that useful.
Fixes: https://github.com/pytorch/pytorch/issues/31549
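
A hedged sketch of the workflow after this removal: qconfig is attached directly to the model or to individual submodules before prepare(), instead of being passed through a top-level qconfig_dict:

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
# Quantize only the first conv by attaching a qconfig to it directly.
model[0].qconfig = torch.quantization.default_qconfig

torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 16, 16))  # calibration
qmodel = torch.quantization.convert(model)
print(qmodel)
```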

Test Plan:
.

Imported from OSS

Differential Revision: D19330691

fbshipit-source-id: 8aee6e5249e0c14e8a363ac1a83836e88887cd7d
2020-01-10 11:04:37 -08:00
Xiaomeng Yang
c12f9a12a8 Fix quantized ConvReLU3d test (#30266)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30266

Fix quantized ConvReLU3d test

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: hl475

Differential Revision: D18645717

fbshipit-source-id: bbe93f9daf5046f2aa05363efc7d0e59eaff37bf
2019-11-25 14:52:32 -08:00
Raghuraman Krishnamoorthi
94757e035d Do not insert observers for empty sequential modules (#28384)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28384

ghstack-source-id: 92340259

Test Plan:
buck test caffe2/test:quantization -- 'test_fusion_sequential_model_train \(test_quantization\.FusionTest\)' --print-passing-details

 buck test caffe2/test:quantization -- 'test_fusion_sequential_model_eval \(test_quantization\.FusionTest\)' --print-passing-details

Differential Revision: D18047293

fbshipit-source-id: 7e18b1aa76cc0fd26e8ee48a70c3a45688e73549
2019-10-21 20:32:13 -07:00
Zafar Takhirov
07b5666a87 Add default arg to prepare_qat mapping. (#28193)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28193

Fixes #28015
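
A small sketch of the resulting call: with a default mapping argument, prepare_qat needs only the model (the QAT qconfig shown is the usual default, not taken from the diff):

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')

# No explicit mapping argument needed any more; the default QAT module mapping is used.
qat_model = torch.quantization.prepare_qat(model)
print(qat_model)
```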

Test Plan: Imported from OSS

Differential Revision: D17973121

Pulled By: z-a-f

fbshipit-source-id: 03b3f70c70b89060c1f03d7ed8ab6002fe60bd49
2019-10-17 14:11:54 -07:00
Zafar Takhirov
a5ac7f6387 Changing observer name
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27779

Test Plan: Imported from OSS

Differential Revision: D17886605

Pulled By: z-a-f

fbshipit-source-id: 68c50b482e65015336ff27171fd730da493525b6
2019-10-17 11:36:03 -07:00
Zafar Takhirov
dc8785a022 Refactoring names for consistency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27670

Test Plan: Imported from OSS

Differential Revision: D17846269

Pulled By: z-a-f

fbshipit-source-id: ed3c7441c185bf11b2e62879aa3ecbc654aa2d4e
2019-10-16 12:18:26 -07:00
zou3519
e5d6b75319 Bag of documentation fixes; fix more sphinx warnings (#27850)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850

Many of these are real problems in the documentation (i.e., link or
bullet point doesn't display correctly).

Test Plan: - built and viewed the documentation for each change locally.

Differential Revision: D17908123

Pulled By: zou3519

fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
2019-10-15 07:31:14 -07:00
Chris Gottbrath
a96b003b39 docstring only formatting changes: quantize.py, fake_quantize.py, observer.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27415

Reviewed By: zafartahirov

Differential Revision: D17783101

Pulled By: gottbrath

fbshipit-source-id: a7acbc55edfaa75fdbd17fd30d530710a401b22f
2019-10-08 09:21:03 -07:00
Zafar Takhirov
6bb7433ad5 Replacing the skip_list with white_list in the qconfig propagation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27183

Test Plan: Imported from OSS

Differential Revision: D17700548

Pulled By: zafartahirov

fbshipit-source-id: 18e6ffbda496b14ac1da1783f928ad539cdb1d16
2019-10-03 20:40:17 -07:00
Zafar Takhirov
111da77912 Factored out the default mappings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27164

Test Plan: Imported from OSS

Differential Revision: D17694475

Pulled By: zafartahirov

fbshipit-source-id: df8df5f7d66062ed35da957064a31344e1d3c961
2019-10-03 11:52:21 -07:00
James Reed
a423817055 Fix reprs for _intrinsic modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27184

Test Plan: Imported from OSS

Differential Revision: D17717481

Pulled By: jamesr66a

fbshipit-source-id: 4bd72bcd42191d9b21d03f5bb6698198dbffffda
2019-10-02 19:55:49 -07:00
James Reed
1affa7c32c Allow set for qconfig for dynamic_quantize
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27181

Test Plan: Imported from OSS

Differential Revision: D17717482

Pulled By: jamesr66a

fbshipit-source-id: f3930fc87831cbdcf4390cd769c594bb13f5cd81
2019-10-02 19:55:45 -07:00
Zafar Takhirov
27dc595215 Rename _intrinsic to intrinsic
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27194

Test Plan: Imported from OSS

Differential Revision: D17704957

Pulled By: zafartahirov

fbshipit-source-id: 46f02d129aa77c3047b2a6c606bfadd831a6b0fc
2019-10-02 18:53:06 -07:00
Dmytro Dzhulgakov
0a8a779abe Add more inplace arguments to quantization top level API (#26782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26782

At least we should be consistent on top-level APIs and prepare/convert/etc.

Logic is inplace=False by default, but the top-level APIs take care of doing fewer copies.

Also renames always-inplace methods like add_observer to have an underscore at the end.

One fix for MinMaxObserver was triggered by deepcopy surfacing that we were accidentally keeping autograd around.
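
A short sketch of the inplace flag on the top-level helpers (the model is made up; only prepare() is shown, convert() behaves the same way):

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8))
model.qconfig = torch.quantization.default_qconfig

prepared_copy = torch.quantization.prepare(model)    # inplace=False (default): model is left untouched
torch.quantization.prepare(model, inplace=True)      # mutates model directly instead of copying
print(hasattr(model[0], 'activation_post_process'))  # True after the in-place call
```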

Test Plan: Imported from OSS

Differential Revision: D17595956

Pulled By: dzhulgakov

fbshipit-source-id: 801f9f5536b553f24c7a660064dd6fce685edd65
2019-09-26 00:07:07 -07:00