Commit Graph

382 Commits

Author SHA1 Message Date
Vasiliy Kuznetsov
4779553921 Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47949

This reverts commit 1478e5ec2a.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24966363

Pulled By: vkuzo

fbshipit-source-id: ca1126f699eef84027a15df35962728296c8a790
2020-11-14 08:40:30 -08:00
Jerry Zhang
c0aa863c56 [quant][graphmode][fx][refactor] insert_quantize_node (#47880)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47880

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24928797

fbshipit-source-id: 9a8b359cabfb800da86da114bf26bb5bd99d3fff
2020-11-13 14:50:42 -08:00
Jerry Zhang
1915ae9510 [quant][graphmode][fx][refactor] is_output_quantized (#47879)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47879

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24928796

fbshipit-source-id: 55c49243b6a0b4811953cf72af57e5f56be8c419
2020-11-13 11:15:55 -08:00
Jerry Zhang
1589ede8dd [quant][graphmode][fx] insert_observer_for_input_arg_of_observed_node (#47785)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47785

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24900302

fbshipit-source-id: 61d6287c462898837aed85d5c3a48b6e47b4a41b
2020-11-12 22:19:51 -08:00
Jerry Zhang
1afdcbfbb3 [quant][graphmode][fx][refactor] insert_observer_for_output_of_the_node (#47784)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47784

Test Plan:
python test/test_quantization.py TestQuantizeFx

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24900301

fbshipit-source-id: abaeae1b5747e517adeb0d50cec5998a8a3fc24d
2020-11-12 21:39:29 -08:00
Jerry Zhang
c4ecbcdcb3 [quant][graphmode][fx][refactor] insert_observer_for_special_module (#47783)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47783

Test Plan:
python test/test_quantization.py TestQuantizeFx

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24900304

fbshipit-source-id: 11cc3dd4ea5e272209db9f3c419deadd40db5f42
2020-11-12 20:48:34 -08:00
Jerry Zhang
1478e5ec2a [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24747035

fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769
2020-11-12 10:56:30 -08:00
Jerry Zhang
47386722da [quant][graphmode][fx][refactor] insert_observer (#47782)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47782

Test Plan:
python test/test_quantization.py TestQuantizeFx

Imported from OSS

Reviewed By: supriyar

Differential Revision: D24900305

fbshipit-source-id: b00a90ab85badea7d18ae007cc68d0bcd58ab15c
2020-11-11 21:31:24 -08:00
Jerry Zhang
dd77d5a1d4 [quant][refactor] factor out get_combined_dict function (#47781)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47781
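The factored-out helper presumably merges a default mapping with user-provided additions (e.g. a default module mapping plus an `additional_*_mapping` dict). A minimal sketch — the name matches the commit title, the body is an assumption:

```python
def get_combined_dict(default_dict, additional_dict):
    """Merge a default mapping with user additions; user entries win on conflict."""
    combined = dict(default_dict)
    combined.update(additional_dict)
    return combined
```
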

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24900303

fbshipit-source-id: 1a2cb0ec536384abcd140e0d073f0965ed2800cd
2020-11-11 21:01:31 -08:00
Jerry Zhang
1239d067ae [quant][graphmode][fx] Support standalone_module_class (#47705)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47705

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24872380

fbshipit-source-id: db2ec7ba03da27203033fbebc11666be572622bb
2020-11-11 09:15:14 -08:00
Supriya Rao
6bb18b24fb [quant][qat] Ensure observer respects device affinity (#47514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47514

Previously the scale and zero_point were returned on the CPU even if
the input tensor was on the GPU.
This is because `copy_()` doesn't respect the device when copying over the tensor.

Also fixed a bug where we were always setting the device to 'cuda' (irrespective of the device id)
in the calculate_qparams function
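A plain-Python sketch of the intended behavior — the affine qparams math is standard, but the device handling below is only an illustration of the fix, not the actual observer code:

```python
def calculate_qparams(min_val, max_val, device, quant_min=0, quant_max=255):
    """Affine qparams; scale/zero_point should live on the input tensor's device."""
    # include zero in the range so that real 0.0 is exactly representable
    min_val, max_val = min(min_val, 0.0), max(max_val, 0.0)
    scale = (max_val - min_val) / (quant_max - quant_min)
    zero_point = quant_min - round(min_val / scale)
    zero_point = max(quant_min, min(quant_max, zero_point))
    # the fix: keep the exact device of the observed tensor (e.g. "cuda:1"),
    # instead of returning on CPU or hard-coding "cuda"
    return {"scale": scale, "zero_point": zero_point, "device": device}
```
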

Test Plan:
python test/test_quantization.py TestObserver.test_observer_qparams_respects_device_affinity

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24800495

fbshipit-source-id: d7a76c59569842ed69029d0eb4fa9df63f87e28c
2020-11-10 08:43:52 -08:00
Jerry Zhang
65e5bd23d8 [quant] Add _FusedModule type to capture all fused modules for quantization (#47484)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47484

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24774703

fbshipit-source-id: f0efc5d77035b9854ec3e31a1d34f05d5680bc22
2020-11-09 10:28:45 -08:00
Vasiliy Kuznetsov
5977d1d864 FixedQParamsFakeQuantize: adjust default quant_min and quant_max (#47423)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47423

Since the dtype of this fake_quant is `quint8`, the output range should be
from 0 to 255.  Fixing.  This should address the numerical inaccuracies with
sigmoid and hardsigmoid with `FixedQParamsFakeQuantize` attached compared
to their quantized counterparts.

In a future PR, it might be safer to also make the activation functions
that use `FixedQParamsFakeQuantize` explicitly specify their expected
output range and zero_point.  Leaving that for later, as this bugfix
should be landed urgently.
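For reference, the fake-quantize math with fixed qparams can be sketched in plain Python (scale 1/256 and zero_point 0 are the fixed qparams used for sigmoid; the function is an illustration, not the real module):

```python
def fixed_fake_quant(x, scale=1.0 / 256, zero_point=0, quant_min=0, quant_max=255):
    """Quantize-dequantize with fixed qparams; quant_max=255 matches quint8."""
    q = round(x / scale) + zero_point
    q = max(quant_min, min(quant_max, q))  # clamp to the integer range
    return (q - zero_point) * scale
```

With the old default of quant_max=127, sigmoid outputs above roughly 0.5 were clamped; with 255 the full [0, 1) output range survives the fake quantize.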

Test Plan:
Manual script which gives low SQNR before this PR and high SQNR after
this PR: https://gist.github.com/vkuzo/9906bae29223da72b10d6b6aafadba42

https://github.com/pytorch/pytorch/pull/47376, which can be landed after
this, adds a proper test.

Imported from OSS

Reviewed By: ayush29feb, jerryzh168

Differential Revision: D24751497

fbshipit-source-id: 4c32e22a30116caaceeedb4cd47146d066054a89
2020-11-05 09:06:55 -08:00
Jerry Zhang
0cba3e3704 [quant][graphmode][fx] Add support for qat convbn{relu}1d (#47248)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47248

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24696524

fbshipit-source-id: 684db12be201307acbdc89a44192cf2270491dba
2020-11-03 22:43:33 -08:00
Jerry Zhang
53a5f08e0c [quant][eagermode] Avoid inserting fakequant for sigmoid/hardsigmoid/tanh in eval mode (#47297)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47297

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24708270

fbshipit-source-id: a19b6dbe07d5c80f3cc78a987742d345d86e1cd1
2020-11-03 21:33:35 -08:00
Jerry Zhang
c6fe65bf90 [quant][graphmode][fx][fix] Fix error that DefaultQuantizer is not inserted after a module configured with None qconfig (#47316)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47316

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24713727

fbshipit-source-id: e604ef2274ff4bb4e8b6ebbb6ba681018e9ae248
2020-11-03 20:08:41 -08:00
Jerry Zhang
0ead9d545a [quant][graphmode][fx] Add test for non quantized embedding and embeddingbag (#47092)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47092

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24637423

fbshipit-source-id: baaa431931242072edd9519a3393efba7469da6f
2020-11-02 23:56:43 -08:00
Jerry Zhang
be2e3dd2a1 [quant][graphmode][fx][fix] Linear work with float_qparam_dynamic_qconfig (#47068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47068

Filter the dtype config before performing the quantization in linear

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24627907

fbshipit-source-id: 162fa47b3fcf6648049f8bc0438e41ee97ac19e9
2020-11-02 16:28:33 -08:00
Jerry Zhang
085193c291 [quant][graphmode][fx][fusion] Add test for fuse_fx (#47085)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47085

Both in train and eval mode

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24632457

fbshipit-source-id: 486aee4e073fb87e9da46a344e8dc77e848a60cf
2020-10-30 12:25:54 -07:00
Jerry Zhang
366888a5e2 [quant][graphmode][fx] Remove logging for standalone module api calls (#47032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47032

These are not top-level APIs and are not supposed to be called directly by the user.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24610602

fbshipit-source-id: c5510f06b05499387d70f23508470b676aea582c
2020-10-29 18:39:43 -07:00
James Reed
9bc8f071a3 [WIP] Move torch.fx into its own target (#46658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46658

ghstack-source-id: 115213192

Test Plan: waitforsadcastle

Reviewed By: zdevito, vkuzo

Differential Revision: D24374723

fbshipit-source-id: 2b5708001f5df2ffb21ea5e586e26030653ccdcf
2020-10-29 17:03:08 -07:00
Jerry Zhang
c2a3951352 [quant][graphmode][fx] Remove inplace option for convert_fx (#46955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46955

Initially we were thinking of adding an `invalidate_quantized_float_parameters` option to free the memory
of the quantized floating point parameters, but it turns out we do a module swap, just like in eager mode, for the modules
that are quantized, so the old floating point module will not be referenced after quantization. Therefore this feature
is only needed for functionals; since most people use quantization with modules, we may not need it.

We'll revisit if we find there is a need for this.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24579400

fbshipit-source-id: fbb0e567405dc0604a2089fc001573affdade986
2020-10-28 21:07:19 -07:00
Jerry Zhang
cd8ed93287 [quant][graphmode][fx][api] Remove inplace option from prepare_fx (#46954)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46954

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24579401

fbshipit-source-id: adce623ce819fa220f7bb08d1ff3beaa69850621
2020-10-28 08:00:12 -07:00
Jerry Zhang
d92bf921db [quant][graphmode][fx] Remove inplace option for fuse_fx (#46953)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46953

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24579402

fbshipit-source-id: 5e0b8abf682287ab3c7dd54c2fc2cf309295e147
2020-10-27 22:34:11 -07:00
Jerry Zhang
998b9b9e68 [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46786

Previously we only supported static quant; this PR adds support for the other types of quantization.

Note that QAT is actually orthogonal to these quant types; this refers to the convert step, where we
convert the observed module to a quantized module.

For QAT, the user will provide a CustomModule -> FakeQuantizedCustomModule mapping in prepare_custom_config_dict
and a FakeQuantizedCustomModule -> static/dynamic/weight_only quantized CustomModule mapping in convert_custom_config_dict.
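A sketch of what a convert-time custom config keyed by quant type might look like — all class names, and the exact dict key, are assumptions for illustration:

```python
class ObservedCustomModule:       # hypothetical observed custom module
    pass

class StaticQuantCustomModule:    # hypothetical quantized counterparts
    pass

class DynamicQuantCustomModule:
    pass

convert_custom_config_dict = {
    "observed_to_quantized_custom_module_class": {
        # the mapping is keyed by quant type, not only static quant
        "static": {ObservedCustomModule: StaticQuantCustomModule},
        "dynamic": {ObservedCustomModule: DynamicQuantCustomModule},
    }
}
```
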

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24514701

fbshipit-source-id: 2918be422dd76093d67a6df560aaaf949b7f338c
2020-10-27 21:41:33 -07:00
Jerry Zhang
5a8198eb3c [quant][graphmode][fx][fix] scalar as first input for add/mul (#46751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46751

Currently we assume the first input for add/mul is a Node (Tensor), but it might not be the case (it can also be a scalar).
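A sketch of the fix in plain Python — the helper name is made up; in the real pass the Tensor argument is an fx Node, while a scalar is a plain int/float:

```python
def get_tensor_arg(args):
    """Return (node, scalar) for a binary add/mul, whichever order they came in."""
    first, second = args
    if isinstance(first, (int, float)):
        # scalar comes first, e.g. torch.add(2.0, x): the Node is the second arg
        return second, first
    return first, second
```
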

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_quantized_add
python test/test_quantization.py TestQuantizeFxOps.test_quantized_mul
python test/test_quantization.py TestQuantizeFxOps.test_quantized_add_relu
python test/test_quantization.py TestQuantizeFxOps.test_quantized_mul_relu

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24494456

fbshipit-source-id: ef5e23ba60eb22a57771791f4934306b25c27c01
2020-10-27 19:59:28 -07:00
Vasiliy Kuznetsov
8066e89f64 quant: fix bug with copy.deepcopy of FX prepared quantization models (#46895)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46895

Bug: models after the FX graph mode quant prepare step lost information,
such as the extra attributes defined in `Quantizer.save_state`,
if the user performed `copy.deepcopy` on them.  The information was lost
because `GraphModule` does not copy attributes which are not present on
`nn.Module` by default.

Fix: define a custom `__deepcopy__` method on observed models and
whitelist the attributes we care about.

This is needed because users sometimes run `copy.deepcopy` on their
models during non-quantization related preparations, and we should make
sure that quantization related state survives these calls.
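The shape of the fix, sketched with stand-in classes — the real code lives on the FX observed `GraphModule`; the names and attributes below are illustrative:

```python
import copy

class SketchGraphModule:
    """Stand-in: deepcopy rebuilds from the graph, dropping extra attributes."""
    def __init__(self, graph):
        self.graph = graph

    def __deepcopy__(self, memo):
        return SketchGraphModule(copy.deepcopy(self.graph, memo))

class ObservedModel(SketchGraphModule):
    _preserved_attrs = ("_qconfig_map",)  # whitelist of quantization state

    def __init__(self, graph, qconfig_map):
        super().__init__(graph)
        self._qconfig_map = qconfig_map

    def __deepcopy__(self, memo):
        # rebuild like the base class, then carry over whitelisted state
        new = ObservedModel(copy.deepcopy(self.graph, memo), {})
        for attr in self._preserved_attrs:
            setattr(new, attr, copy.deepcopy(getattr(self, attr), memo))
        return new
```

Without the override, the base-class behavior silently drops the quantization attributes, which is the bug being fixed.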

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_deepcopy
python test/test_quantization.py TestQuantizeFx.test_standalone_module
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24556035

fbshipit-source-id: f7a6b28b6d2225fa6189016f967f175f6733b124
2020-10-27 16:05:35 -07:00
Jerry Zhang
6b50ccc41c [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738) (#46871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46871

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24547180

fbshipit-source-id: d2eb9aa74c6e5436204376b1a2ebcc6188d3562f
2020-10-26 23:52:07 -07:00
Alban Desmaison
25db74bf5e Revert D24486972: [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat
Test Plan: revert-hammer

Differential Revision:
D24486972 (e927b62e73)

Original commit changeset: c9f139bfdd54

fbshipit-source-id: 2a75f5ec93d55a62b40d1cdd49adcf65436058f7
2020-10-26 12:47:05 -07:00
Jerry Zhang
e927b62e73 [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46738

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24486972

fbshipit-source-id: c9f139bfdd54973da1a93a45e32937595dbe67fc
2020-10-26 12:04:42 -07:00
Jerry Zhang
37dbc6117f [quant][eagermode] Add additional_fuser_method_mapping to config (#46355)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46355

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24319562

fbshipit-source-id: be9800723c0b3e36f26e73c25c0c6ae1d4344f45
2020-10-24 02:18:04 -07:00
Jerry Zhang
343260a1cc [quant][graphmode][fx] Add support for additional_{fusion/quant}_pattern (#46346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46346

Allow the user to provide additional fusion/quant patterns for FX graph mode

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317437

fbshipit-source-id: 719927cce50c74dffa4f848bd5c98995c944a26a
2020-10-23 15:03:42 -07:00
Supriya Rao
842494af77 [quant][fx] EmbeddingBag quantization support (#46678)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46678

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_qembedding_bag_module

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24463306

fbshipit-source-id: 175e77f4450344fbf63409be35338b0c29afd585
2020-10-22 18:04:31 -07:00
Supriya Rao
e34c825b77 [quant][fx] Embedding quantization support (#46677)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46677

Add support for weight only embedding quantization

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_qembedding_module

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24463305

fbshipit-source-id: 2dba49d8a77cf237a8e6da2efdd83b1ebdc432d6
2020-10-22 17:59:52 -07:00
Jerry Zhang
bd90379df5 [quant][graphmode][fx] Add support for additional_fuse_method_mapping (#46345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46345

Allow the user to add more fusion mappings

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317439

fbshipit-source-id: 3b144bbc305e41efbdf3e9fb25dbbeaad9e86c6a
2020-10-22 15:15:31 -07:00
Jerry Zhang
23fad9111e [quant][graphmode][fx] Add additional_qat_module_mapping (#46344)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46344

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317438

fbshipit-source-id: f9e73aeb4c7a107c8df0bae8319464e7d5d7275b
2020-10-22 13:11:26 -07:00
Jerry Zhang
ab28bd528d [quant][graphmode][fx] Support quantizing FloatFunctional (#46634)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46634

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24438227

fbshipit-source-id: f33439d51112e13f59ee4292e804495d38fa3899
2020-10-22 01:21:17 -07:00
Jerry Zhang
13decddae2 [reland][quant] Add FixedQParamsFakeQuantize module (#45538) (#46657)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46657

This is used to simulate the fake quantize operation for ops with fixed quantization parameters,
e.g. hardsigmoid.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24451406

fbshipit-source-id: 26cc140c00f12bdec9a8f9dc880f4c425f4d4074
2020-10-21 16:47:11 -07:00
Jerry Zhang
746febdeac [quant][graphmode][fx] Add additional_object_mapping argument to convert (#46338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46338

Should we merge quantized module and quantized operator configurations?

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317435

fbshipit-source-id: 3575251fe9d80a6628b8c3243c2ed92ea5e921e3
2020-10-21 16:39:07 -07:00
Ashkan Aliabadi
2181449068 Revert D24004795: [quant] Add FixedQParamsFakeQuantize module
Test Plan: revert-hammer

Differential Revision:
D24004795 (253918ec55)

Original commit changeset: fc4797f80842

fbshipit-source-id: 663169e90a2f58e5a89e4d382291ae41c24d0fee
2020-10-20 19:40:21 -07:00
Jerry Zhang
253918ec55 [quant] Add FixedQParamsFakeQuantize module (#45538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45538

This is used to simulate the fake quantize operation for ops with fixed quantization parameters,
e.g. hardsigmoid.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24004795

fbshipit-source-id: fc4797f80842daacd3b3584c5b72035774634edd
2020-10-20 17:43:25 -07:00
Jerry Zhang
f9446cb15a [quant][refactor] Remove register api and rename get_*_mapping to get_default_*_mapping (#46337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46337

We plan to pass around the mappings instead of using the global registration API, to keep
the mappings local to the transformations the user is performing

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317436

fbshipit-source-id: 81569b88f05eeeaa9595447e482a12827aeb961f
2020-10-20 15:53:47 -07:00
Jerry Zhang
a06b95b2ba [quant][graphmode][fx] Support non_traceable_module/module_class (#46298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46298

Allow the user to specify a list of qualified names for non-traceable submodules,
or the types of the non-traceable submodules.
See quantize_fx.py for the API.
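As a sketch, a prepare config using these options might look like the following — the key names follow the commit title, but treat the exact spelling and the module names as illustrative:

```python
class NonTraceableRNN:  # hypothetical user module that FX cannot trace
    pass

prepare_custom_config_dict = {
    # qualified names of submodules to skip during symbolic tracing
    "non_traceable_module_name": ["encoder.rnn"],
    # module types to skip, wherever they appear in the model
    "non_traceable_module_class": [NonTraceableRNN],
}
```
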

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24294210

fbshipit-source-id: eb1e309065e3dfbf31e63507aaed73587f0dae29
2020-10-19 18:50:08 -07:00
Jerry Zhang
30d687522d [reland][quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293) (#46364)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46364

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24322747

fbshipit-source-id: 4801ba1835fc805bf767fe9810b9edfa2ceeefb4
2020-10-19 15:21:00 -07:00
Vasiliy Kuznetsov
6dc763df30 PyTorch: add API usage logging to numeric suite (#46504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46504

As titled, so we can start seeing who is using this.

Test Plan: CI

Reviewed By: hx89

Differential Revision: D24375254

fbshipit-source-id: ff7b5560d0a6a175cecbf546eefc910759296dbb
2020-10-19 13:17:02 -07:00
Rong Rong
89108ba6ea type check for torch.quantization.stubs (#46475)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42973

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46475

Reviewed By: malfet

Differential Revision: D24368088

Pulled By: walterddr

fbshipit-source-id: 7a0ccb4fa66b28d4ac59923d727e632351a02b3f
2020-10-16 15:34:23 -07:00
Rong Rong
d1745c36dc fix type check for torch.quantization._numeric_suite (#46330)
Summary:
fix https://github.com/pytorch/pytorch/issues/42977

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46330

Reviewed By: malfet

Differential Revision: D24320449

Pulled By: walterddr

fbshipit-source-id: f892b5c83cb932aee53245d6c825568b3e05f3c6
2020-10-15 20:45:07 -07:00
Mike Ruberry
ff0af7242b Revert D24290811: [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict
Test Plan: revert-hammer

Differential Revision:
D24290811 (3ad797c937)

Original commit changeset: 7d2aee98e194

fbshipit-source-id: 24013e92044f2a1b36b1a9f475bbaa6f17bdaa11
2020-10-14 16:42:55 -07:00
Zachary DeVito
fc1d6bf135 [fx] make sure args/kwargs are immutable (#46325)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46325

Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs.
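The invariant can be sketched in plain Python — this is a stand-in class, not the real fx Node, which keeps per-argument use lists:

```python
class Node:
    """Sketch: args is exposed as an immutable tuple, but is reassignable."""
    def __init__(self, args):
        self._args = tuple(args)

    @property
    def args(self):
        return self._args  # a tuple, so item assignment raises TypeError

    @args.setter
    def args(self, new_args):
        # reassignment goes through the setter, where the real Node
        # can keep its uses/users bookkeeping accurate
        self._args = tuple(new_args)
```
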

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24308672

Pulled By: zdevito

fbshipit-source-id: a5305e1d82668b36e46876c3bc517f6f1d03dd78
2020-10-14 15:51:43 -07:00
Zafar
635aebdfab [quant] Refactoring the mappings files (#44847)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44847

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23747007

Pulled By: z-a-f

fbshipit-source-id: 7d8fcc84a77454cc1479e5158f5a62eda5824a87
2020-10-14 13:15:34 -07:00