Commit Graph

350 Commits

Supriya Rao
842494af77 [quant][fx] EmbeddingBag quantization support (#46678)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46678

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_qembedding_bag_module

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24463306

fbshipit-source-id: 175e77f4450344fbf63409be35338b0c29afd585
2020-10-22 18:04:31 -07:00
Supriya Rao
e34c825b77 [quant][fx] Embedding quantization support (#46677)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46677

Add support for weight only embedding quantization

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_qembedding_module

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24463305

fbshipit-source-id: 2dba49d8a77cf237a8e6da2efdd83b1ebdc432d6
2020-10-22 17:59:52 -07:00
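
A minimal sketch of what weight-only embedding quantization looks like through the FX APIs in this stack. The qconfig name `float_qparams_weight_only_qconfig` and the two-argument `prepare_fx` signature are assumptions based on this era of the API (later releases also require `example_inputs`):

```python
import torch
import torch.nn as nn
from torch.quantization import float_qparams_weight_only_qconfig
from torch.quantization.quantize_fx import convert_fx, prepare_fx

class EmbeddingModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(num_embeddings=10, embedding_dim=12)

    def forward(self, indices):
        return self.emb(indices)

model = EmbeddingModule().eval()
# Weight-only: the embedding weight is quantized with float qparams;
# no activation observers are inserted.
qconfig_dict = {"object_type": [(nn.Embedding, float_qparams_weight_only_qconfig)]}
prepared = prepare_fx(model, qconfig_dict)
quantized = convert_fx(prepared)
out = quantized(torch.randint(0, 10, (8,)))
```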
Jerry Zhang
bd90379df5 [quant][graphmode][fx] Add support for additional_fuse_method_mapping (#46345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46345

Allow user to add more fusion mappings

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317439

fbshipit-source-id: 3b144bbc305e41efbdf3e9fb25dbbeaad9e86c6a
2020-10-22 15:15:31 -07:00
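
A hedged sketch of supplying an extra fusion mapping. The dict key `additional_fuser_method_mapping` and the `fuse_custom_config_dict` plumbing are assumptions based on the PR title and the custom-config dicts used elsewhere in this stack:

```python
import torch
import torch.nn as nn

# Hypothetical extra fusion: fold BatchNorm into Conv2d, keep the ReLU.
def fuse_conv_bn_relu(conv, bn, relu):
    fused_conv = torch.nn.utils.fusion.fuse_conv_bn_eval(conv, bn)
    return nn.Sequential(fused_conv, relu)

fuse_custom_config_dict = {
    "additional_fuser_method_mapping": {
        (nn.Conv2d, nn.BatchNorm2d, nn.ReLU): fuse_conv_bn_relu,
    },
}
# Then (assumed call shape for this era):
# fused = fuse_fx(model.eval(), fuse_custom_config_dict)
```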
Jerry Zhang
23fad9111e [quant][graphmode][fx] Add additional_qat_module_mapping (#46344)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46344

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317438

fbshipit-source-id: f9e73aeb4c7a107c8df0bae8319464e7d5d7275b
2020-10-22 13:11:26 -07:00
Jerry Zhang
ab28bd528d [quant][graphmode][fx] Support quantizing FloatFunctional (#46634)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46634

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24438227

fbshipit-source-id: f33439d51112e13f59ee4292e804495d38fa3899
2020-10-22 01:21:17 -07:00
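
For context, a minimal sketch of the FloatFunctional pattern this commit adds FX support for; the FloatFunctional-to-QFunctional swap noted in the comment is the standard eager-mode behavior, assumed to carry over here:

```python
import torch
import torch.nn as nn

class AddModule(nn.Module):
    def __init__(self):
        super().__init__()
        # FloatFunctional wraps free ops like add/cat in a module so that
        # graph-mode quantization can observe and quantize their outputs.
        self.ffunc = nn.quantized.FloatFunctional()

    def forward(self, x, y):
        # Computes x + y in the float model; after convert, FloatFunctional
        # is swapped for QFunctional, which calls the quantized add kernel.
        return self.ffunc.add(x, y)

m = AddModule()
out = m(torch.randn(2), torch.randn(2))
```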
Jerry Zhang
13decddae2 [reland][quant] Add FixedQParamsFakeQuantize module (#45538) (#46657)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46657

This is used to simulate the fake quantize operation for ops with fixed quantization parameters,
e.g. hardsigmoid.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24451406

fbshipit-source-id: 26cc140c00f12bdec9a8f9dc880f4c425f4d4074
2020-10-21 16:47:11 -07:00
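
To illustrate what "fixed quantization parameters" means here: hardsigmoid always produces outputs in [0, 1], so the output scale and zero_point are known up front instead of being estimated by an observer. A sketch using the generic fake-quantize kernel, with illustrative quint8 numbers not taken from this PR:

```python
import torch

x = torch.sigmoid(torch.randn(4))   # stand-in for an op with a fixed output range
scale, zero_point = 1.0 / 256.0, 0  # fixed qparams for a [0, 1] output in quint8
fq = torch.fake_quantize_per_tensor_affine(x, scale, zero_point,
                                           quant_min=0, quant_max=255)
```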
Jerry Zhang
746febdeac [quant][graphmode][fx] Add additional_object_mapping argument to convert (#46338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46338

Should we merge quantized module and quantized operator configurations?

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317435

fbshipit-source-id: 3575251fe9d80a6628b8c3243c2ed92ea5e921e3
2020-10-21 16:39:07 -07:00
Ashkan Aliabadi
2181449068 Revert D24004795: [quant] Add FixedQParamsFakeQuantize module
Test Plan: revert-hammer

Differential Revision:
D24004795 (253918ec55)

Original commit changeset: fc4797f80842

fbshipit-source-id: 663169e90a2f58e5a89e4d382291ae41c24d0fee
2020-10-20 19:40:21 -07:00
Jerry Zhang
253918ec55 [quant] Add FixedQParamsFakeQuantize module (#45538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45538

This is used to simulate the fake quantize operation for ops with fixed quantization parameters,
e.g. hardsigmoid.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24004795

fbshipit-source-id: fc4797f80842daacd3b3584c5b72035774634edd
2020-10-20 17:43:25 -07:00
Jerry Zhang
f9446cb15a [quant][refactor] Remove register api and rename get_*_mapping to get_default_*_mapping (#46337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46337

We plan to pass the mappings around instead of using a global registration API, to keep
the mappings local to the transformations the user is performing.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24317436

fbshipit-source-id: 81569b88f05eeeaa9595447e482a12827aeb961f
2020-10-20 15:53:47 -07:00
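
A sketch of the direction described above: fetch the default mapping, extend a local copy, and pass it into the transformation instead of mutating a global registry. `get_default_static_quant_module_mappings` and the `mapping` keyword of `convert` are taken from the existing eager-mode API; the local addition is hypothetical:

```python
import torch
from torch.quantization.quantization_mappings import (
    get_default_static_quant_module_mappings,
)

# Copy the default mapping and extend it locally; nothing global is mutated.
mapping = dict(get_default_static_quant_module_mappings())
# mapping[MyFloatModule] = MyQuantizedModule   # hypothetical local addition
# quantized = torch.quantization.convert(prepared, mapping=mapping)
```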
Jerry Zhang
a06b95b2ba [quant][graphmode][fx] Support non_traceable_module/module_class (#46298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46298

Allow the user to specify a list of qualified names of non-traceable submodules,
or the types of the non-traceable submodules.
See quantize_fx.py for the API.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24294210

fbshipit-source-id: eb1e309065e3dfbf31e63507aaed73587f0dae29
2020-10-19 18:50:08 -07:00
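
A minimal sketch of the API described above, assuming the `non_traceable_module_name` / `non_traceable_module_class` keys documented in quantize_fx.py and the era-specific two-argument `prepare_fx`:

```python
import torch
import torch.nn as nn
from torch.quantization import default_qconfig
from torch.quantization.quantize_fx import prepare_fx

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(2, 2)  # input-dependent control flow: not traceable
        self.linear = nn.Linear(2, 2)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.linear(out)

prepare_custom_config_dict = {
    # skip symbolic tracing of these submodules, by qualified name ...
    "non_traceable_module_name": ["lstm"],
    # ... or by type
    "non_traceable_module_class": [nn.LSTM],
}
prepared = prepare_fx(M().eval(), {"": default_qconfig},
                      prepare_custom_config_dict=prepare_custom_config_dict)
```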
Jerry Zhang
30d687522d [reland][quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293) (#46364)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46364

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24322747

fbshipit-source-id: 4801ba1835fc805bf767fe9810b9edfa2ceeefb4
2020-10-19 15:21:00 -07:00
Vasiliy Kuznetsov
6dc763df30 PyTorch: add API usage logging to numeric suite (#46504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46504

As titled, so we can start seeing who is using this.

Test Plan: CI

Reviewed By: hx89

Differential Revision: D24375254

fbshipit-source-id: ff7b5560d0a6a175cecbf546eefc910759296dbb
2020-10-19 13:17:02 -07:00
Rong Rong
89108ba6ea type check for torch.quantization.stubs (#46475)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42973

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46475

Reviewed By: malfet

Differential Revision: D24368088

Pulled By: walterddr

fbshipit-source-id: 7a0ccb4fa66b28d4ac59923d727e632351a02b3f
2020-10-16 15:34:23 -07:00
Rong Rong
d1745c36dc fix type check for torch.quantization._numeric_suite (#46330)
Summary:
fix https://github.com/pytorch/pytorch/issues/42977

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46330

Reviewed By: malfet

Differential Revision: D24320449

Pulled By: walterddr

fbshipit-source-id: f892b5c83cb932aee53245d6c825568b3e05f3c6
2020-10-15 20:45:07 -07:00
Mike Ruberry
ff0af7242b Revert D24290811: [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict
Test Plan: revert-hammer

Differential Revision:
D24290811 (3ad797c937)

Original commit changeset: 7d2aee98e194

fbshipit-source-id: 24013e92044f2a1b36b1a9f475bbaa6f17bdaa11
2020-10-14 16:42:55 -07:00
Zachary DeVito
fc1d6bf135 [fx] make sure args/kwargs are immutable (#46325)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46325

Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24308672

Pulled By: zdevito

fbshipit-source-id: a5305e1d82668b36e46876c3bc517f6f1d03dd78
2020-10-14 15:51:43 -07:00
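
A sketch of the mutation contract this commit enforces: `node.args` is an immutable tuple, so edits are expressed by reassigning the whole attribute, which lets FX keep the use lists accurate:

```python
import operator

import torch
import torch.fx

def f(x):
    return x + 1

gm = torch.fx.symbolic_trace(f)
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is operator.add:
        # node.args[0] = y             # would fail: args is an immutable tuple
        node.args = (node.args[0], 2)  # reassign wholesale instead
gm.recompile()
assert gm(1) == 3  # the traced add now uses the constant 2
```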
Zafar
635aebdfab [quant] Refactoring the mappings files (#44847)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44847

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23747007

Pulled By: z-a-f

fbshipit-source-id: 7d8fcc84a77454cc1479e5158f5a62eda5824a87
2020-10-14 13:15:34 -07:00
Jerry Zhang
3ad797c937 [quant][eagermode] Move custom_module registration to prepare/convert_custom_config_dict (#46293)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46293

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24290811

fbshipit-source-id: 7d2aee98e1946c2a4268efb94443f1e5daaa793e
2020-10-14 12:10:37 -07:00
Jerry Zhang
53316e8b97 [quant] Remove prehook option (#46292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46292

since it is not needed

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24290815

fbshipit-source-id: 5cc24a305dbdfee5de3419dc83a9c3794d949300
2020-10-14 11:08:38 -07:00
Jerry Zhang
49903a5cd5 [quant][graphmode][fx] Move custom_module_class config to prepare/convert_custom_config_dict (#46251)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46251

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D24290810

fbshipit-source-id: 7a96f04a0f33f0315943ac18ef2d08e4f5a5d1c0
2020-10-14 10:43:48 -07:00
Mike Ruberry
38e64cf949 Revert D24232288: [fx] make sure args/kwargs are immutable
Test Plan: revert-hammer

Differential Revision:
D24232288 (61df99b78e)

Original commit changeset: c95b1a73ae55

fbshipit-source-id: b910a6618f76ef64caead20e8207997317bc2f5e
2020-10-14 01:39:33 -07:00
Zachary DeVito
61df99b78e [fx] make sure args/kwargs are immutable (#46121)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46121

Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24232288

Pulled By: zdevito

fbshipit-source-id: c95b1a73ae55ad9bdb922ca960c8f744ff732100
2020-10-13 21:33:19 -07:00
Jerry Zhang
67a0c0af27 [quant][fx][graphmode] Add prepare_custom_config_dict and convert_custom_config_dict (#46223)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46223

Also move standalone module config to the prepare_custom_config_dict

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24266900

fbshipit-source-id: fe3ff5b8c657af3f377041e7881d400938e044f8
2020-10-13 14:19:49 -07:00
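
A hedged sketch of the two custom-config dicts introduced here. The `standalone_module_name` key is an assumption based on the standalone-module config this PR moves; the model and qconfig are illustrative:

```python
import torch
import torch.nn as nn
from torch.quantization import default_qconfig
from torch.quantization.quantize_fx import convert_fx, prepare_fx

class Sub(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 2)

    def forward(self, x):
        return self.linear(x)

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = Sub()

    def forward(self, x):
        return self.sub(x)

prepare_custom_config_dict = {
    # quantize "sub" by itself, as an opaque unit, instead of inlining it
    "standalone_module_name": ["sub"],
}
prepared = prepare_fx(M().eval(), {"": default_qconfig},
                      prepare_custom_config_dict=prepare_custom_config_dict)
quantized = convert_fx(prepared)
```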
Jerry Zhang
7f6a1b2bd5 [quant][fx][graphmode][api] Change API for custom module (#45920)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45920

See the docs for the new way of defining custom modules.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D24145856

fbshipit-source-id: 488673fba503e39e8e303ed5a776fe36899ea4e3
2020-10-12 23:42:27 -07:00
Vasiliy Kuznetsov
7094c09ff7 quantization: add API usage logging (#46095)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46095

Adds logging on usage of public quantization APIs. This only works in the FB codebase
and is a no-op in OSS.

Test Plan: The test plan is fb-only

Reviewed By: raghuramank100

Differential Revision: D24220817

fbshipit-source-id: a2cc957b5a077a70c318242f4a245426e48f75e5
2020-10-09 16:51:27 -07:00
Jerry Zhang
2b204e6db3 [quant][fx][graphmode] Run symbolic_trace in quantization (#45919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45919

As discussed with the JIT team, we'll run symbolic tracing inside the quantization functions.
prepare_fx now takes the original PyTorch model (torch.nn.Module) as input instead of a `GraphModule`.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24145857

fbshipit-source-id: 2b7a4ca525a7a8c23a26af54ef594c6a951e4024
2020-10-08 17:26:03 -07:00
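
The API change in a nutshell, as a sketch (the `fbgemm` qconfig and two-argument `prepare_fx` are era-specific assumptions):

```python
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}

# Before this PR (roughly): callers had to trace first.
#   graph_module = torch.fx.symbolic_trace(model)
#   prepared = prepare_fx(graph_module, qconfig_dict)
# After: pass the original nn.Module; tracing happens inside prepare_fx.
prepared = prepare_fx(model, qconfig_dict)
```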
James Reed
8cdb638c62 [FX] Track use nodes in Node (#45775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45775

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24091082

Pulled By: jamesr66a

fbshipit-source-id: b09bb6ae78436a7722fb135b8ec71464ef9587cd
2020-10-07 00:15:04 -07:00
Supriya Rao
11c32611d7 [quant] Support 4-bit embedding_bag operators using the dtype quint4x2 (#45752)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45752

Uses the torch.quint4x2 dtype, introduced in the previous PR, to create 4-bit packed tensors.
These packed tensors can be directly consumed by the operator.
Serialization of the packed tensors is supported using a TorchBind custom class.
Module support will follow in a later PR.

Test Plan:
python test/test_quantization.py TestEmbeddingBagOps

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24120996

fbshipit-source-id: 2639353b3343ebc69e058b5ba237d3fc56728e1c
2020-10-06 21:11:49 -07:00
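
A hedged sketch of creating a 4-bit packed tensor with the new dtype, assuming `torch.quantize_per_channel` accepts `torch.quint4x2` as this stack describes; the scales and zero_points are illustrative:

```python
import torch

weight = torch.randn(10, 16)                     # e.g. an embedding_bag weight
scales = torch.ones(10)                          # one scale per row
zero_points = torch.zeros(10, dtype=torch.long)  # one zero_point per row
packed = torch.quantize_per_channel(weight, scales, zero_points,
                                    axis=0, dtype=torch.quint4x2)
# Two 4-bit values are stored per byte; per the PR, tensors like this can be
# consumed directly by the quantized embedding_bag operator and serialized
# via a TorchBind custom class.
```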
Jerry Zhang
14997f2125 [quant][graphmode][fx] Add warning for unsupported case (#45714)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45714

Hit the problem when writing a test like the following:
```
class M(...):
    def forward(self, x):
        x = x.some_op()
        return x
```
We need to know the scope of `x` to figure out the qconfig for `x`.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24069959

fbshipit-source-id: 95ac8963c802ebce5d0e54d55f5ebb42085ca8a6
2020-10-06 15:33:34 -07:00
Jerry Zhang
0da6730f02 [quant][graphmode][fx][eagermode] Add leaky relu support in quantization workflows (#45712)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45712

Eager mode will still be able to use the functional leaky ReLU, but it will be less accurate than
the LeakyReLU module.
FX graph mode will support both the functional and module variants of leaky ReLU.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24069961

fbshipit-source-id: 8d91c3c50c0bcd068ba3072378ebb4da9549be3b
2020-10-06 12:16:04 -07:00
James Reed
b04ae953b4 [FX][WIP] Mutable Graph APIs (#45227)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45227

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23880730

Pulled By: jamesr66a

fbshipit-source-id: eb4e8c14d7f6b1deb1ddd6cf38a360413a1705ed
2020-10-05 17:07:08 -07:00
James Reed
53aea60bce [FX] Make output a non-special Node (#45599)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45599

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24027586

Pulled By: jamesr66a

fbshipit-source-id: 747c25e3c7668ca45f03bed0be71fd3c9af67286
2020-10-02 17:08:17 -07:00
Rong Rong
322855e380 type check for torch.quantization.observer (#45630)
Summary:
Add type checking for torch.quantization.observer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45630

Reviewed By: malfet

Differential Revision: D24058304

Pulled By: walterddr

fbshipit-source-id: ac1c0f5ff0d34b0445bd1364653fc5c9d7571b05
2020-10-02 13:25:41 -07:00
Sam Estep
24187a0b42 Enable type check for torch.quantization.fake_quantize (#45701)
Summary:
Addresses part of https://github.com/pytorch/pytorch/issues/42969.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45701

Reviewed By: walterddr

Differential Revision: D24066672

Pulled By: samestep

fbshipit-source-id: 53bb5e7b4703738d3de86fa89fb0980f1d6251f3
2020-10-02 09:27:34 -07:00
Jerry Zhang
4f685ecc25 [reland][quant][graphmode][fx] Merge all quantization mode (#45292) (#45672)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45672

This PR merges all quantization modes and will only expose the following top-level functions:
```
prepare_fx
prepare_qat_fx
convert_fx
```

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24053439

fbshipit-source-id: 03d545e26a36bc22a73349061b751eeb35171e64
2020-10-01 15:47:11 -07:00
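
The merged workflow, end to end, as a minimal post-training sketch (era-specific two-argument `prepare_fx` assumed; `prepare_qat_fx` follows the same shape for QAT):

```python
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import convert_fx, prepare_fx

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
prepared = prepare_fx(model, {"": get_default_qconfig("fbgemm")})
prepared(torch.randn(2, 4))        # calibration pass populates the observers
quantized = convert_fx(prepared)   # the same convert_fx serves all modes
```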
Mike Ruberry
c36b354072 Revert D23913105: [quant][graphmode][fx] Merge all quantization mode
Test Plan: revert-hammer

Differential Revision:
D23913105 (ffcb0989e7)

Original commit changeset: 4e335286d6de

fbshipit-source-id: 5765b4e8ec917423f1745f73a9f3f235fc53423d
2020-10-01 03:12:42 -07:00
Jerry Zhang
ffcb0989e7 [quant][graphmode][fx] Merge all quantization mode (#45292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45292

This PR merges all quantization modes and will only expose the following top-level functions:
```
prepare_fx
prepare_qat_fx
convert_fx
```

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23913105

fbshipit-source-id: 4e335286d6de225839daf51d1df54322d52d68e5
2020-09-30 21:20:34 -07:00
Jerry Zhang
9d5607fcd9 [quant] Use PlaceholderObserver as default dynamic quant observer (#45343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45343

The current default dynamic quant observer is not correct, since we don't accumulate
min/max, and we don't need to calculate qparams.

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D23933995

fbshipit-source-id: 3ff497c9f5f74c687e8e343ab9948d05ccbba09b
2020-09-30 19:01:18 -07:00
Jerry Zhang
5539066d12 [quant][graphmode][fx] Support quantization for custom module (#44074)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44074

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23580642

fbshipit-source-id: a80b0b3e5e1f4c4a9647da872239cc0a4d58dd3b
2020-09-30 10:24:54 -07:00
Supriya Rao
489af4ddcb [quant] Add quant APIs to save/load observer state_dict (#44846)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44846

The save function traverses the model state dict to pick out the observer stats.
The load function traverses the module hierarchy to load the state dict into module attributes, depending on the observer type.

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_save_observer_state_dict

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D23746821

fbshipit-source-id: 05c571b62949a2833602d736a81924d77e7ade55
2020-09-29 01:52:42 -07:00
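
A hedged sketch of the save/load round trip. The names `get_observer_state_dict` / `load_observer_state_dict` match the observer module in current releases and are assumed to be what this PR adds:

```python
import torch
from torch.quantization import get_default_qconfig, prepare
from torch.quantization.observer import (
    get_observer_state_dict,
    load_observer_state_dict,
)

model = torch.nn.Sequential(torch.nn.Linear(4, 4)).eval()
model.qconfig = get_default_qconfig("fbgemm")
prepared = prepare(model)
prepared(torch.randn(8, 4))                    # calibrate: observers record stats
obs_dict = get_observer_state_dict(prepared)   # observer entries only
torch.save(obs_dict, "observers.pt")

# Later / elsewhere: rebuild the prepared model and restore the stats.
fresh = torch.nn.Sequential(torch.nn.Linear(4, 4)).eval()
fresh.qconfig = get_default_qconfig("fbgemm")
fresh = prepare(fresh)
load_observer_state_dict(fresh, torch.load("observers.pt"))
```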
James Reed
b0bdc82a00 [FX][EZ] Fix bug where copying node made non-unique name (#45311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45311

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23917864

Pulled By: jamesr66a

fbshipit-source-id: 10d0a4017ffe160bce4ba0d830e035616bbded74
2020-09-28 22:55:20 -07:00
Jerry Zhang
f93ead6d37 [quant][eagermode] Custom module support (#44835)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44835

This is for feature parity with fx graph mode quantization

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23745086

fbshipit-source-id: ae2fc86129f9896d5a9039b73006a4da15821307
2020-09-23 15:39:40 -07:00
Jerry Zhang
adb2b380ba [quant][graphmode][fx] qconfig_dict support more types of configurations (#44856)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44856

Support the following format of qconfig_dict:
```python
qconfig_dict = {
    # optional, global config
    "": qconfig?,

    # optional, used for module and function types
    # could also be split into module_types and function_types if we prefer
    "object_type": [
      (nn.Conv2d, qconfig?),
      (F.add, qconfig?),
      ...,
    ],

    # optional, used for module names
    "module_name": [
      ("foo.bar", qconfig?)
      ...,
    ],

    # optional, matched in order, first match takes precedence
    "module_name_regex": [
      ("foo.*bar.*conv[0-9]+", qconfig?)
      ...,
    ]
    # priority (in increasing order): global, object_type, module_name_regex, module_name
    # qconfig == None means fusion and quantization should be skipped for anything
    # matching the rule
}
```

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23751304

fbshipit-source-id: 5b98f4f823502b12ae2150c93019c7b229c49c50
2020-09-23 13:59:53 -07:00
Supriya Rao
7fba30c2be [quant][fx][bug] Fix error in convert step for QAT (#45050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45050

Update tests to actually test for QAT

Test Plan:
python test/test_quantization.py TestQuantizeFxOps.test_linear

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23808022

fbshipit-source-id: d749ab2d215fe19238ff9d539307ffce9ef0ca9b
2020-09-22 22:48:31 -07:00
Jerry Zhang
f575df201f [quant][graphmode][jit][api] Expose preserved_attrs from finalize to convert_jit (#44490)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44490

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23631142

fbshipit-source-id: f0913f0cb4576067e2a7288326024942d12e0ae0
2020-09-22 19:37:25 -07:00
Jerry Zhang
ccfbfe5eb5 [quant][graphmode][fx] Custom module support (#44766)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44766

There might be modules that are not symbolically traceable, e.g. LSTM (since it has
input-dependent control flow). To support quantization in these cases, the user will provide
the corresponding observed and quantized versions of the custom module: the observed
custom module has observers already inserted in the module, and the quantized version
has the corresponding ops quantized. Then use
```
from torch.quantization import register_observed_custom_module_mapping
from torch.quantization import register_quantized_custom_module_mapping
register_observed_custom_module_mapping(CustomModule, ObservedCustomModule)
register_quantized_custom_module_mapping(CustomModule, QuantizedCustomModule)
```
to register the custom module mappings. We'll also need to define a custom delegate class
for symbolic tracing, in order to prevent the custom module from being traced:
```python
class CustomDelegate(DefaultDelegate):
    def is_leaf_module(self, m):
        return (m.__module__.startswith('torch.nn') and
                not isinstance(m, torch.nn.Sequential)) or \
               isinstance(m, CustomModule)

m = symbolic_trace(original_m, delegate_class=CustomDelegate)
```

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23723455

fbshipit-source-id: 50d666e29b94cbcbea5fb6bcc73b00cff87eb77a
2020-09-22 17:11:46 -07:00
James Reed
7f4a27be3a [resubmit][FX] s/get_param/get_attr/ (#45147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45147

ghstack-source-id: 112605923

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D23845096

fbshipit-source-id: 9ca209aa84cbaddd6e89c52b541e43b11197e2d5
2020-09-22 17:06:18 -07:00
Zafar
2b1f25885e [quant] Fix ConvTranspose mapping (#44844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44844

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D23746466

Pulled By: z-a-f

fbshipit-source-id: cb84e0fef5ab82e8ed8dd118d9fb21ee7b480ef7
2020-09-22 11:59:42 -07:00
James Reed
1fd48a9d1f Revert D23798016: [FX] s/get_param/get_attr/
Test Plan: revert-hammer

Differential Revision:
D23798016 (c941dd3492)

Original commit changeset: 1d2f3db1994a

fbshipit-source-id: 974d930064b37d396c5d66c905a63d45449813e5
2020-09-22 10:32:51 -07:00