pytorch/torch/quantization
Vasiliy Kuznetsov d75e99b709 fx quant: enable qconfig_dict to target function invocations by order (#59605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59605

Enables targeting of individual function invocations by execution order.
For example, given modules such as

```
class M1(torch.nn.Module):
  def forward(self, x):
    x = torch.add(x, x)
    x = torch.add(x, x)
    return x

class M2(torch.nn.Module):
  def __init__(self):
    super().__init__()
    self.m1 = M1()

  def forward(self, x):
    x = self.m1(x)
    return x
```

We can now target the first add of `m1` with

```
qconfig_dict = {
  "module_name_function_order": ("m1", torch.add, 0, custom_qconfig),
}
```
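The idea behind order-based targeting can be sketched in plain Python. This is a conceptual illustration only, not the actual FX quantization implementation; `match_by_order` and the stand-in functions below are hypothetical:

```python
# Conceptual sketch of order-based matching (hypothetical helper, not the
# actual FX implementation): given the flat sequence of function calls
# traced from a module, find the position of the n-th invocation of a
# target function, mirroring the ("m1", torch.add, 0, custom_qconfig)
# entry above.
def match_by_order(call_seq, target_fn, target_index):
    """Return the position in call_seq of the target_index-th (0-based)
    invocation of target_fn, or None if there is no such call."""
    seen = 0
    for pos, fn in enumerate(call_seq):
        if fn is target_fn:
            if seen == target_index:
                return pos
            seen += 1
    return None

# Stand-ins for torch.add / torch.mul in a traced call sequence.
def fake_add(): pass
def fake_mul(): pass

calls = [fake_add, fake_mul, fake_add]
assert match_by_order(calls, fake_add, 0) == 0  # first add
assert match_by_order(calls, fake_add, 1) == 2  # second add
assert match_by_order(calls, fake_mul, 1) is None  # no second mul
```

In the real feature, the matched call site receives `custom_qconfig` during FX graph preparation instead of the module-level default.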

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_qconfig_module_name_function_order
```

Imported from OSS

Reviewed By: hx89

Differential Revision: D28951077

fbshipit-source-id: 311d423724a31193d4fa4bbf3a712b46464b5a29
2021-06-11 08:53:40 -07:00
fx fx quant: enable qconfig_dict to target function invocations by order (#59605) 2021-06-11 08:53:40 -07:00
ns [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
__init__.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
_correct_bias.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_equalize.py [quant] Eager mode equalization support for ConvReLU and LinearReLU (#58792) 2021-05-24 17:25:13 -07:00
_learnable_fake_quantize.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
_numeric_suite_fx.py ns for fx: move relatedness mapping to mappings file (#57171) 2021-05-05 06:29:11 -07:00
_numeric_suite.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
fake_quantize.py memory efficient per-channel fq: use it everywhere, delete old version (#51265) 2021-01-28 19:42:25 -08:00
fuse_modules.py Add lint for unqualified noqa (#56272) 2021-04-19 13:16:18 -07:00
fuser_method_mappings.py fix docstring for fusing functions (#58638) 2021-05-24 18:27:22 -07:00
observer.py Move _PartialWrapper to module scope (#59660) 2021-06-09 11:55:04 -07:00
qconfig.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
quant_type.py [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) 2020-10-27 21:41:33 -07:00
quantization_mappings.py fix nn.MHA scriptability (#58727) 2021-05-26 15:29:49 -07:00
quantize_fx.py fx quant: enable qconfig_dict to target function invocations by order (#59605) 2021-06-11 08:53:40 -07:00
quantize_jit.py Enable the quantization on XPU devices (#54857) 2021-05-20 17:02:13 -07:00
quantize.py [quant][eager][fix] Fix a typo in convert function in eager mode quantization (#59571) 2021-06-08 10:24:22 -07:00
stubs.py type check for torch.quantization.stubs (#46475) 2020-10-16 15:34:23 -07:00
utils.py [quant][graphmode][fx] Fix a condition check for CopyNode (#53585) 2021-03-11 09:32:20 -08:00