pytorch/torch/quantization
Vasiliy Kuznetsov ddedeab66d ns for fx: bug fix for shadowing fp16 emulation patterns (#57024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57024

Enables shadow copies of fp16 emulation patterns, where weights
are cast to fp16 before being passed to `linear`. This previously
did not work because copying of `call_method` nodes was not implemented.
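The numerics of the fp16 weight-emulation pattern can be sketched roughly as follows. This is an illustrative stand-in, not the PyTorch implementation: it uses NumPy's `float16` in place of torch's half dtype, and the function names are hypothetical.

```python
import numpy as np

def linear_fp32(x, w, b):
    # Reference linear layer computed entirely in fp32.
    return x @ w.T + b

def linear_fp16_emulated(x, w, b):
    # Emulate fp16 weights: round-trip the weights through float16
    # before the fp32 matmul, mimicking the cast-then-linear pattern.
    w_fp16 = w.astype(np.float16).astype(np.float32)
    return x @ w_fp16.T + b

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
w = rng.standard_normal((3, 8)).astype(np.float32)
b = rng.standard_normal(3).astype(np.float32)

ref = linear_fp32(x, w, b)
emu = linear_fp16_emulated(x, w, b)
# The emulated output differs slightly from the fp32 reference,
# reflecting the precision lost in the fp16 round trip.
max_diff = float(np.max(np.abs(ref - emu)))
```

A shadow copy in Numeric Suite runs both versions on the same input so the quantization error introduced by the fp16 cast can be logged and compared.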

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_linear_fp16_vs_linear_fp16_shadow_activations
```

Reviewed By: jerryzh168

Differential Revision: D28030096

Pulled By: vkuzo

fbshipit-source-id: 13a39ea6c106180df6d750246672286b58b4d04c
2021-04-27 16:28:56 -07:00
fx [quant][graphmode][fx] Support preserving attributes in deepcopy of observed/quantized graphmodule (#56550) 2021-04-22 15:02:44 -07:00
ns ns for fx: bug fix for shadowing fp16 emulation patterns (#57024) 2021-04-27 16:28:56 -07:00
__init__.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
_correct_bias.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_equalize.py Fix type annotation errors in torch.functional (#43446) 2020-08-26 08:27:59 -07:00
_learnable_fake_quantize.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
_numeric_suite_fx.py ns for fx: allow user functions in shadowing (#57022) 2021-04-27 16:28:53 -07:00
_numeric_suite.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
fake_quantize.py memory efficient per-channel fq: use it everywhere, delete old version (#51265) 2021-01-28 19:42:25 -08:00
fuse_modules.py Add lint for unqualified noqa (#56272) 2021-04-19 13:16:18 -07:00
fuser_method_mappings.py quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
observer.py Support factory kwargs in torch.nn modules (#54508) 2021-04-22 16:16:53 -07:00
qconfig.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
quant_type.py [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) 2020-10-27 21:41:33 -07:00
quantization_mappings.py ns for fx: add test for op relationship coverage (#55837) 2021-04-15 16:11:26 -07:00
quantize_fx.py [quant][graphmode][fx] Support preserving attributes in deepcopy of observed/quantized graphmodule (#56550) 2021-04-22 15:02:44 -07:00
quantize_jit.py Enforce PEP263 for PyTorch python codebase (#55346) 2021-04-06 18:31:38 -07:00
quantize.py Back out "[quant][graphmode][fx] Separate handling Copy operator to a helper function" (#55388) 2021-04-06 14:20:36 -07:00
stubs.py type check for torch.quantization.stubs (#46475) 2020-10-16 15:34:23 -07:00
utils.py [quant][graphmode][fx] Fix a condition check for CopyNode (#53585) 2021-03-11 09:32:20 -08:00