pytorch/torch/quantization
Vasiliy Kuznetsov 502c58ad84 ns for fx: allow comparing int8 to int8 for functionals (#56742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56742

Fixes a bug that prevented shadowing of linear and conv functionals.
The fix is to detach only tensors, not all argument objects.
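
Below is a minimal sketch of the behavior described above, under the assumption of a hypothetical helper; the names are illustrative and not the actual diff in `_numeric_suite_fx.py`. The idea is to detach only `torch.Tensor` arguments and pass everything else (ints, shapes, `None`, etc.) through unchanged, so non-tensor arguments no longer break shadow logging for functional linear and conv:

```
import torch

def _detach_tensor_args(args):
    # Hypothetical helper: detach only Tensor arguments so they can be
    # logged safely; leave non-tensor arguments untouched instead of
    # calling .detach() on every object.
    return [a.detach() if isinstance(a, torch.Tensor) else a for a in args]

# Example: a functional linear call site mixes tensor and non-tensor args.
x = torch.randn(2, 4, requires_grad=True)
w = torch.randn(3, 4)
b = None
logged_args = _detach_tensor_args([x, w, b])
```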

Test Plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIs.test_int8_shadows_int8_fun
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27960767

fbshipit-source-id: abc911ca4b9edafd1effb9dada7731981538c2df
2021-04-26 17:03:30 -07:00
fx [quant][graphmode][fx] Support preserving attributes in deepcopy of observed/quantized graphmodule (#56550) 2021-04-22 15:02:44 -07:00
ns ns for fx: allow comparing int8 to int8 for functionals (#56742) 2021-04-26 17:03:30 -07:00
__init__.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
_correct_bias.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_equalize.py Fix type annotation errors in torch.functional (#43446) 2020-08-26 08:27:59 -07:00
_learnable_fake_quantize.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
_numeric_suite_fx.py ns for fx: add option to skip matching classes and functions (#56493) 2021-04-26 17:03:28 -07:00
_numeric_suite.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
fake_quantize.py memory efficient per-channel fq: use it everywhere, delete old version (#51265) 2021-01-28 19:42:25 -08:00
fuse_modules.py Add lint for unqualified noqa (#56272) 2021-04-19 13:16:18 -07:00
fuser_method_mappings.py quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
observer.py Support factory kwargs in torch.nn modules (#54508) 2021-04-22 16:16:53 -07:00
qconfig.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
quant_type.py [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) 2020-10-27 21:41:33 -07:00
quantization_mappings.py ns for fx: add test for op relationship coverage (#55837) 2021-04-15 16:11:26 -07:00
quantize_fx.py [quant][graphmode][fx] Support preserving attributes in deepcopy of observed/quantized graphmodule (#56550) 2021-04-22 15:02:44 -07:00
quantize_jit.py Enforce PEP263 for PyTorch python codebase (#55346) 2021-04-06 18:31:38 -07:00
quantize.py Back out "[quant][graphmode][fx] Separate handling Copy operator to a helper function" (#55388) 2021-04-06 14:20:36 -07:00
stubs.py type check for torch.quantization.stubs (#46475) 2020-10-16 15:34:23 -07:00
utils.py [quant][graphmode][fx] Fix a condition check for CopyNode (#53585) 2021-03-11 09:32:20 -08:00