pytorch/torch/quantization
Vasiliy Kuznetsov 8fc1ca0d22 fx quant: fix prepacking for F.conv1d (#55311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55311

Before this PR, `F.conv1d` was matched by the FX graph mode quantization
patterns, but the weight prepacking happened inline. There was also an
argument type mismatch bug.

This PR fixes both issues and adds a test. Thanks jerryzh168 for the
code tip.
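To illustrate what "prepacking inline" means, here is a toy, PyTorch-free sketch (all names are hypothetical, for illustration only — this is not the actual PyTorch implementation): the weight-packing step runs once at conversion time and is stored on the converted object, rather than being re-executed inside every forward call.

```python
# Toy illustration of hoisting a weight-prepack step out of the
# per-call path: pack once at "convert" time, store the packed weight
# as an attribute, and have the per-call op consume the packed form.
# Names here are hypothetical; this is not the PyTorch implementation.

class ToyQuantizedConv1d:
    """Stands in for a quantized functional conv1d whose weight must
    be prepacked before the backend kernel can consume it."""

    def __init__(self, weight):
        # Prepack happens once here, not inline on every call --
        # the behavior this PR's fix moves F.conv1d toward.
        self.packed_weight = self._prepack(weight)

    @staticmethod
    def _prepack(weight):
        # Pretend packing: freeze the weight into the (immutable)
        # layout our toy "kernel" expects.
        return tuple(tuple(row) for row in weight)

    def __call__(self, x):
        # Per-call work reuses the already-packed weight; no repacking.
        return [sum(w * xi for w, xi in zip(row, x))
                for row in self.packed_weight]

conv = ToyQuantizedConv1d([[1, 0, -1], [2, 2, 2]])
print(conv([3.0, 4.0, 5.0]))  # prints [-2.0, 24.0]
```

In the real FX graph mode flow, the analogous step replaces the inline `F.conv1d` call with a quantized op that reads a prepacked weight stored as a module attribute.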

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_functional_not_reference
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27575422

fbshipit-source-id: 42301e23cb101a9e64e46800813bc771317e233e
2021-04-14 09:04:28 -07:00
fx fx quant: fix prepacking for F.conv1d (#55311) 2021-04-14 09:04:28 -07:00
ns ns for fx: move more weight matching logic to weight_utils.py (#55288) 2021-04-14 09:04:26 -07:00
__init__.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
_correct_bias.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_equalize.py Fix type annotation errors in torch.functional (#43446) 2020-08-26 08:27:59 -07:00
_learnable_fake_quantize.py mem-efficient learnable fake quantization (#49315) 2021-02-03 18:57:47 -08:00
_numeric_suite_fx.py ns for fx: move more weight matching logic to weight_utils.py (#55288) 2021-04-14 09:04:26 -07:00
_numeric_suite.py ns_eager: rename Logger I/O var names to logger_cls (#51359) 2021-02-09 22:30:44 -08:00
fake_quantize.py memory efficient per-channel fq: use it everywhere, delete old version (#51265) 2021-01-28 19:42:25 -08:00
fuse_modules.py quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
fuser_method_mappings.py quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
observer.py update HistogramObserver to be scriptable (#51081) 2021-01-27 07:27:03 -08:00
qconfig.py Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
quant_type.py [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) 2020-10-27 21:41:33 -07:00
quantization_mappings.py [quantization] Add some support for 3d operations (#50003) 2021-03-10 16:40:35 -08:00
quantize_fx.py [quant][fx] add _remove_qconfig flag to convert_fx (#53166) 2021-03-03 12:58:05 -08:00
quantize_jit.py Enforce PEP263 for PyTorch python codebase (#55346) 2021-04-06 18:31:38 -07:00
quantize.py Back out "[quant][graphmode][fx] Separate handling Copy operator to a helper function" (#55388) 2021-04-06 14:20:36 -07:00
stubs.py type check for torch.quantization.stubs (#46475) 2020-10-16 15:34:23 -07:00
utils.py [quant][graphmode][fx] Fix a condition check for CopyNode (#53585) 2021-03-11 09:32:20 -08:00