pytorch/test/quantization
Jerry Zhang 08d8f81704 [quant][fix][fx][graphmode] Fix qconfig setting for fused modules (#71254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71254

When linear and relu are configured with the same qconfig, we have utility functions that also
generate a qconfig for the fused linear-relu module, but this code was not called in the correct order,
which resulted in unexpected behavior. This PR fixes the ordering. Please see the test case for more details.
(Test case is from Supriya)
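The core idea of the fix can be sketched in isolation: when all constituent modules of a fusion pattern share the same qconfig, derive an entry for the fused module from it, and do so before any module swapping happens. The snippet below is a minimal standalone illustration of that propagation step; the mapping, function name, and string qconfig values are hypothetical and are not PyTorch's actual API.

```python
# Illustrative sketch only (not PyTorch's real API): propagate a shared
# qconfig from constituent modules to their fused counterpart, which must
# run *before* fused-module QAT swapping for the swap to see the qconfig.

# Hypothetical fusion pattern -> fused module name mapping.
FUSED_MODULE_MAP = {("Linear", "ReLU"): "LinearReLU"}

def add_fused_qconfigs(qconfig_dict):
    """Return a copy of qconfig_dict extended with entries for fused
    modules whose constituents all share the same qconfig."""
    out = dict(qconfig_dict)
    for parts, fused in FUSED_MODULE_MAP.items():
        qconfigs = [qconfig_dict.get(p) for p in parts]
        # Only generate a fused entry when every part has the same qconfig.
        if qconfigs[0] is not None and all(q == qconfigs[0] for q in qconfigs):
            out[fused] = qconfigs[0]
    return out

qconfig_dict = {"Linear": "default_qat_qconfig", "ReLU": "default_qat_qconfig"}
fused = add_fused_qconfigs(qconfig_dict)
print(fused["LinearReLU"])  # -> default_qat_qconfig
```

If the constituents have mismatched qconfigs, no fused entry is generated, so the fused module falls back to whatever default applies.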

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_fused_module_qat_swap

Imported from OSS

Reviewed By: supriyar

Differential Revision: D33558321

fbshipit-source-id: d95114dc4b77264e603c262c2da02a3de4acba69
2022-01-14 23:31:11 -08:00
ao_migration fx quant: move _parent_name to common utils (#69720) 2021-12-17 05:59:46 -08:00
bc Set test owners for quantization tests (#66832) 2021-10-21 16:04:41 -07:00
core [quant] fix dropout in FX graph mode quantization (#71043) 2022-01-13 15:59:59 -08:00
dbr [Quant][DBR] Add test for serialization (#70078) 2022-01-10 17:50:05 -08:00
eager Back out "[Quant][Eager] Added 4 bit support for eager mode quantization flow" (#70272) 2021-12-21 21:28:01 -08:00
fx [quant][fix][fx][graphmode] Fix qconfig setting for fused modules (#71254) 2022-01-14 23:31:11 -08:00
jit Hoisting common expressions out of If blocks [retry] (#65645) 2022-01-10 13:28:17 -08:00
serialized fx quant: add a BC test for loading old torch.package models (#65538) 2021-10-11 08:23:38 -07:00
__init__.py remediation of S205607 2020-07-17 17:19:47 -07:00