pytorch/torch/ao/quantization
Terry Chen e7c87e8b44 [quant] fix dropout in FX graph mode quantization (#71043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71043

Fixes issue #68250: dropout breaks FX graph mode quantization (a minimal illustrative sketch follows the commit details below).

Test Plan:
python test/test_quantization.py TestStaticQuantizedModule

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D33490176

fbshipit-source-id: 155546505b28ffc635ada65a1464b9d622dbc235
2022-01-13 15:59:59 -08:00
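A minimal sketch of the workflow this fix concerns (illustrative only, not the regression test above): running FX graph mode quantization on an eval-mode model that contains nn.Dropout. The SmallModel class is a hypothetical example, and the prepare_fx/convert_fx calls assume the API shape from this commit's timeframe (a qconfig dict passed directly, no example_inputs argument).

import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

class SmallModel(nn.Module):
    # Hypothetical model: a linear layer followed by dropout, the pattern
    # that previously broke FX graph mode quantization (#68250).
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        return self.dropout(self.linear(x))

model = SmallModel().eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}   # global qconfig
prepared = prepare_fx(model, qconfig_dict)           # insert observers
prepared(torch.randn(2, 4))                          # calibrate on sample data
quantized = convert_fx(prepared)                     # lower to quantized ops
print(quantized)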
_dbr [Quant][DBR] Add test for serialization (#70078) 2022-01-10 17:50:05 -08:00
fx fix op not scriptable 2022-01-07 16:55:28 -08:00
__init__.py [quant] Add imports to the torch/ao/quantization/__init__.py (#64911) 2021-09-29 19:08:45 -07:00
_correct_bias.py [quant] Fix the parts that were missing after initial migration (#66058) 2021-10-05 11:45:37 -07:00
_equalize.py [quant] AO migration of the _correct_bias.py, _equalize.py, and _learnable_fake_quantize.py (#64917) 2021-09-15 18:15:39 -07:00
_learnable_fake_quantize.py [quant] Fix the parts that were missing after initial migration (#66058) 2021-10-05 11:45:37 -07:00
_quantize_dbr.py dbr quant support for custom leaf modules, part 3/x (#70349) 2022-01-06 13:25:10 -08:00
_quantize_fx_do_not_use.py [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959) 2021-12-29 20:33:32 -08:00
fake_quantize.py [quant][embedding qat] Set FakeQuant zeropoint dtype matches observer (#68390) 2021-11-30 12:21:14 -08:00
fuse_modules.py [quant] Fix the parts that were missing after initial migration (#66058) 2021-10-05 11:45:37 -07:00
fuser_method_mappings.py [fusion] Add ConvTranspose+BN fusion support (#70022) 2021-12-20 18:42:48 -08:00
observer.py [quant] fix reduce_range warning (#71027) 2022-01-10 20:05:36 -08:00
pattern.md [quant][refactor] Move pattern type definition to ao/quantization/utils.py (#68769) 2021-12-07 11:00:22 -08:00
qconfig_dict_utils.py fx quant: move _parent_name to common utils (#69720) 2021-12-17 05:59:46 -08:00
qconfig.py [quant][be] Replace QConfigDynamic with QConfig in code (#69864) 2021-12-17 22:30:57 -08:00
quant_type.py [quant] AO migration of the quant_types.py (phase 1) (#64916) 2021-09-15 17:30:00 -07:00
quantization_mappings.py [quant] fix dropout in FX graph mode quantization (#71043) 2022-01-13 15:59:59 -08:00
quantize_fx.py [quant][fx][graphmode] Support standalone module in _convert_do_not_use (#70151) 2021-12-30 12:31:03 -08:00
quantize_jit.py [quant] Fix the parts that were missing after initial migration (#66058) 2021-10-05 11:45:37 -07:00
quantize.py [quant][be] Add a check in prepare_qat to make sure the model is in training mode (#69879) 2021-12-22 11:00:00 -08:00
stubs.py torch.ao migration: stubs.py phase 1 (#64861) 2021-09-13 08:40:29 -07:00
utils.py dbr quant: support dynamic linear (#70257) 2022-01-06 13:24:55 -08:00