pytorch/torch/quantization
Jerry Zhang 1478e5ec2a [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; the same applies to nn.quantized.functional.relu.

This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.
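A minimal sketch of the behavior this change relies on: a single nn.ReLU module accepts both a float tensor and a quantized tensor, so a separate nn.quantized.ReLU is redundant. The quantization parameters below (scale, zero_point) are arbitrary illustration values, not anything prescribed by the PR.

```python
import torch
import torch.nn as nn

relu = nn.ReLU()

# Float path
x = torch.tensor([-1.0, 0.0, 2.0])
y_float = relu(x)  # negative values clamped to 0.0

# Quantized path: the same module handles a quantized tensor directly
# (example scale/zero_point chosen arbitrarily for illustration)
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
yq = relu(xq)          # still one nn.ReLU, no nn.quantized.ReLU needed
y_deq = yq.dequantize()
```

Because the quantized path clamps at the zero point, dequantizing the quantized output matches the float result here, which is why keeping a duplicate quantized ReLU module only risked inconsistency.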

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24747035

fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769
2020-11-12 10:56:30 -08:00
fx [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
__init__.py [WIP] Move torch.fx into its own target (#46658) 2020-10-29 17:03:08 -07:00
_correct_bias.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_equalize.py Fix type annotation errors in torch.functional (#43446) 2020-08-26 08:27:59 -07:00
_learnable_fake_quantize.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_numeric_suite.py [quant][refactor] Remove register api and rename get_*_mapping to get_default_*_mapping (#46337) 2020-10-20 15:53:47 -07:00
fake_quantize.py FixedQParamsFakeQuantize: adjust default quant_min and quant_max (#47423) 2020-11-05 09:06:55 -08:00
fuse_modules.py [quant][eagermode] Add additional_fuser_method_mapping to config (#46355) 2020-10-24 02:18:04 -07:00
fuser_method_mappings.py [quant][refactor] factor out get_combined_dict function (#47781) 2020-11-11 21:01:31 -08:00
observer.py [quant][qat] Ensure observer respects device affinity (#47514) 2020-11-10 08:43:52 -08:00
qconfig.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
quant_type.py [quant][graphmode][fx] custom_module support static/dynamic/weight_only quant (#46786) 2020-10-27 21:41:33 -07:00
quantization_mappings.py [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
quantize_fx.py [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
quantize_jit.py quantization: add API usage logging (#46095) 2020-10-09 16:51:27 -07:00
quantize.py [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
stubs.py type check for torch.quantization.stubs (#46475) 2020-10-16 15:34:23 -07:00
utils.py [quant][refactor] factor out get_combined_dict function (#47781) 2020-11-11 21:01:31 -08:00