Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23003

Adds torch.quantization.fuse_module and the torch.nn._intrinsic convRelu and LinearRelu modules. The fusion function combines specific module sequences: (conv, bn) and (conv, bn, relu). In all cases the modules are replaced in place: the first module in the sequence is replaced with the _intrinsic fused module and the remaining modules are replaced by nn.Identity.

Both training and eval are supported. For training, the modules are "fused" into a sequential container; this allows further module swaps for quantization-aware training.

TODO: add tests for the _intrinsic modules.

The Conv+BN fusion code is based on DsKhudia's implementation.

Differential Revision: D16199720

fbshipit-source-id: 95fb9ffe72b361d280313b2ec57de2acd4f9dda2
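The eval-time Conv+BN fusion folds the batch-norm affine transform into the convolution's weight and bias. A minimal per-channel sketch of that arithmetic (plain scalars, hypothetical helper name; not the actual PyTorch implementation):

```python
import math

def fuse_conv_bn_params(w, b, gamma, beta, mean, var, eps=1e-5):
    # Eval-time Conv+BN fusion, per output channel:
    #   BN(conv(x)) = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
    # folds into a single affine op w'*x + b'.
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Check: the fused parameters reproduce conv followed by BN on a sample input.
w, b = 2.0, 0.5
gamma, beta, mean, var = 1.5, -0.2, 0.3, 0.8
x = 1.7
wf, bf = fuse_conv_bn_params(w, b, gamma, beta, mean, var)
conv_then_bn = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
fused = wf * x + bf
assert abs(conv_then_bn - fused) < 1e-9
```

In the real modules the same fold is applied per output channel of the convolution weight tensor.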
7 lines
408 B
Python
from . import rnn  # noqa: F401
from .clip_grad import clip_grad_norm, clip_grad_norm_, clip_grad_value_  # noqa: F401
from .weight_norm import weight_norm, remove_weight_norm  # noqa: F401
from .convert_parameters import parameters_to_vector, vector_to_parameters  # noqa: F401
from .spectral_norm import spectral_norm, remove_spectral_norm  # noqa: F401
from .fusion import fuse_conv_bn_eval  # noqa: F401
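The in-place replacement scheme described in the commit message (first module replaced by the fused module, the rest by identities, so indices into the parent container stay valid) can be sketched in plain Python. Names here are hypothetical stand-ins, not the PyTorch API:

```python
class Identity:
    """Stand-in for nn.Identity: passes its input through unchanged."""
    def __call__(self, x):
        return x

def fuse_in_place(modules, indices, fused):
    # `modules` is a mutable list standing in for a parent container;
    # `indices` names the group being fused (e.g. conv, bn, relu).
    # The first slot receives the fused module; every remaining slot
    # becomes an identity, so the container keeps its original length.
    modules[indices[0]] = fused
    for i in indices[1:]:
        modules[i] = Identity()
    return modules

# Usage: conv/bn/relu stand-ins fused into one callable.
modules = [lambda x: 2 * x, lambda x: x + 1, lambda x: max(x, 0)]
fuse_in_place(modules, [0, 1, 2], lambda x: max(2 * x + 1, 0))
out = 3
for m in modules:
    out = m(out)
assert out == 7  # fused(3) = 7, then two identities leave it unchanged
```

Keeping the container's length unchanged is what lets later quantization passes swap modules by index without re-tracing the model structure.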