Commit Graph

11 Commits

Author SHA1 Message Date
XiaobingSuper
9b0b31a5e3 fix conv+bn folding issue for mixed dtype (#99696)
Align the conv+bn folding behavior with the JIT path for the mixed-dtype case: always keep the conv's weight and bias dtype after folding.
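
A minimal sketch of the folding math with the dtype behavior this change describes (helper name and signature are illustrative, not the PR's code):

```python
import torch

def fold_conv_bn_weights(conv_w, conv_b, bn_rm, bn_rv, bn_eps, bn_w, bn_b):
    # Do the folding in float32 so mixed dtypes (e.g. bfloat16 conv + float32 bn) stay accurate.
    conv_dtype = conv_w.dtype
    bn_scale = bn_w.float() * torch.rsqrt(bn_rv.float() + bn_eps)
    fused_w = conv_w.float() * bn_scale.reshape([-1] + [1] * (conv_w.dim() - 1))
    if conv_b is None:
        conv_b = torch.zeros_like(bn_rm)
    fused_b = (conv_b.float() - bn_rm.float()) * bn_scale + bn_b.float()
    # The fix described above: cast back to the conv's original dtype rather than
    # letting the result take the batchnorm's dtype.
    return fused_w.to(conv_dtype), fused_b.to(conv_dtype)
```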

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99696
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-04-23 05:13:40 +00:00
Jerry Zhang
1adb7b9b84 [nn][utils] Preserve requires_grad from original weight and bias in fuse conv/linear bn weights (#89100)
Summary:
As titled: previously we just called nn.Parameter, which defaults to requires_grad=True; after
this PR the original requires_grad is preserved.
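
A rough sketch of the change (illustrative helper, not the PR's exact code): carry requires_grad over when rewrapping the fused tensors as parameters.

```python
import torch
from torch import nn

def wrap_fused_param(fused_weight: torch.Tensor, orig_param: nn.Parameter) -> nn.Parameter:
    # Before: nn.Parameter(fused_weight) always produced requires_grad=True.
    # After: preserve requires_grad from the original conv/linear parameter.
    return nn.Parameter(fused_weight, requires_grad=orig_param.requires_grad)
```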

Test Plan:
python test/test_nn.py TestFusionUtils

Differential Revision: [D41343694](https://our.internmc.facebook.com/intern/diff/D41343694)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89100
Approved by: https://github.com/ngimel
2022-11-17 03:58:16 +00:00
Jon Morton
123be0e5b7 [fusion] Add ConvTranspose+BN fusion support (#70022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70022

Add support for fusing ConvTranspose{1,2,3}d with BatchNorm{1,2,3}d. This re-uses the existing fusion logic but adds a "transpose" flag to the fusing function which, when enabled, uses the appropriate reshape for ConvTranspose's transposed weights.
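
A sketch of the reshape difference (names are illustrative): the per-output-channel bn scale broadcasts along dim 1 for ConvTranspose weights, which are laid out as (in_channels, out_channels // groups, *kernel), instead of dim 0 for regular Conv weights.

```python
import torch

def scale_conv_weight(conv_w: torch.Tensor, bn_scale: torch.Tensor, transpose: bool = False) -> torch.Tensor:
    if transpose:
        # ConvTranspose{1,2,3}d weight: (in_channels, out_channels // groups, *kernel)
        shape = [1, -1] + [1] * (conv_w.dim() - 2)
    else:
        # Conv{1,2,3}d weight: (out_channels, in_channels // groups, *kernel)
        shape = [-1] + [1] * (conv_w.dim() - 1)
    return conv_w * bn_scale.reshape(shape)
```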

Test Plan: `buck test mode/dev //caffe2/test:quantization -- -r quantization.eager.test_fusion.TestFusion`

Reviewed By: jerryzh168

Differential Revision: D33074405

fbshipit-source-id: 5e9eff1a06d8f98d117e7d18e80da8e842e973b7
2021-12-20 18:42:48 -08:00
Vasiliy Kuznetsov
ac8e90fa6d quantization: Linear + BatchNorm1d fusion (#50748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50748

Adds support for Linear + BatchNorm1d fusion to quantization.
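
A minimal sketch of eval-mode Linear+BatchNorm1d folding (illustrative helper name, not necessarily the code added here):

```python
import torch

def fold_linear_bn_weights(lin_w, lin_b, bn_rm, bn_rv, bn_eps, bn_w, bn_b):
    # Per-output-feature scale from the running batchnorm statistics.
    bn_scale = bn_w * torch.rsqrt(bn_rv + bn_eps)
    fused_w = lin_w * bn_scale.unsqueeze(-1)      # (out_features, in_features)
    fused_b = (lin_b - bn_rm) * bn_scale + bn_b
    return fused_w, fused_b
```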

This is a redo of dreiss's https://github.com/pytorch/pytorch/pull/37467; it was faster
to copy-paste it than to rebase and deal with conflicts.

Test Plan:
```
python test/test_quantization.py TestFusion.test_fusion_linear_bn_eval
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D25957432

fbshipit-source-id: 24e5b760f70186aa953ef65ab0182770e89495e4
2021-01-20 12:59:02 -08:00
Sam Tsai
0cf0b5f2e8 Minor refactor to normalize assignments (#45671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45671

This is a follow up on D23977080 (2596113a79) and https://github.com/pytorch/pytorch/pull/45474.

Test Plan: See D23977080 (2596113a79).

Reviewed By: z-a-f

Differential Revision: D24043125

fbshipit-source-id: 0c05930668533bfd7145fa605f3785484391130b
2020-10-08 16:06:48 -07:00
Sam Tsai
2596113a79 Add fuse support for batchnorm with affine=False (#45474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45474

When batchnorm's affine is set to False, weight and bias are set to None, which was not supported in this case. Added a fix to use a weight of 1 and a bias of 0 when they are not set.
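
A sketch of the fix (variable names are illustrative): substitute identity affine parameters when affine=False so the usual folding math still applies.

```python
import torch

def bn_affine_params(bn):
    # With affine=False, bn.weight and bn.bias are None; fall back to
    # gamma = 1 and beta = 0 before folding.
    weight = bn.weight if bn.weight is not None else torch.ones_like(bn.running_mean)
    bias = bn.bias if bn.bias is not None else torch.zeros_like(bn.running_mean)
    return weight, bias
```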

Test Plan: Added a unit test that fuses conv and batchnorm with batchnorm in affine=False mode.

Reviewed By: z-a-f

Differential Revision: D23977080

fbshipit-source-id: 2782be626dc67553f3d27d8f8b1ddc7dea022c2a
2020-09-30 14:15:05 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Vadim Kantorov
c6d908d491 Support Conv+BatchNorm fusion for 1d/3d (#29113)
Summary:
Support Conv+BatchNorm fusion for 1d/3d by adapting to the number of dimensions (partially fixes https://github.com/pytorch/pytorch/issues/28757).
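
An illustrative check of what this enables, assuming the eager-mode helper torch.nn.utils.fusion.fuse_conv_bn_eval now handles any matching Conv/BatchNorm dimensionality:

```python
import torch
from torch import nn
from torch.nn.utils.fusion import fuse_conv_bn_eval

conv = nn.Conv3d(4, 8, kernel_size=3).eval()
bn = nn.BatchNorm3d(8).eval()
fused = fuse_conv_bn_eval(conv, bn)

x = torch.randn(1, 4, 8, 8, 8)
# The fused conv should match conv followed by bn in eval mode.
torch.testing.assert_close(fused(x), bn(conv(x)), rtol=1e-4, atol=1e-4)
```
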
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29113

Differential Revision: D18298248

Pulled By: soumith

fbshipit-source-id: 2fc75353aecc0e315c90e63476481acef6ebf784
2019-11-05 08:43:51 -08:00
Jerry Zhang
761ae8e9b6 Add intrinsic module mappings (#23753)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23753

Add intrinsic (fused) module mappings in quantize.py to enable mapping fused modules
in both QAT and post-training quantization (PTQ).
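
A conceptual sketch of such a mapping (entries are illustrative; the actual table lives in quantize.py, and torch.nn._intrinsic was later renamed torch.nn.intrinsic):

```python
import torch.nn.intrinsic as nni
import torch.nn.intrinsic.qat as nniqat

# Fused (intrinsic) modules get swapped just like plain modules, so QAT and
# post-training quantization can both handle fused models.
FUSED_MODULE_MAPPING = {
    nni.ConvBn2d: nniqat.ConvBn2d,
    nni.ConvBnReLU2d: nniqat.ConvBnReLU2d,
    nni.LinearReLU: nniqat.LinearReLU,
}
```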

Differential Revision: D16820749

fbshipit-source-id: 07de76a4f09b44bde8b193c103eac02c22b875b6
2019-08-15 09:37:24 -07:00
Zafar Takhirov
bb4f380f35 Optimizing out the division in the fusion
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23275

Test Plan: Imported from OSS

Differential Revision: D16450294

Pulled By: zafartahirov

fbshipit-source-id: 2f1ebf3673ed0467a9f6a912e08e5d95f9b27d0b
2019-08-12 11:35:37 -07:00
Zafar Takhirov
058645acb1 Fusion and _intrinsic modules (#23003)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23003

Adds torch.quantization.fuse_module and the torch.nn._intrinsic convRelu and LinearRelu modules.

Fusion function to combine specific modules: (conv, bn) and (conv, bn, relu).
In all cases, the modules are replaced in place: the first module is replaced with the _intrinsic fused module and the remaining modules are replaced by nn.Identity.
Supports both training and eval. For training, the modules are "fused" into a sequential container, to allow further module swaps for quantization aware training.
Also adds torch.nn._intrinsic modules for convRelu and LinearRelu.
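
An illustrative use of the fusion API described above (shown with the current torch.quantization.fuse_modules spelling; exact names at the time of this commit may differ):

```python
import torch
from torch import nn
from torch.quantization import fuse_modules

model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.BatchNorm2d(8),
    nn.ReLU(),
).eval()

fused = fuse_modules(model, [["0", "1", "2"]])
# Module "0" becomes the fused intrinsic module; "1" and "2" become nn.Identity.
print(type(fused[0]).__name__, type(fused[1]).__name__, type(fused[2]).__name__)
```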

TODO: Add tests for _intrinsic modules.

Conv BN fusion code is based on DsKhudia's implementation

Differential Revision: D16199720

fbshipit-source-id: 95fb9ffe72b361d280313b2ec57de2acd4f9dda2
2019-07-23 14:54:19 -07:00