Vasiliy Kuznetsov
4779553921
Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47949
This reverts commit 1478e5ec2a.
Test Plan: Imported from OSS
Reviewed By: supriyar
Differential Revision: D24966363
Pulled By: vkuzo
fbshipit-source-id: ca1126f699eef84027a15df35962728296c8a790
2020-11-14 08:40:30 -08:00
Jerry Zhang
1478e5ec2a
[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415
nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU
that does the same thing as nn.ReLU; the same applies to nn.quantized.functional.relu.
This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.
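A minimal sketch of the behavior this summary relies on (requires PyTorch): the same nn.ReLU module instance accepts both float and quantized tensors, which is why a separate nn.quantized.ReLU is redundant.

```python
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.tensor([-1.0, 0.0, 2.0])

# Float path
y = relu(x)

# Quantized path: the same module works on a quantized tensor
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
qy = relu(qx)
print(y, qy.dequantize())
```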
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D24747035
fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769
2020-11-12 10:56:30 -08:00
Jerry Zhang
dd77d5a1d4
[quant][refactor] factor out get_combined_dict function (#47781)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47781
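The commit message gives no detail beyond the title, but a helper of this shape typically merges an override dict into a default dict without mutating either; the following is a sketch of that pattern, not necessarily the exact torch implementation.

```python
def get_combined_dict(default_dict, additional_dict):
    # Start from a copy of the defaults so the input dicts are untouched,
    # then let entries in additional_dict override the defaults.
    combined = default_dict.copy()
    combined.update(additional_dict)
    return combined

defaults = {"a": 1, "b": 2}
combined = get_combined_dict(defaults, {"b": 3})
print(combined)  # → {'a': 1, 'b': 3}
```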
Test Plan: Imported from OSS
Reviewed By: supriyar
Differential Revision: D24900303
fbshipit-source-id: 1a2cb0ec536384abcd140e0d073f0965ed2800cd
2020-11-11 21:01:31 -08:00
Jerry Zhang
0cba3e3704
[quant][graphmode][fx] Add support for qat convbn{relu}1d (#47248)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47248
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D24696524
fbshipit-source-id: 684db12be201307acbdc89a44192cf2270491dba
2020-11-03 22:43:33 -08:00
Jerry Zhang
53a5f08e0c
[quant][eagermode] Avoid inserting fakequant for sigmoid/hardsigmoid/tanh in eval mode (#47297)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47297
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D24708270
fbshipit-source-id: a19b6dbe07d5c80f3cc78a987742d345d86e1cd1
2020-11-03 21:33:35 -08:00
Jerry Zhang
6b50ccc41c
[quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738) (#46871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46871
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D24547180
fbshipit-source-id: d2eb9aa74c6e5436204376b1a2ebcc6188d3562f
2020-10-26 23:52:07 -07:00
Alban Desmaison
25db74bf5e
Revert D24486972: [quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat
Test Plan: revert-hammer
Differential Revision: D24486972 (e927b62e73)
Original commit changeset: c9f139bfdd54
fbshipit-source-id: 2a75f5ec93d55a62b40d1cdd49adcf65436058f7
2020-10-26 12:47:05 -07:00
Jerry Zhang
e927b62e73
[quant][graphmode][fx] Support sigmoid/hardsigmoid/tanh in qat (#46738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46738
Test Plan: Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D24486972
fbshipit-source-id: c9f139bfdd54973da1a93a45e32937595dbe67fc
2020-10-26 12:04:42 -07:00
Jerry Zhang
746febdeac
[quant][graphmode][fx] Add additional_object_mapping argument to convert (#46338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46338
Should we merge quantized module and quantized operator configurations?
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D24317435
fbshipit-source-id: 3575251fe9d80a6628b8c3243c2ed92ea5e921e3
2020-10-21 16:39:07 -07:00
Jerry Zhang
f9446cb15a
[quant][refactor] Remove register api and rename get_*_mapping to get_default_*_mapping (#46337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46337
We plan to pass around the mappings instead of using a global registration API, to keep
the mappings local to the transformations the user is performing.
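The direction described above can be sketched in plain Python: instead of a global registration API, the mapping is an ordinary dict the caller passes explicitly, so overrides stay local to one call site. Names here are illustrative, not the actual torch.quantization API.

```python
# Hypothetical default mapping from float module names to quantized ones.
DEFAULT_MAPPING = {"Linear": "QuantizedLinear", "Conv2d": "QuantizedConv2d"}

def convert(module_types, mapping=None):
    # Fall back to the defaults when the caller does not supply a mapping;
    # no global mutable registry is consulted.
    mapping = DEFAULT_MAPPING if mapping is None else mapping
    return [mapping.get(t, t) for t in module_types]

# A caller overrides only what it needs, without touching global state.
local = dict(DEFAULT_MAPPING, Linear="MyQuantLinear")
print(convert(["Linear", "ReLU"], mapping=local))  # → ['MyQuantLinear', 'ReLU']
```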
Test Plan: Imported from OSS
Reviewed By: vkuzo
Differential Revision: D24317436
fbshipit-source-id: 81569b88f05eeeaa9595447e482a12827aeb961f
2020-10-20 15:53:47 -07:00
Zafar
635aebdfab
[quant] Refactoring the mappings files (#44847)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44847
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D23747007
Pulled By: z-a-f
fbshipit-source-id: 7d8fcc84a77454cc1479e5158f5a62eda5824a87
2020-10-14 13:15:34 -07:00
Jerry Zhang
0da6730f02
[quant][graphmode][fx][eagermode] Add leaky relu support in quantization workflows (#45712)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45712
Eager mode will still be able to use the functional leaky relu, but it will be less accurate than
the LeakyReLU module.
FX graph mode will support both the functional and module variants of leaky relu.
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D24069961
fbshipit-source-id: 8d91c3c50c0bcd068ba3072378ebb4da9549be3b
2020-10-06 12:16:04 -07:00
Zafar
2b1f25885e
[quant] Fix ConvTranspose mapping (#44844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44844
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D23746466
Pulled By: z-a-f
fbshipit-source-id: cb84e0fef5ab82e8ed8dd118d9fb21ee7b480ef7
2020-09-22 11:59:42 -07:00
Jerry Zhang
0c58a017bd
[quant][eagermode][refactor] Add set/get method for quantization and fusion mappings (#43990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43990
Allow users to register custom quantization and fusion patterns
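A set/get-style registration API of the kind this summary describes might look like the following pure-Python sketch; the function and dict names are hypothetical, not PyTorch's actual API.

```python
# Hypothetical module-level mapping from float modules to quantized ones.
_quant_mapping = {"ReLU": "QuantizedReLU"}

def get_quant_mapping():
    # Return a copy so callers cannot mutate the registry accidentally.
    return dict(_quant_mapping)

def register_quant_pattern(float_mod, quant_mod):
    # Let users add (or override) an entry for a custom pattern.
    _quant_mapping[float_mod] = quant_mod

register_quant_pattern("Linear", "QuantizedLinear")
print(get_quant_mapping())
```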
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D23485344
fbshipit-source-id: 4f0174ee6d8000d83de0f73cb370e9a1941d54aa
2020-09-10 21:29:39 -07:00