Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00 (head `c855f8632e`).

31 Commits

`b5c006acac`: [BE][Easy] enable UFMT for torch/nn/ (#128865)

Part of #123062. Pull Request resolved: https://github.com/pytorch/pytorch/pull/128865. Approved by: https://github.com/ezyang

`eb5487361d`: docs: fix docstring errors in quantized modules and others (#112695)

Fixes #112632 Before: 171 ``` torch/backends/_nnapi/prepare.py:24 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/_nnapi/prepare.py:46 in public method `init`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:60 in public method `forward`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/prepare.py:177 in private nested class `ShapeComputeModule`: D400: First line should end with a period (not 'n') torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:172 in public function `change_element`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`: D102: Missing docstring in public method torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:312 in public function `flex_name`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`: D400: First line should end with a period (not 's') torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`: D401: First line should be in imperative mood; try rephrasing (found 'Helper') torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D202: No blank lines allowed after function docstring (found 1) torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D205: 1 blank line required between summary line and description (found 0) torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D400: First line should end with a period (not ':') torch/backends/cuda/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cuda/__init__.py:30 in public function `is_built`: D205: 1 blank line required between summary line and description (found 0) torch/backends/cuda/__init__.py:30 in public function `is_built`: D209: Multi-line docstring closing quotes should be on a separate 
line torch/backends/cuda/__init__.py:30 in public function `is_built`: D400: First line should end with a period (not 's') torch/backends/cuda/__init__.py:30 in public function `is_built`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/cuda/__init__.py:37 in public class `cuFFTPlanCacheAttrContextProp`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:40 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:44 in public method `__get__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:47 in public method `__set__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`: D205: 1 blank line required between summary line and description (found 0) torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`: D400: First line should end with a period (not 'e') torch/backends/cuda/__init__.py:60 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:73 in public method `clear`: D102: Missing docstring in public method torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`: D205: 1 blank line required between summary line and description (found 0) torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`: D400: First line should end with a period (not ',') torch/backends/cuda/__init__.py:89 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:93 in public method `__getitem__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:106 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:109 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:116 in public class `cuBLASModule`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:117 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:126 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:147 in public function `preferred_linalg_library`: D202: No blank lines allowed after function docstring (found 1) torch/backends/cuda/__init__.py:204 in public class `SDPBackend`: D204: 1 blank line required after class docstring (found 0) torch/backends/cudnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cudnn/__init__.py:81 in public function `version`: D400: First line should end with a period (not 'N') torch/backends/cudnn/__init__.py:81 in public function `version`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/cudnn/__init__.py:95 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:122 in public function `set_flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:150 in public function `flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`: D101: Missing docstring in public class torch/backends/cudnn/__init__.py:175 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:1 at 
module level: D104: Missing docstring in public package torch/backends/mkl/__init__.py:5 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/mkl/__init__.py:14 in public class `verbose`: D205: 1 blank line required between summary line and description (found 0) torch/backends/mkl/__init__.py:14 in public class `verbose`: D400: First line should end with a period (not 'y') torch/backends/mkl/__init__.py:41 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:44 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkl/__init__.py:53 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkldnn/__init__.py:9 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/mkldnn/__init__.py:19 in public class `verbose`: D205: 1 blank line required between summary line and description (found 0) torch/backends/mkldnn/__init__.py:19 in public class `verbose`: D400: First line should end with a period (not 'y') torch/backends/mkldnn/__init__.py:47 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkldnn/__init__.py:50 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:59 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:64 in public function `set_flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:71 in public function `flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:81 in public class `MkldnnModule`: D101: Missing docstring in public class torch/backends/mkldnn/__init__.py:82 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/openmp/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/openmp/__init__.py:5 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/nn/intrinsic/qat/modules/conv_fused.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/intrinsic/qat/modules/linear_fused.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/intrinsic/qat/modules/linear_relu.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/dynamic/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/dynamic/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/embedding_ops.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantizable/modules/activation.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantizable/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') 
torch/nn/quantized/_reference/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/sparse.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/utils.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/functional.py:1 at module level: D400: First line should end with a period (not 'l') torch/nn/quantized/modules/__init__.py:1 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/activation.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/batchnorm.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/dropout.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/embedding_ops.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/functional_modules.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/normalization.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/utils.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`: D103: 
Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D401: First line should be in imperative mood (perhaps 'Extract', not 'Extracts') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D300: Use """triple double quotes""" (found '''-quotes) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D400: First line should end with a period (not 'e') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D300: Use """triple double quotes""" (found '''-quotes) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D400: First line should end with a period (not ')') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:84 in public function `maybe_scale_by_batch_size`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:90 in public function `set_grad_sample_if_exists`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:108 in public function `unpack_expanded_weight_or_tensor`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D400: First line should end with a period (not 't') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D401: First line should be in imperative mood (perhaps 'Calculate', not 'Calculates') torch/nn/utils/convert_parameters.py:1 at module level: D100: Missing docstring in public module torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D400: First line should end with a period (not 'd') torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D401: First line should be in imperative mood; try rephrasing (found 'This') torch/nn/utils/rnn.py:1 at module level: D100: Missing docstring 
in public module torch/nn/utils/rnn.py:28 in public class `PackedSequence`: D204: 1 blank line required after class docstring (found 0) torch/nn/utils/rnn.py:63 in public method `__new__`: D102: Missing docstring in public method torch/nn/utils/rnn.py:73 in public method `pin_memory`: D102: Missing docstring in public method torch/nn/utils/rnn.py:80 in public method `cuda`: D102: Missing docstring in public method torch/nn/utils/rnn.py:87 in public method `cpu`: D102: Missing docstring in public method torch/nn/utils/rnn.py:94 in public method `double`: D102: Missing docstring in public method torch/nn/utils/rnn.py:97 in public method `float`: D102: Missing docstring in public method torch/nn/utils/rnn.py:100 in public method `half`: D102: Missing docstring in public method torch/nn/utils/rnn.py:103 in public method `long`: D102: Missing docstring in public method torch/nn/utils/rnn.py:106 in public method `int`: D102: Missing docstring in public method torch/nn/utils/rnn.py:109 in public method `short`: D102: Missing docstring in public method torch/nn/utils/rnn.py:112 in public method `char`: D102: Missing docstring in public method torch/nn/utils/rnn.py:115 in public method `byte`: D102: Missing docstring in public method torch/nn/utils/rnn.py:119 in public method `to`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:119 in public method `to`: D401: First line should be in imperative mood (perhaps 'Perform', not 'Performs') torch/nn/utils/rnn.py:146 in public method `is_cuda`: D400: First line should end with a period (not 'u') torch/nn/utils/rnn.py:150 in public method `is_pinned`: D400: First line should end with a period (not 'y') torch/nn/utils/rnn.py:150 in public method `is_pinned`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/nn/utils/rnn.py:198 in public function `invert_permutation`: D103: Missing docstring in public function torch/nn/utils/rnn.py:274 in public function `pad_packed_sequence`: D401: First line should be in imperative mood (perhaps 'Pad', not 'Pads') torch/nn/utils/rnn.py:347 in public function `pad_sequence`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:347 in public function `pad_sequence`: D400: First line should end with a period (not '`') torch/nn/utils/rnn.py:408 in public function `unpad_sequence`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:408 in public function `unpad_sequence`: D400: First line should end with a period (not 's') torch/nn/utils/rnn.py:454 in public function `pack_sequence`: D400: First line should end with a period (not 's') torch/nn/utils/rnn.py:490 in public function `unpack_sequence`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:490 in public function `unpack_sequence`: D400: First line should end with a period (not 's') 171 ``` After: 81 ``` torch/backends/_nnapi/prepare.py:24 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/_nnapi/prepare.py:46 in public method `init`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:60 in public method `forward`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:19 in public 
class `NNAPI_OperandCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:172 in public function `change_element`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`: D102: Missing docstring in public method torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:312 in public function `flex_name`: D103: Missing docstring in public function torch/backends/cuda/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cuda/__init__.py:39 in public class `cuFFTPlanCacheAttrContextProp`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:42 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:46 in public method `__get__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:49 in public method `__set__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:63 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:76 in public method `clear`: D102: Missing docstring in public method torch/backends/cuda/__init__.py:91 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:95 in public method `__getitem__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:108 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:111 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:118 in public class `cuBLASModule`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:119 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:128 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cudnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:122 in public function `set_flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:150 in public 
function `flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`: D101: Missing docstring in public class torch/backends/cudnn/__init__.py:175 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkl/__init__.py:42 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:45 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkl/__init__.py:54 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkldnn/__init__.py:48 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkldnn/__init__.py:51 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:60 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:65 in public function `set_flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:72 in public function `flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:82 in public class `MkldnnModule`: D101: Missing docstring in public class torch/backends/mkldnn/__init__.py:83 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/openmp/__init__.py:1 at module level: D104: Missing docstring in public package torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:87 in public function `maybe_scale_by_batch_size`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:93 in public function `set_grad_sample_if_exists`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:111 in public function `unpack_expanded_weight_or_tensor`: D103: Missing docstring in public function torch/nn/utils/convert_parameters.py:1 at module level: D100: Missing docstring in public module 
torch/nn/utils/rnn.py:1 at module level: D100: Missing docstring in public module torch/nn/utils/rnn.py:64 in public method `__new__`: D102: Missing docstring in public method torch/nn/utils/rnn.py:74 in public method `pin_memory`: D102: Missing docstring in public method torch/nn/utils/rnn.py:81 in public method `cuda`: D102: Missing docstring in public method torch/nn/utils/rnn.py:88 in public method `cpu`: D102: Missing docstring in public method torch/nn/utils/rnn.py:95 in public method `double`: D102: Missing docstring in public method torch/nn/utils/rnn.py:98 in public method `float`: D102: Missing docstring in public method torch/nn/utils/rnn.py:101 in public method `half`: D102: Missing docstring in public method torch/nn/utils/rnn.py:104 in public method `long`: D102: Missing docstring in public method torch/nn/utils/rnn.py:107 in public method `int`: D102: Missing docstring in public method torch/nn/utils/rnn.py:110 in public method `short`: D102: Missing docstring in public method torch/nn/utils/rnn.py:113 in public method `char`: D102: Missing docstring in public method torch/nn/utils/rnn.py:116 in public method `byte`: D102: Missing docstring in public method torch/nn/utils/rnn.py:198 in public function `invert_permutation`: D103: Missing docstring in public function 81 ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/112695 Approved by: https://github.com/mikaylagawarecki |
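The D400/D401 classes of errors above are mechanical one-line fixes. A hypothetical before/after sketch (illustrative function, not code taken from the PR):

```python
# Hypothetical illustration of the D400/D401 docstring fixes this PR applies.


def is_available_before() -> bool:
    """Returns whether PyTorch is built with MKL support"""  # D401: not imperative; D400: no period
    return True


def is_available_after() -> bool:
    """Return whether PyTorch is built with MKL support."""  # lint-clean
    return True
```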

`d8eae6283d`: Rename 'torch/ao/nn/quantized._reference' to 'torch/ao/nn/quantized/reference'. (#84974)

Currently, the path for reference modules contains `_`, which marks it as private (https://github.com/pytorch/pytorch/tree/master/torch/ao/nn/quantized/_reference), but we would like to make it public since the reference module is now enabled by default in the fx graph mode quantization flow and will be added to the eager mode flow as well in the future. To make '_reference' public, it should satisfy the [public API rules](https://github.com/pytorch/pytorch/wiki/Public-API-definition-and-documentation). The first commit (prepare '_reference' to be public) does this by: 1. adding `__all__` to public modules and packages; 2. making functions that are only used within the file that defines them private by prefixing their names with `_`. Fixes #83090. (We rename 'torch/ao/nn/quantized/_reference' because of migration #81667.) This is a dup of #84786. Pull Request resolved: https://github.com/pytorch/pytorch/pull/84974. Approved by: https://github.com/andrewor14, https://github.com/z-a-f
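A minimal sketch of the two preparation steps described above (the module and names here are hypothetical, not the actual '_reference' contents):

```python
# Hypothetical module prepared to satisfy the public API rules.

__all__ = ["ReferenceLinear"]  # step 1: declare the public surface explicitly


def _clamp(value, lo, hi):  # step 2: file-local helper made private with a leading underscore
    return max(lo, min(hi, value))


class ReferenceLinear:
    """Public class, re-exported via ``__all__``."""
```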

`b1455f9424`: [quant][ao_migration] torch.nn.quantized._reference → torch.ao.nn.quantized._reference (#78715)

Context: in order to avoid cluttering the `torch.nn` namespace, the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [X] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [X] [Current PR] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are simply moved to the new location. However, specific files need to be double-checked:
- None
Differential Revision: [D36860927](https://our.internmc.facebook.com/intern/diff/D36860927/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860927/)!
Differential Revision: [D36860927](https://our.internmc.facebook.com/intern/diff/D36860927)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78715
Approved by: https://github.com/jerryzh168

`591222f5d9`: Fix use-dict-literal lint (#83718)

Fix use-dict-literal pylint suggestions by changing `dict()` calls to `{}` literals. This PR makes the change in every Python file except test/jit/test_list_dict.py, where the intent appears to be to test the constructor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83718
Approved by: https://github.com/albanD
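The change itself, in a minimal sketch:

```python
# Before: what the use-dict-literal lint flags.
opts = dict(lr=0.01, momentum=0.9)

# After: a dict literal, which avoids a global name lookup and a function call.
opts = {"lr": 0.01, "momentum": 0.9}
```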

`355d343fa8`: Revert "[quant][ao_migration] torch.nn.quantized._reference → torch.ao.nn.quantized._reference (#78715)"

This reverts commit `a7344e52b9`.

`a7344e52b9`: [quant][ao_migration] torch.nn.quantized._reference → torch.ao.nn.quantized._reference (#78715)

Context: in order to avoid cluttering the `torch.nn` namespace, the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [X] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [X] [Current PR] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are simply moved to the new location. However, specific files need to be double-checked:
- None
Differential Revision: [D36860927](https://our.internmc.facebook.com/intern/diff/D36860927/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860927/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78715
Approved by: https://github.com/jerryzh168

`caee732aa1`: Revert "[quant][fx] Support keyword arguments for functional linear (#79095)"

This reverts commit `d71fb40d98`.

`d71fb40d98`: [quant][fx] Support keyword arguments for functional linear (#79095)

Summary: Fixes https://github.com/pytorch/pytorch/issues/78117 and https://github.com/pytorch/pytorch/issues/73463.

This PR adds a normalization pass that normalizes all the args to keyword args in positional order, and fixes lowering code that previously used only node.args to use both args and kwargs instead.

I also tried to add a test for F.conv2d, but since conv2d matches multiple schemas we do an extra schema match, and because we use symbolic values in `transform` there is no schema match, so F.conv2d still fails with runtime errors; we can resolve this later when there is a need. Another thing I'm considering is doing the normalization with real inputs instead of symbolic inputs, relying on inspect.signature rather than operator_schemas (which is based on torchscript). I tried this briefly but didn't get far; it looks like we cannot get the Python signature for `torch._C._nn.linear`. That might be fixable as well, but will need follow-up discussion. The goal for this PR is just to introduce normalization into our codebase so that we can adapt some downstream code to it, and also fix the F.linear issue.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_normalize_args

Differential Revision: [D37163228](https://our.internmc.facebook.com/intern/diff/D37163228)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79095

Approved by: https://github.com/andrewor14
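The idea of the normalization pass, sketched with plain `inspect` (an illustration of the concept, not the PR's torch.fx implementation; as the message notes, `inspect`-style signatures are unavailable for C-bound ops like `torch._C._nn.linear`):

```python
import inspect


def normalize_to_kwargs(fn, args, kwargs):
    # Bind positional args to parameter names, in positional order,
    # so downstream lowering code can look everything up by keyword.
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()
    return dict(bound.arguments)


def linear(input, weight, bias=None):  # pure-Python stand-in for F.linear's schema
    return None


print(normalize_to_kwargs(linear, ("x", "w"), {}))
# -> {'input': 'x', 'weight': 'w', 'bias': None}
```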

`ce0786add2`: fx quant: fix warning in util function when cloning tensors (#80883)

Summary: Some of the util functions in FX graph mode quantization throw warnings such as:

```
/Users/vasiliy/pytorch/torch/ao/quantization/fx/utils.py:410: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
```

This PR fixes the warnings by moving the code to the recommended syntax if the value is a tensor.

Test plan:

```
python test/test_quantization.py -k test_conv_linear_reference
// warning appeared before this PR and disappeared after this PR
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80883

Approved by: https://github.com/jerryzh168
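The recommended syntax from the warning, in a minimal sketch:

```python
import torch

value = torch.tensor([1.0, 2.0])

# Before: torch.tensor(value) on an existing tensor triggers the UserWarning above.
# After: the recommended copy-construction syntax.
copied = value.clone().detach()
```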

`8c05f44fe2`: [quant] fix int16 quantization scale in conv weight (#74665)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/74665. Fix int16 quantization scale in conv weight.

Test Plan: python3 test/test_quantization.py TestQuantizeEagerOps.test_int16_reference_module

Imported from OSS. Reviewed By: mrshenli. Differential Revision: D35106497. fbshipit-source-id: 61030786d20d845ef36ea40cdacdd7dcccf12ae9

(cherry picked from commit 187b6bd02c9ccc17416b1620c3b164c84352ef81)

`d64e7634ff`: [quant] Remove assert for weight since it could be non-Tensor (#74365)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/74365. As titled: the weight could be a non-Tensor, so the assert is removed.

Test Plan: Meta-internal tests with fx2trt

Imported from OSS. Reviewed By: andrewor14. Differential Revision: D34952754. fbshipit-source-id: 11d392a520c9ab7c9484c96841f2b39fbbbc3f80

(cherry picked from commit e8a2348d5c6f9717b010972819723affba37a0e4)

`7ddf212f33`: [quant][fx] Fully align convert with the reference model design and simplify the implementation (#73863)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73863

This PR fully aligns the convert function with the design in https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md and simplifies its implementation by always producing a reference quantized model (with reference patterns) first, and then lowering that model to a quantized model runnable with the PyTorch native backends (fbgemm/qnnpack). This makes convert.py much easier to understand than the previous implementation, and lets us remove the majority of the code in quantization_patterns.py as well (in follow-up PRs).

Test Plan:

```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```

and other internal/OSS regression tests.

Imported from OSS. Reviewed By: andrewor14. Differential Revision: D34778506. fbshipit-source-id: 0678b66addf736039a8749b352f6f569caca962b

(cherry picked from commit 33ec9caf23f3ab373d827117efbd9db0668b2437)
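For orientation, the user-facing flow that ends in this two-step convert, sketched with the present-day `torch.ao.quantization` FX entry points (an assumption about today's API surface, not code from this PR):

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)


example_inputs = (torch.randn(1, 4),)
prepared = prepare_fx(M().eval(), get_default_qconfig_mapping("fbgemm"), example_inputs)
prepared(*example_inputs)         # calibrate with representative data
quantized = convert_fx(prepared)  # internally: reference model first, then lowering
```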

`4e6aefaf72`: [Qunat] Refactor reference module mapping (#72755)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72755. Adds an `is_reference` flag to the convert function.

Test Plan: python3 test/test_quantization.py TestQuantizeEagerOps.test_conv_transpose_2d

Imported from OSS. Reviewed By: mruberry. Differential Revision: D34188856. fbshipit-source-id: 291014a7b3b4d4b40ca0ca76a80711097dcc4b58

(cherry picked from commit cfba3b8dc0373708712c0d847d590f0d587df002)

`727debb18e`: dbr quant: enable reference module support for torch.qint32 (#73493)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73493

This PR enables basic support for reference modules in DBR quant. For now, the support is limited to:
1. modules that have reference versions defined only (no functions)
2. the torch.qint32 dtype only

Currently, the reference module logic is enabled whenever the dtype is torch.qint32. This is done because torch.qint32 support is needed earliest for the first use case. A future PR will support more dtypes and also add the `is_reference` flag to the API.

Test Plan:

```
python test/test_quantization.py TestQuantizeDBR.test_conv_int32_reference_model
```

Reviewed By: jerryzh168. Differential Revision: D34520759. Pulled By: vkuzo. fbshipit-source-id: 363db715315c5c7c20962a1818330ce288948778

(cherry picked from commit 6ccdfe2889c252211f191edc49f4147f66e803a4)

`2ab9702955`: [quant][core] Add Embedding and EmbeddingBag reference module (#73436)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73436

This PR adds reference module support for Embedding and EmbeddingBag, following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md:

* the reference module inherits from the corresponding float module (e.g. nn.Embedding) and from ReferenceQuantizedModule (which defines some utility functions to store qparams for a single weight)
* in forward, we first quantize and then dequantize the weight (to generate the pattern) and then feed the weight to the original fp32 op

We'll connect this with fx graph mode quantization later, in the final PR that deprecates the current convert implementation. Since the current convert doesn't support emitting quantize_per_tensor_dynamic ops, we don't want to implement that and immediately throw away the code, so it is better to implement this in the final flow.

Test Plan: will be tested later, in the final PR that deprecates the current convert implementation.

Imported from OSS. Reviewed By: vkuzo. Differential Revision: D34480325. fbshipit-source-id: bc353f3be035a364e013fa9132d0422f19120ac3

(cherry picked from commit 1722ec2f8d82e9763ef252fed5796fd09d120e34)

`81437e66c1`: [quant][fx] Add RNN reference module (#73386)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73386

This PR adds support for RNN reference modules, following https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md. This includes: RNNCell, LSTMCell, GRUCell, LSTM.

Test Plan: will be tested in the lowering flow in a separate PR.

Imported from OSS. Reviewed By: vkuzo. Differential Revision: D34469445. fbshipit-source-id: 71a13d7d056f7aaccdd98fb477c8a3a38aecc249

(cherry picked from commit 0b10f0d127515556b677eae3150f026ac8cd9acd)

`5723c03bad`: [quant][core] Refactor implementations for reference module (#73385)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73385

* Refactored some methods into ReferenceQuantizedModule: `_init_weight_qparams`, `get_quantized_weight`, `get_weight`
* Added float16 conversion in `_quantize_weight`

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS. Reviewed By: vkuzo. Differential Revision: D34469447. fbshipit-source-id: 2082c91631128d31d96b56ef96e3792d45bf3eee

(cherry picked from commit a57318043b27039c1b8a64adb117b51780d6c573)

`16e2f5d291`: [quant] Add ConvTranspose reference module - Reland #73031 (#73094)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73094
Add ConvTranspose reference module
Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_conv_transpose_2d
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D34352228
fbshipit-source-id: 03062d6b441bc5a3298ec094f421a69c4c3d5c40
(cherry picked from commit

`477d1bd6cf`: Revert D34313425: [quant] Add ConvTranspose reference module

Test Plan: revert-hammer
Differential Revision: D34313425 (`710f12f58e`)

`710f12f58e`: [quant] Add ConvTranspose reference module (#73031)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73031
Add ConvTranspose reference module
Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_conv_transpose_2d
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D34313425
fbshipit-source-id: 3eeec1b24a51c7951c4d4b0c7dca43a012468b85
(cherry picked from commit

`e6fd28fb05`: Revert D34126542: [Qunat] Add ConvTranspose reference module

Test Plan: revert-hammer
Differential Revision: D34126542 (`7a031ec17f`)

`7a031ec17f`: [Qunat] Add ConvTranspose reference module (#72473)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72473
Add ConvTranspose reference module
Test Plan:
python3 test/test_quantization.py TestQuantizeEagerOps.test_conv_transpose_op
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D34126542
fbshipit-source-id: 7da167695a1fd9c141059bce14cce4f0608b086c
(cherry picked from commit

`3e43c478a8`: [Quant][fx] Lower reference conv[1-3]d module (#69228)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69228

Implement lowering logic for reference conv modules, similar to https://github.com/pytorch/pytorch/pull/65723.

ghstack-source-id: 145058198

Test Plan: python test/test_quantization.py TestQuantizeFx.test_conv_lowering

Imported from OSS. Reviewed By: anjali411. Differential Revision: D32890743. fbshipit-source-id: 04f2500628c60b0fbc84d22705164215e190aeba

`06e49ea088`: [not4land][quant][fx][graphmode] lower reference linear module example (#65723)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65723

Example of lowering a reference linear module to the fbgemm/qnnpack quantized linear module.

Test Plan: Imported from OSS. Reviewed By: vkuzo. Differential Revision: D31567461. fbshipit-source-id: 0b8fffaf8e742ec15cb07bf6a4672cf3e856db2d

`d4a86c1f3b`: [quant][fx2trt] Add lowering support for reference linear/conv modules (#64368)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64368

Test Plan: python torch/fx/experimental/fx2trt/example/quantized_resnet_test.py

Imported from OSS. Reviewed By: 842974287. Differential Revision: D30708738. fbshipit-source-id: 88142b7ce43ed96093597112dab03a2d277de993

`8f88f797db`: [quant][graphmode][fx] Add reference quantized conv module (#63828)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63828

Added a reference quantized conv module for the custom backend flow; the reference quantized module will have the following code:

```
w(float) -- quant - dequant \
x(float) ------------- F.conv2d ---
```

In the full model, we will see

```
w(float) -- quant - *dequant \
x -- quant --- *dequant -- *F.conv2d --- *quant - dequant
```

and the backend should be able to fuse the ops marked with `*` into a quantized conv2d.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_conv_linear_reference

Imported from OSS. Reviewed By: vkuzo. Differential Revision: D30504749. fbshipit-source-id: e1d8c43a0e0d6d9ea2375b8ca59a9c0f455514fb

`0d0605eaa9`: [quant][graphmode][fx] Add reference quantized linear module (#63627)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63627

Added a reference quantized linear module for the custom backend flow; the reference quantized module will have the following code:

```
w(float) -- quant - dequant \
x(float) ------------- F.linear ---
```

In the full model, we will see

```
w(float) -- quant - *dequant \
x -- quant --- *dequant -- *F.linear --- *quant - dequant
```

and the backend should be able to fuse the ops marked with `*` into a quantized linear.

Test Plan: python test/test_quantization.py TestQuantizeFx.test_conv_linear_reference

Imported from OSS. Reviewed By: vkuzo. Differential Revision: D30504750. fbshipit-source-id: 5729921745c2b6a0fb344efc3689f3b170e89500
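A minimal runnable sketch of the pattern in the diagram (a simplification under stated assumptions: fixed per-tensor qparams and a hand-written eager module, whereas the real reference modules live under torch/ao/nn/quantized/reference and take their qparams from observers):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RefLinearSketch(nn.Linear):
    """The weight goes through quant -> dequant so the pattern is visible
    to a backend, while the matmul itself still runs the fp32 op."""

    def forward(self, x):
        wq = torch.quantize_per_tensor(
            self.weight.detach(), scale=0.1, zero_point=0, dtype=torch.qint8
        )  # w(float) -- quant
        return F.linear(x, wq.dequantize(), self.bias)  # - dequant -> F.linear


m = RefLinearSketch(4, 2)
print(m(torch.randn(1, 4)).shape)  # torch.Size([1, 2])
```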

`4753100a3b`: Un-ignore F403 in .flake8 (#55838)

Summary: Generally, wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html. This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files). This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser. Differential Revision: D27724232. Pulled By: samestep. fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
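Both outcomes described above, in a minimal sketch:

```python
# Before: wildcard import, flagged by F403 once the rule is un-ignored.
# from torch.nn.modules import *

# After, option 1: an explicit list of imported items.
from torch.nn.modules import Conv2d, Linear

# After, option 2 (for intentional re-exports in __init__.py files):
# from torch.nn.modules import *  # noqa: F403
```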

`7b54a8fc23`: [quant] Reference option for conv module (#52316)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52316

Test Plan: Imported from OSS. Reviewed By: vkuzo. Differential Revision: D26505642. fbshipit-source-id: e25b1a3cc37c4b744e694946e6ddf1470dd4692b

`636fa8fda8`: [quant] Add backend_independent option for quantized linear module (#48192)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48192. This is to allow producing a backend-independent quantized module, since some backends don't have a packed weight for linear.

Test Plan: test_quantized_module.py

Imported from OSS. Reviewed By: raghuramank100. Differential Revision: D25061645. fbshipit-source-id: a65535e53f35af4f2926af0ee330fdaae6dae996