Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
c855f8632e
26 Commits
b5c006acac
[BE][Easy] enable UFMT for torch/nn/ (#128865)
Part of #123062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128865
Approved by: https://github.com/ezyang
eb5487361d
docs: fix docstring errors in quantized modules and others (#112695)
Fixes #112632

Before: 171
```
torch/backends/_nnapi/prepare.py:24 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/_nnapi/prepare.py:46 in public method `init`: D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:60 in public method `forward`: D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`: D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`: D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:177 in private nested class `ShapeComputeModule`: D400: First line should end with a period (not 'n')
torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:172 in public function `change_element`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`: D102: Missing docstring in public method
torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:312 in public function `flex_name`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`: D400: First line should end with a period (not 's')
torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`: D401: First line should be in imperative mood; try rephrasing (found 'Helper')
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D202: No blank lines allowed after function docstring (found 1)
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D205: 1 blank line required between summary line and description (found 0)
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D400: First line should end with a period (not ':')
torch/backends/cuda/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/cuda/__init__.py:30 in public function `is_built`: D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:30 in public function `is_built`: D209: Multi-line docstring closing quotes should be on a separate line
torch/backends/cuda/__init__.py:30 in public function `is_built`: D400: First line should end with a period (not 's')
torch/backends/cuda/__init__.py:30 in public function `is_built`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cuda/__init__.py:37 in public class `cuFFTPlanCacheAttrContextProp`: D101: Missing docstring in public class
torch/backends/cuda/__init__.py:40 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:44 in public method `__get__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:47 in public method `__set__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`: D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`: D400: First line should end with a period (not 'e')
torch/backends/cuda/__init__.py:60 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:73 in public method `clear`: D102: Missing docstring in public method
torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`: D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`: D400: First line should end with a period (not ',')
torch/backends/cuda/__init__.py:89 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:93 in public method `__getitem__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:106 in public method `__getattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:109 in public method `__setattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:116 in public class `cuBLASModule`: D101: Missing docstring in public class
torch/backends/cuda/__init__.py:117 in public method `__getattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:126 in public method `__setattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:147 in public function `preferred_linalg_library`: D202: No blank lines allowed after function docstring (found 1)
torch/backends/cuda/__init__.py:204 in public class `SDPBackend`: D204: 1 blank line required after class docstring (found 0)
torch/backends/cudnn/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/cudnn/__init__.py:81 in public function `version`: D400: First line should end with a period (not 'N')
torch/backends/cudnn/__init__.py:81 in public function `version`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cudnn/__init__.py:95 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`: D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:122 in public function `set_flags`: D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:150 in public function `flags`: D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`: D101: Missing docstring in public class
torch/backends/cudnn/__init__.py:175 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/mkl/__init__.py:5 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/mkl/__init__.py:14 in public class `verbose`: D205: 1 blank line required between summary line and description (found 0)
torch/backends/mkl/__init__.py:14 in public class `verbose`: D400: First line should end with a period (not 'y')
torch/backends/mkl/__init__.py:41 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:44 in public method `__enter__`: D105: Missing docstring in magic method
torch/backends/mkl/__init__.py:53 in public method `__exit__`: D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/mkldnn/__init__.py:9 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/mkldnn/__init__.py:19 in public class `verbose`: D205: 1 blank line required between summary line and description (found 0)
torch/backends/mkldnn/__init__.py:19 in public class `verbose`: D400: First line should end with a period (not 'y')
torch/backends/mkldnn/__init__.py:47 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/mkldnn/__init__.py:50 in public method `__enter__`: D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:59 in public method `__exit__`: D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:64 in public function `set_flags`: D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:71 in public function `flags`: D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:81 in public class `MkldnnModule`: D101: Missing docstring in public class
torch/backends/mkldnn/__init__.py:82 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/openmp/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/openmp/__init__.py:5 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/intrinsic/qat/modules/conv_fused.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/intrinsic/qat/modules/linear_fused.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/intrinsic/qat/modules/linear_relu.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/__init__.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/dynamic/__init__.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/dynamic/modules/linear.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/modules/conv.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/modules/embedding_ops.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/qat/modules/linear.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantizable/modules/activation.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantizable/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/conv.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/linear.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/sparse.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/utils.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/conv.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/linear.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/functional.py:1 at module level: D400: First line should end with a period (not 'l')
torch/nn/quantized/modules/__init__.py:1 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/activation.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/batchnorm.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/conv.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/dropout.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/embedding_ops.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/functional_modules.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/linear.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/normalization.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/quantized/modules/utils.py:2 at module level: D400: First line should end with a period (not 's')
torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D401: First line should be in imperative mood (perhaps 'Extract', not 'Extracts')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D300: Use """triple double quotes""" (found '''-quotes)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D400: First line should end with a period (not 'e')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D300: Use """triple double quotes""" (found '''-quotes)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D400: First line should end with a period (not ')')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:84 in public function `maybe_scale_by_batch_size`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:90 in public function `set_grad_sample_if_exists`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:108 in public function `unpack_expanded_weight_or_tensor`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D400: First line should end with a period (not 't')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D401: First line should be in imperative mood (perhaps 'Calculate', not 'Calculates')
torch/nn/utils/convert_parameters.py:1 at module level: D100: Missing docstring in public module
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D400: First line should end with a period (not 'd')
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D401: First line should be in imperative mood; try rephrasing (found 'This')
torch/nn/utils/rnn.py:1 at module level: D100: Missing docstring in public module
torch/nn/utils/rnn.py:28 in public class `PackedSequence`: D204: 1 blank line required after class docstring (found 0)
torch/nn/utils/rnn.py:63 in public method `__new__`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:73 in public method `pin_memory`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:80 in public method `cuda`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:87 in public method `cpu`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:94 in public method `double`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:97 in public method `float`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:100 in public method `half`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:103 in public method `long`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:106 in public method `int`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:109 in public method `short`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:112 in public method `char`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:115 in public method `byte`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:119 in public method `to`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:119 in public method `to`: D401: First line should be in imperative mood (perhaps 'Perform', not 'Performs')
torch/nn/utils/rnn.py:146 in public method `is_cuda`: D400: First line should end with a period (not 'u')
torch/nn/utils/rnn.py:150 in public method `is_pinned`: D400: First line should end with a period (not 'y')
torch/nn/utils/rnn.py:150 in public method `is_pinned`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/utils/rnn.py:198 in public function `invert_permutation`: D103: Missing docstring in public function
torch/nn/utils/rnn.py:274 in public function `pad_packed_sequence`: D401: First line should be in imperative mood (perhaps 'Pad', not 'Pads')
torch/nn/utils/rnn.py:347 in public function `pad_sequence`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:347 in public function `pad_sequence`: D400: First line should end with a period (not '`')
torch/nn/utils/rnn.py:408 in public function `unpad_sequence`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:408 in public function `unpad_sequence`: D400: First line should end with a period (not 's')
torch/nn/utils/rnn.py:454 in public function `pack_sequence`: D400: First line should end with a period (not 's')
torch/nn/utils/rnn.py:490 in public function `unpack_sequence`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:490 in public function `unpack_sequence`: D400: First line should end with a period (not 's')
171
```

After: 81
```
torch/backends/_nnapi/prepare.py:24 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/_nnapi/prepare.py:46 in public method `init`: D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:60 in public method `forward`: D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`: D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:172 in public function `change_element`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`: D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`: D102: Missing docstring in public method
torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`: D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:312 in public function `flex_name`: D103: Missing docstring in public function
torch/backends/cuda/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/cuda/__init__.py:39 in public class `cuFFTPlanCacheAttrContextProp`: D101: Missing docstring in public class
torch/backends/cuda/__init__.py:42 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:46 in public method `__get__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:49 in public method `__set__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:63 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:76 in public method `clear`: D102: Missing docstring in public method
torch/backends/cuda/__init__.py:91 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:95 in public method `__getitem__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:108 in public method `__getattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:111 in public method `__setattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:118 in public class `cuBLASModule`: D101: Missing docstring in public class
torch/backends/cuda/__init__.py:119 in public method `__getattr__`: D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:128 in public method `__setattr__`: D105: Missing docstring in magic method
torch/backends/cudnn/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`: D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:122 in public function `set_flags`: D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:150 in public function `flags`: D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`: D101: Missing docstring in public class
torch/backends/cudnn/__init__.py:175 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/mkl/__init__.py:42 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:45 in public method `__enter__`: D105: Missing docstring in magic method
torch/backends/mkl/__init__.py:54 in public method `__exit__`: D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:1 at module level: D104: Missing docstring in public package
torch/backends/mkldnn/__init__.py:48 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/mkldnn/__init__.py:51 in public method `__enter__`: D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:60 in public method `__exit__`: D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:65 in public function `set_flags`: D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:72 in public function `flags`: D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:82 in public class `MkldnnModule`: D101: Missing docstring in public class
torch/backends/mkldnn/__init__.py:83 in public method `__init__`: D107: Missing docstring in __init__
torch/backends/openmp/__init__.py:1 at module level: D104: Missing docstring in public package
torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:87 in public function `maybe_scale_by_batch_size`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:93 in public function `set_grad_sample_if_exists`: D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:111 in public function `unpack_expanded_weight_or_tensor`: D103: Missing docstring in public function
torch/nn/utils/convert_parameters.py:1 at module level: D100: Missing docstring in public module
torch/nn/utils/rnn.py:1 at module level: D100: Missing docstring in public module
torch/nn/utils/rnn.py:64 in public method `__new__`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:74 in public method `pin_memory`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:81 in public method `cuda`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:88 in public method `cpu`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:95 in public method `double`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:98 in public method `float`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:101 in public method `half`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:104 in public method `long`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:107 in public method `int`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:110 in public method `short`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:113 in public method `char`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:116 in public method `byte`: D102: Missing docstring in public method
torch/nn/utils/rnn.py:198 in public function `invert_permutation`: D103: Missing docstring in public function
81
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112695
Approved by: https://github.com/mikaylagawarecki
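Each pydocstyle code in the lists above corresponds to a mechanical docstring fix. A minimal sketch of the kind of change the commit makes (the function below is hypothetical, not taken from the PR):

```python
def version_before():
    '''Returns the version of the library'''  # D300 ('''-quotes), D400 (no period), D401 ('Returns' is not imperative)
    return 1


def version_after():
    """Return the version of the library."""  # fixed: triple double quotes, trailing period, imperative mood
    return 1
```

D102/D103/D104/D107 are the simpler cases: they only require adding a docstring where none exists.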
c92e5ac95b
[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: To avoid cluttering the `torch.nn` namespace, the quantized modules
namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are simply moved to the new location.
However, the following files need to be double-checked:
- Documentation @vkuzo
- docs/source/conf.py
- docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
- torch/csrc/jit/passes/hoist_conv_packed_params.cpp
- torch/csrc/jit/passes/quantization/helper.h
- torch/csrc/jit/serialization/import_source.cpp
Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
|
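The checklist above is, at bottom, an import-path move. A minimal pure-Python sketch (illustrative names only, not the actual shim PyTorch ships) of how an old namespace can keep resolving by forwarding to the new location with a deprecation warning:

```python
import types
import warnings

# Hypothetical stand-in for the new module location; the real migration
# moves classes from torch.nn.quantized.modules to
# torch.ao.nn.quantized.modules.
new_home = types.SimpleNamespace(Linear=type("Linear", (), {}))

def old_namespace_getattr(name):
    # Forward attribute lookups on the old path to the new home,
    # warning on each use so callers know to update their imports.
    if hasattr(new_home, name):
        warnings.warn(
            f"torch.nn.quantized.modules.{name} has moved to "
            f"torch.ao.nn.quantized.modules.{name}",
            DeprecationWarning,
        )
        return getattr(new_home, name)
    raise AttributeError(name)

Linear = old_namespace_getattr("Linear")  # old path still resolves
```

In real packages this pattern is usually wired up via a module-level `__getattr__` (PEP 562), so the old import path keeps working for a deprecation window.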
||
|
|
6a9c02339d |
Revert "[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)"
This reverts commit
|
||
|
|
432f037498 |
[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.
The list of the `nn.quantized` files that are being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
The majority of the files are simply moved to the new location;
however, specific files need to be double-checked:
- Documentation @vkuzo
- docs/source/conf.py
- docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
- torch/csrc/jit/passes/hoist_conv_packed_params.cpp
- torch/csrc/jit/passes/quantization/helper.h
- torch/csrc/jit/serialization/import_source.cpp
Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
|
||
|
|
f68f77610a |
Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80376 Approved by: https://github.com/albanD |
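`__all__` is what makes a module's intended public surface explicit to wildcard imports and documentation tooling; a small sketch of the effect, using a throwaway module rather than the actual torch submodules:

```python
import types

# A throwaway module standing in for e.g. torch.nn.quantized.
mod = types.ModuleType("demo")
mod.public_fn = lambda: "ok"
mod._helper = lambda: "hidden"
mod.__all__ = ["public_fn"]

# "from demo import *" copies exactly the names listed in __all__;
# _helper is skipped even though it lives in the module.
exported = {name: getattr(mod, name) for name in mod.__all__}
```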
||
|
|
5613527ef9 |
[quant][fx] Add lowering support for functional ops using DefaultNodeQuantizeHandler (#73120)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73120 att This is to align our implementation with https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md Test Plan: python test/test_quantization.py TestQuantizeFx python test/test_quantization.py TestQuantizeFxOps Imported from OSS Reviewed By: vkuzo Differential Revision: D34354038 fbshipit-source-id: 873a867e62bd541ef236974c697fac2334bf02ea (cherry picked from commit 3fce7cade2f057b985833659c2cb365ee4d6d9f3) |
||
|
|
febff45900 |
Support factory kwargs in torch.nn modules (#54508)
Summary: Continuation of https://github.com/pytorch/pytorch/pull/53144 Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508 Reviewed By: albanD Differential Revision: D27939544 Pulled By: jbschlosser fbshipit-source-id: 4bf517e5f74f093e27ca38a85e732da65e44d805 |
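The factory-kwargs change lets callers pass `device=`/`dtype=` at module construction time. A hedged pure-Python sketch of the pattern (the real modules forward these kwargs to tensor factories such as `torch.empty`; here a "parameter" is just a record of what would be created):

```python
class Linear:
    """Stand-in for an nn.Module that accepts factory kwargs."""

    def __init__(self, in_features, out_features, device=None, dtype=None):
        factory_kwargs = {"device": device, "dtype": dtype}
        # Every parameter the module creates gets the same factory kwargs,
        # so nothing is allocated on a default device and moved afterwards.
        self.weight = {"shape": (out_features, in_features), **factory_kwargs}
        self.bias = {"shape": (out_features,), **factory_kwargs}

m = Linear(3, 2, device="meta", dtype="float32")
```

The design point is that construction-time placement avoids a wasteful allocate-then-`.to(device)` round trip, and makes meta-device (shape-only) construction possible.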
||
|
|
12b2bc94d7 |
Revert D27909732: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer
Differential Revision: D27909732
|
||
|
|
5a09def9b0 |
Support factory kwargs in torch.nn modules (#54508)
Summary: Continuation of https://github.com/pytorch/pytorch/pull/53144 Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508 Reviewed By: malfet Differential Revision: D27909732 Pulled By: jbschlosser fbshipit-source-id: d8684b2403ab7eb336371d118799146a2520bd76 |
||
|
|
92d24e3060 |
Revert D27855386: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer
Differential Revision: D27855386
|
||
|
|
40483acc51 |
Support factory kwargs in torch.nn modules (#54508)
Summary: Continuation of https://github.com/pytorch/pytorch/pull/53144 Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508 Reviewed By: bdhirsh Differential Revision: D27855386 Pulled By: jbschlosser fbshipit-source-id: dabd505d2a04208e74b158570fb2859c736eea2c |
||
|
|
d05e7c163f |
Revert D27600457: [pytorch][PR] Support factory kwargs in torch.nn modules
Test Plan: revert-hammer
Differential Revision: D27600457
|
||
|
|
1077f87269 |
Support factory kwargs in torch.nn modules (#54508)
Summary: Continuation of https://github.com/pytorch/pytorch/pull/53144 Pull Request resolved: https://github.com/pytorch/pytorch/pull/54508 Reviewed By: mrshenli Differential Revision: D27600457 Pulled By: jbschlosser fbshipit-source-id: b58bfee61c3917524b4622f63ef216c27a588eb1 |
||
|
|
a27aaa49aa |
quant norm layers: move scale + zp to buffers (#52861)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52861 Currently, scale and zp in these layers are not buffers, which means they do not get saved to the state dict. Moving them into buffers to allow people to properly use state_dict. Note: this is a redo of https://github.com/pytorch/pytorch/pull/45313, with BN taken out. Not doing this for BN because it has dependencies on existing behavior. We should clean it up eventually. Note: not handling BC because it's 100% broken now, so there is no practical value in handling BC. Test Plan: ``` python test/test_quantization.py TestPostTrainingStatic.test_normalization ``` Imported from OSS Reviewed By: jerryzh168 Differential Revision: D26671761 fbshipit-source-id: 7615b1dd0d1ae88eeff8b1d150f3846815dc2bc9 |
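The point of the buffer move is serialization: registered buffers land in `state_dict()`, plain Python attributes do not. A minimal stand-in (not the real `nn.Module`) showing the difference:

```python
class TinyModule:
    """Minimal stand-in for nn.Module's buffer bookkeeping."""

    def __init__(self):
        self._buffers = {}

    def register_buffer(self, name, value):
        self._buffers[name] = value

    def state_dict(self):
        # Only registered buffers (and, in a real module, parameters)
        # are serialized; plain attributes are invisible here.
        return dict(self._buffers)

m = TinyModule()
m.register_buffer("scale", 0.5)        # survives save/load
m.register_buffer("zero_point", 128)   # survives save/load
m.eps = 1e-5                           # plain attribute: not serialized
```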
||
|
|
5f2ec6293d |
Unused variables in neural net classes and functions (#50100)
Summary: These unused variables were identified by [pyflakes](https://pypi.org/project/pyflakes/). They can be safely removed to simplify the code and possibly improve performance. Pull Request resolved: https://github.com/pytorch/pytorch/pull/50100 Reviewed By: ezyang Differential Revision: D25797764 Pulled By: smessmer fbshipit-source-id: ced341aee692f429d2dcc3a4ef5c46c8ee99cabb |
||
|
|
37658b144b |
Remove useless py2 compatibility import __future__, part 1 (#43808)
Summary: To avoid conflicts, this PR does not remove all imports. More are coming in further PRs. Pull Request resolved: https://github.com/pytorch/pytorch/pull/43808 Reviewed By: wanchaol Differential Revision: D23436675 Pulled By: ailzhang fbshipit-source-id: ccc21a1955c244f0804277e9e47e54bfd23455cd |
||
|
|
5e683517a7 |
quant docs: add and clean up InstanceNorm{n}d (#40345)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40345 Fixes docstrings and adds to quantization docs for quantized InstanceNorm. Test Plan: * build on Mac OS and inspect Differential Revision: D22152637 Pulled By: vkuzo fbshipit-source-id: 7a485311ead20796b7a0944827d1d04e14ec8dcd |
||
|
|
6e3fdd77ca |
quant docs: add and clean up GroupNorm (#40343)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40343 Cleans up the quantized GroupNorm docstring and adds it to quantization docs. Test Plan: * build on Mac OS and inspect Differential Revision: D22152635 Pulled By: vkuzo fbshipit-source-id: 5553b841c7a5d77f1467f0c40657db9e5d730a12 |
||
|
|
d15fcc7e49 |
quant docs: add and clean up LayerNorm (#40342)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40342 Cleans up the docstrings for quantized LayerNorm, and adds it to the docs. Test Plan: * build on Mac OS and inspect Differential Revision: D22152639 Pulled By: vkuzo fbshipit-source-id: 38adf14b34675d1983ac4ed751938aa396e5400b |
||
|
|
2140874228 |
instancenorm: eager static quant support (#39091)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39091 Adds eager mode static quant support for instancenorm. Test Plan: ``` python test/test_quantization.py TestPostTrainingStatic.test_normalization python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm ``` Imported from OSS Differential Revision: D21885265 fbshipit-source-id: 277506faf108f3561867cd8449a2390b7a44c462 |
||
|
|
f9b675f7b6 |
groupnorm: eager static quant support (#39090)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39090 Makes quantized GroupNorm work in eager mode post training static quant. Test Plan: ``` python test/test_quantization.py TestPostTrainingStatic.test_normalization python test/test_quantization.py TestStaticQuantizedModule.test_group_norm ``` Imported from OSS Differential Revision: D21885262 fbshipit-source-id: 58b0ffb59c601fcb4c79f711c7c98a667ffc6170 |
||
|
|
4fef3763dd |
Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
Summary: Original PR: https://github.com/pytorch/pytorch/pull/37419 cc mattip suo Pull Request resolved: https://github.com/pytorch/pytorch/pull/37778 Differential Revision: D21385774 Pulled By: ezyang fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be |
||
|
|
20f7e62b1d |
Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings
Test Plan: revert-hammer Differential Revision: D21337640 Original commit changeset: d4ad198780c3 fbshipit-source-id: fa9ba6ac542173a50bdb45bfa12f3fec0ed704fb |
||
|
|
f10fbcc820 |
Split up documentation into subpages and clean up some warnings (#37419)
Summary: xref gh-32838, gh-34032 This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature which will build out `autofuction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like table-of-contents to the actual single-class or single-function documentations pages. Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is to add names to `__all__` when adding them to `globals()` in `torch.__init__.py` I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging? Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419 Differential Revision: D21337640 Pulled By: ezyang fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f |
||
|
|
2c558dba3d |
quantized layer norm: add to static quant (#36690)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36690 Adds the static quantization hook for LayerNorm Test Plan: ``` python test/quantization/test_quantized_nn_mods.py ModuleAPITest.test_layer_norm python test/quantization/test_quantization.py EagerModePostTrainingQuantTest.test_normalization ``` Imported from OSS Differential Revision: D21055401 fbshipit-source-id: 188329f35359576d50ed0db5fb675ce68c28bf7d |
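Eager-mode static quantization, as added for these normalization layers, hinges on observers that record an activation range during calibration and turn it into quantization parameters. A sketch of the standard affine-uint8 derivation (simplified relative to PyTorch's MinMaxObserver; e.g. no zero-point clamping or epsilon handling):

```python
def qparams_from_range(xmin, xmax, qmin=0, qmax=255):
    # The quantized range must be able to represent real zero exactly,
    # so the observed range is widened to include 0.
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, int(zero_point)

# Calibration saw activations in [-1.0, 3.0]:
scale, zp = qparams_from_range(-1.0, 3.0)
```

With these parameters, a real value x maps to `round(x / scale) + zero_point`, clamped to `[qmin, qmax]`.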