Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
c855f8632e
81 Commits
### docs: fix docstring errors in quantized modules and others (#112695) (`eb5487361d`)
Fixes #112632 Before: 171 ``` torch/backends/_nnapi/prepare.py:24 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/_nnapi/prepare.py:46 in public method `init`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:60 in public method `forward`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/prepare.py:177 in private nested class `ShapeComputeModule`: D400: First line should end with a period (not 'n') torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:172 in public function `change_element`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`: D102: Missing docstring in public method torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`: D103: Missing docstring in public function 
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:312 in public function `flex_name`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`: D400: First line should end with a period (not 's') torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`: D401: First line should be in imperative mood; try rephrasing (found 'Helper') torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D202: No blank lines allowed after function docstring (found 1) torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D205: 1 blank line required between summary line and description (found 0) torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`: D400: First line should end with a period (not ':') torch/backends/cuda/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cuda/__init__.py:30 in public function `is_built`: D205: 1 blank line required between summary line and description (found 0) torch/backends/cuda/__init__.py:30 in public function `is_built`: D209: Multi-line docstring closing quotes should be on a separate line torch/backends/cuda/__init__.py:30 in public function `is_built`: D400: First line should end with a period (not 's') torch/backends/cuda/__init__.py:30 in public function `is_built`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/cuda/__init__.py:37 in public class `cuFFTPlanCacheAttrContextProp`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:40 in public method 
`__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:44 in public method `__get__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:47 in public method `__set__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`: D205: 1 blank line required between summary line and description (found 0) torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`: D400: First line should end with a period (not 'e') torch/backends/cuda/__init__.py:60 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:73 in public method `clear`: D102: Missing docstring in public method torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`: D205: 1 blank line required between summary line and description (found 0) torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`: D400: First line should end with a period (not ',') torch/backends/cuda/__init__.py:89 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:93 in public method `__getitem__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:106 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:109 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:116 in public class `cuBLASModule`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:117 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:126 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:147 in public function `preferred_linalg_library`: D202: No blank lines allowed after function docstring (found 1) torch/backends/cuda/__init__.py:204 in public class `SDPBackend`: D204: 1 blank line required after class 
docstring (found 0) torch/backends/cudnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cudnn/__init__.py:81 in public function `version`: D400: First line should end with a period (not 'N') torch/backends/cudnn/__init__.py:81 in public function `version`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/cudnn/__init__.py:95 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:122 in public function `set_flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:150 in public function `flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`: D101: Missing docstring in public class torch/backends/cudnn/__init__.py:175 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkl/__init__.py:5 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/mkl/__init__.py:14 in public class `verbose`: D205: 1 blank line required between summary line and description (found 0) torch/backends/mkl/__init__.py:14 in public class `verbose`: D400: First line should end with a period (not 'y') torch/backends/mkl/__init__.py:41 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:44 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkl/__init__.py:53 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkldnn/__init__.py:9 
in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/backends/mkldnn/__init__.py:19 in public class `verbose`: D205: 1 blank line required between summary line and description (found 0) torch/backends/mkldnn/__init__.py:19 in public class `verbose`: D400: First line should end with a period (not 'y') torch/backends/mkldnn/__init__.py:47 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkldnn/__init__.py:50 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:59 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:64 in public function `set_flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:71 in public function `flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:81 in public class `MkldnnModule`: D101: Missing docstring in public class torch/backends/mkldnn/__init__.py:82 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/openmp/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/openmp/__init__.py:5 in public function `is_available`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/nn/intrinsic/qat/modules/conv_fused.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/intrinsic/qat/modules/linear_fused.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/intrinsic/qat/modules/linear_relu.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/dynamic/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/dynamic/modules/linear.py:2 at module level: D400: First line should end 
with a period (not 's') torch/nn/qat/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/embedding_ops.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/qat/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantizable/modules/activation.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantizable/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/sparse.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/_reference/modules/utils.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/__init__.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/dynamic/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/functional.py:1 at module level: D400: First line should end with a period (not 'l') torch/nn/quantized/modules/__init__.py:1 at module level: D400: 
First line should end with a period (not 's') torch/nn/quantized/modules/activation.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/batchnorm.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/conv.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/dropout.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/embedding_ops.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/functional_modules.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/linear.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/normalization.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/rnn.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/quantized/modules/utils.py:2 at module level: D400: First line should end with a period (not 's') torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`: D103: Missing docstring in public function 
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`: D401: First line should be in imperative mood (perhaps 'Extract', not 'Extracts') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D300: Use """triple double quotes""" (found '''-quotes) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`: D400: First line should end with a period (not 'e') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D300: Use """triple double quotes""" (found '''-quotes) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`: D400: First line should end with a period (not ')') 
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:84 in public function `maybe_scale_by_batch_size`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:90 in public function `set_grad_sample_if_exists`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:108 in public function `unpack_expanded_weight_or_tensor`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D400: First line should end with a period (not 't') torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`: D401: First line should be in imperative mood (perhaps 'Calculate', not 'Calculates') torch/nn/utils/convert_parameters.py:1 at module level: D100: Missing docstring in public module torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D205: 1 blank line required between summary line and description (found 0) torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D400: First line should end with a period (not 'd') torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`: D401: First line should be in imperative mood; try rephrasing (found 'This') torch/nn/utils/rnn.py:1 at module level: D100: Missing docstring in public module torch/nn/utils/rnn.py:28 in public class `PackedSequence`: D204: 1 blank line required after class docstring (found 0) torch/nn/utils/rnn.py:63 in public method `__new__`: D102: 
Missing docstring in public method torch/nn/utils/rnn.py:73 in public method `pin_memory`: D102: Missing docstring in public method torch/nn/utils/rnn.py:80 in public method `cuda`: D102: Missing docstring in public method torch/nn/utils/rnn.py:87 in public method `cpu`: D102: Missing docstring in public method torch/nn/utils/rnn.py:94 in public method `double`: D102: Missing docstring in public method torch/nn/utils/rnn.py:97 in public method `float`: D102: Missing docstring in public method torch/nn/utils/rnn.py:100 in public method `half`: D102: Missing docstring in public method torch/nn/utils/rnn.py:103 in public method `long`: D102: Missing docstring in public method torch/nn/utils/rnn.py:106 in public method `int`: D102: Missing docstring in public method torch/nn/utils/rnn.py:109 in public method `short`: D102: Missing docstring in public method torch/nn/utils/rnn.py:112 in public method `char`: D102: Missing docstring in public method torch/nn/utils/rnn.py:115 in public method `byte`: D102: Missing docstring in public method torch/nn/utils/rnn.py:119 in public method `to`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:119 in public method `to`: D401: First line should be in imperative mood (perhaps 'Perform', not 'Performs') torch/nn/utils/rnn.py:146 in public method `is_cuda`: D400: First line should end with a period (not 'u') torch/nn/utils/rnn.py:150 in public method `is_pinned`: D400: First line should end with a period (not 'y') torch/nn/utils/rnn.py:150 in public method `is_pinned`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns') torch/nn/utils/rnn.py:198 in public function `invert_permutation`: D103: Missing docstring in public function torch/nn/utils/rnn.py:274 in public function `pad_packed_sequence`: D401: First line should be in imperative mood (perhaps 'Pad', not 'Pads') torch/nn/utils/rnn.py:347 in public function `pad_sequence`: D202: No blank lines allowed after 
function docstring (found 1) torch/nn/utils/rnn.py:347 in public function `pad_sequence`: D400: First line should end with a period (not '`') torch/nn/utils/rnn.py:408 in public function `unpad_sequence`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:408 in public function `unpad_sequence`: D400: First line should end with a period (not 's') torch/nn/utils/rnn.py:454 in public function `pack_sequence`: D400: First line should end with a period (not 's') torch/nn/utils/rnn.py:490 in public function `unpack_sequence`: D202: No blank lines allowed after function docstring (found 1) torch/nn/utils/rnn.py:490 in public function `unpack_sequence`: D400: First line should end with a period (not 's') 171 ``` After: 81 ``` torch/backends/_nnapi/prepare.py:24 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/_nnapi/prepare.py:46 in public method `init`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:60 in public method `forward`: D102: Missing docstring in public method torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`: D103: Missing 
docstring in public function torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:172 in public function `change_element`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`: D101: Missing docstring in public class torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`: D102: Missing docstring in public method torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`: D103: Missing docstring in public function torch/backends/_nnapi/serializer.py:312 in public function `flex_name`: D103: Missing docstring in public function torch/backends/cuda/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cuda/__init__.py:39 in public class `cuFFTPlanCacheAttrContextProp`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:42 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:46 in public method `__get__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:49 in public method `__set__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:63 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:76 in public method `clear`: D102: Missing docstring in public method torch/backends/cuda/__init__.py:91 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/cuda/__init__.py:95 in public method `__getitem__`: D105: Missing docstring 
in magic method torch/backends/cuda/__init__.py:108 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:111 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:118 in public class `cuBLASModule`: D101: Missing docstring in public class torch/backends/cuda/__init__.py:119 in public method `__getattr__`: D105: Missing docstring in magic method torch/backends/cuda/__init__.py:128 in public method `__setattr__`: D105: Missing docstring in magic method torch/backends/cudnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:122 in public function `set_flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:150 in public function `flags`: D103: Missing docstring in public function torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`: D101: Missing docstring in public class torch/backends/cudnn/__init__.py:175 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkl/__init__.py:42 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkl/__init__.py:45 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkl/__init__.py:54 in public method `__exit__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:1 at module level: D104: Missing docstring in public package torch/backends/mkldnn/__init__.py:48 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/mkldnn/__init__.py:51 in public method `__enter__`: D105: Missing docstring in magic method torch/backends/mkldnn/__init__.py:60 in public method `__exit__`: D105: Missing docstring in magic 
method torch/backends/mkldnn/__init__.py:65 in public function `set_flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:72 in public function `flags`: D103: Missing docstring in public function torch/backends/mkldnn/__init__.py:82 in public class `MkldnnModule`: D101: Missing docstring in public class torch/backends/mkldnn/__init__.py:83 in public method `__init__`: D107: Missing docstring in __init__ torch/backends/openmp/__init__.py:1 at module level: D104: Missing docstring in public package torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:87 in public function 
`maybe_scale_by_batch_size`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:93 in public function `set_grad_sample_if_exists`: D103: Missing docstring in public function torch/nn/utils/_expanded_weights/expanded_weights_utils.py:111 in public function `unpack_expanded_weight_or_tensor`: D103: Missing docstring in public function torch/nn/utils/convert_parameters.py:1 at module level: D100: Missing docstring in public module torch/nn/utils/rnn.py:1 at module level: D100: Missing docstring in public module torch/nn/utils/rnn.py:64 in public method `__new__`: D102: Missing docstring in public method torch/nn/utils/rnn.py:74 in public method `pin_memory`: D102: Missing docstring in public method torch/nn/utils/rnn.py:81 in public method `cuda`: D102: Missing docstring in public method torch/nn/utils/rnn.py:88 in public method `cpu`: D102: Missing docstring in public method torch/nn/utils/rnn.py:95 in public method `double`: D102: Missing docstring in public method torch/nn/utils/rnn.py:98 in public method `float`: D102: Missing docstring in public method torch/nn/utils/rnn.py:101 in public method `half`: D102: Missing docstring in public method torch/nn/utils/rnn.py:104 in public method `long`: D102: Missing docstring in public method torch/nn/utils/rnn.py:107 in public method `int`: D102: Missing docstring in public method torch/nn/utils/rnn.py:110 in public method `short`: D102: Missing docstring in public method torch/nn/utils/rnn.py:113 in public method `char`: D102: Missing docstring in public method torch/nn/utils/rnn.py:116 in public method `byte`: D102: Missing docstring in public method torch/nn/utils/rnn.py:198 in public function `invert_permutation`: D103: Missing docstring in public function 81 ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/112695 Approved by: https://github.com/mikaylagawarecki |
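The D-codes in the lists above are pydocstyle rules. As an illustration (the function below is invented, not code from the PR), here is a minimal before/after pair for D400 (first line should end with a period) and D401 (first line should be in imperative mood):

```python
# Hypothetical before/after pair for pydocstyle codes D400 and D401.
# Only the docstring style matters here; the function body is a stub.

def version_before():
    """Returns the version of cuDNN"""  # D400: no trailing period; D401: not imperative
    return "8.9"


def version_after():
    """Return the version of cuDNN."""  # imperative mood, ends with a period
    return "8.9"


def summary_line(func):
    """Extract the first line of a function's docstring."""
    return (func.__doc__ or "").strip().splitlines()[0]
```

Running pydocstyle over the "before" variant reports both codes; the "after" variant is clean.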
### [quant][ao_migration] torch.nn.quantized.functional → torch.ao.nn.quantized.functional (#78712) (`78c8a0d752`)
Context: To avoid cluttering the `torch.nn` namespace, the quantized modules namespace is moved to `torch.ao.nn`.
The `nn.quantized` files being migrated:
- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
- [X] [Current PR] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
- [ ] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
- [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
- [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
- [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
- [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
- [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
- [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
- [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
- [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
- [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
Most of the files are simply moved to the new location.
However, the following files need to be double-checked:
- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
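A migration like this usually keeps the old import path working by forwarding it to the new module. A minimal sketch of that pattern (the shim below is an illustration under that assumption, not PyTorch's actual mechanism) could look like:

```python
import types


def make_forwarding_module(old_name: str, target: types.ModuleType) -> types.ModuleType:
    """Create a stand-in module that forwards attribute access to `target`.

    Installing the result under `old_name` in `sys.modules` lets code that
    still imports the old path resolve names from the new location.
    """
    shim = types.ModuleType(old_name)
    # PEP 562: a module-level __getattr__ is consulted for missing attributes.
    shim.__getattr__ = lambda attr: getattr(target, attr)
    return shim
```

With such a shim registered for `torch.nn.quantized.functional`, attribute lookups would transparently resolve against `torch.ao.nn.quantized.functional`.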
Differential Revision: [D36792967](https://our.internmc.facebook.com/intern/diff/D36792967/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36792967/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78712
Approved by: https://github.com/jerryzh168
|
||
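The migration above moves the implementation while keeping the old import paths working. A minimal pure-Python sketch of the module-aliasing pattern behind such a migration (toy module names, not PyTorch's actual machinery):

```python
import importlib
import sys
import types

# Hypothetical modules standing in for the real torch packages; the
# names below are illustrative only.
new_mod = types.ModuleType("ao_nn_quantized_functional")
new_mod.relu = lambda x: max(x, 0)

# Register the implementation under its new home, and keep the old
# dotted path as a thin alias so existing imports keep working.
sys.modules["ao_nn_quantized_functional"] = new_mod
sys.modules["nn_quantized_functional"] = new_mod

old = importlib.import_module("nn_quantized_functional")
new = importlib.import_module("ao_nn_quantized_functional")
assert old.relu is new.relu  # one implementation, two import paths
```

Because both names resolve to the same module object, there is no duplicated code and no behavioral drift between the old and new paths.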
|
|
357b7d589c |
Fix docstring inconsistencies: string -> str, boolean -> bool (#82410)
### Description Throughout the PyTorch docs and codebase, the `string` type in docstrings is referred to by two separate names. This leads to inconsistent docs, like you can see here: https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d This PR fixes this issue by ensuring that all mentions of the string type in docstrings, are using the same format that Sphinx generates hyperlinks for. ### Testing No testing should be required for this change Pull Request resolved: https://github.com/pytorch/pytorch/pull/82410 Approved by: https://github.com/jbschlosser |
||
|
|
452c26bbeb |
Fix functional.max_poolNd warning spam in the CI
Fixes https://github.com/pytorch/pytorch/issues/71257. Warnings have been removed, please see [this](https://github.com/pytorch/pytorch/pull/71258#issuecomment-1058503649) comment. cc: @Lezcano @jbschlosser @zou3519 Pull Request resolved: https://github.com/pytorch/pytorch/pull/71258 Approved by: https://github.com/Lezcano, https://github.com/jbschlosser |
||
|
|
92a85ecbab |
add a quantized hardsigmoid inplace variant (#65740)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65740 fp32 hardsigmoid supports inplace. This PR adds the inplace support to the quantized hardsigmoid function, to make the signatures match. Test Plan: ``` python test/test_quantization.py TestQuantizedOps.test_qhardsigmoid ``` Reviewed By: supriyar Differential Revision: D31992282 Pulled By: vkuzo fbshipit-source-id: f6be65d72954ab8926b36bb74a5e79d422fbac90 |
||
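The commit above matches the quantized hardsigmoid signature to the fp32 one by adding an inplace variant. A toy sketch of the hardsigmoid formula and what "inplace" means operationally (hypothetical helper names, operating on a plain Python list rather than a quantized tensor):

```python
def hardsigmoid(x):
    # hardsigmoid(x) = clamp(x / 6 + 1/2, 0, 1)
    return min(max(x / 6.0 + 0.5, 0.0), 1.0)

def hardsigmoid_(xs):
    # In-place variant: overwrites the existing buffer instead of
    # allocating a new one, mirroring the fp32 inplace=True behavior.
    for i, x in enumerate(xs):
        xs[i] = hardsigmoid(x)
    return xs

xs = [-6.0, 0.0, 6.0]
out = hardsigmoid_(xs)
assert out is xs                 # same buffer, no new allocation
assert xs == [0.0, 0.5, 1.0]
```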
|
|
8b1258698e |
Improve quantization API docs (#66379)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66379 Description: Creates a quantization API reference and fixes all the docblock errors. This is #66122 to #66210 squashed together Test Plan: ``` cd docs make html python -m http.server // open webpage, inspect it, looks good ``` Reviewed By: ejguan Differential Revision: D31543172 Pulled By: vkuzo fbshipit-source-id: 9131363d6528337e9f100759654d3f34f02142a9 |
||
|
|
09c3e6002b |
Revert D31447615: Quantization docs: rewrite API reference to be more automated
Test Plan: revert-hammer
Differential Revision:
D31447615
|
||
|
|
7d2526ab20 |
Quantization docs: rewrite API reference to be more automated (#66201)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66201 Description: This PR switches the quantization API reference to use `autosummary` for each section. We define the sections and manually write a list of modules/functions/methods to include, and sphinx does the rest. The result is a single page where we have every quantization function and module with a quick autogenerated blurb, and users can click through to each of them for a full documentation page. This mimics how the `torch.nn` and `torch.nn.functional` doc pages are set up. In detail, for each section, this PR: * creates a new section using `autosummary` * adds all modules/functions/methods which were previously in the manual section * adds any additional modules/functions/methods which are public facing but not previously documented * deletes the old manual summary and all links to it Test Plan: ``` cd docs make html python -m http.server // renders well, links work ``` Reviewed By: jerryzh168 Differential Revision: D31447615 Pulled By: vkuzo fbshipit-source-id: 09874ad9629f9c00eeab79c406579c6abd974901 |
||
|
|
8aaca4b46a |
[reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48038 nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu. This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode. Test Plan: Imported from OSS Reviewed By: vkuzo Differential Revision: D25000462 fbshipit-source-id: e3609a3ae4a3476a42f61276619033054194a0d2 |
||
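The rationale above is that one ReLU can serve both representations, so a separate quantized module is redundant. A toy sketch of why (illustrative `QuantizedValue` class, not torch's quantized tensor): for an affine-quantized value, clamping the integer representation at the zero point is exactly ReLU on the real value, so no dequantize/requantize round trip is needed.

```python
from dataclasses import dataclass

@dataclass
class QuantizedValue:
    # Toy affine-quantized scalar: real value = scale * (q - zero_point)
    q: int
    scale: float
    zero_point: int

def relu(x):
    # One ReLU for both representations. In the quantized case,
    # max(q, zero_point) corresponds to max(real, 0) in real space.
    if isinstance(x, QuantizedValue):
        return QuantizedValue(max(x.q, x.zero_point), x.scale, x.zero_point)
    return max(x, 0.0)

assert relu(-2.5) == 0.0
qx = QuantizedValue(q=10, scale=0.1, zero_point=64)  # real value = -5.4
assert relu(qx).q == 64                              # real value = 0.0
```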
|
|
4779553921 |
Revert "[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)" (#47949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47949
This reverts commit
|
||
|
|
8ff0b6fef8 |
[OpBenchMobile] Enable operator_benchmark to run the benchmark on mobile through AiBench (#47767)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47767 This diff implements the functionality of running benchmark on mobile on top of operator_benchmark framework. It does so through a few steps: 1. create a scripted module from existing benchmark case. 2. run mobile specific optimization pass on the scripted module 3. run the scripted module on AiBench by calling its Python API A small change in the way of writing a benchmark case is introduced so that both local and mobile run can share the same interface. The change is about having inputs as arguments of the `forward` function, so that mobile optimization pass can be run successfully (otherwise everything will be optimized away by constant propagation). Test Plan: ## local op_bench run buck run caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --iterations 1 --warmup_iterations 1 buck run caffe2/benchmarks/operator_benchmark:benchmark_all_test -- --iterations 1 --warmup_iterations 1 --use_jit Exceptions: `py_module` op in `FakeQuantizePerTensorBaseOpBenchmark` and `FakeQuantizePerChannelBaseOpBenchmark` under JIT mode. These tests also failed in the base version ``` RuntimeError: Module 'FakeQuantizePerChannelOpBenchmark' has no attribute 'op_func' (This function exists as an attribute on the Python module, but we failed to compile it to a TorchScript function. 
The error stack is reproduced here: Python builtin <built-in method apply of FunctionMeta object at 0x619000c652a0> is currently not supported in Torchscript: File "/data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/pt/quantization_test#link-tree/quantization_test.py", line 260 quant_min: int, quant_max: int ): return _LearnableFakeQuantizePerChannelOp.apply(input, scale, zero_point, axis, quant_min, quant_max, 1.0) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE : File "/data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/pt/quantization_test#link-tree/quantization_test.py", line 313 axis: int, quant_min: int, quant_max: int ): return self.op_func(input, scale, zero_point, axis, quant_min, quant_max) ~~~~~~~~~~~~ <--- HERE ``` `_consume_op` typing mismatch: chunk, split, qobserver, sort in qunary. These will be fixed in D24774105 ## OSS test python3 -m benchmark_all_test --iterations 1 --warmup_iterations 1 --use_jit python3 -m benchmark_all_test --iterations 1 --warmup_iterations 1 ## saved module graph ``` module __torch__.mobile_benchmark_utils.OpBenchmarkMobile { parameters { } attributes { training = True num_iters = 1 benchmark = <__torch__.pt.add_test.___torch_mangle_4.AddBenchmark object at 0x6070001b8b50> } methods { method forward { graph(%self : __torch__.mobile_benchmark_utils.OpBenchmarkMobile): %12 : None = prim::Constant() # /data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/fb/pt/mobile/benchmark_all_test_fbcode#link-tree/mobile_benchmark_utils.py:9:4 %4 : bool = prim::Constant[value=1]() # /data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/fb/pt/mobile/benchmark_all_test_fbcode#link-tree/mobile_benchmark_utils.py:10:8 %1 : int = prim::GetAttr[name="num_iters"](%self) = prim::Loop(%1, %4) # 
/data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/fb/pt/mobile/benchmark_all_test_fbcode#link-tree/mobile_benchmark_utils.py:10:8 block0(%i : int): %6 : __torch__.pt.add_test.___torch_mangle_4.AddBenchmark = prim::GetAttr[name="benchmark"](%self) %7 : __torch__.pt.add_test.___torch_mangle_4.AddBenchmark = prim::GetAttr[name="benchmark"](%self) %self.inputs_tuple : (Float(1, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu), Float(1, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu)) = prim::Constant[value=({0.48884}, {0.809042})]() %9 : Tensor, %10 : Tensor = prim::TupleUnpack(%self.inputs_tuple) %23 : int = prim::Constant[value=1]() %24 : Tensor = aten::add(%9, %10, %23) # /data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/fb/pt/mobile/benchmark_all_test_fbcode#link-tree/pt/add_test.py:39:15 -> (%4) return (%12) } } submodules { module __torch__.pt.add_test.___torch_mangle_4.AddBenchmark { parameters { } attributes { mobile_optimized = True } methods { method forward { graph(%self : __torch__.pt.add_test.___torch_mangle_4.AddBenchmark, %input_one.1 : Tensor, %input_two.1 : Tensor): %3 : int = prim::Constant[value=1]() %4 : Tensor = aten::add(%input_one.1, %input_two.1, %3) # /data/users/wangyang19/fbsource/fbcode/buck-out/dev/gen/caffe2/benchmarks/operator_benchmark/fb/pt/mobile/benchmark_all_test_fbcode#link-tree/pt/add_test.py:39:15 return (%4) } method get_inputs { graph(%self : __torch__.pt.add_test.___torch_mangle_4.AddBenchmark): %self.inputs_tuple : (Float(1, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu), Float(1, 1, 1, strides=[1, 1, 1], requires_grad=0, device=cpu)) = prim::Constant[value=({0.48884}, {0.809042})]() return (%self.inputs_tuple) } } submodules { } } } } ``` Reviewed By: kimishpatel Differential Revision: D24322214 fbshipit-source-id: 335317eca4f40c4083883eb41dc47caf25cbdfd1 |
||
|
|
1478e5ec2a |
[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415 nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU that does the same thing as nn.ReLU; similarly for nn.quantized.functional.relu. This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode. Test Plan: Imported from OSS Reviewed By: z-a-f Differential Revision: D24747035 fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769 |
||
|
|
c1e6592964 |
Enable type-checking of torch.nn.quantized.* modules (#43110)
Summary: Fixes https://github.com/pytorch/pytorch/issues/43029 I am not changing the following files in this PR: * `torch/nn/quantized/dynamic/modules/rnn.py` due to https://github.com/pytorch/pytorch/issues/43072 * `torch/nn/quantized/modules/conv.py` Pull Request resolved: https://github.com/pytorch/pytorch/pull/43110 Reviewed By: gchanan Differential Revision: D23963258 Pulled By: ezyang fbshipit-source-id: 0fb0fd13af283f6f7b3434e7bbf62165357d1f98 |
||
|
|
bb478810e0 |
[quant] torch.max_pool1d (#45152)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45152 Test Plan: Imported from OSS Reviewed By: jerryzh168 Differential Revision: D23846473 Pulled By: z-a-f fbshipit-source-id: 38fd611e568e4f8b39b7a00adeb42c7b99576360 |
||
|
|
37658b144b |
Remove useless py2 compatibility import __future__, part 1 (#43808)
Summary: To avoid conflicts, this PR does not remove all imports. More are coming in further PRs. Pull Request resolved: https://github.com/pytorch/pytorch/pull/43808 Reviewed By: wanchaol Differential Revision: D23436675 Pulled By: ailzhang fbshipit-source-id: ccc21a1955c244f0804277e9e47e54bfd23455cd |
||
|
|
9600ed9af3 |
typo fixes (#41632)
Summary: typo fixes Pull Request resolved: https://github.com/pytorch/pytorch/pull/41632 Reviewed By: ezyang Differential Revision: D22617827 Pulled By: mrshenli fbshipit-source-id: c2bfcb7cc36913a8dd32f13fc9adc3aa0a9b682f |
||
|
|
445e7eb01b |
Add quantized CELU operator by adding additional parameters to quantized ELU (#39199)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39199 Test Plan: Imported from OSS Differential Revision: D21771202 Pulled By: durumu fbshipit-source-id: 910de6202fa3d5780497c5bf85208568a09297dd |
||
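The commit above implements quantized CELU by extending the quantized ELU kernel with extra parameters. The identity CELU(x, α) = α · ELU(x/α) (with ELU's own α fixed at 1) shows why one kernel can serve both activations. A minimal float sketch (toy functions, not the torch API):

```python
import math

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def celu(x, alpha=1.0):
    # CELU is expressible through ELU by rescaling the input,
    # so the same kernel covers both with an extra parameter.
    return alpha * elu(x / alpha, 1.0)

assert celu(2.0, 0.5) == 2.0  # positive branch is the identity
assert abs(celu(-1.0, 2.0) - 2.0 * (math.exp(-0.5) - 1.0)) < 1e-12
```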
|
|
733b8c23c4 |
Fix several quantization documentation typos (#40567)
Summary: This PR fixes several typos I noticed in the docs here: https://pytorch.org/docs/master/quantization.html. In one case there was a misspelled module [torch.nn.instrinsic.qat](https://pytorch.org/docs/master/quantization.html#torch-nn-instrinsic-qat) which I corrected and am including screenshots of below just in case. <img width="1094" alt="before" src="https://user-images.githubusercontent.com/54918401/85766765-5cdd6280-b6e5-11ea-93e6-4944cf820b71.png"> <img width="1093" alt="after" src="https://user-images.githubusercontent.com/54918401/85766769-5d75f900-b6e5-11ea-8850-0d1f5ed67b16.png"> Pull Request resolved: https://github.com/pytorch/pytorch/pull/40567 Differential Revision: D22311291 Pulled By: ezyang fbshipit-source-id: 65d1f3dd043357e38a584d9e30f31634a5b0995c |
||
|
|
c314e0deb5 |
[quant] Quantized adaptive_avg_pool3d (#40271)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40271 Closes #40244 Test Plan: Imported from OSS Reviewed By: vkuzo Differential Revision: D22134318 Pulled By: z-a-f fbshipit-source-id: 0489b6c083a3cbc21a1d81d8bfcc499372308088 |
||
|
|
9bf255573f |
quant docs: add and clean up ELU (#40377)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40377 Cleans up the docstring for quantized ELU and adds it to the quantization docs. Test Plan: * build on Mac OS and inspect Differential Revision: D22162834 Pulled By: vkuzo fbshipit-source-id: e548fd4dc8d67db27ed19cac4dbdf2a942586759 |
||
|
|
d27f8eaf92 |
quant docs: add and clean up hardtanh (#40341)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40341 Cleans up the hardtanh docstring and adds it to quantization docs. Test Plan: * build and inspect on Mac OS Differential Revision: D22152636 Pulled By: vkuzo fbshipit-source-id: c98e635199c8be332aa6958664ff23faad834908 |
||
|
|
8e74fb6a0c |
quant docs: add and clean up hardsigmoid (#40340)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40340 Adds and simplifies quantization docs for hardsigmoid Test Plan: * build docs on Mac OS * inspect Differential Revision: D22152634 Pulled By: vkuzo fbshipit-source-id: 18da273023fb00e5f0bc1e881b00536492c606d3 |
||
|
|
c4594a97ae |
quant docs: clean up hardswish (#40323)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40323 Cleans up the naming and the function param docs for quantized hardswish. Remove redundant docstrings and link to floating point modules instead. Test Plan: * build the docs on Mac OS * verify that every link works as expected Differential Revision: D22152638 Pulled By: vkuzo fbshipit-source-id: fef04874ae460b449c677424a6a1c6dd47054795 |
||
|
|
13d54c6471 |
quantized elu: require observation (#40100)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40100 ELU has a range of [-1, inf]. In the original PR which added the quantized operator we decided to pass the quantization params from the input. However, it makes more sense to require observation for this op. This PR changes the API to require observation. Next PRs in this stack will add the eager and graph mode handling. Test Plan: ``` python test/test_quantization.py TestQuantizedOps.test_qelu ``` Imported from OSS Differential Revision: D22075083 fbshipit-source-id: 0ea0fd05a00cc7a5f122a2b1de09144bbd586f32 |
||
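The change above requires observing ELU's output rather than copying the input's quantization parameters, since ELU compresses the negative range to [-1, 0). A toy sketch of what an observer computes from an observed min/max range (illustrative helper, not torch's observer API):

```python
def qparams_from_range(min_val, max_val, qmin=0, qmax=255):
    # Affine quantization parameters from an observed value range.
    min_val = min(min_val, 0.0)  # the range must include zero
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = round(qmin - min_val / scale)
    return scale, zero_point

# ELU output lives in roughly [-1, inf); observing it yields tighter
# qparams than reusing whatever range the input happened to have.
scale, zp = qparams_from_range(-1.0, 3.0)
assert abs(scale - 4.0 / 255) < 1e-12
assert zp == 64
```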
|
|
6a75f650dd |
Implement Quantized Version of Threshold Function (#39352)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39352 In this task, the quantized backend of the kernel is implemented for the threshold function, which clamps the entries in a tensor less than or equal to a given threshold to be a specified value. The corresponding Python implementation and unit test are also added. Test Plan: 1. On a devserver, build PyTorch from source by running the command `buck build mode/dev //caffe2:torch` 2. Run the unit test through the command `buck test mode/dev //caffe2/test:quantization -- test_qthreshold` Reviewed By: z-a-f Differential Revision: D21822446 fbshipit-source-id: e8c869664e6d4c664f0e7fa3957762992118c082 |
||
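The threshold semantics described above (entries less than or equal to the threshold are replaced by a given value) can be sketched in plain Python (a toy list-based stand-in for the tensor op):

```python
def threshold(xs, thresh, value):
    # Entries <= thresh are replaced by `value`; larger entries pass through.
    return [x if x > thresh else value for x in xs]

assert threshold([0.1, 0.5, 1.2], 0.5, 0.0) == [0.0, 0.0, 1.2]
```

Note that with thresh=0 and value=0 this degenerates to ReLU, which is why the two kernels share much of their structure.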
|
|
de7025fbdb |
[quant] Support for functional quantized::conv1d (#38449)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449 Also update docs to reflect conv1d op support Test Plan: python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api Imported from OSS Differential Revision: D21575921 fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b |
||
|
|
70f375becf |
[quant] ConvPackedParams with TorchBind (#35923)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923 (Note: this ignores all push blocking failures!) Test Plan: tbd Imported from OSS Differential Revision: D20957089 fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0 |
||
|
|
7ac98c9396 |
graph mode: refactor quantized hardswish API for easier graph handling (#37523)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37523 Makes the quantized hardswish function API more suited to graph mode handling, which will come in the next PR. Test Plan: CI Imported from OSS Differential Revision: D21310364 fbshipit-source-id: 0d438dce5b87481d558c07bcccd9fe717200b4dc |
||
|
|
7f50162d1e |
quantized activations: clean up more unneeded quantizations (#36981)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36981 Replaces unneeded quantize calls for remaining quantized activations with empty tensor creation. Should be a perf win for anyone who uses these. Test Plan: python test/quantization/test_quantized.py TestQuantizedOps Imported from OSS Differential Revision: D21185969 fbshipit-source-id: 473b2b8aa40046ea3f0665bd45b03f09e8a7d572 |
||
|
|
2773ed3082 |
hardswish: remove unnecessary quantize call (#36980)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36980 Missed this on the original diff, fixing. Create the output tensor directly instead of quantizing it. Test Plan: tests still pass microbenchmarks show a 2x performance improvement for int8: https://gist.github.com/vkuzo/3b321b428e4c38e805000961c263286b (this will depend on input size) Imported from OSS Differential Revision: D21185970 fbshipit-source-id: 5b9e93d9f9ac05a8120532bd03ad347541a132c2 |
||
|
|
78d5707041 |
Fix type annotations and make MyPy run on torch/ (#36584)
Summary: This PR fixes a couple of syntax errors in `torch/` that prevent MyPy from running, fixes simple type annotation errors (e.g. missing `from typing import List, Tuple, Optional`), and adds granular ignores for errors in particular modules as well as for missing typing in third party packages. As a result, running `mypy` in the root dir of the repo now runs on: - `torch/` - `aten/src/ATen/function_wrapper.py` (the only file already covered in CI) In CI this runs on GitHub Actions, job Lint, sub-job "quick-checks", task "MyPy typecheck". It should give (right now): `Success: no issues found in 329 source files`. Here are the details of the original 855 errors when running `mypy torch` on current master (after fixing the couple of syntax errors that prevent `mypy` from running through): <details> ``` torch/utils/tensorboard/_proto_graph.py:1: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.node_def_pb2' torch/utils/tensorboard/_proto_graph.py:2: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.attr_value_pb2' torch/utils/tensorboard/_proto_graph.py:3: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.tensor_shape_pb2' torch/utils/backcompat/__init__.py:1: error: Cannot find implementation or library stub for module named 'torch._C' torch/for_onnx/__init__.py:1: error: Cannot find implementation or library stub for module named 'torch.for_onnx.onnx' torch/cuda/nvtx.py:2: error: Cannot find implementation or library stub for module named 'torch._C' torch/utils/show_pickle.py:59: error: Name 'pickle._Unpickler' is not defined torch/utils/show_pickle.py:113: error: "Type[PrettyPrinter]" has no attribute "_dispatch" torch/utils/tensorboard/_onnx_graph.py:1: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.graph_pb2' torch/utils/tensorboard/_onnx_graph.py:2: error: Cannot find implementation 
or library stub for module named 'tensorboard.compat.proto.node_def_pb2' torch/utils/tensorboard/_onnx_graph.py:3: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.versions_pb2' torch/utils/tensorboard/_onnx_graph.py:4: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.attr_value_pb2' torch/utils/tensorboard/_onnx_graph.py:5: error: Cannot find implementation or library stub for module named 'tensorboard.compat.proto.tensor_shape_pb2' torch/utils/tensorboard/_onnx_graph.py:9: error: Cannot find implementation or library stub for module named 'onnx' torch/contrib/_tensorboard_vis.py:10: error: Cannot find implementation or library stub for module named 'tensorflow.core.util' torch/contrib/_tensorboard_vis.py:11: error: Cannot find implementation or library stub for module named 'tensorflow.core.framework' torch/contrib/_tensorboard_vis.py:12: error: Cannot find implementation or library stub for module named 'tensorflow.python.summary.writer.writer' torch/utils/hipify/hipify_python.py:43: error: Need type annotation for 'CAFFE2_TEMPLATE_MAP' (hint: "CAFFE2_TEMPLATE_MAP: Dict[<type>, <type>] = ...") torch/utils/hipify/hipify_python.py:636: error: "object" has no attribute "items" torch/nn/_reduction.py:27: error: Name 'Optional' is not defined torch/nn/_reduction.py:27: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/nn/_reduction.py:47: error: Name 'Optional' is not defined torch/nn/_reduction.py:47: note: Did you forget to import it from "typing"? 
(Suggestion: "from typing import Optional") torch/utils/tensorboard/_utils.py:17: error: Skipping analyzing 'matplotlib.pyplot': found module but no type hints or library stubs torch/utils/tensorboard/_utils.py:17: error: Skipping analyzing 'matplotlib': found module but no type hints or library stubs torch/utils/tensorboard/_utils.py:18: error: Skipping analyzing 'matplotlib.backends.backend_agg': found module but no type hints or library stubs torch/utils/tensorboard/_utils.py:18: error: Skipping analyzing 'matplotlib.backends': found module but no type hints or library stubs torch/nn/modules/utils.py:27: error: Name 'List' is not defined torch/nn/modules/utils.py:27: note: Did you forget to import it from "typing"? (Suggestion: "from typing import List") caffe2/proto/caffe2_pb2.py:17: error: Unexpected keyword argument "serialized_options" for "FileDescriptor"; did you mean "serialized_pb"? caffe2/proto/caffe2_pb2.py:25: error: Unexpected keyword argument "serialized_options" for "EnumDescriptor" caffe2/proto/caffe2_pb2.py:31: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:35: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:39: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:43: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:47: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:51: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:55: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:59: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:63: error: Unexpected keyword argument 
"serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:67: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:71: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:75: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:102: error: Unexpected keyword argument "serialized_options" for "EnumDescriptor" caffe2/proto/caffe2_pb2.py:108: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:112: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:124: error: Unexpected keyword argument "serialized_options" for "EnumDescriptor" caffe2/proto/caffe2_pb2.py:130: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:134: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:138: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:142: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:146: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:150: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:154: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:158: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:162: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:166: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" 
caffe2/proto/caffe2_pb2.py:170: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:174: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:178: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:182: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:194: error: Unexpected keyword argument "serialized_options" for "EnumDescriptor" caffe2/proto/caffe2_pb2.py:200: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:204: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:208: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:212: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:224: error: Unexpected keyword argument "serialized_options" for "EnumDescriptor" caffe2/proto/caffe2_pb2.py:230: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:234: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:238: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:242: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:246: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:250: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:254: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor" caffe2/proto/caffe2_pb2.py:267: error: Unexpected keyword argument 
"serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:274: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:281: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:288: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:295: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:302: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:327: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:334: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:341: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:364: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:371: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:378: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:385: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:392: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:399: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:406: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:413: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:420: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:427: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:434: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:441: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:448: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:455: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:462: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:488: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:495: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:502: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:509: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:516: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:523: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:530: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:537: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:544: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:551: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:558: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:565: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:572: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:596: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:603: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:627: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:634: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:641: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:648: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:655: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:662: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:686: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:693: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:717: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:724: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:731: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:738: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:763: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:770: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:777: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:784: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:808: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:815: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:822: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:829: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:836: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:843: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:850: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:857: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:864: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:871: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:878: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:885: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:892: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:916: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:923: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:930: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:937: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:944: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:951: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:958: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:982: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:989: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:996: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1003: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1010: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1017: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1024: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1031: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1038: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1045: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1052: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1059: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1066: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1090: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1097: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1104: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1128: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1135: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1142: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1166: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1173: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1180: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1187: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1194: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1218: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1225: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1232: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1239: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1246: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1253: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1260: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1267: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1274: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1281: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1305: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1312: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1319: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1326: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1333: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1340: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1347: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1354: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1361: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1368: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1375: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1382: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1389: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1396: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1420: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1427: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1434: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1441: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1465: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1472: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1479: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1486: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1493: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1500: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1507: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1514: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1538: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/caffe2_pb2.py:1545: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1552: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1559: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1566: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/caffe2_pb2.py:1667: error: "GeneratedProtocolMessageType" has no attribute "Segment"
torch/multiprocessing/queue.py:4: error: No library stub file for standard library module 'multiprocessing.reduction'
caffe2/proto/torch_pb2.py:18: error: Unexpected keyword argument "serialized_options" for "FileDescriptor"; did you mean "serialized_pb"?
caffe2/proto/torch_pb2.py:27: error: Unexpected keyword argument "serialized_options" for "EnumDescriptor"
caffe2/proto/torch_pb2.py:33: error: Unexpected keyword argument "serialized_options" for "EnumValueDescriptor"
caffe2/proto/torch_pb2.py:50: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:57: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:81: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:88: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:95: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:102: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:109: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:116: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:123: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:130: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:137: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:144: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:151: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:175: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:182: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:189: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:196: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:220: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:227: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:234: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:241: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:265: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:272: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:279: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:286: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:293: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:300: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:307: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:314: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:321: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:328: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:335: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:342: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:366: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:373: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:397: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/torch_pb2.py:404: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:411: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:418: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:425: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/torch_pb2.py:432: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:17: error: Unexpected keyword argument "serialized_options" for "FileDescriptor"; did you mean "serialized_pb"?
caffe2/proto/metanet_pb2.py:29: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/metanet_pb2.py:36: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:43: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:50: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:57: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:64: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:88: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/metanet_pb2.py:95: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:102: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:126: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/metanet_pb2.py:133: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:140: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:164: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/metanet_pb2.py:171: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:178: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:202: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/metanet_pb2.py:209: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:216: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:240: error: Unexpected keyword argument "serialized_options" for "Descriptor"
caffe2/proto/metanet_pb2.py:247: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:254: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:261: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:268: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:275: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:282: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:289: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/metanet_pb2.py:296: error: Unexpected keyword argument "serialized_options" for "FieldDescriptor"
caffe2/proto/__init__.py:13: error: Skipping analyzing 'caffe2.caffe2.fb.session.proto': found module but no type hints or library stubs
torch/multiprocessing/pool.py:3: error: No library stub file for standard library module 'multiprocessing.util'
torch/multiprocessing/pool.py:3: note: (Stub files are from https://github.com/python/typeshed)
caffe2/python/scope.py:10: error: Skipping analyzing 'past.builtins': found module but no type hints or library stubs
caffe2/python/__init__.py:7: error: Module has no attribute "CPU"
caffe2/python/__init__.py:8: error: Module has no attribute "CUDA"
caffe2/python/__init__.py:9: error: Module has no attribute "MKLDNN"
caffe2/python/__init__.py:10: error: Module has no attribute "OPENGL"
caffe2/python/__init__.py:11: error: Module has no attribute "OPENCL"
caffe2/python/__init__.py:12: error: Module has no attribute "IDEEP"
caffe2/python/__init__.py:13: error: Module has no attribute "HIP"
caffe2/python/__init__.py:14: error: Module has no attribute "COMPILE_TIME_MAX_DEVICE_TYPES"; maybe "PROTO_COMPILE_TIME_MAX_DEVICE_TYPES"?
caffe2/python/__init__.py:15: error: Module has no attribute "ONLY_FOR_TEST"; maybe "PROTO_ONLY_FOR_TEST"?
caffe2/python/__init__.py:34: error: Item "_Loader" of "Optional[_Loader]" has no attribute "exec_module"
caffe2/python/__init__.py:34: error: Item "None" of "Optional[_Loader]" has no attribute "exec_module"
caffe2/python/__init__.py:35: error: Module has no attribute "cuda"
caffe2/python/__init__.py:37: error: Module has no attribute "cuda"
caffe2/python/__init__.py:49: error: Module has no attribute "add_dll_directory"
torch/random.py:4: error: Cannot find implementation or library stub for module named 'torch._C'
torch/_classes.py:2: error: Cannot find implementation or library stub for module named 'torch._C'
torch/onnx/__init__.py:1: error: Cannot find implementation or library stub for module named 'torch._C'
torch/hub.py:21: error: Skipping analyzing 'tqdm.auto': found module but no type hints or library stubs
torch/hub.py:24: error: Skipping analyzing 'tqdm': found module but no type hints or library stubs
torch/hub.py:27: error: Name 'tqdm' already defined (possibly by an import)
torch/_tensor_str.py:164: error: Not all arguments converted during string formatting
torch/_ops.py:1: error: Cannot find implementation or library stub for module named 'torch._C'
torch/_linalg_utils.py:26: error: Name 'Optional' is not defined
torch/_linalg_utils.py:26: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_linalg_utils.py:26: error: Name 'Tensor' is not defined
torch/_linalg_utils.py:63: error: Name 'Tensor' is not defined
torch/_linalg_utils.py:63: error: Name 'Optional' is not defined
torch/_linalg_utils.py:63: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_linalg_utils.py:70: error: Name 'Optional' is not defined
torch/_linalg_utils.py:70: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_linalg_utils.py:70: error: Name 'Tensor' is not defined
torch/_linalg_utils.py:88: error: Name 'Tensor' is not defined
torch/_linalg_utils.py:88: error: Name 'Optional' is not defined
torch/_linalg_utils.py:88: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_linalg_utils.py:88: error: Name 'Tuple' is not defined
torch/_linalg_utils.py:88: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/_jit_internal.py:17: error: Need type annotation for 'boolean_dispatched'
torch/_jit_internal.py:474: error: Need type annotation for '_overloaded_fns' (hint: "_overloaded_fns: Dict[<type>, <type>] = ...")
torch/_jit_internal.py:512: error: Need type annotation for '_overloaded_methods' (hint: "_overloaded_methods: Dict[<type>, <type>] = ...")
torch/_jit_internal.py:648: error: Incompatible types in assignment (expression has type "FinalCls", variable has type "_SpecialForm")
torch/sparse/__init__.py:11: error: Name 'Tensor' is not defined
torch/sparse/__init__.py:71: error: Name 'Tensor' is not defined
torch/sparse/__init__.py:71: error: Name 'Optional' is not defined
torch/sparse/__init__.py:71: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/sparse/__init__.py:71: error: Name 'Tuple' is not defined
torch/sparse/__init__.py:71: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/nn/init.py:109: error: Name 'Tensor' is not defined
torch/nn/init.py:126: error: Name 'Tensor' is not defined
torch/nn/init.py:142: error: Name 'Tensor' is not defined
torch/nn/init.py:165: error: Name 'Tensor' is not defined
torch/nn/init.py:180: error: Name 'Tensor' is not defined
torch/nn/init.py:194: error: Name 'Tensor' is not defined
torch/nn/init.py:287: error: Name 'Tensor' is not defined
torch/nn/init.py:315: error: Name 'Tensor' is not defined
torch/multiprocessing/reductions.py:8: error: No library stub file for standard library module 'multiprocessing.util'
torch/multiprocessing/reductions.py:9: error: No library stub file for standard library module 'multiprocessing.reduction'
torch/multiprocessing/reductions.py:17: error: No library stub file for standard library module 'multiprocessing.resource_sharer'
torch/jit/_builtins.py:72: error: Module has no attribute "_no_grad_embedding_renorm_"
torch/jit/_builtins.py:80: error: Module has no attribute "stft"
torch/jit/_builtins.py:81: error: Module has no attribute "cdist"
torch/jit/_builtins.py:82: error: Module has no attribute "norm"
torch/jit/_builtins.py:83: error: Module has no attribute "nuclear_norm"
torch/jit/_builtins.py:84: error: Module has no attribute "frobenius_norm"
torch/backends/cudnn/__init__.py:8: error: Cannot find implementation or library stub for module named 'torch._C'
torch/backends/cudnn/__init__.py:86: error: Need type annotation for '_handles' (hint: "_handles: Dict[<type>, <type>] = ...")
torch/autograd/profiler.py:13: error: Name 'ContextDecorator' already defined (possibly by an import)
torch/autograd/function.py:2: error: Cannot find implementation or library stub for module named 'torch._C'
torch/autograd/function.py:2: note: See https://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports
torch/autograd/function.py:109: error: Unsupported dynamic base class "with_metaclass"
torch/serialization.py:609: error: "Callable[[Any], Any]" has no attribute "cache"
torch/_lowrank.py:11: error: Name 'Tensor' is not defined
torch/_lowrank.py:13: error: Name 'Optional' is not defined
torch/_lowrank.py:13: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_lowrank.py:14: error: Name 'Optional' is not defined
torch/_lowrank.py:14: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_lowrank.py:14: error: Name 'Tensor' is not defined
torch/_lowrank.py:82: error: Name 'Tensor' is not defined
torch/_lowrank.py:82: error: Name 'Optional' is not defined
torch/_lowrank.py:82: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_lowrank.py:82: error: Name 'Tuple' is not defined
torch/_lowrank.py:82: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/_lowrank.py:130: error: Name 'Tensor' is not defined
torch/_lowrank.py:130: error: Name 'Optional' is not defined
torch/_lowrank.py:130: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_lowrank.py:130: error: Name 'Tuple' is not defined
torch/_lowrank.py:130: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/_lowrank.py:167: error: Name 'Tensor' is not defined
torch/_lowrank.py:167: error: Name 'Optional' is not defined
torch/_lowrank.py:167: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/_lowrank.py:167: error: Name 'Tuple' is not defined
torch/_lowrank.py:167: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/quantization/observer.py:45: error: Variable "torch.quantization.observer.ABC" is not valid as a type
torch/quantization/observer.py:45: note: See https://mypy.readthedocs.io/en/latest/common_issues.html#variables-vs-type-aliases
torch/quantization/observer.py:45: error: Invalid base class "ABC"
torch/quantization/observer.py:127: error: Name 'Tensor' is not defined
torch/quantization/observer.py:127: error: Name 'Tuple' is not defined
torch/quantization/observer.py:127: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/quantization/observer.py:172: error: Module has no attribute "per_tensor_symmetric"
torch/quantization/observer.py:172: error: Module has no attribute "per_channel_symmetric"
torch/quantization/observer.py:192: error: Name 'Tensor' is not defined
torch/quantization/observer.py:192: error: Name 'Tuple' is not defined
torch/quantization/observer.py:192: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/quantization/observer.py:233: error: Module has no attribute "per_tensor_symmetric"
torch/quantization/observer.py:233: error: Module has no attribute "per_channel_symmetric"
torch/quantization/observer.py:534: error: Name 'Tensor' is not defined
torch/quantization/observer.py:885: error: Name 'Tensor' is not defined
torch/quantization/observer.py:885: error: Name 'Tuple' is not defined
torch/quantization/observer.py:885: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple")
torch/quantization/observer.py:894: error: Cannot determine type of 'max_val'
torch/quantization/observer.py:894: error: Cannot determine type of 'min_val'
torch/quantization/observer.py:899: error: Cannot determine type of 'min_val'
torch/quantization/observer.py:902: error: Name 'Tensor' is not defined
torch/quantization/observer.py:925: error: Name 'Tensor' is not defined
torch/quantization/observer.py:928: error: Cannot determine type of 'min_val'
torch/quantization/observer.py:929: error: Cannot determine type of 'max_val'
torch/quantization/observer.py:946: error: Argument "min" to "histc" has incompatible type "Tuple[Tensor, Tensor]"; expected "Union[int, float, bool]"
torch/quantization/observer.py:946: error: Argument "max" to "histc" has incompatible type "Tuple[Tensor, Tensor]"; expected "Union[int, float, bool]"
torch/quantization/observer.py:1056: error: Module has no attribute "per_tensor_symmetric"
torch/quantization/observer.py:1058: error: Module has no attribute "per_channel_symmetric"
torch/nn/quantized/functional.py:76: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:76: error: Name 'BroadcastingList2' is not defined
torch/nn/quantized/functional.py:259: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:259: error: Name 'Optional' is not defined
torch/nn/quantized/functional.py:259: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/nn/quantized/functional.py:289: error: Module has no attribute "ops"
torch/nn/quantized/functional.py:290: error: Module has no attribute "ops"
torch/nn/quantized/functional.py:308: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:326: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:356: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:371: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:400: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:400: error: Name 'Optional' is not defined
torch/nn/quantized/functional.py:400: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional")
torch/nn/quantized/functional.py:430: error: Name 'Tensor' is not defined
torch/nn/quantized/functional.py:448: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/linear.py:26: error: Module has no attribute "ops"
torch/nn/quantized/modules/linear.py:28: error: Module has no attribute "ops"
torch/nn/quantized/modules/functional_modules.py:40: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:47: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:54: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:61: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:68: error: Name 'List' is not defined
torch/nn/quantized/modules/functional_modules.py:68: note: Did you forget to import it from "typing"? (Suggestion: "from typing import List")
torch/nn/quantized/modules/functional_modules.py:68: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:75: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:140: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:146: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:151: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:157: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:162: error: Name 'List' is not defined
torch/nn/quantized/modules/functional_modules.py:162: note: Did you forget to import it from "typing"? (Suggestion: "from typing import List")
torch/nn/quantized/modules/functional_modules.py:162: error: Name 'Tensor' is not defined
torch/nn/quantized/modules/functional_modules.py:168: error: Name 'Tensor' is not defined
torch/multiprocessing/spawn.py:9: error: Module 'torch.multiprocessing' has no attribute '_prctl_pr_set_pdeathsig'
torch/multiprocessing/__init__.py:28: error: Module has no attribute "__all__"
torch/jit/frontend.py:9: error: Cannot find implementation or library stub for module named 'torch._C._jit_tree_views'
torch/jit/annotations.py:6: error: Module 'torch._jit_internal' has no attribute 'BroadcastingList2'; maybe "BroadcastingList1" or "BroadcastingListCls"?
torch/jit/annotations.py:6: error: Module 'torch._jit_internal' has no attribute 'BroadcastingList3'; maybe "BroadcastingList1" or "BroadcastingListCls"?
torch/jit/annotations.py:9: error: Cannot find implementation or library stub for module named 'torch._C' torch/distributions/distribution.py:16: error: Need type annotation for 'arg_constraints' (hint: "arg_constraints: Dict[<type>, <type>] = ...") torch/distributions/distribution.py:74: error: Name 'arg_constraints' already defined on line 16 torch/distributions/distribution.py:84: error: Name 'support' already defined on line 15 torch/functional.py:114: error: Name 'Tuple' is not defined torch/functional.py:114: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple") torch/functional.py:114: error: Name 'Optional' is not defined torch/functional.py:114: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:189: error: Incompatible types in assignment (expression has type "None", variable has type "Tensor") torch/functional.py:200: error: Argument 1 to "_indices_product" has incompatible type "Tuple[int, ...]"; expected "List[int]" torch/functional.py:204: error: No overload variant of "__setitem__" of "list" matches argument types "Tensor", "int" torch/functional.py:204: note: Possible overload variants: torch/functional.py:204: note: def __setitem__(self, int, int) -> None torch/functional.py:204: note: def __setitem__(self, slice, Iterable[int]) -> None torch/functional.py:204: error: No overload variant of "__getitem__" of "list" matches argument type "Tensor" torch/functional.py:204: note: def __getitem__(self, int) -> int torch/functional.py:204: note: def __getitem__(self, slice) -> List[int] torch/functional.py:207: error: "Tensor" has no attribute "copy_" torch/functional.py:212: error: No overload variant of "__setitem__" of "list" matches argument types "Tensor", "int" torch/functional.py:212: note: Possible overload variants: torch/functional.py:212: note: def __setitem__(self, int, int) -> None torch/functional.py:212: note: def __setitem__(self, slice, 
Iterable[int]) -> None torch/functional.py:212: error: No overload variant of "__getitem__" of "list" matches argument type "Tensor" torch/functional.py:212: note: def __getitem__(self, int) -> int torch/functional.py:212: note: def __getitem__(self, slice) -> List[int] torch/functional.py:215: error: Incompatible types in assignment (expression has type "None", variable has type "Tensor") torch/functional.py:334: error: Name 'Optional' is not defined torch/functional.py:334: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:429: error: Argument 2 to "pad" has incompatible type "Tuple[int, int]"; expected "List[int]" torch/functional.py:431: error: Module has no attribute "stft" torch/functional.py:766: error: Module has no attribute "cdist" torch/functional.py:768: error: Module has no attribute "cdist" torch/functional.py:770: error: Module has no attribute "cdist" torch/functional.py:775: error: Name 'Optional' is not defined torch/functional.py:775: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:780: error: Name 'Optional' is not defined torch/functional.py:780: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:780: error: Name 'number' is not defined torch/functional.py:780: error: Name 'norm' already defined on line 775 torch/functional.py:785: error: Name 'Optional' is not defined torch/functional.py:785: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:785: error: Name 'number' is not defined torch/functional.py:785: error: Name 'norm' already defined on line 775 torch/functional.py:790: error: Name 'Optional' is not defined torch/functional.py:790: note: Did you forget to import it from "typing"? 
(Suggestion: "from typing import Optional") torch/functional.py:790: error: Name 'norm' already defined on line 775 torch/functional.py:795: error: Name 'norm' already defined on line 775 torch/functional.py:960: error: Name 'Any' is not defined torch/functional.py:960: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Any") torch/functional.py:960: error: Name 'Tuple' is not defined torch/functional.py:960: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple") torch/functional.py:1036: error: Argument 1 to "len" has incompatible type "int"; expected "Sized" torch/functional.py:1041: error: Name 'Optional' is not defined torch/functional.py:1041: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:1041: error: Name 'Tuple' is not defined torch/functional.py:1041: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Tuple") torch/functional.py:1056: error: Name 'Optional' is not defined torch/functional.py:1056: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/functional.py:1056: error: Name 'Tuple' is not defined torch/functional.py:1056: note: Did you forget to import it from "typing"? 
(Suggestion: "from typing import Tuple") torch/distributions/von_mises.py:87: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/negative_binomial.py:25: error: Incompatible types in assignment (expression has type "_IntegerGreaterThan", base class "Distribution" defined the type as "None") torch/distributions/multivariate_normal.py:116: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/laplace.py:23: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/independent.py:34: error: Need type annotation for 'arg_constraints' (hint: "arg_constraints: Dict[<type>, <type>] = ...") torch/distributions/cauchy.py:28: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/poisson.py:28: error: Incompatible types in assignment (expression has type "_IntegerGreaterThan", base class "Distribution" defined the type as "None") torch/distributions/one_hot_categorical.py:32: error: Incompatible types in assignment (expression has type "_Simplex", base class "Distribution" defined the type as "None") torch/distributions/normal.py:27: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/lowrank_multivariate_normal.py:79: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/gamma.py:30: error: Incompatible types in assignment (expression has type "_GreaterThan", base class "Distribution" defined the type as "None") torch/distributions/exponential.py:23: error: Incompatible types in assignment (expression has type "_GreaterThan", base class 
"Distribution" defined the type as "None") torch/distributions/fishersnedecor.py:25: error: Incompatible types in assignment (expression has type "_GreaterThan", base class "Distribution" defined the type as "None") torch/distributions/dirichlet.py:44: error: Incompatible types in assignment (expression has type "_Simplex", base class "Distribution" defined the type as "None") torch/nn/quantized/dynamic/modules/rnn.py:230: error: Incompatible types in assignment (expression has type "int", variable has type "Tensor") torch/nn/quantized/dynamic/modules/rnn.py:232: error: Incompatible types in assignment (expression has type "int", variable has type "Tensor") torch/nn/quantized/dynamic/modules/rnn.py:236: error: Incompatible return value type (got "Tuple[Any, Tensor, Any]", expected "Tuple[int, int, int]") torch/nn/quantized/dynamic/modules/rnn.py:351: error: Incompatible types in assignment (expression has type "Type[LSTM]", base class "RNNBase" defined the type as "Type[RNNBase]") torch/nn/quantized/dynamic/modules/rnn.py:381: error: Module has no attribute "quantized_lstm" torch/nn/quantized/dynamic/modules/rnn.py:385: error: Module has no attribute "quantized_lstm" torch/nn/quantized/dynamic/modules/rnn.py:414: error: Argument 1 to "forward_impl" of "LSTM" has incompatible type "PackedSequence"; expected "Tensor" torch/nn/quantized/dynamic/modules/rnn.py:416: error: Incompatible types in assignment (expression has type "PackedSequence", variable has type "Tensor") torch/nn/quantized/dynamic/modules/rnn.py:418: error: Incompatible return value type (got "Tuple[Tensor, Tuple[Tensor, Tensor]]", expected "Tuple[PackedSequence, Tuple[Tensor, Tensor]]") torch/nn/quantized/dynamic/modules/rnn.py:420: error: Argument 1 of "permute_hidden" is incompatible with supertype "RNNBase"; supertype defines the argument type as "Tensor" torch/nn/quantized/dynamic/modules/rnn.py:420: error: Return type "Tuple[Tensor, Tensor]" of "permute_hidden" incompatible with return type 
"Tensor" in supertype "RNNBase" torch/nn/quantized/dynamic/modules/rnn.py:426: error: Argument 2 of "check_forward_args" is incompatible with supertype "RNNBase"; supertype defines the argument type as "Tensor" torch/nn/intrinsic/qat/modules/conv_fused.py:232: error: Incompatible types in assignment (expression has type "Type[ConvBnReLU2d]", base class "ConvBn2d" defined the type as "Type[ConvBn2d]") torch/distributions/beta.py:27: error: Incompatible types in assignment (expression has type "_Interval", base class "Distribution" defined the type as "None") torch/distributions/geometric.py:31: error: Incompatible types in assignment (expression has type "_IntegerGreaterThan", base class "Distribution" defined the type as "None") torch/distributions/continuous_bernoulli.py:38: error: Incompatible types in assignment (expression has type "_Interval", base class "Distribution" defined the type as "None") torch/distributions/bernoulli.py:30: error: Incompatible types in assignment (expression has type "_Boolean", base class "Distribution" defined the type as "None") torch/quantization/fake_quantize.py:126: error: Module has no attribute "per_tensor_symmetric" torch/quantization/fake_quantize.py:132: error: Module has no attribute "per_channel_symmetric" torch/distributions/transformed_distribution.py:41: error: Need type annotation for 'arg_constraints' (hint: "arg_constraints: Dict[<type>, <type>] = ...") torch/jit/__init__.py:1: error: Cannot find implementation or library stub for module named 'torch._C' torch/jit/__init__.py:15: error: Module 'torch.utils' has no attribute 'set_module' torch/jit/__init__.py:70: error: Name 'Attribute' already defined on line 68 torch/jit/__init__.py:213: error: On Python 3 '{}'.format(b'abc') produces "b'abc'"; use !r if this is a desired behavior torch/jit/__init__.py:215: error: On Python 3 '{}'.format(b'abc') produces "b'abc'"; use !r if this is a desired behavior torch/jit/__init__.py:1524: error: Unsupported dynamic base class 
"with_metaclass" torch/jit/__init__.py:1869: error: Name 'ScriptModule' already defined on line 1524 torch/jit/__init__.py:1998: error: Need type annotation for '_jit_caching_layer' torch/jit/__init__.py:1999: error: Need type annotation for '_jit_function_overload_caching' torch/distributions/relaxed_categorical.py:34: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/relaxed_categorical.py:108: error: Incompatible types in assignment (expression has type "_Simplex", base class "Distribution" defined the type as "None") torch/distributions/relaxed_bernoulli.py:31: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/relaxed_bernoulli.py:114: error: Incompatible types in assignment (expression has type "_Interval", base class "Distribution" defined the type as "None") torch/distributions/logistic_normal.py:31: error: Incompatible types in assignment (expression has type "_Simplex", base class "Distribution" defined the type as "None") torch/distributions/log_normal.py:26: error: Incompatible types in assignment (expression has type "_GreaterThan", base class "Distribution" defined the type as "None") torch/distributions/half_normal.py:27: error: Incompatible types in assignment (expression has type "_GreaterThan", base class "Distribution" defined the type as "None") torch/distributions/half_cauchy.py:28: error: Incompatible types in assignment (expression has type "_GreaterThan", base class "Distribution" defined the type as "None") torch/distributions/gumbel.py:28: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/nn/quantized/modules/conv.py:18: error: Module 'torch.nn.utils' has no attribute 'fuse_conv_bn_weights' torch/nn/quantized/modules/conv.py:209: error: Name 'Optional' is not defined 
torch/nn/quantized/modules/conv.py:209: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/nn/quantized/modules/conv.py:214: error: Module has no attribute "ops" torch/nn/quantized/modules/conv.py:321: error: Name 'Optional' is not defined torch/nn/quantized/modules/conv.py:321: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/nn/quantized/modules/conv.py:323: error: Module has no attribute "ops" torch/nn/quantized/modules/conv.py:447: error: Name 'Optional' is not defined torch/nn/quantized/modules/conv.py:447: note: Did you forget to import it from "typing"? (Suggestion: "from typing import Optional") torch/nn/quantized/modules/conv.py:449: error: Module has no attribute "ops" torch/nn/quantized/modules/conv.py:513: error: Name 'nn.modules.conv._ConvTransposeNd' is not defined torch/nn/quantized/modules/conv.py:525: error: Name 'List' is not defined torch/nn/quantized/modules/conv.py:525: note: Did you forget to import it from "typing"? (Suggestion: "from typing import List") torch/nn/quantized/modules/conv.py:527: error: Name 'List' is not defined torch/nn/quantized/modules/conv.py:527: note: Did you forget to import it from "typing"? 
(Suggestion: "from typing import List") torch/nn/intrinsic/quantized/modules/conv_relu.py:8: error: Module 'torch.nn.utils' has no attribute 'fuse_conv_bn_weights' torch/nn/intrinsic/quantized/modules/conv_relu.py:21: error: Incompatible types in assignment (expression has type "Type[ConvReLU2d]", base class "Conv2d" defined the type as "Type[Conv2d]") torch/nn/intrinsic/quantized/modules/conv_relu.py:62: error: Incompatible types in assignment (expression has type "Type[ConvReLU3d]", base class "Conv3d" defined the type as "Type[Conv3d]") torch/distributions/weibull.py:25: error: Incompatible types in assignment (expression has type "_GreaterThan", base class "Distribution" defined the type as "None") torch/distributions/kl.py:35: error: Need type annotation for '_KL_MEMOIZE' (hint: "_KL_MEMOIZE: Dict[<type>, <type>] = ...") torch/distributions/studentT.py:27: error: Incompatible types in assignment (expression has type "_Real", base class "Distribution" defined the type as "None") torch/distributions/mixture_same_family.py:48: error: Need type annotation for 'arg_constraints' (hint: "arg_constraints: Dict[<type>, <type>] = ...") torch/distributions/__init__.py:158: error: Name 'transforms' is not defined torch/onnx/utils.py:21: error: Cannot find implementation or library stub for module named 'torch._C' torch/distributed/rendezvous.py:4: error: Cannot find implementation or library stub for module named 'urlparse' torch/distributed/rendezvous.py:4: error: Name 'urlparse' already defined (possibly by an import) torch/distributed/rendezvous.py:4: error: Name 'urlunparse' already defined (possibly by an import) torch/distributed/rendezvous.py:9: error: Module 'torch.distributed' has no attribute 'FileStore' torch/distributed/rendezvous.py:9: error: Module 'torch.distributed' has no attribute 'TCPStore' torch/distributed/rendezvous.py:65: error: On Python 3 '{}'.format(b'abc') produces "b'abc'"; use !r if this is a desired behavior 
torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'AllreduceOptions'; maybe "ReduceOptions" or "AllreduceCoalescedOptions"? torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'AllreduceCoalescedOptions'; maybe "AllreduceOptions"? torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'AllToAllOptions' torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'BroadcastOptions' torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'GatherOptions'; maybe "ScatterOptions"? torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'ReduceOptions'; maybe "AllreduceOptions", "ReduceScatterOptions", or "ReduceOp"? torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'ReduceScatterOptions'; maybe "ScatterOptions" or "ReduceOptions"? torch/distributed/distributed_c10d.py:11: error: Module 'torch.distributed' has no attribute 'ScatterOptions'; maybe "ReduceScatterOptions" or Pull Request resolved: https://github.com/pytorch/pytorch/pull/36584 Reviewed By: seemethere, ailzhang Differential Revision: D21155985 Pulled By: ezyang fbshipit-source-id: f628d4293992576207167e7c417998fad15898d1 |
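Almost all of the `Name '…' is not defined` entries in the log above share the one-line fix mypy itself suggests: import the missing names at the top of the module. A minimal sketch of the pattern — the `Tensor` stand-in and the `calculate_qparams` body are hypothetical, just to show the annotations; the real modules would import `Tensor` from `torch`:

```python
from typing import List, Optional, Tuple

# Stand-in so this sketch runs without torch installed; the real modules
# would instead use `from torch import Tensor`.
class Tensor(float):
    pass

def calculate_qparams(min_val: Optional[Tensor], max_val: Optional[Tensor]) -> Tuple[float, int]:
    # Hypothetical body: derive an affine scale/zero_point from a value range.
    lo = float(min_val) if min_val is not None else 0.0
    hi = float(max_val) if max_val is not None else 1.0
    scale = (hi - lo) / 255.0 or 1.0
    zero_point = int(round(-lo / scale))
    return scale, zero_point
```

With `Optional`, `Tuple`, and `List` imported, annotations like these type-check, which is what silences the bulk of the errors listed above.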
||
|
|
65df8b3886 |
hardswish: make it work in static quantization (#36545)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36545 * adds a quantized nn.module for Hardswish so we can observe activation values * modifies the hardswish op to allow specifying scale + zero_point * makes hardswish model be properly swapped in static quantization Test Plan: added tests and they pass for: * the new _out flavor of hardswish * QNNPACK changes * static quant e2e Imported from OSS Differential Revision: D21045320 fbshipit-source-id: ab7e52f0f54a7d5923ab6f58197022cc28c12354 |
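The op change can be pictured with a plain-Python simulation (no torch; all helper names here are made up for illustration): the quantized input is dequantized, float hardswish `x * clamp(x + 3, 0, 6) / 6` is applied, and the result is requantized with the caller-specified output scale and zero point.

```python
def hardswish(x):
    # hardswish(x) = x * relu6(x + 3) / 6
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # affine quantization, mirroring quint8-style quantize_per_tensor
    return min(max(int(round(x / scale)) + zero_point, qmin), qmax)

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

def quantized_hardswish(q, in_scale, in_zp, out_scale, out_zp):
    # dequantize -> float hardswish -> requantize with the *output* qparams,
    # which is what the new scale + zero_point arguments let callers choose
    return quantize(hardswish(dequantize(q, in_scale, in_zp)), out_scale, out_zp)
```

Letting the caller pick the output scale/zero_point matters because hardswish's output range differs from its input range, so the input's quantization parameters are not automatically a good fit for the output.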
||
|
|
4ef383d5db |
add type hints on recently added ops to make them scriptable (#35885)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35885 For the ops I added recently, ensure all the type hints are present so that the JIT can script them. We might want to look into a test for this in the future. Test Plan: scripting works for all of them now: https://gist.github.com/vkuzo/1d92fdea548ad596310fffcbe95e4438 Imported from OSS Differential Revision: D20818431 fbshipit-source-id: 0de61eaf70c08d625128c6fffd05788e6e5bb920 |
||
|
|
b4c4342747 |
hswish and hardsigmoid: improve docs (#35431)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35431 Resolving z-a-f's comments on earlier PRs on making the docblocks easier to read. Test Plan: render the new docblocks in http://rst.aaroniles.net/ CI Imported from OSS Differential Revision: D20658668 fbshipit-source-id: 5ea4a21d6b8dc9d744e2f4ede2f9d5d799fb902f |
||
|
|
f1efe51028 |
add quantized version of hardswish operator (#34820)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34820 Adds quantized version of hardswish, for common quantized operator coverage. Note: * we carry over scale and zero_point from the input to the output, because the range of the output is unbounded if x > 0 * we also skip the .out function to not allow the user to specify a custom scale+zp (flexible on this). Test Plan: ``` python test/test_quantized.py https://gist.github.com/vkuzo/f9b579315ed7f5fdb24839e3218d8465 ``` Imported from OSS Differential Revision: D20472905 fbshipit-source-id: 0f2a83e9f5f7b43485fa46caf30e756dc5d492a9 |
||
|
|
37b234a880 |
quantized hardsigmoid, take 2 (#34959)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34959 Adds quantized implementation of hardsigmoid. Original PR was https://github.com/pytorch/pytorch/pull/34607 and had to be reverted for a test breakage, trying again. Test Plan: tests benchmarks Imported from OSS Differential Revision: D20514212 fbshipit-source-id: cc7ae3b67757e2dde5c313c05ce60a0f2625d961 |
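For reference, hardsigmoid is `clamp((x + 3) / 6, 0, 1)`. A plain-Python float reference of the formula (illustrative only, not the quantized kernel):

```python
def hardsigmoid(x):
    # hardsigmoid(x) = clamp((x + 3) / 6, 0, 1)
    return min(max((x + 3.0) / 6.0, 0.0), 1.0)
```

Because the output range is exactly [0, 1], the kernel can choose the output quantization parameters itself, which is why the implementation intentionally does not let the user pass a scale and zero point.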
||
|
|
95f1cb34b9 |
Revert D20480546: adds quantized implementation of hard sigmoid
Test Plan: revert-hammer Differential Revision: D20480546 Original commit changeset: 9febcb44afd9 fbshipit-source-id: 4461b455e63448cf45237e23c988b492c3e0f1b0 |
||
|
|
58c5b6d306 |
adds quantized implementation of hard sigmoid (#34607)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34607 Adds quantized version of hardsigmoid activation. Note: not implementing the _ and .out versions is currently intended, because the implementation changes the scale and zp and it's nice to not allow the user to specify scale and zp. Let me know if we should handle this differently. Test Plan: tests benchmarks Imported from OSS Differential Revision: D20480546 fbshipit-source-id: 9febcb44afd920125ed2ca4900492f0b712078ea |
||
|
|
5d65b5cd01 |
Add the 3d upsample quantized op for video model (#34594)
Summary:
As titled: we are currently missing this 3d op, which is required for video-related models.
Performance benchmark:
```
import torch, time
for dtype in [torch.qint8, torch.quint8, torch.qint32]:
    print('****', str(dtype), '*****')
    x = torch.rand(1, 56, 64, 56, 256)
    q_x = torch.quantize_per_tensor(x, 0.5, 1, dtype)
    q_x = q_x.permute([0, 4, 1, 2, 3])
    x = x.permute([0, 4, 1, 2, 3])
    NITER = 100
    s = time.time()
    for i in range(NITER):
        float_out = torch.nn.functional.interpolate(x, size=30, scale_factor=None, mode="nearest", align_corners=None)
    time_per_iter_float = (time.time() - s) / NITER
    s = time.time()
    for i in range(NITER):
        quant_out = torch.nn.functional.interpolate(q_x, size=30, scale_factor=None, mode="nearest", align_corners=None)
    time_per_iter_quant = (time.time() - s) / NITER
    ref_quantized = torch.quantize_per_tensor(float_out, 0.5, 1, dtype)
    torch.testing.assert_allclose(ref_quantized.dequantize(), quant_out.dequantize())
    print('time/iter ms (float)', 'time/iter ms (quant)', 'quant/float', sep='\t')
    print(time_per_iter_float * 1000, time_per_iter_quant * 1000, time_per_iter_quant / time_per_iter_float, sep='\t')
    bytes_float = (x.numel() + float_out.numel()) * x.element_size()
    bytes_quant = (q_x.numel() + quant_out.numel()) * q_x.element_size()
    float_bw_gbps = bytes_float / time_per_iter_float / 1e9
    quant_bw_gbps = bytes_quant / time_per_iter_quant / 1e9
    print('GB/s float', 'GB/s quant', sep='\t')
    print(float_bw_gbps, quant_bw_gbps, sep='\t')
```
```
**** torch.qint8 *****
time/iter ms (float) time/iter ms (quant) quant/float
1136.8209528923035 1.294245719909668 0.0011384780660638283
GB/s float GB/s quant
0.20510608588517917 45.03953391792442
**** torch.quint8 *****
time/iter ms (float) time/iter ms (quant) quant/float
827.9890131950378 1.11464262008667 0.0013462046021426
GB/s float GB/s quant
0.28160868355034036 52.29678369508914
**** torch.qint32 *****
time/iter ms (float) time/iter ms (quant) quant/float
834.6958303451538 7.481417655944824 0.008963046638020456
GB/s float GB/s quant
0.2793459455806586 31.16640544920269
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34594
Differential Revision: D20389106
Pulled By: lly-zero-one
fbshipit-source-id: d3a8c2cac58087d8b29e9cae64822f5b2d4c03ba
|
||
|
|
43c9cc7a9c |
add quantized ELU activation (#34267)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34267 Adds quantized ELU. Test Plan: ``` python test/test_quantized.py TestQuantizedOps.test_qelu ``` still need to benchmark, saving that for after the review comments Imported from OSS Differential Revision: D20370953 fbshipit-source-id: fe941bf966f72dd9eee2c4b2ef45fe7afb50c866 |
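ELU in float is `x` for `x > 0` and `alpha * (exp(x) - 1)` otherwise; a minimal plain-Python reference of the formula the quantized op approximates (illustrative, not the kernel itself):

```python
import math

def elu(x, alpha=1.0):
    # elu(x) = x                      for x > 0
    #        = alpha * (exp(x) - 1)   for x <= 0
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```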
||
|
|
2e88a78d2e |
add quantized_hardtanh (#34097)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34097 Adds quantized hardtanh. Calls the clamp kernel behind the scenes. Test Plan: ``` python test/test_quantized.py ``` Imported from OSS Differential Revision: D20208860 fbshipit-source-id: 165a6a1c22f1dcc479679e5ea0c990d0e9c3b6c5 |
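Hardtanh is just a clamp to `[min_val, max_val]` (defaults `[-1, 1]`), which is why the quantized version can dispatch to the clamp kernel. A plain-Python float reference:

```python
def hardtanh(x, min_val=-1.0, max_val=1.0):
    # hardtanh(x) = clamp(x, min_val, max_val)
    return min(max(x, min_val), max_val)
```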
||
|
|
b0479506a8 |
Add the 3d avg pool for video related model (#33339)
Summary:
```
import torch, time
for dtype in [torch.qint8, torch.quint8, torch.qint32]:
    print('****', str(dtype), '*****')
    x = torch.rand(1, 5, 56, 56, 256)
    q_x = torch.quantize_per_tensor(x, 0.5, 1, dtype)
    q_x = q_x.permute([0, 4, 1, 2, 3])
    x = x.permute([0, 4, 1, 2, 3])
    NITER = 10
    s = time.time()
    for i in range(NITER):
        float_out = torch.nn.functional.avg_pool3d(x, kernel_size=3, stride=None, padding=0)
    time_per_iter_float = (time.time() - s) / NITER
    s = time.time()
    for i in range(NITER):
        quant_out = torch.nn.quantized.functional.avg_pool3d(q_x, kernel_size=3, stride=None, padding=0)
    time_per_iter_quant = (time.time() - s) / NITER
    print('time/iter ms (float)', 'time/iter ms (quant)', 'quant/float', sep='\t')
    print(time_per_iter_float * 1000, time_per_iter_quant * 1000, time_per_iter_quant / time_per_iter_float, sep='\t')
```
```
**** torch.qint8 *****
time/iter ms (float) time/iter ms (quant) quant/float
16.286182403564453 0.7308721542358398 0.04487682479080417
**** torch.quint8 *****
time/iter ms (float) time/iter ms (quant) quant/float
15.364313125610352 0.6497383117675781 0.042288796541418254
**** torch.qint32 *****
time/iter ms (float) time/iter ms (quant) quant/float
15.649032592773438 13.879132270812988 0.8869003363966556
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33339
Differential Revision: D19900904
Pulled By: lly-zero-one
fbshipit-source-id: 4522cc6b4a0751aeda6c7edc258e0cb3f55a8fe3
|
||
|
|
b10761d890 |
fix type stub errors (#33762)
Summary:
I've been using PyTorch with type hints, and I found errors that can be easily fixed, so I'm creating this PR to fix these type bugs.
I expected the code below to type-check without any errors.
```python
import torch
from torch.nn import Linear
from torch.autograd import Variable
from torch.optim import AdamW
from torch.utils import hooks
# nn.Module should have training attribute
module = Linear(10, 20)
module.training
# torch should have dtype bfloat16
tensor2 = torch.tensor([1,2,3], dtype=torch.bfloat16)
# torch.Tensor.cuda should accept int or str value
torch.randn(5).cuda(1)
torch.tensor(5).cuda('cuda:0')
# optimizer should have default attribute
module = Linear(10, 20)
print(AdamW(module.weight).default)
# torch.Tensor should have these boolean attributes
torch.tensor([1]).is_sparse
torch.tensor([1]).is_quantized
torch.tensor([1]).is_mkldnn
# Size class should tuple of int
a, b = torch.tensor([[1,2,3]]).size()
# check modules can be accessed
torch.nn.parallel
torch.autograd.profiler
torch.multiprocessing
torch.sparse
torch.onnx
torch.jit
torch.hub
torch.random
torch.distributions
torch.quantization
torch.__config__
torch.__future__
torch.ops
torch.classes
# Variable class's constructor should return Tensor
def fn_to_test_variable(t: torch.Tensor):
    return None
v = Variable(torch.tensor(1))
fn_to_test_variable(v)
# check RemovableHandle attributes can be accessed
handle = hooks.RemovableHandle({})
handle.id
handle.next_id
# check torch function hints
torch.is_grad_enabled()
```
But the current master branch raises errors (I checked with pyright).
```
$ pyright test.py
Searching for source files
Found 1 source file
test.py
12:45 - error: 'bfloat16' is not a known member of module
15:21 - error: Argument of type 'Literal[1]' cannot be assigned to parameter 'device' of type 'Optional[device]'
'int' is incompatible with 'device'
Cannot assign to 'None'
16:22 - error: Argument of type 'Literal['cuda:0']' cannot be assigned to parameter 'device' of type 'Optional[device]'
'str' is incompatible with 'device'
Cannot assign to 'None'
23:19 - error: Cannot access member 'is_sparse' for type 'Tensor'
Member 'is_sparse' is unknown
24:19 - error: Cannot access member 'is_quantized' for type 'Tensor'
Member 'is_quantized' is unknown
25:19 - error: Cannot access member 'is_mkldnn' for type 'Tensor'
Member 'is_mkldnn' is unknown
32:7 - error: 'autograd' is not a known member of module
33:7 - error: 'multiprocessing' is not a known member of module
34:7 - error: 'sparse' is not a known member of module
35:7 - error: 'onnx' is not a known member of module
36:7 - error: 'jit' is not a known member of module
37:7 - error: 'hub' is not a known member of module
38:7 - error: 'random' is not a known member of module
39:7 - error: 'distributions' is not a known member of module
40:7 - error: 'quantization' is not a known member of module
41:7 - error: '__config__' is not a known member of module
42:7 - error: '__future__' is not a known member of module
44:7 - error: 'ops' is not a known member of module
45:7 - error: 'classes' is not a known member of module
60:7 - error: 'is_grad_enabled' is not a known member of module
20 errors, 0 warnings
Completed in 1.436sec
```
The items in the list below are not flagged as errors, but I think they are errors too.
* `nn.Module.training` is not boolean
* return type of `torch.Tensor.size()` is `Tuple[Unknown]`.
---
Related issues:
https://github.com/pytorch/pytorch/issues/23731, https://github.com/pytorch/pytorch/issues/32824, https://github.com/pytorch/pytorch/issues/31753
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33762
Differential Revision: D20118884
Pulled By: albanD
fbshipit-source-id: 41557d66674a11b8e7503a48476d4cdd0f278eab
|
||
|
|
a23009f98f |
Quantized leaky relu
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33004 Test Plan: Imported from OSS Differential Revision: D19740193 Pulled By: z-a-f fbshipit-source-id: 32542d5465db44190366a2f8b737305a03b5fa76 |
||
|
|
a2463cbc38 |
Adding quantized clamp kernel (#30541)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30541 ghstack-source-id: 95450749 Adding quantized clamp kernel Test Plan: Added test. buck test mode/dev //caffe2/test:quantized -- 'test_qclamp \(test_quantized\.TestQuantizedOps\)' --print-passing-details Differential Revision: D18739628 fbshipit-source-id: 38a029ab96c5b0689bb15c67dc4f274883e74975 |
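A useful property of affine quantization is that clamp can run directly on the integer representation: quantize the float bounds with the tensor's scale and zero point, then clamp the raw quantized values. A plain-Python sketch (helper names are made up for illustration, not the kernel's API):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # affine quantization to quint8-style integers
    return min(max(int(round(x / scale)) + zero_point, qmin), qmax)

def qclamp(q_values, scale, zero_point, min_val, max_val):
    # clamp in the integer domain: quantize the bounds once, then clamp
    # the raw quantized values -- no dequantize/requantize round trip
    q_min = quantize(min_val, scale, zero_point)
    q_max = quantize(max_val, scale, zero_point)
    return [min(max(q, q_min), q_max) for q in q_values]
```

Since affine quantization is monotonic, clamping the integer values is equivalent to clamping in float and requantizing, but much cheaper.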
||
|
|
2d6b2f39e9 |
Fix docs so that the example works (#30120)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30120 The example given for functional conv2d didn't work. This diff fixes the example in docs so that it works. Fixes https://github.com/pytorch/pytorch/issues/29649 ghstack-source-id: 94601559 Test Plan: Tried the example locally Differential Revision: D18604606 fbshipit-source-id: ff1a4f903e2843efe30d962d4ff00e5065cd1d7e |
||
|
|
bf80664515 |
Add quantized conv3d function (#29686)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29686 Add quantized conv3d function Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv" Reviewed By: hl475 Differential Revision: D18463090 fbshipit-source-id: f9c3d2920c3fc015bbb2b6a583a582c9f8397b08 |
||
|
|
bbff06ee96 |
Convert conv_prepack to conv2d_prepack and conv_unpack to conv2d_unpack (#29529)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29529 Pull Request resolved: https://github.com/pytorch/glow/pull/3771 We would like to replace `conv_prepack` with `conv2d_prepack` and `conv_unpack` with `conv2d_unpack`. This makes the naming consistent between 2D and 3D conv: ``` torch.ops.quantized.conv2d_prepack torch.ops.quantized.conv2d_unpack torch.ops.quantized.conv2d torch.ops.quantized.conv3d_prepack torch.ops.quantized.conv3d_unpack torch.ops.quantized.conv3d ``` We should do this earlier rather than later when we have more users for the quantized conv2d ops, for better engineering. The replacement bash command is as the follows: ``` find ./ -type f -exec sed -i -e 's/quantized::conv_prepack/quantized::conv2d_prepack/g' {} \; find ./ -type f -exec sed -i -e 's/quantized::conv_unpack/quantized::conv2d_unpack/g' {} \; find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_prepack/torch.ops.quantized.conv2d_prepack/g' {} \; find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_unpack/torch.ops.quantized.conv2d_unpack/g' {} \; ``` ghstack-source-id: 93661879 Test Plan: CI Reviewed By: jackm321 Differential Revision: D18421079 fbshipit-source-id: 17ae8b1ee79223bd2c5d4bbccd57af6580c4ab12 |
||
|
|
7b3881f68c |
Adding docstrings for nnq.functional
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27363 Test Plan: Imported from OSS Differential Revision: D17758907 Pulled By: zafartahirov fbshipit-source-id: f560f2726cf51ceebdbf22ebef2d067422340cf2 |
||
|
|
03007b3dda |
Quantized Interpolate Kernel(upsample_bilinear2d) (#26631)
Summary:
This PR implements the quantized upsample_bilinear2d case of the interpolate kernel.
Benchmark for the NHWC performance improvement:
```python
import torch, time

for dtype in [torch.qint8, torch.quint8, torch.qint32]:
    print('****', str(dtype), '*****')
    x = torch.rand(1, 56, 56, 256)
    q_x = torch.quantize_per_tensor(x, 0.5, 1, dtype)
    q_x = q_x.permute([0, 3, 1, 2])
    x = x.permute([0, 3, 1, 2])
    NITER = 100

    s = time.time()
    for i in range(NITER):
        float_out = torch.nn.functional.interpolate(x, size=5, scale_factor=None, mode="bilinear", align_corners=True)
    time_per_iter_float = (time.time() - s) / NITER

    s = time.time()
    for i in range(NITER):
        quant_out = torch.nn.quantized.functional.interpolate(q_x, size=5, scale_factor=None, mode="bilinear", align_corners=True)
    time_per_iter_quant = (time.time() - s) / NITER

    ref_quantized = torch.quantize_per_tensor(float_out, 0.5, 1, dtype)
    # torch.testing.assert_allclose(ref_quantized.dequantize(), quant_out.dequantize())

    print('time/iter ms (float)', 'time/iter ms (quant)', 'quant/float', sep='\t')
    print(time_per_iter_float * 1000, time_per_iter_quant * 1000, time_per_iter_quant / time_per_iter_float, sep='\t')

    bytes_float = (x.numel() + float_out.numel()) * x.element_size()
    bytes_quant = (q_x.numel() + quant_out.numel()) * q_x.element_size()
    float_bw_gbps = bytes_float / time_per_iter_float / 1e9
    quant_bw_gbps = bytes_quant / time_per_iter_quant / 1e9
    print('GB/s float', 'GB/s quant', sep='\t')
    print(float_bw_gbps, quant_bw_gbps, sep='\t')
```
===========without nhwc handling===========

| dtype | time/iter ms (float) | time/iter ms (quant) | quant/float | GB/s float | GB/s quant |
| --- | --- | --- | --- | --- | --- |
| torch.qint8 | 1.9990 | 2.5861 | 1.2937 | 1.6192 | 0.3129 |
| torch.quint8 | 2.0273 | 2.6062 | 1.2855 | 1.5966 | 0.3105 |
| torch.qint32 | 2.0180 | 2.4047 | 1.1916 | 1.6040 | 1.3460 |

===========with nhwc handling===========

| dtype | time/iter ms (float) | time/iter ms (quant) | quant/float | GB/s float | GB/s quant |
| --- | --- | --- | --- | --- | --- |
| torch.qint8 | 2.0913 | 0.0970 | 0.0464 | 1.5478 | 8.3455 |
| torch.quint8 | 2.1066 | 0.0996 | 0.0473 | 1.5366 | 8.1247 |
| torch.qint32 | 2.0442 | 0.6004 | 0.2937 | 1.5834 | 5.3916 |
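The GB/s figures follow directly from the tensor sizes in the benchmark script: bytes moved = (input numel + output numel) × element size, divided by time per iteration. A quick pure-Python check, with sizes taken from the script (input (1, 256, 56, 56), output (1, 256, 5, 5)):

```python
# input + output elements, per the benchmark script's shapes
numel = 1 * 256 * 56 * 56 + 1 * 256 * 5 * 5

def gbps(elem_size_bytes, ms_per_iter):
    # bytes moved per iteration / seconds per iteration / 1e9
    return numel * elem_size_bytes / (ms_per_iter / 1000) / 1e9

# qint8 with nhwc handling: 1 byte/element, ~0.09696 ms/iter -> ~8.35 GB/s
print(gbps(1, 0.09696483612060547))
# float32 baseline: 4 bytes/element, ~2.0913 ms/iter -> ~1.55 GB/s
print(gbps(4, 2.0913314819335938))
```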
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26631
Differential Revision: D17521498
Pulled By: llyfacebook
fbshipit-source-id: 385ae0f77777cd8bee385cafb80e492127b7d103
|
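For reference, the interpolation that upsample_bilinear2d performs can be sketched in pure Python. This operates on a plain 2D list rather than a quantized tensor, uses the align_corners=True convention from the benchmark above, and is only an illustration of the math, not the actual kernel:

```python
import math

def upsample_bilinear_2d(img, out_h, out_w):
    # img: H x W list of lists of floats; align_corners=True mapping
    in_h, in_w = len(img), len(img[0])
    scale_h = (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
    scale_w = (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
    out = []
    for i in range(out_h):
        y = i * scale_h
        y0 = int(math.floor(y))
        y1 = min(y0 + 1, in_h - 1)
        wy = y - y0
        row = []
        for j in range(out_w):
            x = j * scale_w
            x0 = int(math.floor(x))
            x1 = min(x0 + 1, in_w - 1)
            wx = x - x0
            # interpolate horizontally on the two bracketing rows, then vertically
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

# corners are preserved with align_corners=True; the center is the mean
print(upsample_bilinear_2d([[0.0, 1.0], [2.0, 3.0]], 3, 3))
```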