Commit Graph

93 Commits

Xuehai Pan
596b418391 [BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144548
Approved by: https://github.com/ezyang
2025-06-14 11:27:04 +00:00
Xuehai Pan
b5c006acac [BE][Easy] enable UFMT for torch/nn/ (#128865)
Part of #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128865
Approved by: https://github.com/ezyang
2024-07-25 02:48:42 +00:00
NVS Abhilash
eb5487361d docs: fix docstring errors in quantized modules and others (#112695)
Fixes #112632

Before: 171
```
torch/backends/_nnapi/prepare.py:24 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/_nnapi/prepare.py:46 in public method `init`:
        D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:60 in public method `forward`:
        D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`:
        D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`:
        D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:177 in private nested class `ShapeComputeModule`:
        D400: First line should end with a period (not 'n')
torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:172 in public function `change_element`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`:
        D102: Missing docstring in public method
torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:312 in public function `flex_name`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`:
        D400: First line should end with a period (not 's')
torch/backends/_nnapi/serializer.py:1337 in private method `_do_add_binary`:
        D401: First line should be in imperative mood; try rephrasing (found 'Helper')
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`:
        D202: No blank lines allowed after function docstring (found 1)
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`:
        D205: 1 blank line required between summary line and description (found 0)
torch/backends/_nnapi/serializer.py:2180 in public function `serialize_model`:
        D400: First line should end with a period (not ':')
torch/backends/cuda/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/cuda/__init__.py:30 in public function `is_built`:
        D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:30 in public function `is_built`:
        D209: Multi-line docstring closing quotes should be on a separate line
torch/backends/cuda/__init__.py:30 in public function `is_built`:
        D400: First line should end with a period (not 's')
torch/backends/cuda/__init__.py:30 in public function `is_built`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cuda/__init__.py:37 in public class `cuFFTPlanCacheAttrContextProp`:
        D101: Missing docstring in public class
torch/backends/cuda/__init__.py:40 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:44 in public method `__get__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:47 in public method `__set__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`:
        D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:54 in public class `cuFFTPlanCache`:
        D400: First line should end with a period (not 'e')
torch/backends/cuda/__init__.py:60 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:73 in public method `clear`:
        D102: Missing docstring in public method
torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`:
        D205: 1 blank line required between summary line and description (found 0)
torch/backends/cuda/__init__.py:78 in public class `cuFFTPlanCacheManager`:
        D400: First line should end with a period (not ',')
torch/backends/cuda/__init__.py:89 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:93 in public method `__getitem__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:106 in public method `__getattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:109 in public method `__setattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:116 in public class `cuBLASModule`:
        D101: Missing docstring in public class
torch/backends/cuda/__init__.py:117 in public method `__getattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:126 in public method `__setattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:147 in public function `preferred_linalg_library`:
        D202: No blank lines allowed after function docstring (found 1)
torch/backends/cuda/__init__.py:204 in public class `SDPBackend`:
        D204: 1 blank line required after class docstring (found 0)
torch/backends/cudnn/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/cudnn/__init__.py:81 in public function `version`:
        D400: First line should end with a period (not 'N')
torch/backends/cudnn/__init__.py:81 in public function `version`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cudnn/__init__.py:95 in public function `is_available`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`:
        D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:122 in public function `set_flags`:
        D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:150 in public function `flags`:
        D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`:
        D101: Missing docstring in public class
torch/backends/cudnn/__init__.py:175 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/mkl/__init__.py:5 in public function `is_available`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/mkl/__init__.py:14 in public class `verbose`:
        D205: 1 blank line required between summary line and description (found 0)
torch/backends/mkl/__init__.py:14 in public class `verbose`:
        D400: First line should end with a period (not 'y')
torch/backends/mkl/__init__.py:41 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:44 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/backends/mkl/__init__.py:53 in public method `__exit__`:
        D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/mkldnn/__init__.py:9 in public function `is_available`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/backends/mkldnn/__init__.py:19 in public class `verbose`:
        D205: 1 blank line required between summary line and description (found 0)
torch/backends/mkldnn/__init__.py:19 in public class `verbose`:
        D400: First line should end with a period (not 'y')
torch/backends/mkldnn/__init__.py:47 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/mkldnn/__init__.py:50 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:59 in public method `__exit__`:
        D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:64 in public function `set_flags`:
        D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:71 in public function `flags`:
        D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:81 in public class `MkldnnModule`:
        D101: Missing docstring in public class
torch/backends/mkldnn/__init__.py:82 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/openmp/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/openmp/__init__.py:5 in public function `is_available`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/intrinsic/qat/modules/conv_fused.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/intrinsic/qat/modules/linear_fused.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/intrinsic/qat/modules/linear_relu.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/__init__.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/dynamic/__init__.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/dynamic/modules/linear.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/modules/__init__.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/modules/conv.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/modules/embedding_ops.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/qat/modules/linear.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantizable/modules/activation.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantizable/modules/rnn.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/__init__.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/conv.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/linear.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/rnn.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/sparse.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/_reference/modules/utils.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/__init__.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/conv.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/linear.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/dynamic/modules/rnn.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/functional.py:1 at module level:
        D400: First line should end with a period (not 'l')
torch/nn/quantized/modules/__init__.py:1 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/activation.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/batchnorm.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/conv.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/dropout.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/embedding_ops.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/functional_modules.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/linear.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/normalization.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/rnn.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/quantized/modules/utils.py:2 at module level:
        D400: First line should end with a period (not 's')
torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`:
        D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`:
        D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/conv_utils.py:189 in public function `unfold3d`:
        D401: First line should be in imperative mood (perhaps 'Extract', not 'Extracts')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`:
        D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`:
        D300: Use """triple double quotes""" (found '''-quotes)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:19 in public function `standard_kwargs`:
        D400: First line should end with a period (not 'e')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`:
        D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`:
        D300: Use """triple double quotes""" (found '''-quotes)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:28 in public function `forward_helper`:
        D400: First line should end with a period (not ')')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:84 in public function `maybe_scale_by_batch_size`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:90 in public function `set_grad_sample_if_exists`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:108 in public function `unpack_expanded_weight_or_tensor`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`:
        D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`:
        D400: First line should end with a period (not 't')
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:123 in public function `sum_over_all_but_batch_and_last_n`:
        D401: First line should be in imperative mood (perhaps 'Calculate', not 'Calculates')
torch/nn/utils/convert_parameters.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
        D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
        D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
        D400: First line should end with a period (not 'd')
torch/nn/utils/convert_parameters.py:57 in private function `_check_param_device`:
        D401: First line should be in imperative mood; try rephrasing (found 'This')
torch/nn/utils/rnn.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/utils/rnn.py:28 in public class `PackedSequence`:
        D204: 1 blank line required after class docstring (found 0)
torch/nn/utils/rnn.py:63 in public method `__new__`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:73 in public method `pin_memory`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:80 in public method `cuda`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:87 in public method `cpu`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:94 in public method `double`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:97 in public method `float`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:100 in public method `half`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:103 in public method `long`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:106 in public method `int`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:109 in public method `short`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:112 in public method `char`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:115 in public method `byte`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:119 in public method `to`:
        D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:119 in public method `to`:
        D401: First line should be in imperative mood (perhaps 'Perform', not 'Performs')
torch/nn/utils/rnn.py:146 in public method `is_cuda`:
        D400: First line should end with a period (not 'u')
torch/nn/utils/rnn.py:150 in public method `is_pinned`:
        D400: First line should end with a period (not 'y')
torch/nn/utils/rnn.py:150 in public method `is_pinned`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/utils/rnn.py:198 in public function `invert_permutation`:
        D103: Missing docstring in public function
torch/nn/utils/rnn.py:274 in public function `pad_packed_sequence`:
        D401: First line should be in imperative mood (perhaps 'Pad', not 'Pads')
torch/nn/utils/rnn.py:347 in public function `pad_sequence`:
        D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:347 in public function `pad_sequence`:
        D400: First line should end with a period (not '`')
torch/nn/utils/rnn.py:408 in public function `unpad_sequence`:
        D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:408 in public function `unpad_sequence`:
        D400: First line should end with a period (not 's')
torch/nn/utils/rnn.py:454 in public function `pack_sequence`:
        D400: First line should end with a period (not 's')
torch/nn/utils/rnn.py:490 in public function `unpack_sequence`:
        D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/rnn.py:490 in public function `unpack_sequence`:
        D400: First line should end with a period (not 's')
171
```

After: 81
```
torch/backends/_nnapi/prepare.py:24 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/_nnapi/prepare.py:46 in public method `init`:
        D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:60 in public method `forward`:
        D102: Missing docstring in public method
torch/backends/_nnapi/prepare.py:94 in public function `convert_model_to_nnapi`:
        D103: Missing docstring in public function
torch/backends/_nnapi/prepare.py:153 in public function `process_for_nnapi`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:19 in public class `NNAPI_OperandCode`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:35 in public class `NNAPI_OperationCode`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:133 in public class `NNAPI_FuseCode`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:140 in public class `OperandValueSourceType`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:150 in public class `TorchScalarTypes`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:154 in public function `approx_equal`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:158 in public function `tensor_size`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:172 in public function `change_element`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:194 in public class `DimOrder`:
        D101: Missing docstring in public class
torch/backends/_nnapi/serializer.py:225 in public method `use_nchw`:
        D102: Missing docstring in public method
torch/backends/_nnapi/serializer.py:233 in public function `broadcast_shapes`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:260 in public function `get_conv_pool_shape`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:284 in public function `fix_shape`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:301 in public function `reverse_map_dim`:
        D103: Missing docstring in public function
torch/backends/_nnapi/serializer.py:312 in public function `flex_name`:
        D103: Missing docstring in public function
torch/backends/cuda/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/cuda/__init__.py:39 in public class `cuFFTPlanCacheAttrContextProp`:
        D101: Missing docstring in public class
torch/backends/cuda/__init__.py:42 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:46 in public method `__get__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:49 in public method `__set__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:63 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:76 in public method `clear`:
        D102: Missing docstring in public method
torch/backends/cuda/__init__.py:91 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/cuda/__init__.py:95 in public method `__getitem__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:108 in public method `__getattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:111 in public method `__setattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:118 in public class `cuBLASModule`:
        D101: Missing docstring in public class
torch/backends/cuda/__init__.py:119 in public method `__getattr__`:
        D105: Missing docstring in magic method
torch/backends/cuda/__init__.py:128 in public method `__setattr__`:
        D105: Missing docstring in magic method
torch/backends/cudnn/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/cudnn/__init__.py:99 in public function `is_acceptable`:
        D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:122 in public function `set_flags`:
        D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:150 in public function `flags`:
        D103: Missing docstring in public function
torch/backends/cudnn/__init__.py:174 in public class `CudnnModule`:
        D101: Missing docstring in public class
torch/backends/cudnn/__init__.py:175 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/mkl/__init__.py:42 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/mkl/__init__.py:45 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/backends/mkl/__init__.py:54 in public method `__exit__`:
        D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/mkldnn/__init__.py:48 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/mkldnn/__init__.py:51 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:60 in public method `__exit__`:
        D105: Missing docstring in magic method
torch/backends/mkldnn/__init__.py:65 in public function `set_flags`:
        D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:72 in public function `flags`:
        D103: Missing docstring in public function
torch/backends/mkldnn/__init__.py:82 in public class `MkldnnModule`:
        D101: Missing docstring in public class
torch/backends/mkldnn/__init__.py:83 in public method `__init__`:
        D107: Missing docstring in __init__
torch/backends/openmp/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/nn/utils/_expanded_weights/conv_utils.py:13 in public function `conv_picker`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:23 in public function `conv_args_and_kwargs`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:31 in public function `conv_normalizer`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:35 in public function `conv_input_for_string_padding`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:43 in public function `int_padding_for_string_padding`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:59 in public function `conv_padding_for_same`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:66 in public function `conv_backward`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:131 in public function `conv_unfold_weight_grad_sample`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/conv_utils.py:166 in public function `conv_group_weight_grad_sample`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:6 in public function `is_batch_first`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:87 in public function `maybe_scale_by_batch_size`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:93 in public function `set_grad_sample_if_exists`:
        D103: Missing docstring in public function
torch/nn/utils/_expanded_weights/expanded_weights_utils.py:111 in public function `unpack_expanded_weight_or_tensor`:
        D103: Missing docstring in public function
torch/nn/utils/convert_parameters.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/utils/rnn.py:1 at module level:
        D100: Missing docstring in public module
torch/nn/utils/rnn.py:64 in public method `__new__`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:74 in public method `pin_memory`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:81 in public method `cuda`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:88 in public method `cpu`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:95 in public method `double`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:98 in public method `float`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:101 in public method `half`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:104 in public method `long`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:107 in public method `int`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:110 in public method `short`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:113 in public method `char`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:116 in public method `byte`:
        D102: Missing docstring in public method
torch/nn/utils/rnn.py:198 in public function `invert_permutation`:
        D103: Missing docstring in public function
81
```
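
For reference, most of the remaining D300/D400/D401 hits above are resolved by the kind of one-line docstring edit this PR applies elsewhere; a minimal sketch with a hypothetical function:

```python
# Before: triggers D300 ('''-quotes), D400 (no trailing period),
# and D401 (summary not in imperative mood).
def is_available():
    '''Returns whether MKL is available'''
    return True


# After: """-quotes, imperative summary line, trailing period.
def is_available():
    """Return whether MKL is available."""
    return True
```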

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112695
Approved by: https://github.com/mikaylagawarecki
2023-11-07 23:52:16 +00:00
Sergii Dymchenko
a65b88d516 Import forgotten pack_weight_bias in rnn.py (#84315)
`pack_weight_bias` is exported in `__all__`, but the actual import was lost during the migration in https://github.com/pytorch/pytorch/pull/78714.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84315
Approved by: https://github.com/seemethere, https://github.com/malfet
2022-09-01 22:57:50 +00:00
zaf
d32a762147 [quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714)
Context: To avoid cluttering the `torch.nn` namespace, the quantized
modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [X] [Current PR] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location;
however, specific files need to be double-checked:

- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
- [BC test](test/quantization/bc/test_backward_compatibility.py) @vkuzo
- [IR emitter](torch/csrc/jit/frontend/ir_emitter.cpp) @jamesr66a
- [JIT serialization](torch/csrc/jit/serialization/import_source.cpp) @IvanKobzarev @jamesr66a

Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860660/)!

Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78714
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:34 +00:00
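
Such namespace moves typically leave a thin forwarding module behind so old imports keep working during the deprecation window; a minimal sketch of that pattern (illustrative only, not the exact shim this PR ships):

```python
# torch/nn/quantized/dynamic/__init__.py (illustrative forwarding shim)
# Keep `import torch.nn.quantized.dynamic` working by re-exporting
# everything from the new torch.ao location.
from torch.ao.nn.quantized.dynamic import *  # noqa: F401,F403
```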
zaf
c92e5ac95b [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: To avoid cluttering the `torch.nn` namespace, the quantized
modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location;
however, specific files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012/)

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
PyTorch MergeBot
6a9c02339d Revert "[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)"
This reverts commit 432f037498.

Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to breaking the (trunk-only) iOS build
2022-08-22 07:32:37 +00:00
PyTorch MergeBot
b1a7b67529 Revert "[quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714)"
This reverts commit e6fb97d8ae.

Reverted https://github.com/pytorch/pytorch/pull/78714 on behalf of https://github.com/janeyx99 so that https://github.com/pytorch/pytorch/pull/78713 could be cleanly reverted
2022-08-22 07:30:48 +00:00
zaf
e6fb97d8ae [quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714)
Context: To avoid cluttering the `torch.nn` namespace, the quantized
modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [X] [Current PR] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location;
however, specific files need to be double-checked:

- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
- [BC test](test/quantization/bc/test_backward_compatibility.py) @vkuzo
- [IR emitter](torch/csrc/jit/frontend/ir_emitter.cpp) @jamesr66a
- [JIT serialization](torch/csrc/jit/serialization/import_source.cpp) @IvanKobzarev @jamesr66a

Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860660/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78714
Approved by: https://github.com/jerryzh168
2022-08-22 05:22:00 +00:00
zaf
432f037498 [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: To avoid cluttering the `torch.nn` namespace, the quantized
modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location;
however, specific files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00
joncrall
4618371da5 Integrate xdoctest - Rebased (#82797)
This is a new version of #15648 based on the latest master branch.

Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (Unfortunately I don't have a tool that will insert the `# xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)

Fixes https://github.com/pytorch/pytorch/issues/71105

@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
2022-08-12 02:08:01 +00:00
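
The skip directive mentioned above lives directly inside the docstring example it silences; a minimal sketch with a hypothetical function:

```python
def add_one(x):
    """Add one to ``x``.

    Example:
        >>> # xdoctest: +SKIP
        >>> add_one(1)
        2
    """
    return x + 1
```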
anjali411
f68f77610a Add __all__ to torch.nn.quantized, fx.passes, ao.nn and amp submodules (#80376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80376
Approved by: https://github.com/albanD
2022-06-27 21:36:27 +00:00
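
Declaring `__all__` pins down a module's public surface so wildcard imports and public-API tests stay predictable; a minimal sketch (the names here are illustrative, not the actual lists this PR adds):

```python
# At the top of a submodule such as torch/nn/quantized/__init__.py:
__all__ = [
    "Linear",
    "Conv2d",
    "DeQuantize",
]
```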
Jerry Zhang
7ddf212f33 [quant][fx] Fully align convert with the reference model design and simplify the implementation (#73863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73863

This PR fully aligns the convert function with the design: https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md
and simplifies the implementation of the convert function by always producing a reference quantized model (with reference patterns) first,
and then lowering the model to a quantized model that is runnable with the PyTorch native backend (fbgemm/qnnpack).

This PR makes convert.py much easier to understand than the previous implementation, and we are able to remove the majority of the code
in quantization_patterns.py as well (in follow-up PRs).

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestFXNumericSuiteCoreAPIs
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels
```
and other internal/oss regression tests

Imported from OSS

Reviewed By: andrewor14

Differential Revision: D34778506

fbshipit-source-id: 0678b66addf736039a8749b352f6f569caca962b
(cherry picked from commit 33ec9caf23f3ab373d827117efbd9db0668b2437)
2022-03-11 17:11:30 +00:00
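
For orientation, the prepare/convert flow this PR reworks looks roughly like the following in the present-day API (signatures changed across releases, so treat this as a sketch rather than the interface as of this commit):

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 4),)

# Insert observers according to the qconfig mapping.
prepared = prepare_fx(model, get_default_qconfig_mapping("fbgemm"), example_inputs)
prepared(*example_inputs)  # calibrate

# convert_fx builds a reference quantized model first, then lowers it
# to the native fbgemm/qnnpack backend.
quantized = convert_fx(prepared)
```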
Steven Troxler
33b7e6ff23 Convert type comments to annotations in torch/nn (#72662)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72662

This commit was produced by running
```
python -m libcst.tool codemod --no-format --jobs=1 convert_type_comments.ConvertTypeComments caffe2/torch/nn/ --no-quote-annotations
```
and then manually fixing unreadable lines by breaking up very
long function definitions (unfortunately
it's very difficult to fully automate transforms of code that
isn't autoformatted).

Test Plan:
Wait for CI. This should be safe since the types all appear to be valid, but it's
always good to let the jit tests run; in some cases we find typing errors that
crash tests.

Reviewed By: jbschlosser, albanD

Differential Revision: D34147388

fbshipit-source-id: 40701228837a927b54239ab87699b4b3169546b7
(cherry picked from commit 05a900c43f)
2022-02-11 06:35:42 +00:00
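
The codemod turns Py2-style type comments into inline annotations; a minimal before/after sketch (hypothetical function, not a line from the diff):

```python
from typing import Optional

from torch import Tensor


# Before the codemod: a Py2-style type comment.
def scale_old(x, factor=None):
    # type: (Tensor, Optional[float]) -> Tensor
    return x if factor is None else x * factor


# After the codemod: the same signature as inline annotations.
def scale_new(x: Tensor, factor: Optional[float] = None) -> Tensor:
    return x if factor is None else x * factor
```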
Charles David Hernandez
83b45fe166 [ao] disabling dynamic conv/convT ops (#71110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71110

As mentioned in https://github.com/pytorch/pytorch/issues/70480, the dynamic conv ops are currently missing a key feature needed to bring their performance in line with other dynamic ops, so this diff disables conv/convT from being automatically quantized during dynamic convert

Test Plan: buck test //caffe2/test:quantization --test-selectors test_quantized_module#TestDynamicQuantizedModule

Reviewed By: vkuzo

Differential Revision: D33511152

fbshipit-source-id: 50618fbe734c898664c390f896e70c68f1df3208
2022-01-13 11:28:02 -08:00
Charles David Hernandez
09615cd0b0 Adding Dynamic Conv and ConvT ops/modules (#68176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68176

It should be noted that for the modules, `reduce_range` is set to
true by default, in a similar fashion to linear_dynamic.

Test Plan:
python test/test_quantization.py TestDynamicQuantizedModule
python test/test_quantization.py TestDynamicQuantizedConv
python test/test_quantization.py TestQuantizedConv

Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D32374003

fbshipit-source-id: 011562bd0f4d817387d53bb113df2600aa60a7a3
2021-11-15 16:42:25 -08:00
andrewor
4a8f27445d [Quant] Add dynamic QAT Linear module (#67325)
Summary:
**Summary:** This commit adds the `torch.nn.qat.dynamic.modules.Linear`
module, the dynamic counterpart to `torch.nn.qat.modules.Linear`.
Functionally these are very similar, except the dynamic version
expects a memoryless observer and is converted into a dynamically
quantized module before inference.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67325

Test Plan:
`python3 test/test_quantization.py TestQuantizationAwareTraining.test_dynamic_qat_linear`

**Reviewers:** Charles David Hernandez, Jerry Zhang

**Subscribers:** Charles David Hernandez, Supriya Rao, Yining Lu

**Tasks:** 99696812

**Tags:** pytorch

Reviewed By: malfet, jerryzh168

Differential Revision: D32178739

Pulled By: andrewor14

fbshipit-source-id: 5051bdd7e06071a011e4e7d9cc7769db8d38fd73
2021-11-08 10:24:25 -08:00
Vasiliy Kuznetsov
8b1258698e Improve quantization API docs (#66379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66379

Description:

Creates a quantization API reference and fixes all the docblock errors.

This is #66122 to #66210 squashed together

Test Plan:
```
cd docs
make html
python -m http.server
// open webpage, inspect it, looks good
```

Reviewed By: ejguan

Differential Revision: D31543172

Pulled By: vkuzo

fbshipit-source-id: 9131363d6528337e9f100759654d3f34f02142a9
2021-10-11 18:46:11 -07:00
Mike Ruberry
09c3e6002b Revert D31447615: Quantization docs: rewrite API reference to be more automated
Test Plan: revert-hammer

Differential Revision:
D31447615 (7d2526ab20)

Original commit changeset: 09874ad9629f

fbshipit-source-id: 0963c9f5118e243cd299f8cded2bf7b0848a7105
2021-10-10 01:51:05 -07:00
Vasiliy Kuznetsov
7d2526ab20 Quantization docs: rewrite API reference to be more automated (#66201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66201

Description:

This PR switches the quantization API reference to use `autosummary`
for each section.  We define the sections and manually write a list
of modules/functions/methods to include, and sphinx does the rest.
The result is a single page where we have every quantization function
and module with a quick autogenerated blurb, and user can click
through to each of them for a full documentation page.

This mimics how the `torch.nn` and `torch.nn.functional` doc
pages are set up.

In detail, for each section before this PR:
* creates a new section using `autosummary`
* adds all modules/functions/methods which were previously in the manual section
* adds any additional modules/functions/methods which are public facing but not previously documented
* deletes the old manual summary and all links to it

Test Plan:
```
cd docs
make html
python -m http.server
// renders well, links work
```

Reviewed By: jerryzh168

Differential Revision: D31447615

Pulled By: vkuzo

fbshipit-source-id: 09874ad9629f9c00eeab79c406579c6abd974901
2021-10-09 06:46:02 -07:00
Zafar Takhirov
b23709df03 [ao_migration] torch/nn/quantized: torch.quantization -> torch.ao.quantization (#65900)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65900

This changes the imports in the `caffe2/torch/nn/quantized` to include the new import locations.

```
codemod -d torch/nn/quantized --extensions py 'torch.quantization' 'torch.ao.quantization'
```

Test Plan: `python test/run_test.py`

Reviewed By: jerryzh168

Differential Revision: D31301193

fbshipit-source-id: 58efb1ad51a8b441e2a3bd5b91af11eab6b9331f
2021-10-08 16:19:53 -07:00
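
In source terms the codemod is a mechanical import rewrite; a minimal sketch of the before/after:

```python
# Before the codemod (old namespace):
from torch.quantization import QConfig, default_observer

# After the codemod (new AO namespace):
from torch.ao.quantization import QConfig, default_observer
```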
Supriya Rao
c7027f19ef [quant][fx] Add support for dynamic linear + relu fusion (INT8) (#63799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63799

Add a new module that can be used for a module swap with the nni.LinearReLU module in the convert function.
Supports INT8 currently (since FP16 op doesn't have relu fusion yet).

Fixes #55393

Test Plan:
python test/test_quantization.py test_dynamic_fusion

Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D30502812

fbshipit-source-id: 3668e4f001a0626d469e17ac323acf582ee28a51
2021-08-26 21:10:46 -07:00
Basil Hosmer
58d1b3639b fix nn.MHA scriptability (#58727)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58727

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28593830

Pulled By: bhosmer

fbshipit-source-id: 37dee9efededaea9985a2bf040df1ba4b46f6580
2021-05-26 15:29:49 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
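
The rule is the same as for `noqa` in the next entry: a bare suppression hides every error on the line, while a qualified one names the specific mypy error code. A minimal sketch:

```python
def untyped_helper():  # deliberately unannotated for the example
    return 42

# Bare ignore (the style this lint now rejects): hides every error on the line.
value = untyped_helper()  # type: ignore

# Qualified ignore: suppresses only the named mypy error code.
value = untyped_helper()  # type: ignore[no-untyped-call]
```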
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
Sam Estep
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
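
The two fixes this PR applies look like the following (module and names chosen for illustration):

```python
# Before: wildcard import, flagged by F403.
from torch.nn.modules import *

# Fix 1: list the imported names explicitly.
from torch.nn.modules import Conv2d, Linear

# Fix 2 (for deliberate re-exports in __init__.py files): keep the
# wildcard but acknowledge the rule.
from torch.nn.modules import *  # noqa: F403
```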
Zafar
e12008d110 [quant] Mapping for the _LinearWithBias (#49964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49964

`torch.nn.modules.linear._LinearWithBias` is only used in the transformer modules, and is completely identical to `torch.nn.Linear`.
This PR creates a mapping so that this module is treated the same as Linear.

Test Plan:
```
python test/test_quantization.py TestDynamicQuantizedModule TestStaticQuantizedModule
```

Differential Revision: D25731589

Reviewed By: jerryzh168

Pulled By: z-a-f

fbshipit-source-id: 1b2697014e250e97d3010cdb542f9d130b71fbc3
2021-01-07 13:57:29 -08:00
Alex Henrie
5f2ec6293d Unused variables in neural net classes and functions (#50100)
Summary:
These unused variables were identified by [pyflakes](https://pypi.org/project/pyflakes/). They can be safely removed to simplify the code and possibly improve performance.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50100

Reviewed By: ezyang

Differential Revision: D25797764

Pulled By: smessmer

fbshipit-source-id: ced341aee692f429d2dcc3a4ef5c46c8ee99cabb
2021-01-06 08:16:57 -08:00
Raghuraman Krishnamoorthi
f7a085af98 Dynamic GRU quantization support (#49448)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49448

ghstack-source-id: 118982171

Test Plan:
buck test caffe2/test:quantization --  'test_qlstmGRU \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details
buck test caffe2/test:quantization --  'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)' --print-passing-details
buck test caffe2/test:quantization --  'test_qrnncell \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --run-disabled --print-passing-details

Reviewed By: vkuzo

Differential Revision: D25579815

fbshipit-source-id: 413cc8888eb8058230b94c9576d2fa54b0ed1416
2020-12-21 12:36:59 -08:00
Jerry Zhang
be2e3dd2a1 [quant][graphmode][fx][fix] Linear work with float_qparam_dynamic_qconfig (#47068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47068

Filter the dtype config before performing quantization in Linear

Test Plan: Imported from OSS

Reviewed By: supriyar

Differential Revision: D24627907

fbshipit-source-id: 162fa47b3fcf6648049f8bc0438e41ee97ac19e9
2020-11-02 16:28:33 -08:00
Supriya Rao
646ffd4886 [quant] Move EmbeddingBag eager quantization to static (#44217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44217

Move the corresponding tests into the static test suite as well

Test Plan:
python test/test_quantization.py TestStaticQuantizedModule.test_embedding_bag_api

Imported from OSS

Reviewed By: raghuramank100

Differential Revision: D23547386

fbshipit-source-id: 41f81c31e1613098ecf6a7eff601c7dcd4b09c76
2020-09-08 19:05:02 -07:00
Nikita Shulga
b60ffcdfdd Enable typechecks for torch.nn.quantized.modules.linear (#44154)
Summary:
Also import `Optional` directly from `typing` rather than from `_jit_internal`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44154

Reviewed By: seemethere

Differential Revision: D23511833

Pulled By: malfet

fbshipit-source-id: f78c5fd679c002b218e4d287a9e56fa198171981
2020-09-03 19:52:49 -07:00
Gao, Xiang
37658b144b Remove useless py2 compatibility import __future__, part 1 (#43808)
Summary:
To avoid conflicts, this PR does not remove all imports. More are coming in further PRs.
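
For reference, the removed boilerplate is the standard py2 compatibility header, which is a no-op on Python 3:

```
# py2/py3 compatibility header removed by this series of PRs;
# on Python 3 these imports have no effect.
from __future__ import absolute_import, division, print_function, unicode_literals
```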

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43808

Reviewed By: wanchaol

Differential Revision: D23436675

Pulled By: ailzhang

fbshipit-source-id: ccc21a1955c244f0804277e9e47e54bfd23455cd
2020-09-02 19:15:11 -07:00
Guilherme Leobas
63a0bb0ab9 Add typing annotations for torch.nn.quantized.dynamic.modules.rnn (#43186)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/43185

xref: [gh-43072](https://github.com/pytorch/pytorch/issues/43072)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43186

Reviewed By: ezyang

Differential Revision: D23441259

Pulled By: malfet

fbshipit-source-id: 80265ae7f3a70f0087e620969dbd4aa8ca17c317
2020-09-01 10:25:10 -07:00
Supriya Rao
3293fdfa80 [quant] Enable from_float for quantized Embedding_Bag (#43176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43176

Converts a floating point nn.EmbeddingBag module to an
nn.quantized.dynamic.EmbeddingBag module.
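
A sketch of the conversion flow implied by the test names (era-specific: the qconfig was later renamed `float_qparams_weight_only_qconfig`, and the module itself later moved to the static namespace):

```
import torch
import torch.nn as nn
import torch.nn.quantized.dynamic as nnqd

float_emb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=12, mode="sum")

# The float module needs a qconfig before conversion; this is the
# name from this era of torch.quantization (later renamed).
float_emb.qconfig = torch.quantization.float_qparams_dynamic_qconfig
q_emb = nnqd.EmbeddingBag.from_float(float_emb)
```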

Test Plan:
python test/test_quantization.py TestDynamicQuantizedModule.test_embedding_bag_api
python test/test_quantization.py TestPostTrainingDynamic.test_embedding_quantization

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23200196

fbshipit-source-id: 090f47dbf7aceab9c719cbf282fad20fe3e5a983
2020-08-21 11:46:03 -07:00
Supriya Rao
b354b422ee [quant] Make offsets an optional argument (#43090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43090

To match the floating point module
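
For reference, the floating point signature being matched, where offsets is only needed for 1-D input:

```
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=3, mode="sum")

# 2-D input: each row is one bag, so offsets are omitted.
out = emb(torch.tensor([[1, 2, 4], [4, 3, 2]]))

# 1-D input: offsets mark where each bag starts.
out = emb(torch.tensor([1, 2, 4, 5, 4, 3]), torch.tensor([0, 3]))
```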

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23167518

fbshipit-source-id: 29db596e10731be4cfed7efd18f33a0b3dbd0ca7
2020-08-21 11:46:00 -07:00
Supriya Rao
4db8ca1129 [quant] Create nn.quantized.dynamic.EmbeddingBag (#43088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43088

Creates a quantized module that the user can use to perform embedding bag quantization.
The module uses EmbeddingPackedParams to store the weights, which can be serialized/deserialized
using TorchBind custom classes (C++ get/setstate code).
A following PR will add `from_float` support to convert a float module to the quantized one.

Test Plan:
python test/test_quantization.py TestDynamicQuantizedModule.test_embedding_bag_api

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23167519

fbshipit-source-id: 029d7bb44debf78c4ef08bfebf267580ed94d033
2020-08-21 11:45:02 -07:00
Yanan Cao
bdcf320bed Support custom exception message (#41907)
Summary:
`raise` and `assert` used to produce a hard-coded error message ("Exception"); the user-provided message was ignored. This PR adds support for representing the user's error message in TorchScript.

This breaks backward compatibility because now we actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can now fail to script, but previously saved models continue to work.

Increased an op count in test_mobile_optimizer.py because aten::format is now needed to form the actual exception message.

This is built upon a WIP PR: https://github.com/pytorch/pytorch/pull/34112 by driazati
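
A minimal sketch of what now scripts correctly (illustrative function; the message is built with `str.format`, which is what lowers to `aten::format`):

```
import torch

@torch.jit.script
def check_positive(x: int) -> int:
    if x <= 0:
        # The formatted message is now carried through scripting;
        # previously it was replaced by a generic "Exception".
        raise ValueError("expected a positive value, got {}".format(x))
    return x
```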

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907

Reviewed By: ngimel

Differential Revision: D22778301

Pulled By: gmagogsfm

fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
2020-08-01 13:03:45 -07:00
Raghuraman Krishnamoorthi
480851ad2c Docstring changes for dynamic quantized classes (#40931)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40931

Fix docstrings for dynamic quantized Linear/LSTM and associated classes
ghstack-source-id: 107064446

Test Plan: Docs show up correctly

Differential Revision: D22360787

fbshipit-source-id: 8e357e081dc59ee42fd7f12ea5079ce5d0cc9df2
2020-07-03 21:04:12 -07:00
Raghuraman Krishnamoorthi
d7d75e37bb Add state dict for LSTM and RNNCell and helper functions for accessing weights and bias (#40333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40333

Add state_dict support for dynamic quantized LSTM/GRU/RNNCell.

Add helper functions get_weight and get_bias for LSTM and RNN cells.
ghstack-source-id: 106364749

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_cell_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

Differential Revision: D22151020

fbshipit-source-id: 2eb54062f6c6a35ffe4dbe8e8cfbf7ede0e92ba1
2020-06-22 17:41:07 -07:00
Raghuraman Krishnamoorthi
3258cb61b1 Dynamic quantization support for LSTMCell, RNNCell and GRUCell [Remove randomness in weights] (#40102)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40102

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105997236

(Note: this ignores all push blocking failures!)
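
A usage sketch with illustrative sizes, going through the standard `quantize_dynamic` entry point:

```
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=16, hidden_size=32)
qcell = torch.quantization.quantize_dynamic(
    cell, {nn.LSTMCell}, dtype=torch.qint8
)

x = torch.randn(3, 16)  # (batch, input_size)
h, c = qcell(x)
```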

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D22071017

fbshipit-source-id: 3fe1eac39db9c1e0566838eb8b969bbb1fa983c9
2020-06-16 21:29:50 -07:00
Raghuraman Krishnamoorthi
15758bca55 Refactor LSTM tests, [Remove randomness in weights] (#40101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40101

Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the LSTM module and compares it with the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability, and serialization of the quantized LSTM.
ghstack-source-id: 105997268

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details

Differential Revision: D22070826

fbshipit-source-id: 46c333e19b9eab8fa5cab6f132e89b80a635791a
2020-06-16 17:24:07 -07:00
Raghuraman Krishnamoorthi
5add2e861c Revert D21628596: Refactor LSTM tests
Test Plan: revert-hammer

Differential Revision:
D21628596

Original commit changeset: 4aeda899f2e5

fbshipit-source-id: ab6544b87404863e054172aa9ec7ada51fad8e5e
2020-06-16 10:14:15 -07:00
Raghuraman Krishnamoorthi
e55e0cb1a9 Revert D20978736: Dynamic quantization support for LSTMCell, RNNCell and GRUCell
Test Plan: revert-hammer

Differential Revision:
D20978736

Original commit changeset: 8f303ba1d7f8

fbshipit-source-id: bcd300819616d6536f582fcd3c90decd543c4657
2020-06-16 10:11:32 -07:00
Raghuraman Krishnamoorthi
48db06e39a Dynamic quantization support for LSTMCell, RNNCell and GRUCell (#37159)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37159

Enable dynamic quantization for LSTMCell, RNNCell and GRUCell
ghstack-source-id: 105946183

(Note: this ignores all push blocking failures!)

Test Plan: buck test caffe2/test:quantization -- 'test_quantized_rnn_cell \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

Differential Revision: D20978736

fbshipit-source-id: 8f303ba1d7f8e0c646ac73e862d2c1e735b7ff61
2020-06-16 09:14:59 -07:00
Raghuraman Krishnamoorthi
655f1ea176 Refactor LSTM tests (#38851)
Summary:
Create three tests for LSTMs:
1. test_qlstm: checks the numerics of the quantized LSTM operator.
2. test_lstm_api: checks the LSTM module and compares it with the quantized LSTM op.
3. test_quantized_rnn: checks the dynamic quantization workflow, scriptability, and serialization of the quantized LSTM.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38851

ghstack-source-id: 105945574

(Note: this ignores all push blocking failures!)

Test Plan:
buck test caffe2/test:quantization -- 'test_lstm_api \(quantization\.test_quantized_module\.TestDynamicQuantizedModule\)' --print-passing-details

buck test caffe2/test:quantization -- 'test_quantized_rnn \(quantization\.test_quantize\.TestPostTrainingDynamic\)'

buck test caffe2/test:quantization -- 'test_qlstm \(quantization\.test_quantized_op\.TestDynamicQuantizedRNNOp\)' --print-passing-details

Differential Revision: D21628596

fbshipit-source-id: 4aeda899f2e5f14bfbe3d82096cb4ce89c725fa1
2020-06-16 00:41:24 -07:00
Supriya Rao
e1392922f2 [quant] Enable per-channel quantization for LSTM Modules (#39666)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39666
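
A sketch of opting into per-channel weight observers via the qconfig_spec argument (illustrative sizes; the qconfig name is from this era's `torch.quantization` namespace):

```
import torch
import torch.nn as nn

model = nn.LSTM(input_size=16, hidden_size=32)

# Map the module type to a per-channel dynamic qconfig instead of the
# default per-tensor one.
qmodel = torch.quantization.quantize_dynamic(
    model,
    qconfig_spec={nn.LSTM: torch.quantization.per_channel_dynamic_qconfig},
    dtype=torch.qint8,
)
```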

Test Plan:
python test/test_quantization.py TestPostTrainingDynamic.test_per_channel_lstm_quantize

Imported from OSS

Differential Revision: D21977601

fbshipit-source-id: 1333259e75782e54864ab444e05397b86cd9b9aa
2020-06-10 23:19:08 -07:00
Supriya Rao
425927bb2b [quant] Add reduce_range params for quantized_lstm (#39604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39604

This change preserves BC for older models that were saved with reduce_range set to false.
Newer models use the version information in the RNN module to toggle the reduce_range parameter. (Setting reduce_range restricts activations to a reduced 7-bit quantization range, which avoids potential overflow in the int8 accumulation paths of some backends.)

Internally this is implemented using a new CellParams type that calls the linear functions with the reduce_range option set to true.
Newly serialized models use the CellParams struct for the `__getstate__` and `__setstate__` calls; older models using QuantizedCellParamsDynamic continue to use their original serialization/deserialization methods.

Tested using the LSTM BC test and test_quantized_rnn.

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21977600

fbshipit-source-id: 0cb0e098b87207b537574d3beeab1f341c41c0d2
2020-06-10 23:16:57 -07:00
Supriya Rao
25a6c5f60f [quant] Dynamic Linear module to use reduce_range (#39125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39125

Switch to setting reduce_range to true for version > 3.
Models serialized with an older state_dict will have version <= 3, so they run with reduce_range=false.

Verified with the backward compatibility tests (which pass with no changes).

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21769689

fbshipit-source-id: 131f2ae736e31705222e82bdc77480f2f1826fe8
2020-05-29 18:21:57 -07:00
Supriya Rao
1d1f16079d [quant] Add save/load state_dict to quantized dynamic RNNs (#39105)
Summary:
Previously, dynamic LSTM modules couldn't save/load via state_dict, since the PackedParameter type used in RNNs isn't serializable from Python.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39105
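
A round-trip sketch of what this enables (illustrative sizes and file name):

```
import torch
import torch.nn as nn

def make_qlstm():
    return torch.quantization.quantize_dynamic(
        nn.LSTM(input_size=16, hidden_size=32), {nn.LSTM}, dtype=torch.qint8
    )

qlstm = make_qlstm()
torch.save(qlstm.state_dict(), "qlstm_state.pt")

# Loading into a freshly constructed module now works, because the
# packed parameters serialize from Python.
qlstm2 = make_qlstm()
qlstm2.load_state_dict(torch.load("qlstm_state.pt"))
```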

Test Plan: python test/test_quantization.py TestSerialization

Reviewed By: jerryzh168

Differential Revision: D21752256

Pulled By: supriyar

fbshipit-source-id: ef82cf21ce21a3a1304d147ed0da538c639f952d
2020-05-28 10:37:38 -07:00