pytorch/torch/utils
Sergii Dymchenko f51f6aa387 Fix non-existing parameters in docstrings (#90505)
A continuation of https://github.com/pytorch/pytorch/pull/90163.
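For illustration, here is a hypothetical example (made-up names, not taken from this PR) of the kind of mismatch being fixed: the docstring documents a parameter name that does not exist in the function signature.

``` python
# Hypothetical example: `scale_tensor`, `tensor`, and `factor` are made-up names.
# The docstring documents `scale`, but the parameter is actually called `factor`,
# which is exactly the kind of mismatch the script below flags.
def scale_tensor(tensor, factor):
    """Multiply a tensor by a scalar.

    Args:
        tensor (Tensor): the tensor to scale.
        scale (float): the multiplier.
    """
    return tensor * factor
```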

Here is the script I used to find all the non-existent arguments in the docstrings (it can give false positives in the presence of `*args`/`**kwargs` or decorators):

_Edit:_
The indentation of the last `break` was wrong in the script as I originally ran it, so it only reported a function if the first documented argument was wrong; the listing below has the corrected indentation. I'll create a separate PR if the corrected script finds more issues.

``` python
import ast
import os

import docstring_parser

for root, dirs, files in os.walk('.'):
    for name in files:
        # Skip git metadata and vendored third-party code.
        if root.startswith("./.git/") or root.startswith("./third_party/"):
            continue
        if not name.endswith(".py"):
            continue
        full_name = os.path.join(root, name)
        with open(full_name, "r") as source:
            tree = ast.parse(source.read())
        for node in ast.walk(tree):
            if not isinstance(node, ast.FunctionDef):
                continue
            # Collect every name the function actually accepts: positional,
            # positional-only, keyword-only, *args and **kwargs.
            all_node_args = list(node.args.args)
            all_node_args.extend(node.args.posonlyargs)
            all_node_args.extend(node.args.kwonlyargs)
            if node.args.vararg is not None:
                all_node_args.append(node.args.vararg)
            if node.args.kwarg is not None:
                all_node_args.append(node.args.kwarg)
            args = [a.arg for a in all_node_args]

            # Names documented in the docstring's parameter section.
            docstring = docstring_parser.parse(ast.get_docstring(node) or "")
            doc_args = [a.arg_name for a in docstring.params]

            # Keep only the identifier characters of the first word, which
            # drops type annotations, asterisks, trailing colons, etc.
            clean_doc_args = []
            for a in doc_args:
                words = a.split()
                if not words:
                    continue
                clean_a = "".join(c for c in words[0] if c.isalnum() or c == "_")
                if clean_a:
                    clean_doc_args.append(clean_a)

            # Report each offending function once: file path, line number,
            # real argument names, and the names found in the docstring.
            for a in clean_doc_args:
                if a not in args:
                    print(full_name, node.lineno, args, clean_doc_args)
                    break
```
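To try it, the `docstring_parser` package needs to be installed (e.g. `pip install docstring_parser`) and the script run from the repository root; each line of output gives the file path, the function's line number, the actual argument names, and the names found in the docstring.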
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90505
Approved by: https://github.com/malfet, https://github.com/ZainRizvi
2022-12-09 21:43:09 +00:00
Contents of `torch/utils`:

| Path | Last commit | Date |
|---|---|---|
| `backcompat` | | |
| `benchmark` | Fix typos in messages under torch (#89049) | 2022-11-17 04:18:14 +00:00 |
| `bottleneck` | Fix use-dict-literal lint (#83718) | 2022-08-24 00:26:46 +00:00 |
| `data` | Fix: Make `__len__` of datapipes dynamic (#88302) | 2022-12-09 19:15:53 +00:00 |
| `hipify` | Introduce CUDA Device Assertions Infrastructure (#84609) | 2022-12-08 01:26:07 +00:00 |
| `jit` | | |
| `model_dump` | [fix] MathBits: serialization (#88182) | 2022-11-09 17:15:12 +00:00 |
| `tensorboard` | Fix non-existing parameters in docstrings (#90505) | 2022-12-09 21:43:09 +00:00 |
| `__init__.py` | Reland "add an API for external backends to register custom device names (#86992)" (#87453) | 2022-10-21 16:51:36 +00:00 |
| `_cpp_extension_versioner.py` | | |
| `_crash_handler.py` | | |
| `_cuda_trace.py` | Add synchronize hooks (#84427) | 2022-09-09 13:56:59 +00:00 |
| `_freeze.py` | | |
| `_mode_utils.py` | [Modes] remove enable and rewrite mode stack (squashed) (#84774) | 2022-09-27 01:04:35 +00:00 |
| `_python_dispatch.py` | Disable Current Modes when printing Tensor (#88344) | 2022-11-04 00:45:35 +00:00 |
| `_pytree.py` | Make nested TreeSpec printing nicer (#46538) (#86546) | 2022-10-18 16:50:39 +00:00 |
| `_zip.py` | Use multipy.package in multipy/runtime (#111) (#82690) | 2022-08-03 19:11:12 +00:00 |
| `backend_registration.py` | Reland "add an API for external backends to register custom device names (#86992)" (#87453) | 2022-10-21 16:51:36 +00:00 |
| `bundled_inputs.py` | Deprecate TypedStorage, its derived classes, and all of their public methods (#85303) | 2022-11-08 18:11:01 +00:00 |
| `checkpoint.py` | Address feedback from previous PR (#86622) | 2022-10-10 18:53:41 +00:00 |
| `collect_env.py` | Sets CUDA_MODULE_LOADING to LAZY when not set by the user (#85692) | 2022-10-13 14:03:01 +00:00 |
| `cpp_backtrace.py` | Expose cpp_backtrace to python binding (#84896) | 2022-09-27 14:59:08 +00:00 |
| `cpp_extension.py` | Migrate PyTorch to C++17 (#85969) | 2022-12-08 02:27:48 +00:00 |
| `dlpack.py` | Optimize `__dlpack_device__` performance (#86665) | 2022-10-11 19:03:46 +00:00 |
| `file_baton.py` | | |
| `hooks.py` | Allow Module forward-pre and forward hooks to take kwargs (#89389) | 2022-11-23 02:43:32 +00:00 |
| `mkldnn.py` | | |
| `mobile_optimizer.py` | [Vulkan] Add support for Optimization Blocklist to Vulkan Rewrite (#87431) | 2022-10-31 14:15:51 +00:00 |
| `model_zoo.py` | | |
| `show_pickle.py` | Add `__all__` to torch.utils submodules (#85331) | 2022-09-27 14:45:26 +00:00 |
| `throughput_benchmark.py` | More doctest refinements. (#83317) | 2022-08-22 20:07:26 +00:00 |