pytorch/torch/ao/quantization
Sergii Dymchenko f51f6aa387 Fix non-existing parameters in docstrings (#90505)
Continuation of https://github.com/pytorch/pytorch/pull/90163.

Here is the script I used to find all the non-existent arguments in the docstrings (it can give false positives in the presence of `*args`/`**kwargs` or decorators):

_Edit:_
I've realized that the last `break` in the script was indented at the wrong level, so the script only reported a function if its first docstring argument was wrong. I'll create a separate PR if the corrected script finds more issues.

```python
import ast
import os
import docstring_parser

for root, dirs, files in os.walk('.'):
    for name in files:
        if root.startswith("./.git/") or root.startswith("./third_party/"):
            continue
        if name.endswith(".py"):
            full_name = os.path.join(root, name)
            with open(full_name, "r") as source:
                tree = ast.parse(source.read())
                for node in ast.walk(tree):
                    if isinstance(node, ast.FunctionDef):
                        # copy the list so the AST node's own args are not mutated below
                        all_node_args = list(node.args.args)
                        if node.args.vararg is not None:
                            all_node_args.append(node.args.vararg)
                        if node.args.kwarg is not None:
                            all_node_args.append(node.args.kwarg)
                        if node.args.posonlyargs is not None:
                            all_node_args.extend(node.args.posonlyargs)
                        if node.args.kwonlyargs is not None:
                            all_node_args.extend(node.args.kwonlyargs)
                        args = [a.arg for a in all_node_args]
                        docstring = docstring_parser.parse(ast.get_docstring(node))
                        doc_args = [a.arg_name for a in docstring.params]
                        clean_doc_args = []
                        for a in doc_args:
                            clean_a = ""
                            for c in a.split()[0]:
                                if c.isalnum() or c == '_':
                                    clean_a += c
                            if clean_a:
                                clean_doc_args.append(clean_a)
                        doc_args = clean_doc_args
                        for a in doc_args:
                            if a not in args:
                                print(full_name, node.lineno, args, doc_args)
                                break

```
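One subtle point when collecting argument names this way: `node.args.args` is the AST node's own list, so it should be copied before `vararg`/`kwarg` are appended to it, or the walk mutates the tree as it goes. A minimal stdlib-only sketch of that step (the function `f` here is a made-up example, not from the PR):

```python
import ast

# Hypothetical toy input: one positional arg, a *vararg, and a **kwarg.
tree = ast.parse("def f(a, *rest, **kw): pass")
fn = tree.body[0]

# Copy with list() -- appending to fn.args.args directly would mutate
# the AST node in place and corrupt any later pass over the same tree.
all_args = list(fn.args.args)
if fn.args.vararg is not None:
    all_args.append(fn.args.vararg)
if fn.args.kwarg is not None:
    all_args.append(fn.args.kwarg)

names = [a.arg for a in all_args]
print(names)  # ['a', 'rest', 'kw']
assert len(fn.args.args) == 1  # the AST's own positional-arg list is untouched
```

These `names` are what the script compares the docstring's parameter names against.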
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90505
Approved by: https://github.com/malfet, https://github.com/ZainRizvi
2022-12-09 21:43:09 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| `backend_config` | [ao] backend_config moving all to top (#88391) | 2022-12-09 05:39:29 +00:00 |
| `experimental` | [quant] Remove unneeded lines from APoT linear (#82909) | 2022-08-08 11:30:24 +00:00 |
| `fx` | Fix non-existing parameters in docstrings (#90505) | 2022-12-09 21:43:09 +00:00 |
| `__init__.py` | Revert "[ao] making _is_activation_post_process private (#87520)" | 2022-11-21 16:48:26 +00:00 |
| `_correct_bias.py` | [ao] public vs private for ao.quantization._X (#88392) | 2022-12-09 05:39:29 +00:00 |
| `_equalize.py` | [ao] public vs private for ao.quantization._X (#88392) | 2022-12-09 05:39:29 +00:00 |
| `_learnable_fake_quantize.py` | [ao] public vs private for ao.quantization._X (#88392) | 2022-12-09 05:39:29 +00:00 |
| `fake_quantize.py` | [ao] correctly set public v private for fake_quantize.py (#86022) | 2022-10-05 19:30:50 +00:00 |
| `fuse_modules.py` | Fix fuse_func method overwrite (#87791) (#88193) | 2022-11-03 20:32:54 +00:00 |
| `fuser_method_mappings.py` | [ao] fuser_method_mappings.py fixing public v private (#87516) | 2022-11-10 21:37:31 +00:00 |
| `observer.py` | quantization: deprecate observer compute_dtype and replace with is_dynamic (#85431) | 2022-11-24 07:07:34 +00:00 |
| `pattern.md` | | |
| `qconfig_mapping.py` | [Quant] Remove explicitly default QConfigMapping settings (#90066) | 2022-12-02 23:33:47 +00:00 |
| `qconfig.py` | quantization: make x86 as default backend (#88799) | 2022-12-01 02:09:54 +00:00 |
| `quant_type.py` | [ao] quant_type.py fixing public v private (#87519) | 2022-11-15 15:42:31 +00:00 |
| `quantization_mappings.py` | [quant][ao_migration] nn.intrinsic.quantized migration to ao (#86172) | 2022-10-08 00:01:38 +00:00 |
| `quantize_fx.py` | Reland "Add heirachical module names to torchFX graph.node" (#90205) | 2022-12-09 06:20:31 +00:00 |
| `quantize_jit.py` | [ao] fixing public v private for quantize_jit.py (#86024) | 2022-10-05 22:11:43 +00:00 |
| `quantize.py` | Revert "[ao] making _is_activation_post_process private (#87520)" | 2022-11-21 16:48:26 +00:00 |
| `stubs.py` | quantization: fix bug in QuantWrapper with DeQuant qconfig (#73671) | 2022-03-03 15:31:53 +00:00 |
| `utils.py` | quantization: deprecate observer compute_dtype and replace with is_dynamic (#85431) | 2022-11-24 07:07:34 +00:00 |