Commit Graph

43 Commits

Author SHA1 Message Date
Xuehai Pan
758a0a88a2 [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200)
This PR removes unnecessary `pass` statements. This is semantically safe because the bytecode for the Python code does not change.

Note that if a function has a docstring, an empty function does not need a `pass` statement as a placeholder.
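
As a minimal sketch of what the rule fixes (hypothetical function names):

```python
# Before: `pass` is redundant because the docstring already forms the body.
def todo_before():
    """Placeholder for future work."""
    pass  # flagged by ruff rule PIE790


# After: identical bytecode, one statement fewer.
def todo_after():
    """Placeholder for future work."""
```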

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133200
Approved by: https://github.com/malfet, https://github.com/eqy, https://github.com/kit1980
2024-08-15 15:50:19 +00:00
PyTorch MergeBot
945bf78894 Revert "[BE] typing for decorators - fx/_compatibility (#131568)"
This reverts commit 193f62fde9.

Reverted https://github.com/pytorch/pytorch/pull/131568 on behalf of https://github.com/clee2000 for the same reason as https://github.com/pytorch/pytorch/pull/131572#issuecomment-2254328359, but I clicked the wrong link by accident. This is where it actually starts ([comment](https://github.com/pytorch/pytorch/pull/131568#issuecomment-2254330781))
2024-07-28 03:43:39 +00:00
Aaron Orenstein
193f62fde9 [BE] typing for decorators - fx/_compatibility (#131568)
See #131429
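
For context, a minimal sketch of the decorator-typing pattern involved, assuming the standard `ParamSpec` approach (the actual annotations in `fx/_compatibility.py` may differ):

```python
from functools import wraps
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")  # requires Python 3.10+
R = TypeVar("R")

def compatibility(is_backward_compatible: bool) -> Callable[[Callable[P, R]], Callable[P, R]]:
    # Annotating the decorator lets type checkers see through it,
    # preserving the wrapped function's full signature for callers.
    def decorator(fn: Callable[P, R]) -> Callable[P, R]:
        @wraps(fn)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```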

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131568
Approved by: https://github.com/justinchuby, https://github.com/oulgen, https://github.com/zou3519
2024-07-25 22:24:19 +00:00
Aaron Orenstein
5a0068cc69 [BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from their decorated function, so even if the underlying function is fully typed, callers to it don't get any benefit from the type annotations.

Step 1 - Enable the error and override in all the offending files.

#131429
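
To illustrate the motivation, a minimal sketch with hypothetical decorators (run mypy over it to see the difference):

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])

def untyped(fn):  # unannotated: mypy infers the decorated function as Any
    return fn

def typed(fn: F) -> F:  # identity-typed: the signature survives
    return fn

@untyped
def add(x: int, y: int) -> int:
    return x + y

@typed
def mul(x: int, y: int) -> int:
    return x * y

# Under mypy:
#   add("a", "b")  -> no error reported: `add` collapsed to Any
#   mul("a", "b")  -> error: incompatible argument types
```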

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
Andrea Frittoli
447173198b Add docstring for the torch.fx.operator_schemas.create_type_hint func… (#128139)
Fixes: #127916
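
A hedged usage sketch of the function that received the docstring (expected outputs inferred from its purpose, not from the PR):

```python
from torch.fx.operator_schemas import create_type_hint

# Given a container of types, produce a single covering type hint.
print(create_type_hint([int, int]))   # expected: typing.List[int]
print(create_type_hint((int, int)))   # expected: typing.Tuple[int, ...]
```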

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128139
Approved by: https://github.com/SherlockNoMad
2024-06-11 22:42:11 +00:00
Aaron Orenstein
038b927590 Flip default value for mypy disallow_untyped_defs [7/11] (#127844)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127844
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843
2024-06-08 18:49:45 +00:00
Edward Z. Yang
da4b4d961e Support printing storage while FakeTensorMode is enabled (#118780)
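
A hedged guess at the kind of repro this enables (the exact failing case is not in the commit message):

```python
import torch
from torch._subclasses import FakeTensorMode

with FakeTensorMode():
    t = torch.empty(4)
    # Printing the storage previously raised under FakeTensorMode;
    # after this PR it renders without materializing data. (Assumed repro.)
    print(t.untyped_storage())
```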
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118780
Approved by: https://github.com/thiagocrepaldi, https://github.com/eellison
2024-01-31 23:10:47 +00:00
Peter Bell
bfbc2e3ca8 [fx] Cache _torchscript_schema_to_signature (#112327)
This function is called in `normalize_function` which is in a fairly hot path for
`FakeTensor` dispatch. In this simple benchmark I see `normalize_function`
improve from 92 us to 17 us just by caching this signature object.

```python
# Run in IPython: `%timeit` is an IPython magic, not plain Python.
import torch
from torch._subclasses import FakeTensorMode
from torch.fx.operator_schemas import normalize_function

aten = torch._ops.ops.aten
%timeit normalize_function(
    aten.empty_strided.default, args=((100, 100), (100, 1)),
    kwargs=dict(device="cuda"), normalize_to_only_use_kwargs=True)
```
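
The win comes from memoizing the schema-to-signature conversion. A minimal sketch of the technique, with a hypothetical stand-in for the real parser:

```python
import functools
import inspect

@functools.lru_cache(maxsize=None)
def schema_to_signature(schema: str) -> inspect.Signature:
    # Hypothetical stand-in: building a Signature is the expensive step,
    # so memoize it keyed on the (hashable) schema.
    params = [
        inspect.Parameter(name.strip(), inspect.Parameter.POSITIONAL_OR_KEYWORD)
        for name in schema.split(",")
    ]
    return inspect.Signature(params)

sig = schema_to_signature("self, other")  # computed once
sig = schema_to_signature("self, other")  # served from the cache
```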
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112327
Approved by: https://github.com/lezcano
2023-10-30 03:38:52 +00:00
Justin Chu
79c9e82e27 Fix flake8 lint errors reported by ruff - take 2 (#99798)
Replaces #99784. This PR is pure autofix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99798
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-04-23 23:09:51 +00:00
Kazuaki Ishizaki
105ef68f72 Fix typos under torch/fx directory (#97596)
This PR fixes typos in comments and messages of `.py` files under the `torch/fx` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses, https://github.com/kit1980
2023-04-10 21:57:36 +00:00
Aaron Gokaslan
1e2d82b8e4 [BE] Merge isinstance calls together (#94419)
Simplifies and speeds up `isinstance` calls by checking for multiple types at the same time.
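
A minimal sketch of the transformation (hypothetical function names):

```python
def is_number_before(x):
    # before: two separate isinstance calls
    return isinstance(x, int) or isinstance(x, float)

def is_number_after(x):
    # after: one call checking a tuple of types
    return isinstance(x, (int, float))

assert is_number_before(1.5) == is_number_after(1.5)
```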

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94419
Approved by: https://github.com/ezyang
2023-02-09 00:47:26 +00:00
Nikita Shulga
6c7e6d9689 Make torch.fx compatible with Python-3.11 (#92895)
In 3.11, bytecode size is not constant, so to get from `f_lasti` to an opcode index one needs to search for the closest offset in the disassembled instructions.

Update `_patch_function` to construct code objects with all the properties that exist in the 3.11 runtime.
Update `_torchscript_schema_to_signature` to mark the `from` named argument as positional-only, since `from` is a reserved keyword in Python and, as such, is checked by the `inspect` package in 3.11.
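
A small sketch of the `from` workaround: `inspect.Parameter` accepts a keyword as a name only for positional-only parameters:

```python
import inspect

# Allowed: keyword names are accepted for positional-only parameters.
ok = inspect.Parameter("from", inspect.Parameter.POSITIONAL_ONLY)
print(inspect.Signature([ok]))  # (from, /)

try:
    inspect.Parameter("from", inspect.Parameter.POSITIONAL_OR_KEYWORD)
except ValueError as e:
    print(e)  # raised on Python 3.11+, where keyword names are checked
```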
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92895
Approved by: https://github.com/albanD
2023-01-24 22:11:50 +00:00
Sergii Dymchenko
f51f6aa387 Fix non-existing parameters in docstrings (#90505)
Continuation after https://github.com/pytorch/pytorch/pull/90163.

Here is a script I used to find all the non-existing arguments in the docstrings (the script can give false positives in the presence of *args/**kwargs or decorators):

_Edit:_
I've realized that the indentation was wrong for the last `break` in the original script, so it only gave output for a function if the first docstring argument was wrong; the version below includes the corrected indentation. I'll create a separate PR if I find more issues with the corrected script.

```python
import ast
import os

import docstring_parser  # third-party: pip install docstring-parser

for root, dirs, files in os.walk('.'):
    for name in files:
        if root.startswith("./.git/") or root.startswith("./third_party/"):
            continue
        if name.endswith(".py"):
            full_name = os.path.join(root, name)
            with open(full_name, "r") as source:
                tree = ast.parse(source.read())
                for node in ast.walk(tree):
                    if isinstance(node, ast.FunctionDef):
                        # Copy so we don't mutate the AST node's own list.
                        all_node_args = list(node.args.args)
                        if node.args.vararg is not None:
                            all_node_args.append(node.args.vararg)
                        if node.args.kwarg is not None:
                            all_node_args.append(node.args.kwarg)
                        all_node_args.extend(node.args.posonlyargs)
                        all_node_args.extend(node.args.kwonlyargs)
                        args = [a.arg for a in all_node_args]
                        docstring = docstring_parser.parse(ast.get_docstring(node))
                        doc_args = [a.arg_name for a in docstring.params]
                        clean_doc_args = []
                        for a in doc_args:
                            clean_a = ""
                            for c in a.split()[0]:
                                if c.isalnum() or c == '_':
                                    clean_a += c
                            if clean_a:
                                clean_doc_args.append(clean_a)
                        doc_args = clean_doc_args
                        for a in doc_args:
                            if a not in args:
                                print(full_name, node.lineno, args, doc_args)
                                break  # report each function at most once
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90505
Approved by: https://github.com/malfet, https://github.com/ZainRizvi
2022-12-09 21:43:09 +00:00
Ram Rachum
77f9b2e8bf Fix exception causes in fx, nn and onnx packages (#90134)
This is a continuation of #90118
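
A minimal sketch of the fix pattern (hypothetical example): chain the original exception so the real cause survives in the traceback:

```python
def parse_port(raw: str) -> int:
    try:
        return int(raw)
    except ValueError as e:
        # `from e` sets __cause__, so the traceback reads
        # "The above exception was the direct cause of ..."
        raise RuntimeError(f"invalid port: {raw!r}") from e

# parse_port("not-a-port")  # RuntimeError, chained to the ValueError
```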

@kit1980
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90134
Approved by: https://github.com/kit1980
2022-12-06 04:34:58 +00:00
anjali411
a6c0442cce Add __all__ to torch.{autograd, fx, cuda} submodules (#85343)
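
For context, a minimal sketch of the pattern (hypothetical module contents):

```python
# Only names listed in __all__ are exported by `from module import *`
# and treated as the module's public API by tooling.
__all__ = ["public_fn"]

def public_fn():
    return 42

def _helper():  # stays private
    return -1
```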
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85343
Approved by: https://github.com/albanD
2022-10-09 14:46:54 +00:00
Kurt Mohler
8b4fee5912 Remove unnecessary import warnings (#82760)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82760
Approved by: https://github.com/albanD
2022-08-04 17:12:17 +00:00
Elias Ellison
023aafbcd7 Fix for normalizing signature for op overloads (#77182)
Previously, we were taking the `.op` from OpOverload/OpOverloadPacket and looking for a mapping in `_jit_builtins` for their signature. Those will only exist for operators on the public API, not the overload packets, e.g. `torch.resize_as_` not `torch.ops.aten.resize_as_` (at least in this case, and I'm pretty sure generally). The OpOverloads/OpOverloadPackets have schemas stored on them, so we can just use those directly.
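
A hedged sketch of the schema-on-overload point (`_schema` is the private attribute as it appears in `torch._ops`):

```python
import torch

op = torch.ops.aten.resize_as_.default  # an OpOverload
print(op._schema)  # the FunctionSchema is stored on the overload itself
```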
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77182
Approved by: https://github.com/anjali411
2022-05-10 23:36:26 +00:00
Jordan Fix
1c5a66c2aa [FX] Fix operator_schemas normalize_function to consider OpOverloads (#76469)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76469

Broken by Original commit changeset: 450e86c4e08a

Original Phabricator Diff: D35874477

Test Plan: Added unit test coverage to test_fx_experimental

Reviewed By: albanD

Differential Revision: D35978105

fbshipit-source-id: f22670b3b00a86777a26feaf4cb911595d150a17
(cherry picked from commit 91868b1e872c19d58d96a6c80a5e78dc6ffe4c7b)
2022-04-28 01:38:16 +00:00
Peter Bell
7a80fc2ce7 [fx] Don't use __module__ to test if a function is bound from C++
The new `test_public_bindings.py` test means `__module__` will be set
correctly in the future, even for functions bound from C++. Instead, just
test directly that the function is of the `BuiltinFunctionType`, which
only passes for functions exported with the CPython API.

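A minimal sketch of the check: `BuiltinFunctionType` identifies functions implemented in C, regardless of `__module__`:

```python
import math
import types

print(isinstance(math.sqrt, types.BuiltinFunctionType))     # True: C function
print(isinstance(lambda x: x, types.BuiltinFunctionType))   # False: Python function
```
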
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75896

Approved by: https://github.com/albanD
2022-04-23 00:10:22 +00:00
Jordan Fix
b0a327d1f4 [fx/operator_schemas] Bring back check for OpOverload (#73978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73978

Partial backout of D34727831 (beda4e8b2f)

Test Plan: CI

Reviewed By: openrichardfb

Differential Revision: D34761511

fbshipit-source-id: 14cdfdb080efb223f20cac2e550b75baf99d2f2f
(cherry picked from commit 42ee1936f1e06a9626df3b15039153f5d82b2646)
2022-03-10 00:16:53 +00:00
anjali411
beda4e8b2f Fix fx tracing for OpOverload (#73940)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73940

Test Plan: Imported from OSS

Reviewed By: zhxchen17

Differential Revision: D34727831

Pulled By: anjali411

fbshipit-source-id: 26e7044a1d5ba9ee0854bda784633b134971074b
(cherry picked from commit 69685e19b3de5ea3f494464eddcce44e93cb0f4d)
2022-03-08 21:47:55 +00:00
anjali411
086645ad77 Update __torch_dispatch__ to return op overload instead of the opoverload packet function (#72673)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72673
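
A hedged illustration using today's `TorchDispatchMode` helper (which postdates this commit); the point is that `func` arrives as a specific `OpOverload`:

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LoggingMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)  # e.g. aten.add.Tensor -- an OpOverload, not a packet
        return func(*args, **(kwargs or {}))

with LoggingMode():
    torch.ones(2) + torch.ones(2)
```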

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D34627164

Pulled By: anjali411

fbshipit-source-id: 3cb6406a392d530bf9da36b4d8e0a62b30e6497e
(cherry picked from commit 65b85a0a67df4d0f16ac8964e2b685d478a610fb)
2022-03-07 22:38:42 +00:00
Anjali Chourdia
a1383a9cfa Reland torch.ops API change machinery with the core functionality disabled (#71785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71785

see https://github.com/pytorch/pytorch/pull/67254
ghstack-source-id: 147648699

Test Plan: github CI

Reviewed By: albanD

Differential Revision: D33777229

fbshipit-source-id: 517b36be9743025eb40d708d380dae62e3663184
(cherry picked from commit a637e69569)
2022-02-02 16:06:29 +00:00
Yan Li
6964aa2ced backout D33469839 (#71443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71443

The Cogwheel test inline_cvr_infer_canary_pyper_model_publish is timing out.

The convert_fx call takes > 20 mins for the local and local_ro submodules, which used to take ~2 mins.

Test Plan:
FBLearner Flow run
* the following cmd took 1113 seconds before the diff and 5002 seconds after.
    flow-cli clone-locally 320014219  --run-as-secure-group pytorch_at_scale  --operators pyper_model_publish_workflow.pyper_model_publish_workflow.process_torch_package_model_files.process_non_sparse_parameters[0]

Cogwheel test
* Cogwheel test with packages in B3588 (the last good run) took 4694.48s
* Cogwheel test with packages in B3590 (the first timeout) took 13975.83s
* Cogwheel test with the following packages took 4535.04s
  * all packages in B3588 except the model publish
  * the model publish built with D33469839 (043e84b3d2) reversed (created D33633570)

Reviewed By: albanD, jerryzh168

Differential Revision: D33633570

fbshipit-source-id: dc5e777c48a90c551641a3f79126461f6a60449e
(cherry picked from commit 03ab65023a)
2022-01-18 23:51:51 +00:00
anjali411
043e84b3d2 Per-overload torch.ops API (#67254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67254

Fixes https://github.com/pytorch/pytorch/issues/65997

BC breaking:
`output = torch.ops._test.leaky_relu(self=torch.tensor(-1.0))` now fails with the error `TypeError: __call__() got multiple values for argument 'self'` since we call into `OpOverloadBundle`'s `__call__` method that has `self` bound to it as its first argument.
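
For context, a minimal usage sketch of the per-overload API:

```python
import torch

x = torch.randn(3)
packet = torch.ops.aten.mul           # OpOverloadPacket: resolves at call time
overload = torch.ops.aten.mul.Tensor  # OpOverload: one specific schema
print(torch.equal(packet(x, x), overload(x, x)))  # True
```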

Follow up work:
1. disallow `default` as an overload name for aten operators.
2. Add a method to obtain a list of all overloads (exclude the ones registered by JIT)
3. Add methods/properties to `OpOverload` to access more schema information (types of input and output args etc)

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D33469839

Pulled By: anjali411

fbshipit-source-id: c3fc43460f1c7c9651c64b4d46337be21c400621
2022-01-10 17:29:06 -08:00
Michael Suo
402f2934bf Revert D33262228: Per-overload torch.ops API
Test Plan: revert-hammer

Differential Revision:
D33262228 (8e6d1738a4)

Original commit changeset: 600dbf511514

Original Phabricator Diff: D33262228 (8e6d1738a4)

fbshipit-source-id: 238fa88ea9c4f26c7511334765c07452fbca9655
2022-01-05 22:10:11 -08:00
anjali411
8e6d1738a4 Per-overload torch.ops API (#67254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67254

Fixes https://github.com/pytorch/pytorch/issues/65997

TODO: disallow `default` as an overload name for aten operators.

BC breaking:
`output = torch.ops._test.leaky_relu(self=torch.tensor(-1.0))` now fails with the error `TypeError: __call__() got multiple values for argument 'self'` since we call into `OpOverloadBundle`'s `__call__` method that has `self` bound to it as its first argument.

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33262228

Pulled By: anjali411

fbshipit-source-id: 600dbf511514ea9b41aea3e6b1bc1102dab08909
2022-01-05 15:17:41 -08:00
James Reed
e1c3e5f830 [resubmit][FX] Prototype for guarding against mutable operations in tracing (#64467)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64467

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D30744870

Pulled By: jamesr66a

fbshipit-source-id: fc652f8b17748f90dbeb83fabf3bd5bb57d6ff1a
2021-09-02 21:13:21 -07:00
Eli Uriegas
32a93c2424 Revert D30675780: [FX] Prototype for guarding against mutable operations in tracing
Test Plan: revert-hammer

Differential Revision:
D30675780 (795387477f)

Original commit changeset: b2116b51dcc8

fbshipit-source-id: d4f1173f4989556ea54974f4c2739ef85a705fae
2021-09-02 16:07:29 -07:00
James Reed
795387477f [FX] Prototype for guarding against mutable operations in tracing (#64295)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64295

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D30675780

Pulled By: jamesr66a

fbshipit-source-id: b2116b51dcc87357f0c84192c4c336680875e27a
2021-09-02 15:17:04 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00
Horace He
e117d94e21 Wrapped create_type_hint in try/except block so that NormalizeArgs doesn't fail if create_type_hint fails (#61524)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61524
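
A hedged sketch of the guarding pattern the title describes (wrapper name hypothetical; the actual PR may place the try/except inside `create_type_hint` itself):

```python
from torch.fx.operator_schemas import create_type_hint

def safe_create_type_hint(x):
    try:
        return create_type_hint(x)
    except Exception:
        # Fall back to the raw value so NormalizeArgs keeps going
        # instead of failing outright.
        return x
```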

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D29746106

Pulled By: Chillee

fbshipit-source-id: d08c0030f40b504e8f7a61fc0ee432f1515a0e6d
2021-07-17 16:13:17 -07:00
Sam Estep
3a0801f960 [skip ci] Fix "arugment" typos (#61459)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61455.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61459

Reviewed By: soulitzer

Differential Revision: D29636559

Pulled By: samestep

fbshipit-source-id: 9ad65265c0491d9e81bb303abe3a07c6843bfa4a
2021-07-15 15:20:18 -07:00
Horace He
565b034237 changed parametric type error in normalize to a warning (#57183)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57183

Previously, if normalization was unable to support matching against a type, it would throw an error.

However, this exposes the user to arbitrary TorchScript schemas, which may or may not be problematic. Although we may support these in the future, for now we just return False (which simply eliminates that schema from the candidates).

Test Plan: T89661626 and T89664016

Reviewed By: spaugh, khabinov

Differential Revision: D28072018

fbshipit-source-id: 83017d1e96d19912163edc74a5e43b2816783218
2021-04-28 22:33:44 -07:00
Horace He
786b0a8091 [FX] fix normalization issues with lists of tensors (#57004)
Summary:
Fixes an issue where lists of tensors were not normalized correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57004

Reviewed By: jamesr66a

Differential Revision: D28034559

Pulled By: Chillee

fbshipit-source-id: f935f0b73a8356acd8a2ae93fcfc0417f0eab224
2021-04-27 20:02:00 -07:00
Andrew Millspaugh
a0483cd06b Back out "fx: Fix type_matches for Optional[List[int]] arguments" (#56991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56991

Original commit changeset: c5aa5f61a215

Diff: D27987746 (267b554b6f)

Test Plan: `buck test` under the glow-buck target, which is the target this reversion is intended to fix

Reviewed By: jfix71

Differential Revision: D28019659

fbshipit-source-id: 37584ff404fc9195b309a5a6afdb4edbc2b4f088
2021-04-27 00:15:15 -07:00
Peter Bell
267b554b6f fx: Fix type_matches for Optional[List[int]] arguments (#56790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56790

If the argument doesn't match `List[int]`, this code falls through to
`issubclass(argument_type, List[int])` which is invalid and raises a
`TypeError`. If this happens during the processing of a `Union` (e.g.
`Optional`), the other union types aren't given the chance to match against the
signature.

This also stops `normalize_function` from indiscriminately swallowing exceptions,
which had let this bug go unnoticed.
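
A minimal reproduction of the underlying `TypeError`:

```python
from typing import List

try:
    issubclass(int, List[int])
except TypeError as e:
    print(e)  # subscripted generics cannot be used with class/instance checks
```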

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27987746

Pulled By: mruberry

fbshipit-source-id: c5aa5f61a215f0f39925e7053f33bff4b5d5acc2
2021-04-25 20:28:37 -07:00
Jordan Fix
4ef8205104 [fx][normalize] Allow for args to be left as args (#55995)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55995

Normalization is currently somewhat broken, but making default arguments visible still appears to work and is useful functionality to rely on. This adds an option to `NormalizeArgs`'s `__init__` called `normalize_to_only_use_kwargs`, which defaults to `True`; when set to `False`, the pass keeps using the signature as provided, but additionally makes default keyword arguments explicit in kwargs.
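
A hedged usage sketch of the new option:

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.experimental.normalize import NormalizeArgs

class M(torch.nn.Module):
    def forward(self, x):
        return torch.transpose(x, 0, 1)

traced = symbolic_trace(M())
# Keep positional args positional; only surface defaults as kwargs.
normalized = NormalizeArgs(traced, normalize_to_only_use_kwargs=False).transform()
```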

Test Plan: Added test to `test_fx_experimental`.

Reviewed By: 842974287

Differential Revision: D27759448

fbshipit-source-id: 620061fcf46d8549ac70b62aede8b6740aee3778
2021-04-24 08:15:17 -07:00
Nikita Shulga
47d2edd597 Fix quick-checks for operator-schemas (#56692)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56692

Reviewed By: heitorschueroff

Differential Revision: D27939830

Pulled By: malfet

fbshipit-source-id: 67a054de5c58832fcd7d0df0dd37faf1ea1406fd
2021-04-22 08:11:29 -07:00
Horace He
0df239e550 [FX] Make arg normalization a method on Node and not a pass (also augment tests to be exhaustive) (#55992)
Summary:
Commandeered from https://github.com/pytorch/pytorch/pull/54563

Primary changes from first PR:
1. Refactored primary `normalize_function` logic into `operator_schemas.py` so that non-FX users can use it (see the sketch after this list).
2. Refactored tests a bit, and added a path to call `normalize_function` directly.
3. Moved check for `boolean_dispatch` so that `torch.lu` also gets properly handled.
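
Expanding on item 1, a hedged sketch of calling `normalize_function` directly as a non-FX user (it may return None when the overload cannot be resolved from the argument types alone):

```python
import torch
from torch.fx.operator_schemas import normalize_function

pair = normalize_function(
    torch.transpose, args=(torch.ones(2, 3), 0, 1),
    normalize_to_only_use_kwargs=True)
print(pair)  # an ArgsKwargsPair with everything expressed as kwargs
```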

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55992

Reviewed By: mruberry

Differential Revision: D27774396

Pulled By: Chillee

fbshipit-source-id: 7f65632e1d608e4abd55aec5ccbfdc3f67f52b8e
2021-04-22 03:53:41 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.
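
A minimal sketch of what the lint enforces:

```python
# Qualified: silences exactly one mypy error code on this line.
x: int = "x"  # type: ignore[assignment]

# Unqualified (flagged by the new lint): would silence every error
# on the line, not just the intended one.
# y: int = "y"  # type: ignore
```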

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
James Reed
2d8795c552 [FX] Normalize torch. namespace ops (#53832)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53832

Test Plan: Imported from OSS

Reviewed By: jfix71, Chillee

Differential Revision: D26982801

Pulled By: jamesr66a

fbshipit-source-id: 96ac8efe2b3c644cfb7328168f6db089d3756aa2
2021-03-17 23:34:29 -07:00
James Reed
255b103c1b [WIP] Function to retrieve inspect.Signature instances for PyTorch ops (#53830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53830
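
A hedged usage sketch of the retrieval function this PR adds to `torch.fx.operator_schemas`:

```python
import torch
from torch.fx.operator_schemas import get_signature_for_torch_op

sigs = get_signature_for_torch_op(torch.transpose)
print(sigs)  # a list of inspect.Signature objects, one per recorded schema
```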

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26982802

Pulled By: jamesr66a

fbshipit-source-id: 18fddc9f3f34b09e173de59f2fe886f8eedd000e
2021-03-17 20:41:27 -07:00