Commit Graph

1635 Commits

Author SHA1 Message Date
PyTorch MergeBot
50e90d7203 Revert "[dynamo] simplify implementation for functools.reduce (#133778)"
This reverts commit 6c0b15e382.

Reverted https://github.com/pytorch/pytorch/pull/133778 on behalf of https://github.com/ZainRizvi due to Sorry, but this breaks internal tests because of using functools ([comment](https://github.com/pytorch/pytorch/pull/133778#issuecomment-2310445169))
2024-08-26 15:16:17 +00:00
PyTorch MergeBot
472c7cf962 Revert "[dynamo] simplify implementation for builtins.sum (#133779)"
This reverts commit 8d90392fb0.

Reverted https://github.com/pytorch/pytorch/pull/133779 on behalf of https://github.com/ZainRizvi due to Sorry, but this breaks internal tests because of using functools ([comment](https://github.com/pytorch/pytorch/pull/133778#issuecomment-2310445169))
2024-08-26 15:16:17 +00:00
PyTorch MergeBot
e1fc4362fb Revert "[dynamo] simplify implementation for os.fspath (#133801)"
This reverts commit c5f6b72041.

Reverted https://github.com/pytorch/pytorch/pull/133801 on behalf of https://github.com/ZainRizvi due to Sorry, but this breaks internal tests because of using functools ([comment](https://github.com/pytorch/pytorch/pull/133778#issuecomment-2310445169))
2024-08-26 15:16:17 +00:00
Xuehai Pan
c5f6b72041 [dynamo] simplify implementation for os.fspath (#133801)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133801
Approved by: https://github.com/anijain2305
ghstack dependencies: #133769, #133778, #133779, #133771
2024-08-26 07:12:15 +00:00
Yidi Wu
4ff1a4dd0f [export] support set_grad_enabled hop in dynamo to enable re-tracing (#134281)
As titled. We added dynamo support for the wrap_with_set_grad_enabled HOP to support re-tracing an exported program.
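A minimal, hedged sketch of the scenario (module name and shapes are illustrative): exporting a module that toggles grad mode can capture the switch as the wrap_with_set_grad_enabled HOP, and the exported module can now itself be re-traced.

```python
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x):
        with torch.no_grad():  # grad-mode switch that the HOP represents
            y = x + 1
        return y * 2

ep = export(M(), (torch.randn(3, requires_grad=True),))
# Re-tracing the exported module is what this PR enables in dynamo.
ep2 = export(ep.module(), (torch.randn(3, requires_grad=True),))
```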

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134281
Approved by: https://github.com/tugsbayasgalan
2024-08-24 00:35:53 +00:00
PyTorch MergeBot
7fd3b69886 Revert "[dynamo][super] Improve handling of getattr on super (#134039)"
This reverts commit 1da3a049da.

Reverted https://github.com/pytorch/pytorch/pull/134039 on behalf of https://github.com/jeanschmidt due to broke internal torchrec signals, see [D61670727](https://www.internalfb.com/diff/D61670727) ([comment](https://github.com/pytorch/pytorch/pull/134039#issuecomment-2307151643))
2024-08-23 13:57:04 +00:00
Xuehai Pan
8d90392fb0 [dynamo] simplify implementation for builtins.sum (#133779)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133779
Approved by: https://github.com/jansel
ghstack dependencies: #133769, #133778
2024-08-23 10:10:19 +00:00
Xuehai Pan
6c0b15e382 [dynamo] simplify implementation for functools.reduce (#133778)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133778
Approved by: https://github.com/jansel
ghstack dependencies: #133769
2024-08-23 09:10:44 +00:00
Xuehai Pan
cc3a76edba [dynamo] simplify polyfill registration for builtins.all and builtins.any (#133769)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133769
Approved by: https://github.com/jansel
2024-08-23 09:05:24 +00:00
Xuehai Pan
25b2e46573 [dynamo] add max iterator limit while inlining generators (#134233)
Related:

- #133879

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134233
Approved by: https://github.com/jansel
2024-08-23 07:03:31 +00:00
Animesh Jain
fee677eeb6 [fbode-testing][dynamo][reland][inline-inbuilt-nn-modules] Mark attri… (#134136)
Shuai wants to test this internally before https://github.com/pytorch/pytorch/pull/133713 can go in. Creating a separate PR for ghimport.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134136
Approved by: https://github.com/yanboliang
2024-08-22 17:54:58 +00:00
Xuehai Pan
b6abac68ec [BE][dynamo] reorganize polyfill module hierarchy (#133977)
Changes:

1. Move `polyfill.py` -> `polyfills/__init__.py`. It can be used as `polyfill.xxx` -> `polyfills.xxx`.
2. Move submodule loading from `polyfills/__init__.py` to `polyfills/loader.py`.

Merge the `polyfill.py` module and the `polyfills/` package. Each polyfill module has its own namespace for better code organization.

The ultimate goal is to make `polyfills/__init__.py` empty and move all polyfill functions into their own namespaces.
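A small sketch of what the move means at call sites (submodule names are illustrative):

```python
# Before: a single flat module
from torch._dynamo import polyfill
# polyfill.xxx(...)

# After: a package whose submodules are loaded via polyfills/loader.py
from torch._dynamo import polyfills
# polyfills.xxx(...), eventually moving into per-topic submodules
```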

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133977
Approved by: https://github.com/jansel
2024-08-22 16:42:29 +00:00
Avik Chaudhuri
0d7ac1966a kill sharing of constraints (#134045)
Summary:
Previously, reuse of the same `Dim` was encoded by "sharing" internal constraints among constraint targets. This kind of sharing, implemented using `shared` fields between `_Constraint`s, was originally motivated by `dynamic_dim`, specifically to support `==` between `dynamic_dim`s, but we no longer need to maintain this overcomplicated structure: we can simply use names of `Dims` to directly encode sharing information.

Thus this PR vastly simplifies the structure of `_Constraint` by removing `shared` fields. As a result, both `_Constraint` and its moral subclass, `_DerivedConstraint`, are 1-1 with `Dim` and its moral subclass, `DerivedDim`.

Note that this will break `==` over `dynamic_dim`, so an immediate follow-up will be to remove `dynamic_dim` entirely from our public API. (It's been more than 6 months since the deprecation warning anyway.) I just didn't want to deal with that process in the same PR.
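A hedged sketch of how sharing is now expressed purely through `Dim` reuse (module, shapes, and names are illustrative):

```python
import torch
from torch.export import Dim, export

batch = Dim("batch")  # reusing this Dim (i.e. its name) encodes the shared dimension

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

ep = export(
    M(),
    (torch.randn(4, 3), torch.randn(4, 3)),
    dynamic_shapes=({0: batch}, {0: batch}),  # both inputs share dim 0
)
```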

Test Plan: existing

Differential Revision: D61559413

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134045
Approved by: https://github.com/pianpwk
2024-08-22 04:40:47 +00:00
Yanbo Liang
7868b65c4d [Dynamo] Support dict.setdefault (#134083)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134083
Approved by: https://github.com/williamwen42
2024-08-22 01:57:33 +00:00
Animesh Jain
1da3a049da [dynamo][super] Improve handling of getattr on super (#134039)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134039
Approved by: https://github.com/yanboliang
ghstack dependencies: #133742, #134016
2024-08-21 16:50:35 +00:00
PyTorch MergeBot
68425e68fe Revert "[dynamo][reland][inline-inbuilt-nn-modules] Mark attributes of nn mod… (#133714)"
This reverts commit e8d3c4be36.

Reverted https://github.com/pytorch/pytorch/pull/133714 on behalf of https://github.com/anijain2305 due to fails internally ([comment](https://github.com/pytorch/pytorch/pull/133714#issuecomment-2302171472))
2024-08-21 14:21:06 +00:00
Xuehai Pan
539be0a769 [dynamo] support ClassMethodDescriptorType (#133862)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133862
Approved by: https://github.com/jansel
2024-08-21 12:56:19 +00:00
Animesh Jain
bd0db490bf [dynamo][set] Fix EQUALS_MATCH guard for constant sets and lists (#134016)
Fixes https://github.com/pytorch/pytorch/issues/133509

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134016
Approved by: https://github.com/laithsakka, https://github.com/jansel
ghstack dependencies: #133742
2024-08-21 12:41:52 +00:00
Xuehai Pan
c929e1e11f [dynamo] fix polyfill for user defined constructor __new__ (#133822)
In `cls->tp_call`, if `cls->tp_new` does not return an instance of class `cls`, then `cls->tp_init` is not called on the new instance.
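The CPython behavior being polyfilled, in plain Python:

```python
class Base:
    def __new__(cls):
        # Returning an instance of a different class ...
        return object()

    def __init__(self):
        # ... means this is never called by Base(), per the tp_call rule above.
        print("unreachable")

obj = Base()
assert type(obj) is object  # Base.__init__ was skipped
```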

Related PR:

- #132977

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133822
Approved by: https://github.com/jansel
2024-08-21 12:41:19 +00:00
Xuehai Pan
022cd7c9aa [RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)
Add decorator `torch.compiler.substitute_in_graph` to register polyfills for unsupported C++ functions to avoid graph breaks. This API provides an official way to add Dynamo support for third-party C extensions. It can also be used to simplify our implementation of `torch._dynamo.polyfill`.

5ee070266f/torch/_dynamo/variables/builtin.py (L97-L107)

Example:

```python
>>> import operator
>>> operator.indexOf([1, 2, 3, 4, 5], 3)
2

>>> torch.compile(operator.indexOf, fullgraph=True)([1, 2, 3, 4, 5], 3)
Unsupported: ...

>>> @torch.compiler.substitute_in_graph(operator.indexOf)
... def indexOf(sequence, x):
...     for i, item in enumerate(sequence):
...         if item is x or item == x:
...             return i
...     raise ValueError("sequence.index(x): x not in sequence")

>>> torch.compile(operator.indexOf, fullgraph=True)([1, 2, 3, 4, 5], 3)
2
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133712
Approved by: https://github.com/jansel
2024-08-21 06:36:41 +00:00
Animesh Jain
1ae5d5bb62 [dynamo][user-defined] Improve getattr_static for user_defined objects (#133742)
Fixes https://github.com/pytorch/pytorch/issues/133607

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133742
Approved by: https://github.com/Skylion007, https://github.com/jansel
2024-08-20 21:51:03 +00:00
PyTorch MergeBot
15b5a0b67f Revert "[RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)"
This reverts commit 71dd52f51a.

Reverted https://github.com/pytorch/pytorch/pull/133712 on behalf of https://github.com/ZainRizvi due to breaking main windows cpu tests - this stack still causes that windows test to fail ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2299776241))
2024-08-20 21:14:45 +00:00
PyTorch MergeBot
88ead0afc6 Revert "[dynamo] simplify polyfill registration for builtins.all and builtins.any (#133769)"
This reverts commit 178e8563b8.

Reverted https://github.com/pytorch/pytorch/pull/133769 on behalf of https://github.com/ZainRizvi due to breaking main windows cpu tests - this stack still causes that windows test to fail ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2299776241))
2024-08-20 21:14:45 +00:00
PyTorch MergeBot
3fa874abbe Revert "[dynamo] simplify implementation for functools.reduce (#133778)"
This reverts commit 37b4bc60a4.

Reverted https://github.com/pytorch/pytorch/pull/133778 on behalf of https://github.com/ZainRizvi due to breaking main windows cpu tests - this stack still causes that windows test to fail ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2299776241))
2024-08-20 21:14:45 +00:00
PyTorch MergeBot
98e6a1d8ff Revert "[dynamo] simplify implementation for builtins.sum (#133779)"
This reverts commit 3f58a8051a.

Reverted https://github.com/pytorch/pytorch/pull/133779 on behalf of https://github.com/ZainRizvi due to breaking main windows cpu tests - this stack still causes that windows test to fail ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2299776241))
2024-08-20 21:14:44 +00:00
Xuehai Pan
b03381cac2 [dynamo] support cls.__flags__ (#133970)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133970
Approved by: https://github.com/jansel
ghstack dependencies: #133969
2024-08-20 20:03:31 +00:00
Xuehai Pan
5229b52bf2 [dynamo] support cls.__base__ (#133969)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133969
Approved by: https://github.com/jansel
2024-08-20 20:03:31 +00:00
Xuehai Pan
3f58a8051a [dynamo] simplify implementation for builtins.sum (#133779)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133779
Approved by: https://github.com/jansel
ghstack dependencies: #133712, #133769, #133778
2024-08-20 19:48:57 +00:00
Xuehai Pan
37b4bc60a4 [dynamo] simplify implementation for functools.reduce (#133778)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133778
Approved by: https://github.com/jansel
ghstack dependencies: #133712, #133769
2024-08-20 19:48:57 +00:00
Xuehai Pan
178e8563b8 [dynamo] simplify polyfill registration for builtins.all and builtins.any (#133769)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133769
Approved by: https://github.com/jansel
ghstack dependencies: #133712
2024-08-20 19:48:57 +00:00
Xuehai Pan
71dd52f51a [RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)
Add decorator `torch.compiler.substitute_in_graph` to register polyfills for unsupported C++ functions to avoid graph breaks. This API provides an official way to add Dynamo support for third-party C extensions. It can also be used to simplify our implementation of `torch._dynamo.polyfill`.

5ee070266f/torch/_dynamo/variables/builtin.py (L97-L107)

Example:

```python
>>> import operator
>>> operator.indexOf([1, 2, 3, 4, 5], 3)
2

>>> torch.compile(operator.indexOf, fullgraph=True)([1, 2, 3, 4, 5], 3)
Unsupported: ...

>>> @torch.compiler.substitute_in_graph(operator.indexOf)
... def indexOf(sequence, x):
...     for i, item in enumerate(sequence):
...         if item is x or item == x:
...             return i
...     raise ValueError("sequence.index(x): x not in sequence")

>>> torch.compile(operator.indexOf, fullgraph=True)([1, 2, 3, 4, 5], 3)
2
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133712
Approved by: https://github.com/jansel
2024-08-20 19:48:57 +00:00
Animesh Jain
33f1ee036e [dynamo][user-defined] Simplify call_hasattr (#133935)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133935
Approved by: https://github.com/williamwen42, https://github.com/jansel
ghstack dependencies: #133745, #133747, #133746, #133799, #133800
2024-08-20 16:27:44 +00:00
PyTorch MergeBot
2bd02e0c82 Revert "[RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)"
This reverts commit 641724ed1d.

Reverted https://github.com/pytorch/pytorch/pull/133712 on behalf of https://github.com/jeanschmidt due to breaking main windows cpu tests - reverting them all, so we can identify the culprit with more calmness ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2298528797))
2024-08-20 10:34:41 +00:00
PyTorch MergeBot
91fd270535 Revert "[dynamo] simplify polyfill registration for builtins.all and builtins.any (#133769)"
This reverts commit 59ca56e56c.

Reverted https://github.com/pytorch/pytorch/pull/133769 on behalf of https://github.com/jeanschmidt due to breaking main windows cpu tests - reverting them all, so we can identify the culprit with more calmness ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2298528797))
2024-08-20 10:34:41 +00:00
PyTorch MergeBot
5109c5ef23 Revert "[dynamo] simplify implementation for functools.reduce (#133778)"
This reverts commit ff9be0eda9.

Reverted https://github.com/pytorch/pytorch/pull/133778 on behalf of https://github.com/jeanschmidt due to breaking main windows cpu tests - reverting them all, so we can identify the culprit with more calmness ([comment](https://github.com/pytorch/pytorch/pull/133712#issuecomment-2298528797))
2024-08-20 10:34:41 +00:00
PyTorch MergeBot
08b5e07e6c Revert "[dynamo] simplify implementation for builtins.sum (#133779)"
This reverts commit 1fdeb4e329.

Reverted https://github.com/pytorch/pytorch/pull/133779 on behalf of https://github.com/jeanschmidt due to breaking main windows cpu tests ([comment](https://github.com/pytorch/pytorch/pull/133779#issuecomment-2298285206))
2024-08-20 08:33:29 +00:00
Michael Lazos
626acaeb16 [Dynamo] Support torch function stack len (#133133)
Adds support for `torch._C._len_torch_function_stack()`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133133
Approved by: https://github.com/williamwen42
ghstack dependencies: #133130, #133729, #133131, #133132
2024-08-20 07:14:52 +00:00
Michael Lazos
d1fdf984c3 [Dynamo] Support push torch function mode stack (#133132)
This PR adds support for `torch._C._push_on_torch_function_stack()` by updating `torch.py` to push onto the symbolic torch function mode stack when a push is encountered. The same side effects infra used in the previous PR is used to track the mutation of the torch function mode stack and add bytecode to update it if it is mutated.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133132
Approved by: https://github.com/williamwen42
ghstack dependencies: #133130, #133729, #133131
2024-08-20 07:14:47 +00:00
Michael Lazos
c0b4aaa8c5 [Dynamo] Support pop torch function mode stack (#133131)
This PR adds support for tracing `torch._C._pop_torch_function_stack()` without a graph break. To verify the state change, it also adds replay of mutations to the torch function mode stack via side_effects, appending supplemental bytecode as we do for other mutable Python objects.

Details:
To represent the torch function mode stack symbolically a deque field is added to the instruction translator. When the InstructionTranslator is initialized, all modes are read from the current torch function mode stack, and stashed in a global weak ref for later access (using existing sources) without needing to push/pop the python/cpp torch function mode stack.

During tracing, when `_pop_torch_function_stack` is encountered, a value is popped from this deque and the variable tracker representing the mode is returned. To ensure the true torch function mode stack matches this state, `TorchFunctionModeStackVariable`, a singleton, is marked as mutated; this adds it to side effects, and during final codegen, side effects will emit a call to a Python helper which updates the Python torch function mode stack.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133131
Approved by: https://github.com/jansel
ghstack dependencies: #133130, #133729
2024-08-20 07:14:42 +00:00
Michael Lazos
f147349568 Fix DeviceContext bug (#133729)
Fixes https://github.com/pytorch/pytorch/issues/133666

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133729
Approved by: https://github.com/bdhirsh
ghstack dependencies: #133130
2024-08-20 07:14:37 +00:00
Michael Lazos
09e366cb57 [Dynamo] Add torch function mode stack guard to dynamo (#133130)
This PR adds a guard on the torch function mode stack state at the beginning of tracing. The way this is implemented is via a new leaf guard which is passed the initial stack state at construction and compares it to the stack state at the time the guard is run.

Details:
The stack state is extracted via popping all modes, appending them to a list, and pushing all modes back. This list is stored on the output graph and read during guard construction to pass to the stack mode guard. There the length and types of the modes are recorded. Next time the guard is run it compares this recorded state to the current mode stack state.

To implement this in Python, a helper function was added to utils.py; it is used if C++ guards are not enabled.
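A hedged sketch of such a helper (the actual name in utils.py may differ), using the private APIs mentioned in this stack:

```python
import torch

def _torch_function_mode_stack_snapshot():
    # Pop every mode to read the stack, then push them back so the real
    # stack is left unchanged; the guard records the length and types of the result.
    popped = [
        torch._C._pop_torch_function_stack()
        for _ in range(torch._C._len_torch_function_stack())
    ]
    for mode in reversed(popped):  # restore bottom-to-top order
        torch._C._push_on_torch_function_stack(mode)
    return list(reversed(popped))
```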

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133130
Approved by: https://github.com/anijain2305
2024-08-20 07:14:33 +00:00
Animesh Jain
e8d3c4be36 [dynamo][reland][inline-inbuilt-nn-modules] Mark attributes of nn mod… (#133714)
Relands https://github.com/pytorch/pytorch/pull/132539
Relands https://github.com/pytorch/pytorch/pull/132736

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133714
Approved by: https://github.com/jansel
2024-08-20 05:57:52 +00:00
Bob Ren
f08d484702 Add itertools.islice support in dynamo (#133893)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133893
Approved by: https://github.com/oulgen
2024-08-20 05:55:53 +00:00
Yidi Wu
2ec95ffe57 [cond] support unbacked symbool inputs (#133589)
Fixes https://github.com/pytorch/pytorch/issues/133577.

In dynamo, when we receive an unbacked symbool input, we create an unbacked symint to replace it.

The alternative approach of `not realizing the pred LazyVariable in cond` doesn't work because we need to get the proxy of the symbool input.
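A hedged sketch of the kind of program this enables (config flag and backend are illustrative; the predicate is a data-dependent, i.e. unbacked, boolean):

```python
import torch

torch._dynamo.config.capture_scalar_outputs = True  # needed to trace .item()

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

@torch.compile(backend="eager", fullgraph=True)
def fn(x, flag):
    pred = flag.item()  # unbacked SymBool passed to cond
    return torch.cond(pred, true_fn, false_fn, (x,))

out = fn(torch.randn(3), torch.tensor(True))
```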

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133589
Approved by: https://github.com/ezyang
2024-08-19 23:36:48 +00:00
William Wen
2b95007d12 [dynamo] support random.Random (#133725)
Fixes the observed graph breaks in https://github.com/pytorch/pytorch/issues/121349 and https://github.com/pytorch/pytorch/issues/121350.

But there are still graph breaks since a random output is being used as a seed, e.g.
```python
import random
import torch

def fn(x):
    seed = random.randint(0, 100)
    rand = random.Random(seed)
    return x + rand.randrange(10)

opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
opt_fn(torch.ones(1))
```

fails with
```
torch._dynamo.exc.InternalTorchDynamoError: UnspecializedPythonVariable() is not a constant
```

when tracing the line
```
rand = random.Random(seed)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133725
Approved by: https://github.com/jansel
2024-08-19 22:34:44 +00:00
Xuehai Pan
1fdeb4e329 [dynamo] simplify implementation for builtins.sum (#133779)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133779
Approved by: https://github.com/jansel
ghstack dependencies: #133712, #133769, #133778
2024-08-19 22:14:34 +00:00
Xuehai Pan
ff9be0eda9 [dynamo] simplify implementation for functools.reduce (#133778)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133778
Approved by: https://github.com/jansel
ghstack dependencies: #133712, #133769
2024-08-19 22:14:33 +00:00
Xuehai Pan
59ca56e56c [dynamo] simplify polyfill registration for builtins.all and builtins.any (#133769)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133769
Approved by: https://github.com/jansel
ghstack dependencies: #133712
2024-08-19 22:14:33 +00:00
Xuehai Pan
641724ed1d [RFC][dynamo] add decorator to register polyfill for unsupported C++ function to avoid graph break (#133712)
Add decorator `torch.compiler.substitute_in_graph` to register polyfills for unsupported C++ functions to avoid graph breaks. This API provides an official way to add Dynamo support for third-party C extensions. It can also be used to simplify our implementation of `torch._dynamo.polyfill`.

5ee070266f/torch/_dynamo/variables/builtin.py (L97-L107)

Example:

```python
>>> import operator
>>> operator.indexOf([1, 2, 3, 4, 5], 3)
2

>>> torch.compile(operator.indexOf, fullgraph=True)([1, 2, 3, 4, 5], 3)
Unsupported: ...

>>> @torch.compiler.substitute_in_graph(operator.indexOf)
... def indexOf(sequence, x):
...     for i, item in enumerate(sequence):
...         if item is x or item == x:
...             return i
...     raise ValueError("sequence.index(x): x not in sequence")

>>> torch.compile(operator.indexOf, fullgraph=True)([1, 2, 3, 4, 5], 3)
2
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133712
Approved by: https://github.com/jansel
2024-08-19 22:14:33 +00:00
Animesh Jain
6ca68357b3 [dynamo] Save class vt in UserDefinedObjectVariable (#133800)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133800
Approved by: https://github.com/jansel
ghstack dependencies: #133745, #133747, #133746, #133799
2024-08-19 17:21:48 +00:00
Animesh Jain
08f14d5492 [refactor][dynamo][side-effects] Helper function for __new__ for user defined class (#133799)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133799
Approved by: https://github.com/jansel
ghstack dependencies: #133745, #133747, #133746
2024-08-19 17:21:48 +00:00
PyTorch MergeBot
35f36363ec Revert "[dtensor] move DTensor to public namespace (#133113)"
This reverts commit 2ee6b97464.

Reverted https://github.com/pytorch/pytorch/pull/133113 on behalf of https://github.com/wanchaol due to looks like it break some internal type imports ([comment](https://github.com/pytorch/pytorch/pull/133113#issuecomment-2295670911))
2024-08-19 05:00:19 +00:00
Animesh Jain
fed6096e73 [dynamo] Support object.__new__ call (#133746)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133746
Approved by: https://github.com/Skylion007, https://github.com/jansel
ghstack dependencies: #133745, #133747
2024-08-18 07:18:52 +00:00
Animesh Jain
d56a395971 [dynamo] Support os.fspath (#133747)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133747
Approved by: https://github.com/yanboliang, https://github.com/Skylion007, https://github.com/jansel
ghstack dependencies: #133745
2024-08-18 07:18:52 +00:00
Animesh Jain
4dc9795ebf [refactor][easy] Directly call var_getattr method for PythonModuleVariable (#133745)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133745
Approved by: https://github.com/yanboliang
2024-08-17 05:30:01 +00:00
Wanchao Liang
2ee6b97464 [dtensor] move DTensor to public namespace (#133113)
Moving DTensor to be in the public namespace, to formally add the
documentation page that includes all the public APIs. This includes:

* many path renames and path import fixes
* a dedicated doc page without too much content yet (adding in the next
  PRs)
* To preserve BC for users still using `torch.distributed._tensor`,
  I added a shim script to redirect old path calls to the new module

The BC preservation is evidenced by the fact that all DTensor tests still
pass without changing the public imports, so it's safe to land the changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133113
Approved by: https://github.com/XilunWu
ghstack dependencies: #133305, #133306
2024-08-17 05:09:52 +00:00
Li, Xingyuan
dcfa415e6e [Inductor UT] Reuse inductor UT for intel GPU test/inductor/test_compiled_optimizers.py (#133083)
[Inductor UT] Reuse Inductor test case for Intel GPU.
Reuse `test/inductor/test_compiled_optimizers.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133083
Approved by: https://github.com/etaf, https://github.com/jansel, https://github.com/mlazos
2024-08-17 01:15:26 +00:00
Will Feng
f57b00704e [Traceable FSDP2][Dynamo] Support reconstructing CUDA event object within Dynamo graph (#133635)
`torch.cuda.Event` objects are different from `torch.cuda.Stream` in that events are not pooled, meaning we can't look up a previously created CUDA event object by ID. This prevents CUDA event objects created outside of the Dynamo graph from being used within the graph (since Dynamo needs a way to emit a `call_function` line in the graph that retrieves the event object for downstream op use). This PR adds a simple object pool within a Dynamo utility to support looking up CUDA event objects by ID from within the Dynamo graph.

After this PR, if a user creates a CUDA event object outside of the graph and uses that event within the graph, the behavior will exactly match eager.
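A minimal, hedged sketch of that scenario (requires a CUDA device):

```python
import torch

if torch.cuda.is_available():
    event = torch.cuda.Event()  # created outside the compiled region

    @torch.compile(backend="eager", fullgraph=True)
    def fn(x):
        event.record()  # looked up by ID inside the graph
        return x * 2

    out = fn(torch.ones(4, device="cuda"))
    event.synchronize()
```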

Test commands:
- `pytest -rA test/dynamo/test_ctx_manager.py::CtxManagerTests::test_cuda_event_created_outside_of_graph`
- `pytest -rA test/dynamo/test_ctx_manager.py::CtxManagerTests::test_cuda_event_across_graph_break`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133635
Approved by: https://github.com/yifuwang
ghstack dependencies: #133532, #133531, #133636
2024-08-16 20:40:46 +00:00
Yanbo Liang
770086fe39 [Dynamo] Support torch.cuda.device ctx manager (#133385)
Fixes #128059

I'm not sure if this is the right way, since Inductor doesn't always respect the device id set by users, so probably we should just wrap it as a null context manager and print a warning. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @jansel @anijain2305 @mlazos @williamwen42
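A minimal, hedged sketch of the now-supported pattern (requires a CUDA device; the device index is illustrative):

```python
import torch

if torch.cuda.is_available():
    @torch.compile(backend="eager", fullgraph=True)
    def fn(x):
        with torch.cuda.device(0):
            return x + 1

    fn(torch.ones(3, device="cuda"))
```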

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133385
Approved by: https://github.com/jansel
2024-08-16 17:05:55 +00:00
Animesh Jain
8a2b064236 [dynamo][user_defined][stable-diffusion] Raise ObservedAttributeError on UserDefinedObject var_getattr (#132806)
Fixes https://github.com/pytorch/pytorch/issues/132551

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132806
Approved by: https://github.com/williamwen42
2024-08-16 04:30:06 +00:00
Animesh Jain
8a5708ba3d [dynamo] Support object creation of classes with custom __new__ (#132977)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132977
Approved by: https://github.com/jansel
2024-08-16 03:09:23 +00:00
Edward Z. Yang
90d2593b3e Revert #132806, #132736, #132539, #132487 (#133570)
This reverts commit 25df063f04.
This reverts commit de00c79583.
This reverts commit 419b76c4ac.
This reverts commit bc57d5b6ff.

Differential Revision: [D61335013](https://our.internmc.facebook.com/intern/diff/D61335013)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133570
Approved by: https://github.com/albanD, https://github.com/jansel, https://github.com/anijain2305
2024-08-15 20:54:21 +00:00
Xuehai Pan
758a0a88a2 [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200)
This PR removes unnecessary `pass` statements. This is semantically safe because the bytecode for the Python code does not change.

Note that if there is a docstring in the function, an otherwise empty function does not need a `pass` statement as a placeholder.
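A minimal illustration of the rule:

```python
# Flagged by PIE790: the pass is redundant next to the docstring.
def noop():
    """Do nothing."""
    pass

# After the cleanup: the docstring alone is a valid function body.
def noop():
    """Do nothing."""
```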

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133200
Approved by: https://github.com/malfet, https://github.com/eqy, https://github.com/kit1980
2024-08-15 15:50:19 +00:00
Isuru Fernando
e554f71d7e Implement filter in dynamo (#131674)
Fixes https://github.com/pytorch/pytorch/issues/128944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131674
Approved by: https://github.com/amjames, https://github.com/jansel
2024-08-14 14:54:13 +00:00
Edward Z. Yang
b5711297a0 Add support for SetVariable.discard (#133317)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133317
Approved by: https://github.com/Skylion007
2024-08-14 09:10:36 +00:00
Will Feng
1206958d89 [Dynamo] add EventVariable reconstruct (#133236)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133236
Approved by: https://github.com/yifuwang
2024-08-14 02:56:11 +00:00
Edward Z. Yang
80ed3e9ccd s/dipatch/dispatch/g (#133192)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133192
Approved by: https://github.com/albanD
2024-08-12 20:26:58 +00:00
Huanyu He
de48d54042 [TorchRec] Add Support for FakeProcessGroup (#133039)
Summary:
# context
* use FakeProcessGroup to mimic the multi-process tests
* can use `_test_compile_fake_pg_fn` as the single-process VB compile test
```
from torchrec.distributed.tests.test_pt2_multiprocess import _test_compile_fake_pg_fn
_test_compile_fake_pg_fn(
    rank=0,
    world_size=2,
)
```

reference: D59637444

Test Plan:
# run test
* run command and results: P1519228952, [tlparse](https://interncache-all.fbcdn.net/manifold/tlparse_reports/tree/logs/.tmpwMCK1E/index.html)
```
TORCH_TRACE=/var/tmp/tt TORCH_SHOW_CPP_STACKTRACES=1 TORCH_LOGS="+all" buck2 run fbcode//mode/opt fbcode//torchrec/distributed/tests:test_pt2_multiprocess
```

Differential Revision: D56124045

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133039
Approved by: https://github.com/ezyang
2024-08-10 01:10:47 +00:00
Tom Ritchford
6beb2be2ed Fix _dynamo.variables.torch_function.global_mangled_class_name (#132744)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132744
Approved by: https://github.com/zou3519
2024-08-09 22:19:01 +00:00
Yiming Zhou
7b8ab7eb3e [dynamo] Partially support random.Random class (#133037)
This partially fixes the graph break issue when instantiating a `random.Random` class in Python.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133037
Approved by: https://github.com/anijain2305
2024-08-09 07:15:42 +00:00
xinyu-intel
5ae979ab10 [Dynamo] Support torch.autograd._is_checkpoint_valid (#132611)
Hi, we got `torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor bool call_function <function _is_checkpoint_valid at 0x7f0b0d22e290>` while tracing the activation [checkpointing function in deepspeed](324ee65cb0/deepspeed/runtime/activation_checkpointing/checkpointing.py (L630)). Consider adding it to the constant_folding list, similar to https://github.com/pytorch/pytorch/pull/126196

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132611
Approved by: https://github.com/anijain2305, https://github.com/williamwen42
2024-08-08 04:05:08 +00:00
Yiming Zhou
c69b2d24e3 [dynamo] Support remove method of set (#132943)
Fixes https://github.com/pytorch/pytorch/issues/132800

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132943
Approved by: https://github.com/anijain2305
2024-08-08 02:43:19 +00:00
Animesh Jain
194ec49d27 [dynamo][lists][stable diffusion] Do not add source on list slice (#132912)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132912
Approved by: https://github.com/williamwen42
ghstack dependencies: #132806, #132899
2024-08-08 02:23:07 +00:00
Animesh Jain
acad2050c1 [easy][dynamo] Add tx as an arg in getitem_const (#132899)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132899
Approved by: https://github.com/yanboliang
ghstack dependencies: #132806
2024-08-07 21:35:41 +00:00
Animesh Jain
25df063f04 [dynamo][user_defined][stable-diffusion] Raise ObservedAttributeError on UserDefinedObject var_getattr (#132806)
Fixes https://github.com/pytorch/pytorch/issues/132551

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132806
Approved by: https://github.com/williamwen42
2024-08-07 18:19:49 +00:00
Joel Schlosser
fb146fc3c6 Only store necessary tensor_dict fields in node meta (#132805)
Fixes #132290

This PR attempts a more invasive / complete solution than the one from #132338, which removes immediate tensor fields from the `tensor_dict` copy stored in node meta. The approach taken here is to store only those fields of the `tensor_dict` which are absolutely utilized somewhere else.

So far, this appears to be limited to:
* `_dynamo_static_input_type`
* `tag` (at least in the tests). Discussion at #94080 appears to indicate this is depended on for export

(CI may point out more)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132805
Approved by: https://github.com/mlazos
2024-08-07 13:35:16 +00:00
xinyu-intel
8333ecf085 Support hasattr tracing for more PythonModuleVariable (#132731)
Fixes #132237

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132731
Approved by: https://github.com/EikanWang, https://github.com/yanboliang
2024-08-07 09:15:17 +00:00
Apurva Jain
8bc5ef563e Grouped Query Attention (#132689)
### Approach: Using the current function declaration

**Constraint:** Q_Heads % KV_Heads == 0

**Major change:**
- Added a new argument enable_gqa: bool to sdpa function call
- It adds meaning to the third-to-last (heads) dimension.

Sample use cases this would enable:
LLama3

```
# LLama3 8b call to SDPA
query = torch.rand(batch, 32, seq_len_q, D)
key = torch.rand(batch, 8, seq_len_kv, D)
value = torch.rand(batch, 8, seq_len_kv, D)

output = scaled_dot_product_attention(query, key, value, is_causal=True, enable_gqa=True)

# Output Shape
(batch, 32, seq_len_q, D)
```

### Design Choice:

- Check if Query.size(-3) == Key.size(-3) == Value.size(-3), or Query.size(-3) % Key.size(-3) == 0
- The function adjusts the key and value tensors to match the query tensor's head dimension by using repeat_interleave if their numbers of heads are not equal, facilitating correct and efficient computation in attention mechanisms.
- By default the enable_gqa flag is set to False, which ensures that regular sdpa functionality remains unchanged.

### Benchmarks:

- **sdpa.py: #130634**
For different batch sizes, enable_gqa=True shows a substantial improvement in the runtime of sdpa

 | batch_size | q_num_heads | kv_num_heads | q_seq_len | kv_seq_len | embed_dim | forward_time when enable_gqa=True   |   forward_time when enable_gqa=False    |
| ------------ | ------------- | -------------- | ----------- | ------------ | ----------- | ----------- | ---------------- |
|     1      |     32      |      8       |   2048    |    2048    |   2048    |   100.71  |  119.70  |
|     8      |     32      |      8       |   2048    |    2048    |   2048    |   539.78  |  628.83  |
|     16     |     32      |      8       |   2048    |    2048    |   2048    |   1056.81  |  1225.48  |
|     32      |     32      |      8       |   2048    |    2048    |   2048    |   2099.54  |  2440.45  |

![Screenshot 2024-07-25 at 9 07 40 PM](https://github.com/user-attachments/assets/a3e5f716-c39f-4096-9e6c-82a735e57b7b)

- **TorchTitan: https://github.com/pytorch/torchtitan/pull/458**

Differential Revision: D60772086

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132689
Approved by: https://github.com/drisspg
2024-08-07 05:35:36 +00:00
Animesh Jain
de00c79583 [dynamo][inline_inbuilt_nn_modules] Mark nn module tensor static for cudagraphs (#132736)
Fixes https://github.com/pytorch/pytorch/issues/132714

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132736
Approved by: https://github.com/mlazos
ghstack dependencies: #132538
2024-08-06 20:13:28 +00:00
Brian Hirsh
e6eee04875 dynamo: use equality guards instead of id guards for Placement/DeviceMesh (#124401)
After talking to @anijain2305, we probably can't land this since it won't work for C++ guards. But we should still be able to do better than ID_MATCH

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124401
Approved by: https://github.com/anijain2305
2024-08-06 17:14:44 +00:00
Michael Lazos
a8f0979962 Add cudagraph static inputs logging (#132726)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132726
Approved by: https://github.com/anijain2305
2024-08-06 12:01:20 +00:00
Aart Bik
a8490a0762 [traced-graph][sparse] propagate sparsity in fx graph (#131920)
This PR proceeds with implementing the feature request #117188 by generalizing more cases that already work with COO to work with the compressed sparse formats as well.

Feature request:
https://github.com/pytorch/pytorch/issues/117188

Rebranch of older PRs (for history):
https://github.com/pytorch/pytorch/pull/131474
https://github.com/pytorch/pytorch/pull/128549

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131920
Approved by: https://github.com/ezyang
2024-08-05 15:49:53 +00:00
William Wen
01cdcbf7c8 [dynamo] revert map/zip iterator related changes (#132528)
Need to revert due to internal hangs: S437700

This reverts commit b6c1490cc0.

Revert "[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725)"

This reverts commit 2576dbbc35.

Revert "[dynamo] add itertools repeat/count bytecode reconstruction (#131716)"

This reverts commit 35b4de32fa.

Revert "[dynamo] add lazy IteratorVariable implementations for map and zip (#131413)"

This reverts commit 7d282d8755.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132528
Approved by: https://github.com/ZainRizvi
2024-08-04 18:46:55 +00:00
Oguz Ulgen
6e79932543 Add basic mypy annotations to dynamo (#132415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132415
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
2024-08-04 18:43:36 +00:00
PyTorch MergeBot
3558a8cf4a Revert "Add basic mypy annotations to dynamo (#132415)"
This reverts commit 71e22e0959.

Reverted https://github.com/pytorch/pytorch/pull/132415 on behalf of https://github.com/ZainRizvi due to Sorry, this PR has entered a weird state in the diff train. Trying to revert it to skip it, and then we can try relanding it ([comment](https://github.com/pytorch/pytorch/pull/132415#issuecomment-2267631785))
2024-08-04 18:39:29 +00:00
PyTorch MergeBot
0a25666f92 Revert "[dynamo] revert map/zip iterator related changes (#132528)"
This reverts commit e81e74ca6c.

Reverted https://github.com/pytorch/pytorch/pull/132528 on behalf of https://github.com/ZainRizvi due to This stack entered a weird state in the diff train. Reverting and relanding to clean the state ([comment](https://github.com/pytorch/pytorch/pull/132528#issuecomment-2267628475))
2024-08-04 18:26:09 +00:00
Animesh Jain
06581c277a [dynamo][stable-diffusion] Support dict(obj) on constrained subclasses of dict and OrderedDict (#132558)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132558
Approved by: https://github.com/jansel
2024-08-03 06:31:00 +00:00
Yanbo Liang
373e9be457 [Inductor][FlexAttention] Add kwarg to top level for users to specify kernel params (#132015)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132015
Approved by: https://github.com/Chillee
2024-08-03 02:27:02 +00:00
Animesh Jain
419b76c4ac [dynamo] Reland 132308, 132314, 132318, 132334 - Make builtin nn modules attributes static (#132539)
Relanding 4 PRs ending at https://github.com/pytorch/pytorch/pull/132334

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132539
Approved by: https://github.com/Skylion007, https://github.com/yanboliang, https://github.com/mlazos
2024-08-03 02:08:22 +00:00
William Wen
f379bbd46d [dynamo] support inspect.signature.bind (#132330)
Fixes https://github.com/pytorch/pytorch/issues/93760.

This was not that small of a task...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132330
Approved by: https://github.com/jansel
ghstack dependencies: #132329
2024-08-02 20:37:05 +00:00
William Wen
e81e74ca6c [dynamo] revert map/zip iterator related changes (#132528)
Need to revert due to internal hangs: S437700

This reverts commit b6c1490cc0.

Revert "[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725)"

This reverts commit 2576dbbc35.

Revert "[dynamo] add itertools repeat/count bytecode reconstruction (#131716)"

This reverts commit 35b4de32fa.

Revert "[dynamo] add lazy IteratorVariable implementations for map and zip (#131413)"

This reverts commit 7d282d8755.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132528
Approved by: https://github.com/ZainRizvi
2024-08-02 19:40:57 +00:00
PyTorch MergeBot
bcb4f7c172 Revert "Grouped Query Attention (#128898)"
This reverts commit 6b28af1b79.

Reverted https://github.com/pytorch/pytorch/pull/128898 on behalf of https://github.com/ZainRizvi due to Sorry, this broke a bunch of tests internally. See D60638265 ([comment](https://github.com/pytorch/pytorch/pull/128898#issuecomment-2265961038))
2024-08-02 18:58:46 +00:00
PyTorch MergeBot
24d0a32f98 Revert "[dynamo] Wrap unspecialized nn module getattr with UnspecializedNNModuleSource (#132308)"
This reverts commit aa0ed2496f.

Reverted https://github.com/pytorch/pytorch/pull/132308 on behalf of https://github.com/anijain2305 due to broke internal tests ([comment](https://github.com/pytorch/pytorch/pull/132308#issuecomment-2265959993))
2024-08-02 18:55:51 +00:00
PyTorch MergeBot
e696f17467 Revert "[dynamo] Track builtin nn modules with UnspecializedBuiltinNNModuleVariable (#132314)"
This reverts commit d6a82ce39b.

Reverted https://github.com/pytorch/pytorch/pull/132314 on behalf of https://github.com/anijain2305 due to broke internal tests ([comment](https://github.com/pytorch/pytorch/pull/132314#issuecomment-2265953367))
2024-08-02 18:52:38 +00:00
PyTorch MergeBot
193a19ee91 Revert "[dynamo] Treat attr of unspecialized buiitin nn modules as static (#132318)"
This reverts commit 7b816d7d6d.

Reverted https://github.com/pytorch/pytorch/pull/132318 on behalf of https://github.com/anijain2305 due to broke internal tests ([comment](https://github.com/pytorch/pytorch/pull/132318#issuecomment-2265945433))
2024-08-02 18:43:32 +00:00
PyTorch MergeBot
b8f7019df0 Revert "[dynamo] Track params/buffers and mark them as static (#132334)"
This reverts commit babb249a89.

Reverted https://github.com/pytorch/pytorch/pull/132334 on behalf of https://github.com/anijain2305 due to broke internal tests ([comment](https://github.com/pytorch/pytorch/pull/132334#issuecomment-2265942261))
2024-08-02 18:41:19 +00:00
Animesh Jain
56f2917bef [dynamo] Bugfix for recently added str handler (#132461)
There is probably more work to do to improve support, but this is a hotfix to avoid failing on `.__func__`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132461
Approved by: https://github.com/williamwen42
ghstack dependencies: #132425
2024-08-02 13:16:39 +00:00
Michael Lazos
d2e9a8bf6d [Reland] Fix inlining module-scoped store global (#132439)
Reland https://github.com/pytorch/pytorch/pull/132224

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132439
Approved by: https://github.com/anijain2305
2024-08-02 09:13:52 +00:00
Animesh Jain
babb249a89 [dynamo] Track params/buffers and mark them as static (#132334)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132334
Approved by: https://github.com/ezyang, https://github.com/mlazos
2024-08-02 08:55:43 +00:00
Yanbo Liang
5ea0f51187 [Dynamo] Support abc.MutableMapping.get (#132363)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132363
Approved by: https://github.com/anijain2305, https://github.com/mlazos
2024-08-02 04:17:35 +00:00
Animesh Jain
6c4ce4331c [dynamo][exception] Raise Observed KeyError exception for dict __getitem__ (#132425)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132425
Approved by: https://github.com/yanboliang, https://github.com/Skylion007
2024-08-02 02:58:31 +00:00
Chen Haifeng
50ed6ce277 Support built-in id function for TensorVariable on parameters (#130100)
Fixes #130087

This patch tries to provide a built-in id function implementation for TensorVariable when the id function is called on tensors like module parameters. The id function call on intermediate tensors is not supported.
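A hedged sketch of the supported pattern (module and attribute names are illustrative):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(3))
        self._weight_id = id(self.weight)

    def forward(self, x):
        # id() on a parameter is now traceable; intermediate tensors are still not.
        if id(self.weight) == self._weight_id:
            return x + self.weight
        return x - self.weight

out = torch.compile(M(), backend="eager")(torch.ones(3))
```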

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130100
Approved by: https://github.com/anijain2305
2024-08-02 01:19:25 +00:00
William Wen
625af2d27c [dynamo] fix add_push_null callsites with CALL_FUNCTION_EX (#132329)
Also fix a bug in `PyCodegen.add_push_null` where in Python <= 3.12, we may accidentally duplicate a NULL instead of the object on the stack before it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132329
Approved by: https://github.com/anijain2305
2024-08-02 00:29:21 +00:00
Oguz Ulgen
71e22e0959 Add basic mypy annotations to dynamo (#132415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132415
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
2024-08-01 20:14:25 +00:00
PyTorch MergeBot
40c8f73099 Revert "Fix inlining module-scoped store global (#132224)"
This reverts commit c3a31d90e7.

Reverted https://github.com/pytorch/pytorch/pull/132224 on behalf of https://github.com/ZainRizvi due to Looks like the new import mock_store_global_crossfile_inline fails internally. Please see D60567756 for details ([comment](https://github.com/pytorch/pytorch/pull/132224#issuecomment-2263768729))
2024-08-01 19:06:36 +00:00
Animesh Jain
7b816d7d6d [dynamo] Treat attr of unspecialized buiitin nn modules as static (#132318)
This fixes the huge increase in compile time with dynamic shapes enabled together with inline_inbuilt_nn_modules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132318
Approved by: https://github.com/yanboliang, https://github.com/mlazos, https://github.com/ezyang
ghstack dependencies: #132302, #132304, #132312, #132308, #132314
2024-08-01 17:11:18 +00:00
Yiming Zhou
ee09d066d3 [dynamo] Add line number to _warn_capture_scalar_outputs() (#132333)
Fixes #127667.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132333
Approved by: https://github.com/anijain2305
2024-08-01 16:11:21 +00:00
Oguz Ulgen
72d2dba992 Add None return type to init (#132335)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132335
Approved by: https://github.com/albanD
2024-08-01 15:26:45 +00:00
Animesh Jain
d6a82ce39b [dynamo] Track builtin nn modules with UnspecializedBuiltinNNModuleVariable (#132314)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132314
Approved by: https://github.com/yanboliang
ghstack dependencies: #132302, #132304, #132312, #132308
2024-08-01 06:21:05 +00:00
Animesh Jain
aa0ed2496f [dynamo] Wrap unspecialized nn module getattr with UnspecializedNNModuleSource (#132308)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132308
Approved by: https://github.com/yanboliang
ghstack dependencies: #132302, #132304, #132312
2024-08-01 06:21:05 +00:00
Animesh Jain
e772547d70 [dynamo][rename/refactor] Rename guard_source NN_MODULE to SPECIALIZED_NN_MODULE (#132302)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132302
Approved by: https://github.com/yanboliang
2024-08-01 04:35:43 +00:00
jainapurva
6b28af1b79 Grouped Query Attention (#128898)
### Approach: Using the current function declaration

**Constraint:** Q_Heads % KV_Heads == 0

**Major change:**
- Added a new argument enable_gqa: bool to sdpa function call
- It adds meaning to the third-to-last (heads) dimension.

Sample use cases this would enable:
LLama3

```
# LLama3 8b call to SDPA
query = torch.rand(batch, 32, seq_len_q, D)
key = torch.rand(batch, 8, seq_len_kv, D)
value = torch.rand(batch, 8, seq_len_kv, D)

output = scaled_dot_product_attention(query, key, value, is_causal=True, enable_gqa=True)

# Output Shape
(batch, 32, seq_len_q, D)
```

### Design Choice:

- Check if Query.size(-3) == Key.size(-3) == Value.size(-3), or Query.size(-3) % Key.size(-3) == 0
- The function adjusts the key and value tensors to match the query tensor's head dimension by using repeat_interleave if their numbers of heads are not equal, facilitating correct and efficient computation in attention mechanisms.
- By default the enable_gqa flag is set to False, which ensures that regular sdpa functionality remains unchanged.

### Benchmarks:

- **sdpa.py: #130634**
For different batch sizes, enable_gqa=True shows a substantial improvement in the runtime of sdpa

 | batch_size | q_num_heads | kv_num_heads | q_seq_len | kv_seq_len | embed_dim | forward_time when enable_gqa=True   |   forward_time when enable_gqa=False    |
| ------------ | ------------- | -------------- | ----------- | ------------ | ----------- | ----------- | ---------------- |
|     1      |     32      |      8       |   2048    |    2048    |   2048    |   100.71  |  119.70  |
|     8      |     32      |      8       |   2048    |    2048    |   2048    |   539.78  |  628.83  |
|     16     |     32      |      8       |   2048    |    2048    |   2048    |   1056.81  |  1225.48  |
|     32      |     32      |      8       |   2048    |    2048    |   2048    |   2099.54  |  2440.45  |

![Screenshot 2024-07-25 at 9 07 40 PM](https://github.com/user-attachments/assets/a3e5f716-c39f-4096-9e6c-82a735e57b7b)

- **TorchTitan: https://github.com/pytorch/torchtitan/pull/458**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128898
Approved by: https://github.com/drisspg
2024-07-31 22:58:51 +00:00
Xuehai Pan
e74ba1b34a [BE][Easy][15/19] enforce style for empty lines in import segments in torch/_d*/ (#129767)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129767
Approved by: https://github.com/anijain2305
2024-07-31 21:18:11 +00:00
Michael Lazos
c3a31d90e7 Fix inlining module-scoped store global (#132224)
Fixes https://github.com/pytorch/pytorch/issues/132165

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132224
Approved by: https://github.com/anijain2305
2024-07-31 17:37:43 +00:00
datagero
bdd7a0322d [Dynamo] Fix - str handler for UserDefinedObjectVariable (#130506)
Fixes #130301

Adjusted the call_str method to handle str conversion for UserDefinedObjectVariable.
Re-attempted in a clean branch due to unrelated test errors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130506
Approved by: https://github.com/oulgen, https://github.com/anijain2305
2024-07-31 16:39:59 +00:00
Luca Wehrstedt
f4f7aba75d Expose function to probe whether PyTorch was built with FlashAttention (#131894)
This is needed by downstream projects (e.g., xFormers) to determine whether they can count on FlashAttention in PyTorch or whether they need to build it themselves.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131894
Approved by: https://github.com/drisspg, https://github.com/eqy
2024-07-31 11:33:09 +00:00
ekamiti
9e473fd868 Make adding Buffers more like adding Parameters (#125971)
Add semantics for creating a buffer object similar to those for creating a parameter. This is done by introducing a new Buffer class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the register_buffer method has not been changed. The persistent parameter on the Buffer type indicates whether a buffer object should be persistent or not. Other non-test changes have to do with getting the new Buffer type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the Buffer type can be used as a drop-in replacement for register_buffer, since it just leads to register_buffer being called. This new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.
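A hedged sketch of the described semantics, assuming the new class is exposed as `torch.nn.Buffer`:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning a Buffer registers it the way self.register_buffer(...) would.
        self.running_mean = nn.Buffer(torch.zeros(8), persistent=True)

    def forward(self, x):
        return x - self.running_mean

m = Model()
assert "running_mean" in dict(m.named_buffers())
```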

Fixes #35735

Co-authored-by: Mikayla Gawarecki <mikaylagawarecki@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125971
Approved by: https://github.com/albanD, https://github.com/anijain2305, https://github.com/mlazos
2024-07-31 10:32:40 +00:00
rzou
19db4f6014 [capture_triton] fix special kwargs path (#132143)
I didn't test this path when creating the orchestrator. This PR fixes
that path to work in the capture_triton path. The problem is that we are
handling a value that is an int (in the capture_triton path) and a
ConstantVariable (in the Dynamo triton path) so we abstract that out in
the orchestrator.

Test Plan:
- new tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132143
Approved by: https://github.com/oulgen
2024-07-30 20:30:40 +00:00
Guilherme Leobas
a843178529 Let dynamo inline functional_call (#128646)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128646
Approved by: https://github.com/zou3519
2024-07-30 14:22:23 +00:00
PyTorch MergeBot
499ead96ff Revert "Grouped Query Attention (#128898)"
This reverts commit d039b14207.

Reverted https://github.com/pytorch/pytorch/pull/128898 on behalf of https://github.com/albanD due to Broken test on main ([comment](https://github.com/pytorch/pytorch/pull/128898#issuecomment-2258314481))
2024-07-30 13:11:24 +00:00
Animesh Jain
03e058189e [dynamo] Support dict unpack of MutableMapping objects (#131961)
Fixes https://github.com/pytorch/pytorch/issues/128067

The basic functionality was already introduced earlier. This just ensures
that we support UserDefinedObjectVariable.
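A hedged sketch of one such pattern (class and key names are illustrative):

```python
import torch
from collections.abc import MutableMapping

class MyMapping(MutableMapping):
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

@torch.compile(backend="eager", fullgraph=True)
def fn(x, m):
    kwargs = {**m}  # dict unpack of a user-defined MutableMapping
    return x * kwargs["scale"]

out = fn(torch.ones(3), MyMapping({"scale": 2.0}))
```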

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131961
Approved by: https://github.com/williamwen42, https://github.com/mlazos, https://github.com/yanboliang
ghstack dependencies: #131827, #131956
2024-07-30 05:49:58 +00:00
Animesh Jain
13457d1da0 [dynamo][log] Suggest to use pytree when graph-break on optree (#131827)
Discovered while working on https://github.com/pytorch/pytorch/issues/121369
On the model above, the log looks like this

~~~
/home/anijain/local/pytorch2/torch/_dynamo/variables/functions.py:698: UserWarning: Graph break for an optree C/C++ function optree._C.PyCapsule.flatten. Consider using torch._utils.pytree - https://github.com/pytorch/pytorch/blob/main/torch/utils/_pytree.py.
  torch._dynamo.utils.warn_once(msg)
/home/anijain/local/pytorch2/torch/_dynamo/variables/functions.py:698: UserWarning: Graph break for an optree C/C++ function optree.PyCapsule.unflatten. Consider using torch._utils.pytree - https://github.com/pytorch/pytorch/blob/main/torch/utils/_pytree.py.
  torch._dynamo.utils.warn_once(msg)
  ~~~
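A hedged sketch of the suggested replacement, using the torch.utils._pytree module the warning links to:

```python
import torch
import torch.utils._pytree as pytree

@torch.compile(backend="eager", fullgraph=True)
def fn(inputs):
    leaves, spec = pytree.tree_flatten(inputs)  # no optree graph break
    return sum(leaves)

fn({"a": torch.ones(2), "b": [torch.ones(2), torch.ones(2)]})
```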

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131827
Approved by: https://github.com/zou3519, https://github.com/mlazos
2024-07-30 05:49:58 +00:00
William Wen
b6c1490cc0 [dynamo] make more unpack_var_sequence calls forced (#132069)
Fixes [T197204962](https://www.internalfb.com/intern/tasks/?t=197204962) (example failure: https://www.internalfb.com/intern/testinfra/diagnostics/11540474088277914.281475138576374.1722221031/)

Added tests contain a simple repro for the observed failure (`test_map_unpack_vars`).

Also fixes https://github.com/pytorch/pytorch/issues/132044

Differential Revision: [D60420335](https://our.internmc.facebook.com/intern/diff/D60420335)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132069
Approved by: https://github.com/anijain2305
2024-07-30 02:30:08 +00:00
jainapurva
d039b14207 Grouped Query Attention (#128898)
### Approach: Using the current function declaration

**Constraint:** Q_Heads % KV_Heads == 0

**Major change:**
- Added a new argument `enable_gqa: bool` to the sdpa function call
- It gives meaning to the third-to-last (num_heads) dimension, which may now differ between query and key/value.

Sample use cases this would enable:
LLama3

```
# LLama3 8b call to SDPA
query = torch.rand(batch, 32, seq_len_q, D)
key = torch.rand(batch, 8, seq_len_kv, D)
value = torch.rand(batch, 8, seq_len_kv, D)

output = scaled_dot_product_attention(query, key, value, is_causal=True, enable_gqa=True)

# Output Shape
(batch, 32, seq_len_q, D)
```

### Design Choice:

- Check whether Query.size(-3) == Key.size(-3) == Value.size(-3), or Query.size(-3) % Key.size(-3) == 0
- The function adjusts the key and value tensors to match the query tensor's head dimension by using repeat_interleave when their numbers of heads differ, enabling correct and efficient attention computation (see the sketch after this list).
- By default the enable_gqa flag is set to False, which ensures that regular sdpa functionality remains unchanged.
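
A rough sketch of that head-broadcasting step (illustrative only; the real handling is fused inside the SDPA kernels rather than materializing the expanded tensors):

```python
import torch
import torch.nn.functional as F

batch, q_heads, kv_heads, seq_q, seq_kv, D = 2, 32, 8, 16, 16, 64
query = torch.rand(batch, q_heads, seq_q, D)
key = torch.rand(batch, kv_heads, seq_kv, D)
value = torch.rand(batch, kv_heads, seq_kv, D)

# "Manual GQA": replicate each KV head q_heads // kv_heads times so the head
# dimensions line up, then call regular SDPA. enable_gqa=True is meant to give
# the same result without materializing the repeated tensors.
rep = q_heads // kv_heads
key_exp = key.repeat_interleave(rep, dim=-3)
value_exp = value.repeat_interleave(rep, dim=-3)
out = F.scaled_dot_product_attention(query, key_exp, value_exp, is_causal=True)
print(out.shape)  # torch.Size([2, 32, 16, 64])
```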

### Benchmarks:

- **sdpa.py: #130634**
For different batch sizes, enable_gqa=True shows a substantial improvement in the runtime of sdpa

 | batch_size | q_num_heads | kv_num_heads | q_seq_len | kv_seq_len | embed_dim | forward_time when enable_gqa=True   |   forward_time when enable_gqa=False    |
| ------------ | ------------- | -------------- | ----------- | ------------ | ----------- | ----------- | ---------------- |
|     1      |     32      |      8       |   2048    |    2048    |   2048    |   100.71  |  119.70  |
|     8      |     32      |      8       |   2048    |    2048    |   2048    |   539.78  |  628.83  |
|     16     |     32      |      8       |   2048    |    2048    |   2048    |   1056.81  |  1225.48  |
|     32      |     32      |      8       |   2048    |    2048    |   2048    |   2099.54  |  2440.45  |

![Screenshot 2024-07-25 at 9 07 40 PM](https://github.com/user-attachments/assets/a3e5f716-c39f-4096-9e6c-82a735e57b7b)

- **TorchTitan: https://github.com/pytorch/torchtitan/pull/458**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128898
Approved by: https://github.com/drisspg
2024-07-29 21:49:06 +00:00
PyTorch MergeBot
f72266ecea Revert "Let dynamo inline functional_call (#128646)"
This reverts commit 5aab1acc84.

Reverted https://github.com/pytorch/pytorch/pull/128646 on behalf of https://github.com/clee2000 due to the newly added test dynamo/test_higher_order_ops.py::FuncTorchHigherOrderOpTests::test_functional_call_sequential_params_and_buffers [GH job link](https://github.com/pytorch/pytorch/actions/runs/10147452270/job/28058682000) [HUD commit link](5aab1acc84) is broken, probably a landrace since it passed on PR ([comment](https://github.com/pytorch/pytorch/pull/128646#issuecomment-2256375501))
2024-07-29 16:26:50 +00:00
Guilherme Leobas
5aab1acc84 Let dynamo inline functional_call (#128646)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128646
Approved by: https://github.com/zou3519
ghstack dependencies: #129091, #130490
2024-07-29 15:41:03 +00:00
Guilherme Leobas
1e9cdf7d91 Relax constraints for creating a GenericContextWrappingVariable (#129091)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129091
Approved by: https://github.com/yanboliang, https://github.com/zou3519
2024-07-29 15:40:59 +00:00
Chengji Yao
d47c470f47 [dynamo] implement var_getattr in UserFunctionVariable (#130413)
This PR addresses `getattr` on a UserFunctionVariable. Although this usage is uncommon, it does appear in [Megatron's code](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/tensor_parallel/layers.py#L635).

```
def linear_with_grad_accumulation_and_async_allreduce(...):
    ....
    if not linear_with_grad_accumulation_and_async_allreduce.warned:
        ....
    ....

linear_with_grad_accumulation_and_async_allreduce.warned = False
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130413
Approved by: https://github.com/yanboliang
2024-07-29 08:29:59 +00:00
Oguz Ulgen
75c8d59ea1 Remove mypy ignore from torch/_dynamo/variables/lazy.py (#131785)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131785
Approved by: https://github.com/aorenste, https://github.com/zou3519
ghstack dependencies: #131786, #131870
2024-07-28 17:13:53 +00:00
Oguz Ulgen
96c1862e0b Remove mypy ignore from torch/_dynamo/variables/__init__.py (#131784)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131784
Approved by: https://github.com/aorenste, https://github.com/zou3519, https://github.com/Skylion007
2024-07-27 05:07:33 +00:00
Vishwa Raj Singh
cd53698df0 Add hpu backend support for dynamo torchVariable _in_graph_classes() function (#129948)

A recent change,
f657b2b1f8 (diff-4a52059570bb96333d8383ce6a9d01bbb114c5e34aff6028f820899ca39b5a26R80), hard-coded the flow to the CUDA stream in the in-graph function. For a non-CUDA backend (HPU in our case), this breaks the graph.

As part of this PR, we add HPU backend support to the dynamo variables function `_in_graph_classes()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129948
Approved by: https://github.com/yanboliang
2024-07-26 18:38:03 +00:00
William Wen
2576dbbc35 [dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725)
Fixes https://github.com/pytorch/pytorch/issues/112794.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131725
Approved by: https://github.com/anijain2305
ghstack dependencies: #131413, #131716
2024-07-26 17:17:09 +00:00
William Wen
35b4de32fa [dynamo] add itertools repeat/count bytecode reconstruction (#131716)
Also fix bugs in the count iterator variable implementation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131716
Approved by: https://github.com/anijain2305
ghstack dependencies: #131413
2024-07-26 17:17:09 +00:00
Brian Hirsh
8bb9aa93a7 dynamo: mutations on .data should be invisible to autograd (#131403)
Fixes https://github.com/pytorch/pytorch/issues/121353

Our handling of `.data` in dynamo today basically just converts `y = x.data` into `y = x.detach()`. The semantics of these two ops are not quite the same, because:

(1) any future mutations on `x.data` will be fully ignored by autograd
(2) any mutations on `x.detach()` will bump x's version counter
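
A quick eager-mode illustration of that difference (a minimal sketch relying on the documented `.data` / `.detach()` semantics, not code from this PR):

```python
import torch

x = torch.ones(3, requires_grad=True)

# Mutating through .data leaves x's version counter untouched, so autograd
# never notices the mutation.
v0 = x._version
x.data.add_(1.0)
assert x._version == v0

# Mutating through .detach() shares the version counter, so the mutation is
# visible to autograd.
x.detach().add_(1.0)
assert x._version == v0 + 1
```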

the linked model does a .data mutation that is hidden from autograd in eager, but ends up erroring during AOTDispatcher tracing.

I updated dynamo's handling so that:

(1) when dynamo sees a call to `getattr(tensor, "data")` and calls `.detach()`, we set a flag on the returned `TensorVariable` indicating that it came from `.data`

(2) on any tensor method that we call with an input `TensorVariable` with this flag turned on, we proxy autograd's `preserve_version_counter` logic into the graph, to properly reset the VC after the op is run.

One thing to note is that I don't actually do this on every op that we pass the tensor to: I only do it for tensor methods that appear to be mutations (by checking for a trailing underscore). My thought was that:

(1) I didn't want to do this for **every** op that you pass `y` into, since that will e.g. triple the number of nodes in the graph, and could cause compile time regressions if you use .data

(2) this situation is pretty rare in general, and I'm hoping that "tensor method mutations" cover most reasonable mutation cases. If we manage to miss a case, you will get a loud error during tracing anyway, so there is not a safety issue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131403
Approved by: https://github.com/anijain2305, https://github.com/zou3519
2024-07-26 14:22:20 +00:00
Yanbo Liang
e76e566cfb [Dynamo] Support zip_longest (#131497)
Fixes #121348
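
A minimal usage sketch of what this enables (illustrative only; not the PR's test case):

```python
import itertools

import torch


@torch.compile
def fn(a, b):
    out = torch.zeros_like(a)
    # zip_longest pads the shorter iterable with fillvalue instead of stopping.
    for x, y in itertools.zip_longest([1, 2, 3], [10, 20], fillvalue=0):
        out = out + x * a + y * b
    return out


print(fn(torch.ones(2), torch.ones(2)))
```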

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131497
Approved by: https://github.com/mlazos, https://github.com/jansel, https://github.com/zou3519
2024-07-26 14:06:10 +00:00
William Wen
7d282d8755 [dynamo] add lazy IteratorVariable implementations for map and zip (#131413)
Fixes https://github.com/pytorch/pytorch/issues/130750.

Repro of lazy/eager `map` discrepancy without `islice`:
```python
def fn(a, b):
    y = 1

    def f(x):
        nonlocal y
        y += 1
        return x

    # With eager unpacking of map (the old behavior), f runs for all four
    # elements and y ends up 5; with correct lazy semantics, zip stops once
    # [a, b] is exhausted, f runs only twice, and y ends up 3.
    l = list(zip([a, b], map(f, [1, 2, 3, 4])))
    return a + y
```

The major change is that we implement `MapVariable` and `ZipVariable` based on `IteratorVariable`. Before, `map` and `zip` were being traced by immediately unpacking the result as a `TupleVariable`, which is wrong in cases such as the example above.

`MapVariable`s are not allowed to be unpacked while `ZipVariable`s can only be unpacked if all of its iterables can also be unpacked.

We also add new `[has_]force_unpack_var_sequence` methods to `VariableTracker` for the case where it is safe to unpack the entire sequence lazily, e.g., when building a list from a map (i.e. `list(map(f, ...))`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131413
Approved by: https://github.com/anijain2305
2024-07-26 10:47:38 +00:00
Animesh Jain
a617919541 [dynamo] Do not guard on keys for _forward_hooks and _forward_pre_hooks (#131682)
Fixes https://github.com/pytorch/pytorch/issues/125836

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131682
Approved by: https://github.com/bdhirsh
2024-07-26 04:39:54 +00:00
Animesh Jain
2a4ca5ccc4 [dynamo] Pop the exception stack on handling the StopIteration natively (#131801)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131801
Approved by: https://github.com/yanboliang
ghstack dependencies: #131795
2024-07-25 23:33:19 +00:00
Michael Lazos
51f4f87718 [Reland] Ensure staticmethods can be allowed in graph (#131789)
Fixes https://github.com/pytorch/pytorch/issues/124735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131789
Approved by: https://github.com/anijain2305
2024-07-25 22:54:18 +00:00
Yidi Wu
ffc6bf8149 [dynamo] lazily guard and specialize on the symint when used in f-string. (#131529)
Fixes https://github.com/pytorch/pytorch/issues/103602.

This PR implements the idea from the issue above: "if someone creates a string and then ends up not using it, we would prefer to NOT have specialized." Specifically, we create a lazy variable tracker instead of a ConstantVariable when handling FORMAT_VALUE, and when the lazy variable tracker is realized (i.e., it is actually going to be used), we create a ConstantVariable, so the specialization/guarding happens at realization time.
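
A sketch of the motivating pattern (hypothetical example; the PR's actual tests differ):

```python
import torch


@torch.compile(dynamic=True)
def fn(x):
    # The f-string mentions a symbolic size. Building it used to specialize
    # (guard) on x.shape[0] immediately; with lazy realization, the guard is
    # only installed if the string is actually used.
    debug_msg = f"running with batch size {x.shape[0]}"  # noqa: F841
    return x * 2


fn(torch.randn(8, 4))
fn(torch.randn(16, 4))  # ideally hits the same dynamic graph, no recompile
```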

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131529
Approved by: https://github.com/ezyang
2024-07-25 16:16:34 +00:00
Oguz Ulgen
7a42470bcb Annotate all InstructionTranslator (#131509)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131509
Approved by: https://github.com/zou3519
2024-07-24 23:45:53 +00:00
PyTorch MergeBot
236e06f9f9 Revert "Ensure staticmethods can be allowed in graph (#130882)"
This reverts commit 93fdd0237d.

Reverted https://github.com/pytorch/pytorch/pull/130882 on behalf of https://github.com/clee2000 due to torchrec test still broken internally D59945836 ([comment](https://github.com/pytorch/pytorch/pull/130882#issuecomment-2249003059))
2024-07-24 22:32:41 +00:00
PyTorch MergeBot
5db5865614 Revert "Annotate all InstructionTranslator (#131509)"
This reverts commit eafbd20f23.

Reverted https://github.com/pytorch/pytorch/pull/131509 on behalf of https://github.com/clee2000 due to sorry need to revert this to revert something else, I think you only need to rebase and remerge ([comment](https://github.com/pytorch/pytorch/pull/131509#issuecomment-2249000843))
2024-07-24 22:29:49 +00:00
Oguz Ulgen
b56939dae1 Annotate more InstructionTranslator (#131680)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131680
Approved by: https://github.com/zou3519
ghstack dependencies: #131676
2024-07-24 22:14:29 +00:00
Edward Z. Yang
0c6f1ca064 Introduce torch._dynamo.config.enable_compiler_collectives for syncing compilation across ranks (#130935)
This PR implements an opt-in configuration option for synchronizing compilation across all ranks at the end of Dynamo tracing (and potentially, other places in the future). There are two pieces to this PR:

1. Implementing infrastructure for compiler collectives (DistributedState/LocalState, the actual collective)
2. Using this infrastructure to synchronize automatic dynamic choices across all ranks

The infrastructure in part one can be used for other purposes, just add more (serializable) fields to LocalState.

Here is how automatic dynamic synchronization works:

1. Preflight in "torch/_dynamo/variables/builder.py": On the first Dynamo trace run, we trace without automatic dynamic at all; we assume all Tensor inputs that are not otherwise marked are static. This run is purely to collect all Tensor input sizes in the program.
2. torch/_dynamo/output_graph.py: At the end of the first Dynamo trace run, we perform a compiler collective to distribute all Tensor input sizes to all ranks. Then, we restart Dynamo.
3. Apply the updates in "torch/_dynamo/variables/builder.py": Now that we have all sizes for every rank, we update frame state with the observed sizes for all ranks, in rank order. Under the assumption that frame state is consistent on all ranks, this series of updates preserves consistency.

For future work, it would be safer if we force a consistent hint on all ranks; this is more involved as we have to interpose in fakification.
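
To opt in, a user would flip the config flag before compiling; a minimal sketch (the flag name comes from this PR, everything else is illustrative and assumes torch.distributed is already initialized on each rank):

```python
import torch
import torch._dynamo

torch._dynamo.config.enable_compiler_collectives = True  # opt-in sync across ranks


@torch.compile
def step(x):
    return (x * 2).sum()


# Each rank may see differently shaped inputs; the compiler collective shares
# the observed sizes so every rank makes the same automatic-dynamic decision.
out = step(torch.randn(8, 4))
```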

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130935
Approved by: https://github.com/jansel
2024-07-24 11:24:11 +00:00
Oguz Ulgen
eafbd20f23 Annotate all InstructionTranslator (#131509)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131509
Approved by: https://github.com/zou3519
2024-07-24 05:31:01 +00:00
Michael Lazos
9575b1afad Ensure tensor dict is populated with compiled autograd (#131556)
The issue addressed is that compiled autograd changes the calling convention of the FX graph to have only a single placeholder containing a list of inputs. In this case, the metas of the tensor input nodes don't contain the `tensor_dict` entry. This PR adds them.

The context is that `tensor_dict` is used to convey if a tensor is an input with a static address.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131556
Approved by: https://github.com/anijain2305
2024-07-24 04:00:02 +00:00
Animesh Jain
6850e42266 [dynamo][exception] Remove older specialization for StopIteration (#131512)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131512
Approved by: https://github.com/yanboliang
ghstack dependencies: #131347, #131367, #131378, #131389, #131405, #131480
2024-07-24 00:06:53 +00:00
Aaron Orenstein
5a0068cc69 [BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from the functions they decorate, so even if the underlying function is fully typed, its callers get no benefit from the annotations.
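
A minimal illustration of the problem and the typed alternative (not from the PR; requires Python 3.10+ for ParamSpec):

```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")


def untyped_deco(fn):
    # No annotations: mypy treats the decorated function as Any,
    # so callers lose the original signature entirely.
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper


def typed_deco(fn: Callable[P, R]) -> Callable[P, R]:
    # ParamSpec preserves the decorated function's full signature.
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return fn(*args, **kwargs)
    return wrapper


@untyped_deco
def add(x: int, y: int) -> int:
    return x + y  # callers now see 'Any'


@typed_deco
def mul(x: int, y: int) -> int:
    return x * y  # callers still see '(x: int, y: int) -> int'
```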

Step 1 - Enable the error and override in all the offending files.

#131429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
Michael Lazos
93fdd0237d Ensure staticmethods can be allowed in graph (#130882)
Fixes https://github.com/pytorch/pytorch/issues/124735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130882
Approved by: https://github.com/anijain2305, https://github.com/williamwen42
2024-07-23 18:59:19 +00:00