Commit Graph

145 Commits

Author SHA1 Message Date
Jason Ansel
f4c67ffff4 [dynamo] Improve support for dynamic shapes str.format and _assert (#115203)
This removes a graph break in vision_maskrcnn.
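A minimal sketch (hypothetical pattern, not the exact maskrcnn code) of the kind of assert with a formatted message over a dynamic shape that this targets:

```python
import torch

def f(x):
    # an assert whose message str.format()s a symbolic size; patterns like this
    # previously forced a graph break under dynamic shapes
    assert x.shape[0] > 0, "expected a non-empty batch, got {}".format(x.shape[0])
    return x.relu()

torch.compile(f, dynamic=True)(torch.randn(4, 3))
```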

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115203
Approved by: https://github.com/yanboliang
2023-12-06 04:54:45 +00:00
Yanbo Liang
4620170008 [Dynamo] Revert multiple PRs since they triggered compilation stuck internally (#115126)
Revert the following PRs to mitigate internal compilation stuck:
#113432
#114016
#114507
#114196
#114739
#114669

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115126
Approved by: https://github.com/xush6528
2023-12-05 22:35:37 +00:00
Jason Ansel
4b8ddbbc7e [dynamo] Improve graph break message for copy.deepcopy (#115120)
I was curious what hf_T5_generate was trying to deepcopy, so I updated the error message:
Before:
```
STATS graph_break
  ("'skip function deepcopy in file /home/jansel/conda/envs/pytorch/lib/python3.10/copy.py'', skipped according skipfiles.SKIP_DIRS'", 3)
  ...
```
After:
```
STATS graph_break
  ('copy.deepcopy UserDefinedObjectVariable(GenerationConfig)', 3)
  ...
```

Related issue: #115122

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115120
Approved by: https://github.com/oulgen
ghstack dependencies: #115095, #115046, #115057, #115119
2023-12-05 19:01:31 +00:00
Jason Ansel
522bae20df [dynamo] Support any() on SymNodeVariable (#115119)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115119
Approved by: https://github.com/yanboliang
ghstack dependencies: #115095, #115046, #115057
2023-12-05 19:01:31 +00:00
Xuehai Pan
2e8ac5ea93 [dynamo] support dict.fromkeys() / OrderedDict.fromkeys() / defaultdict.fromkeys() (#115010)
Add support for `dict.fromkeys`, `OrderedDict.fromkeys`, and `defaultdict.fromkeys`.

Fixes #114963

- #114963
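A hedged sketch of what this enables inside a compiled function (illustrative only, not the PR's test code):

```python
import torch
from collections import OrderedDict, defaultdict

def f(x):
    # classmethod constructors that dynamo can now trace
    d = dict.fromkeys(["a", "b"], 0)
    od = OrderedDict.fromkeys(("x", "y"))
    dd = defaultdict.fromkeys([1, 2], 0)
    return x + len(d) + len(od) + len(dd)

torch.compile(f, fullgraph=True)(torch.ones(3))
```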

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115010
Approved by: https://github.com/jansel
2023-12-04 01:49:59 +00:00
voznesenskym
4cfe997490 [dynamo] handle setting .data on a tensor (#113080)
**Dynamo**

We don't want setattr in the graph. Setting data has interesting implications on both aliasing and on the autograd engine.

The safe recipe is:

1) Disable grad
2) Call set_()
3) Manually lower the version counter on the object to hide it from the autograd engine

This is effectively the same exact thing as setting .data, and it composes properly with aot_autograd and inductor.
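A hedged sketch of that recipe in user-level code; the private version-counter helper name is an assumption and may differ across versions, and this is not the exact code dynamo emits:

```python
import torch

a = torch.randn(6, requires_grad=True)
b = torch.randn(3)

with torch.no_grad():   # 1) disable grad so autograd ignores the mutation
    a.set_(b)           # 2) swap in the new storage/metadata, like `a.data = b`
# 3) lower the version counter so the autograd engine does not see the bump
#    (assumed private helper; the exact API may vary by version)
torch._C._autograd._unsafe_set_version_counter(a, 0)
```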

**aot_autograd**

For aot_autograd, there's another snag.

Specifically, when we invoke aot_autograd, we call `fake_mode.from_tensor()`, relying on memo to get the right tensor out. For .data mutations, this doesn't work, because the memoized fake_tensor is in the state it will be in at the end of the trace, not at the beginning. This means that the .data call is already applied, and the tensor shape (as in the case of these tests) mismatches. aot_autograd produces an invalid graph, with illegal calls like `torch.ops.aten.view.default(primals_2, [0])` where primals is actually sized `([6])` on input.

The new plan here is to:
1) Record tensor fakification policy in dynamo
2) provide a fresh fake mode to all backends
3) Invoke from_tensor with the stored policy to get fresh new fake tensors in aot_autograd

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113080
Approved by: https://github.com/bdhirsh
2023-12-02 00:35:44 +00:00
Jon Chuang
172a103857 [dynamo] strict=True kwarg for zip (#114047)
Fixes https://github.com/pytorch/pytorch/issues/113894
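A hedged sketch (requires Python 3.10+ for the `strict` kwarg; illustrative only):

```python
import torch

def f(xs, ys):
    # strict=True raises if the iterables have mismatched lengths;
    # dynamo can now trace this instead of falling back to eager
    return [x + y for x, y in zip(xs, ys, strict=True)]

xs = [torch.ones(2), torch.ones(2)]
ys = [torch.zeros(2), torch.zeros(2)]
torch.compile(f, fullgraph=True)(xs, ys)
```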

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114047
Approved by: https://github.com/ezyang
2023-11-22 08:48:51 +00:00
ydwu4
9e657ce2ed [HigherOrderOp] set should_flatten_output=True for cond (#113819)
This PR adds should_flatten_output=True for cond. This effectively allows cond to support pytree outputs, with the output being flattened. Note: a single tensor output will automatically be cast to a tuple for torch.ops.higher_order.cond.

This PR also adds support for comparing BuiltinVariables (e.g., tuple); this is to make sure dynamo can inline the comparison of two tree_specs and verify that both branches return the same tree_spec.
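A hedged sketch of the behavior this enables, assuming the `torch.cond` entry point (older builds expose cond via functorch.experimental.control_flow):

```python
import torch

def true_fn(x):
    return {"out": x.sin(), "aux": x + 1}   # pytree (dict) output

def false_fn(x):
    return {"out": x.cos(), "aux": x - 1}   # must produce the same tree spec

def f(pred, x):
    # the dict output is flattened for torch.ops.higher_order.cond and
    # unflattened again for the caller
    return torch.cond(pred, true_fn, false_fn, (x,))

compiled = torch.compile(f, fullgraph=True)
print(compiled(torch.tensor(True), torch.randn(3))["out"])
```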

Test Plan:
Existing tests. Will add more pytree tests and update the documentation in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113819
Approved by: https://github.com/zou3519
2023-11-22 04:06:30 +00:00
Yanbo Liang
033d7b670a [Dynamo][6.1/N] Refactor out TorchInGraphFunctionVariable and improve heuristic (#113432)
This is split from #113009; please check https://github.com/pytorch/pytorch/pull/113009#issuecomment-1804417925 for more details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113432
Approved by: https://github.com/ezyang
2023-11-17 23:42:00 +00:00
Jon Chuang
c94fdebd3e [dynamo] chore: Fallback on const_handler instead of special-casing on ConstantVariable (#113893)
Fixes https://github.com/pytorch/pytorch/pull/113874#issuecomment-1815269686

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113893
Approved by: https://github.com/ezyang
2023-11-17 07:46:58 +00:00
Jon Chuang
277229d0c6 [dynamo] Fix incorrectly casting SymNode to int when input is bool (#113871)
Fixes https://github.com/pytorch/pytorch/issues/113393, https://github.com/pytorch/pytorch/pull/113848#issuecomment-1814624510

Incorrectly casting the SymNode type causes it to take the wrong path in symbolic_shapes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113871
Approved by: https://github.com/jansel
2023-11-16 23:24:57 +00:00
PyTorch MergeBot
5d170fce29 Revert "Support tensors as Dict keys (#111196)"
This reverts commit b0805fa5d0.

Reverted https://github.com/pytorch/pytorch/pull/111196 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing internally. I will provide the details there ([comment](https://github.com/pytorch/pytorch/pull/111196#issuecomment-1813410149))
2023-11-15 23:08:00 +00:00
Philip Meier
9146ca6a07 use sourceless builder for builtin getattr (#113340)
In TorchVision we use the following (simplified) dispatch mechanism:

```python
import torch

def kernel1(tensor):
    return tensor + 2

def dispatcher1(input):
    kernel = get_kernel(dispatcher1, type(input))
    return kernel(input)

def kernel2(tensor):
    return tensor - 2

def dispatcher2(input):
    kernel = get_kernel(dispatcher2, type(input))
    return kernel(input)

# We actually use the function and type as keys, rather than their names.
# However, this is currently not supported, but it should be easy to add after
# https://github.com/pytorch/pytorch/pull/111196
REGISTRY = {
    "dispatcher1": {"Tensor": kernel1},
    "dispatcher2": {"Tensor": kernel2},
}

def get_kernel(dispatcher, input_type):
    dispatcher_registry = REGISTRY[dispatcher.__name__]
    for cls in input_type.__mro__:
        kernel = dispatcher_registry[cls.__name__]
        break
    return kernel
```

This can be compiled without graph breaks:

```python
cfn = torch.compile(dispatcher1, fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 5)

cfn = torch.compile(dispatcher2, fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 1)
```

However, if we start chaining these calls, we hit some issues:

```python
class Pipeline(torch.nn.Module):
    def forward(self, input):
        input = dispatcher1(input)
        input = dispatcher2(input)
        return input

cfn = torch.compile(Pipeline(), fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 3)
```

```
Can't access members of type(obj) for a generated custom object. Please use __class__ instead
```

The error message is not really helpful here. The following happens: when compiling `dispatcher1`, `get_kernel` gets inlined. That means when hitting `dispatcher2`, the `type` call no longer happens on an input with a source. Thus, in the first iteration we hit the top branch, while in the second we hit the bottom:

addb8e29cd/torch/_dynamo/variables/builtin.py (L1264-L1268)

And the error message I posted above originates from the type being treated as constant. This PR replaces this with a `SourcelessBuilder` instead.

With that fix in place, we hit another error, this time pointing to `input_type.__mro__`:

```
AssertionError: Consider SourcelessBuilder for ephemeral objects, usually objects created locally.
```

The fix is similar: instead of using a `VariableBuilder` here, we use a `SourcelessBuilder` when we have no `source`:

addb8e29cd/torch/_dynamo/variables/builtin.py (L1167-L1168)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113340
Approved by: https://github.com/peterbell10, https://github.com/lezcano
2023-11-15 13:01:20 +00:00
PyTorch MergeBot
77f66ade66 Revert "use sourceless builder for builtin getattr (#113340)"
This reverts commit d64bc8f0f8.

Reverted https://github.com/pytorch/pytorch/pull/113340 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but the test is failing internally ([comment](https://github.com/pytorch/pytorch/pull/113340#issuecomment-1811684167))
2023-11-15 02:06:00 +00:00
lezcano
b0805fa5d0 Support tensors as Dict keys (#111196)
This prepares the PR where we implement sets in terms of dicts.
To do so, rather than storing internally a dictionary that maps literals
to VariableTrackers, it stores (pretty much) a dictionary from VTs to VTs.
To do so, keys are wrapped in an opaque internal class `_Hashable`.
The Hashable class is opaque on purpose so that it fails hard
if it inadvertently leaks back into user code.

We also found and fixed a number of latent bugs and inconsistencies
in the way dynamo checked what can be a dict key. More generally, we
make it much clearer what needs to be modified to add
a new supported key type to Dicts.

Fixes https://github.com/pytorch/pytorch/issues/107595
Fixes https://github.com/pytorch/pytorch/issues/111603
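A hedged sketch of the user-visible capability (illustrative, not taken from the PR's tests):

```python
import torch

def f(x, y):
    # a dict keyed by tensors; lookups go by object identity, which dynamo now tracks
    table = {x: 1.0, y: 2.0}
    return table[x] * y

torch.compile(f, fullgraph=True)(torch.randn(2), torch.randn(2))
```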
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111196
Approved by: https://github.com/jansel
2023-11-14 19:14:03 +00:00
Philip Meier
d64bc8f0f8 use sourceless builder for builtin getattr (#113340)
In TorchVision we use the following (simplified) dispatch mechanism:

```python
import torch

def kernel1(tensor):
    return tensor + 2

def dispatcher1(input):
    kernel = get_kernel(dispatcher1, type(input))
    return kernel(input)

def kernel2(tensor):
    return tensor - 2

def dispatcher2(input):
    kernel = get_kernel(dispatcher2, type(input))
    return kernel(input)

# We actually use the function and type as keys, rather than their names.
# However, this is currently not supported, but it should be easy to add after
# https://github.com/pytorch/pytorch/pull/111196
REGISTRY = {
    "dispatcher1": {"Tensor": kernel1},
    "dispatcher2": {"Tensor": kernel2},
}

def get_kernel(dispatcher, input_type):
    dispatcher_registry = REGISTRY[dispatcher.__name__]
    for cls in input_type.__mro__:
        kernel = dispatcher_registry[cls.__name__]
        break
    return kernel
```

This can be compiled without graph breaks:

```python
cfn = torch.compile(dispatcher1, fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 5)

cfn = torch.compile(dispatcher2, fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 1)
```

However, if we start chaining these calls, we hit some issues:

```python
class Pipeline(torch.nn.Module):
    def forward(self, input):
        input = dispatcher1(input)
        input = dispatcher2(input)
        return input

cfn = torch.compile(Pipeline(), fullgraph=True)
torch.testing.assert_close(int(cfn(torch.tensor(3))), 3)
```

```
Can't access members of type(obj) for a generated custom object. Please use __class__ instead
```

The error message is not really helpful here. The following happens: when compiling `dispatcher1`, `get_kernel` gets inlined. That means when hitting `dispatcher2`, the `type` call no longer happens on an input with a source. Thus, in the first iteration we hit the top branch, while in the second we hit the bottom:

addb8e29cd/torch/_dynamo/variables/builtin.py (L1264-L1268)

And the error message I posted above originates from the type being treated as constant. This PR replaces this with a `SourcelessBuilder` instead.

With that fix in place, we hit another error, this time pointing to `input_type.__mro__`:

```
AssertionError: Consider SourcelessBuilder for ephemeral objects, usually objects created locally.
```

The fix is similar: instead of using a `VariableBuilder` here, we use a `SourcelessBuilder` when we have no `source`:

addb8e29cd/torch/_dynamo/variables/builtin.py (L1167-L1168)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113340
Approved by: https://github.com/peterbell10, https://github.com/lezcano
2023-11-13 14:29:17 +00:00
Edward Z. Yang
0f7ac2635d Uniformly use SourcelessBuilder to handle user defined types (#113390)
Subsumes https://github.com/pytorch/pytorch/pull/110794

Fixes https://github.com/pytorch/pytorch/issues/110315

This is not really a 100% sound fix, a deeper analysis of the bug can be found at https://docs.google.com/document/d/1y-nRAPdbZEji52MPKYzC0U3VhvW9yEAEDqP5t5GhWZ0/edit

The general idea behind the fix here is that we are going to play fast and loose with user-defined classes: as Dynamo is written today, we are willing to pull out these types and directly manipulate them (e.g., look at their `__mro__`, etc.) without an intervening VariableTracker. As such, if I use `python_type` to extract the Python type of a VT, or if I am manually reading out the `__bases__` of a type that may be a user-defined class, and it is sourceless, all I need to do is use SourcelessBuilder instead of ConstantVariable to make sure I wrap it into the correct VT class.

The approach in https://github.com/pytorch/pytorch/pull/110794 was "more correct", but we'd have to go substantially further to get it all working. So I am doing this to unblock suo for now.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113390
Approved by: https://github.com/suo
2023-11-10 07:26:52 +00:00
Steven Troxler
cada6c7fee [dynamo] Fix a bug by desugaring in-place ops on constants (#113117)
Summary:

Python allows users to write code like
```
x = 1
x += y
x += z
```

This code has well-defined semantics: because x is an immutable primitive, the first `+=` will actually re-bind x; it is equivalent to `x = x + y`.

The second in-place operation will either similarly desugar (if the result of `x + y` is itself immutable), or possibly result in a "true" in-place operation.

Now, this is a problem for us because today, dynamo tries to both resolve constant variables to their literal values at compile time and also compile in a way that treats `operator.*` builtin functions consistently. This leads to a bug where code like
```
x = 1
x += y
```
actually gets compiled to
```
1 += y
```
which is both semantically meaningless and a syntax error.

A very simple fix that we've already used to fix the special case of `+=` is to detect this, treat it as an edge case, and desugar eagerly into `x = x + y`.

The problem with that fix is that it only patched `iadd`, but actually *all* of the in-place operators exhibit this behavior.

This commit proposes that we tackle all of the in-place operators supported by fx in the same way: eagerly remap the operation to an assignment when the left-hand side is actually an immutable constant.
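A hedged sketch of that remapping (the names here are illustrative, not the actual dynamo helpers):

```python
import operator

# in-place op -> plain binary op; applied only when the left-hand side is an
# immutable constant, so `x += y` is compiled as the rebinding `x = x + y`
INPLACE_TO_BINARY = {
    operator.iadd: operator.add,
    operator.isub: operator.sub,
    operator.imul: operator.mul,
    operator.itruediv: operator.truediv,
    operator.ipow: operator.pow,
}

def desugar(op, lhs_is_immutable_constant):
    return INPLACE_TO_BINARY.get(op, op) if lhs_is_immutable_constant else op
```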

**Alternatives?**

There might be some other fix possible that wouldn't produce a hardcoded remapping; I know that we generally don't like the growth of mappings and blocklists in dynamo.

I'm a little skeptical about a general solution though, because the bug is due precisely to Python's highly dynamic dispatching of inplace operations by type; since the fx graph has to be purely static, I suspect that we actually have to desugar this somewhere, because the dataflow is fundamentally different for true inplace operations on types that define `__iadd__`, etc vs the desugaring on primitives.

I'm open to other suggestions

Test Plan:

I verified that the code in
https://github.com/pytorch/pytorch/issues/112656
compiles with this fix, and the compiled functions produce the same outputs as the originals.

This needs unit tests, but I'd like to get feedback on the approach in the meantime.

Fixes #112656

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113117
Approved by: https://github.com/yanboliang
2023-11-10 00:22:55 +00:00
Jon Chuang
e6eab49e11 [dynamo] graph break on setattr requires_grad (#113163)
Main: `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`
This PR: graph breaks and eagerly applies the mutation; new tensors are tracked

Fixes https://github.com/pytorch/pytorch/issues/109505 (the original bug does not occur, but a new bug where the mutation isn't applied - because AOTAutograd is not `requires_grad` mutation aware - is mitigated)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113163
Approved by: https://github.com/bdhirsh
2023-11-09 13:13:29 +00:00
Wanchao Liang
addb8e29cd Enable 2d + AC torch.compile (#112536)
This PR enables AC + torch.compile to work with FSDP + TP. The fix to the
higher-order op path is that we need to check both tensor and tensor-subclass
bases when deciding to use the sourceless builder.

NOTE: selective AC + 2D is still not working; this needs to be fixed
separately.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112536
Approved by: https://github.com/yf225
2023-11-09 06:12:13 +00:00
PaliC
2c4be77f02 Revert "[dynamo] Graph break on setattr(Tensor, "data", Tensor) (#113043)" (#113297)
This reverts commit ddfe572534.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113297
Approved by: https://github.com/PaliC
2023-11-09 00:26:21 +00:00
PyTorch MergeBot
94d95a91a2 Revert "[dynamo] graph break on setattr requires_grad (#113163)"
This reverts commit d261687d5f.

Reverted https://github.com/pytorch/pytorch/pull/113163 on behalf of https://github.com/PaliC due to relevant tests are not running for this pr, however, this is fixed after landing https://github.com/pytorch/pytorch/pull/113297/ ([comment](https://github.com/pytorch/pytorch/pull/113163#issuecomment-1802967236))
2023-11-09 00:23:04 +00:00
Jon Chuang
d261687d5f [dynamo] graph break on setattr requires_grad (#113163)
Main: `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`
This PR: graph breaks and eagerly applies the mutation; new tensors are tracked

Fixes https://github.com/pytorch/pytorch/issues/109505 (the original bug does not occur, but a new bug where the mutation isn't applied - because AOTAutograd is not `requires_grad` mutation aware - is mitigated)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113163
Approved by: https://github.com/bdhirsh
2023-11-08 19:51:23 +00:00
voznesenskym
3e4d14702a On grad access, check if grad has changed and update stored example grad as needed (#112811)
Fixes https://github.com/pytorch/pytorch/issues/112446

This is a doozy of a PR; there are a few important things to keep in mind here:

1) We MUST lift all tensors accessed via attrs to inputs; getattr is a no-go in the graph, as it violates the aot_autograd contract. Furthermore, aot_autograd does not know how to apply in-place ops to intermediary tensors that are attributes (i.e., from getattr) anyway. Views from ops are fine.

2) `.grad` access handling in dynamo peeks at the underlying value, the real tensor, because re-piping FakeTensors already made with this fake_mode through builder anew is a no go.

3) We have no proper mechanism for updating the hint / grapharg.example (the real value in (2) above) midway through a trace

Therefore, what we need to do is reconcile the difference in the grad stashed on grapharg.example. The easiest way to do this is lazily, upon .grad access, by reading the new value off the right fake tensors. We can then make a tensor using that data as a hint to VariableBuilder to make the right VariableTracker. Note that the example value used here in the PR (torch.zeros) is a dummy value only used as a tracing hint; it does not leak out into real runtime code.

Alternatively, we could implement accumulate_grad_ in python...
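A hedged, user-level sketch of the kind of pattern this addresses (not the exact repro from #112446):

```python
import torch

p = torch.nn.Parameter(torch.randn(4))

@torch.compile
def step(p):
    # reads p.grad; the stored example grad must reflect the current value,
    # not the one observed when the frame was first traced
    return p - 0.1 * p.grad

p.sum().backward()
out1 = step(p)              # first call traces with the initial grad
p.grad = torch.randn(4)     # the grad changes between calls
out2 = step(p)              # must observe the updated grad, not a stale hint
```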

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112811
Approved by: https://github.com/jansel
2023-11-08 05:45:00 +00:00
Jason Ansel
5fe96eaaf4 [dynamo] Remove VariableTracker.propagate (#111726)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111726
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306, #111415, #111725
2023-11-07 19:55:19 +00:00
Jason Ansel
843a8ecd24 [dynamo] Remove VariableTracker.add_options (#111725)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111725
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306, #111415
2023-11-07 19:55:19 +00:00
Jason Ansel
9664190952 [dynamo] Eagerly install guards (#111415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111415
Approved by: https://github.com/voznesenskym
ghstack dependencies: #111306
2023-11-07 19:55:19 +00:00
Jason Ansel
2964682490 [dynamo] Add LazyVariableTracker (#111306)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111306
Approved by: https://github.com/voznesenskym
2023-11-07 19:55:19 +00:00
Jon Chuang
ddfe572534 [dynamo] Graph break on setattr(Tensor, "data", Tensor) (#113043)
Fixes https://github.com/pytorch/pytorch/issues/113030

Alias information needs to be applied in eager before we can continue to trace the graph.

----

Perhaps this is too strict - couldn't we fx trace through the in-graph (pointer) aliasing, and track mutations through fake tensors instead, and still apply the aliasing mutation epilogue for further mutations outside of graph? 🤔

Regardless, it didn't seem to work too well when I tried this. Seems that `Tensor.__setattr__` doesn't work well in fx graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113043
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-11-07 03:56:21 +00:00
Jon Chuang
41720c2a48 [dynamo] add infinite generators itertools.{count, repeat, cycle} (#110967)
Fixes https://github.com/pytorch/pytorch/pull/110953/files#r1352868935

Depends on: https://github.com/pytorch/pytorch/pull/110953

Why not use these for `repeat(item, count)`:
> These are not preferred as they return an opaque VariableTracker. In particular, one cannot do `enumerate(repeat(1))`. `repeat(1, 10)` benefits from the integration enjoyed by `ListVariableIterator`
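A hedged sketch of usage these generators enable in compiled code (illustrative only; not every consumption pattern is covered here):

```python
import itertools
import torch

def f(x):
    counter = itertools.count(start=1, step=2)
    a = next(counter)            # 1
    b = next(counter)            # 3
    reps = itertools.repeat(x)   # infinite, consumed lazily
    return next(reps) * a + next(reps) * b

torch.compile(f, fullgraph=True)(torch.ones(3))
```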

Follow ups:
- [ ] make listiterator an IteratorVariable, define iterator integrations on base IteratorVariable where unspecialized https://github.com/pytorch/pytorch/pull/110967#discussion_r1356656469
    - Please make a new issue for this
- [ ] explore integrating cpython itertools test suite https://github.com/pytorch/pytorch/pull/110967#discussion_r1358326402
- [ ] Use something other than `StopIteration` to handle iterator termination https://github.com/pytorch/pytorch/pull/110967#discussion_r1358336038
- [ ] Add test case for consuming iterator simultaneously from two code points https://github.com/pytorch/pytorch/pull/110967/files#r1358325511

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110967
Approved by: https://github.com/ezyang
2023-11-01 00:33:17 +00:00
Jason Ansel
4b8a5e1854 [dynamo] Remove VariableTracker.as_specialized (#112363)
My local testing can't seem to find this function actually doing anything.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112363
Approved by: https://github.com/yanboliang
2023-10-30 20:07:55 +00:00
Yanbo Liang
061bf1a153 [5/N] Make torch context manager a TorchCtxManagerClassVariable (#111622)
The major change in this PR is to make the torch context manager classes a separate ```TorchCtxManagerClassVariable```, since we have dynamo implementations for these ctx managers.

I considered wrapping them as ```UserDefinedClassVariable``` and dispatching at ```USCVariable.call_function```, but it is almost the same amount of work, and this way is clearer.

This is on the way to moving ```TorchVariable``` to ```TorchFunctionVariable```, which will only handle functions that are allowed in the graph (e.g., ```torch.sin```) or constant folded (e.g., ```torch.is_floating_point```). All other torch functions will go through skip/inline rules and be wrapped as ```UserFunctionVariable``` (for inlined) or ```SkipFilesVariable``` (for skipped).
The next steps:
* Wrap torch modules, classes, objects as regular ```PythonModuleVariable```, ```UserDefinedClassVariable``` and ```UserDefinedObjectVariable```.
* Generate the allow in graph torch functions list and wrap them as ```TorchFunctionVariable```.
* Finally, merge ```skipfiles.check``` and ```is_allowed``` into one function, ```allow_skip.check(fn)```, which would return an Enum of allow, skip, and inline.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111622
Approved by: https://github.com/jansel
2023-10-27 21:26:54 +00:00
lezcano
1774704fc1 [dynamo] Simplify add_dict in preparation to refactor it with call_set (#110523)
The previous implementation had a fair amount of repeated code, and did
things like calling `add_options` where options was always empty (which
is fine, as the guards are already set within ConstDictVariable).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110523
Approved by: https://github.com/yanboliang, https://github.com/jansel
ghstack dependencies: #110522
2023-10-27 20:17:10 +00:00
lezcano
1dcbd1c088 [dynamo] [easy] Move Set to dicts.py (#110522)
A set is more of a dict than a list if you ask me.
This comes before the refactor where we implement sets and dicts via the
same logic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110522
Approved by: https://github.com/jansel
2023-10-27 20:17:10 +00:00
PyTorch MergeBot
5344468712 Revert "[dynamo] Properly track user-defined types for type() (#110794)"
This reverts commit ad4ccf9689.

Reverted https://github.com/pytorch/pytorch/pull/110794 on behalf of https://github.com/ezyang due to looks like this actually fails internal tests ([comment](https://github.com/pytorch/pytorch/pull/110794#issuecomment-1778002262))
2023-10-24 20:42:26 +00:00
Jon Chuang
6d78f34a06 fix regression which creates a new fake tensor (#111864)
Fixes regression identified here: ccd6b373b5 (r1369334484)

Now that `get_fake_value` will identify aliases, we should not try to wrap the fake value again.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111864
Approved by: https://github.com/eellison
2023-10-24 05:11:48 +00:00
Jon Chuang
36d34ce951 [dynamo] support comparing LHS constant with tensor (#111492)
Fixes https://github.com/pytorch/pytorch/issues/108582

Depends on https://github.com/pytorch/pytorch/pull/111557 for fixing broken integration tests (due to this PR unblocking in-graph set membership).
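A hedged sketch of the pattern (constant on the left-hand side of a comparison with a tensor; illustrative only):

```python
import torch

def f(x):
    mask = 0 < x                      # constant on the LHS
    return torch.where(mask, x, -x)

torch.compile(f, fullgraph=True)(torch.randn(4))
```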

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111492
Approved by: https://github.com/Skylion007
2023-10-23 19:05:14 +00:00
Ken Jin
ad4ccf9689 [dynamo] Properly track user-defined types for type() (#110794)
Closes https://github.com/pytorch/pytorch/issues/110315.

Thanks to @ezyang for the easy repro!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110794
Approved by: https://github.com/ezyang
2023-10-23 17:34:23 +00:00
Michael Lazos
fb8876069d Support tracing base torch_function impl (#111731)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111731
Approved by: https://github.com/jansel
ghstack dependencies: #111730
2023-10-23 07:11:32 +00:00
Chen, Zejun
8e60d646b9 [dynamo][stream]support device-agnostic stream in dynamo and capture stream/event method in fx graph (#108312)
This PR implements 2 things:
1. support device-agnostic stream and runtime APIs captured by dynamo.
2. support stream methods (including events) captured by dynamo.

Here are the details for the first.
Previously, the stream captured in dynamo was tightly bound to CUDA. Here we implement a global singleton container named `StreamMethodContainer` for different backends to register their associated stream methods with dynamo. When the backend's package is imported, the stream operations can be registered directly by calling

```
device_stream_method = {'current_stream': method_1,
                         'create_stream_context': method_2,
                         'set_stream': method_3,
                         'set_stream_by_id': method_4}
torch._dynamo.stream.register_stream_method(device_name, device_stream_method)
```

Stream methods need to be passed to this API according to the precise semantics represented by the dict keys in `device_stream_method`. After registration, these methods can be used by dynamo to capture the stream operations in users' scripts, for example getting the current stream or setting a specific stream. Additionally, the wrapped stream variable and the stream context variable are now device-agnostic; the proxy functions of these variables are assigned from the associated methods in the container. All of this is illustrated below.

![image](https://github.com/pytorch/pytorch/assets/74231238/37ac7350-c539-4167-9886-c3744ecab65d)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108312
Approved by: https://github.com/jansel, https://github.com/jgong5
2023-10-22 13:22:58 +00:00
Jon Chuang
47eed65481 [dynamo] Add is_ support for Tensors, force get_fake_value to reuse previously computed example_value if available (#111565)
Use FakeTensor id match as equivalent to object identity match

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111565
Approved by: https://github.com/ezyang
2023-10-21 13:56:30 +00:00
Jon Chuang
101210e2ce [dynamo] cast single-elem tensors to float and int (#111518)
Fixes https://github.com/pytorch/pytorch/issues/109538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111518
Approved by: https://github.com/ezyang
2023-10-20 22:53:58 +00:00
Yanbo Liang
da662248fb [Dynamo] Fix autograd.Function tracing errors loudly involving saved tensors (#111277)
Fixes #104792

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111277
Approved by: https://github.com/jansel, https://github.com/zou3519
2023-10-15 00:47:59 +00:00
PyTorch MergeBot
33403336fa Revert "[user errors] compulsory case names, allow multiple (#110878)"
This reverts commit 2ae71c4598.

Reverted https://github.com/pytorch/pytorch/pull/110878 on behalf of https://github.com/kit1980 due to export/test_export.py::TestExport::test_multiple_definitions_same_name_dim - TypeError: UserError.init() missing 1 required positional argument: 'case_names' ([comment](https://github.com/pytorch/pytorch/pull/110878#issuecomment-1754360051))
2023-10-10 04:44:40 +00:00
Avik Chaudhuri
2ae71c4598 [user errors] compulsory case names, allow multiple (#110878)
We want to get to a point where most UserErrors link to exportdb examples. This PR makes passing case names non-optional to make this intent clearer and encourage developers who raise UserErrors to make or point to examples that make fixing such errors more obvious for users.

In addition, sometimes there are multiple examples that are relevant to an error. Thus this PR also enables passing multiple case names.

Retry of #110733 which was reverted due to a landrace.

Differential Revision: [D50087148](https://our.internmc.facebook.com/intern/diff/D50087148/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110878
Approved by: https://github.com/gmagogsfm, https://github.com/tugsbayasgalan
2023-10-10 03:48:07 +00:00
Animesh Jain
58637c4b43 [dynamo] Remove SuperSource (#110475)
The motivation for removing this is already present in the pre-PR comments. Copying it

~~~
# NB - SuperSource is a weird one.
# it is our only source with 2 bases, so we use the object
# as the base, rather than the type, since an invocation
# like super(Foo, foo) is represented here, the source object base is more spiritually
# aligned with the instance, rather than the type.
# This whole construction is questionable tho, and we should probably find a way to
# avoid this exception to our otherwise nice source parentage invariant.
~~~

Instead of using super(a, b), we can use `type(b).__mro__[index]`.
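A hedged illustration of the equivalence in plain Python (not dynamo internals):

```python
class Base:
    def hello(self):
        return "base"

class Foo(Base):
    def hello(self):
        # two ways to reach the parent implementation; the second needs only the
        # instance's type and an MRO index, so no two-base SuperSource is required
        via_super = super(Foo, self).hello()
        via_mro = type(self).__mro__[1].hello(self)
        return via_super, via_mro

print(Foo().hello())   # ('base', 'base')
```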

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110475
Approved by: https://github.com/jansel
2023-10-08 04:45:06 +00:00
Tugsbayasgalan Manlaibaatar
0a5f0b5db3 Support tracing HuggingFace models (#110748)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110748
Approved by: https://github.com/avikchaudhuri
2023-10-07 22:37:28 +00:00
Huy Do
18f0d3af72 Revert "[user errors] compulsory case names, allow multiple (#110733)" (#110783)
This reverts commit 983f6f36db.  I have no idea how to revert https://github.com/pytorch/pytorch/pull/110733 with the bot.  So reverting it manually for now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110783
Approved by: https://github.com/ZainRizvi, https://github.com/kit1980
2023-10-07 07:32:39 +00:00
Avik Chaudhuri
983f6f36db [user errors] compulsory case names, allow multiple (#110733)
We want to get to a point where most `UserError`s link to `exportdb` examples. This PR makes passing case names non-optional to make this intent clearer and encourage developers who raise `UserError`s to make or point to examples that make fixing such errors more obvious for users.

In addition, sometimes there are multiple examples that are relevant to an error. Thus this PR also enables passing multiple case names.

Differential Revision: [D50020465](https://our.internmc.facebook.com/intern/diff/D50020465/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110733
Approved by: https://github.com/zhxchen17
2023-10-07 01:25:12 +00:00
Avik Chaudhuri
416eca9736 export db links for user errors (#110555)
Ideally all `_dynamo.exc.UserError`s should have "case names", i.e., link to examples in `exportdb`.

This PR adds case names to several instances of `_dynamo.exc.UserError`. In particular, looking at coverage based on `UserErrorType`:
* `DYNAMIC_CONTROL_FLOW`, `ANTI_PATTERN`, and `STANDARD_LIBRARY` are fully covered.
* `CONSTRAINT_VIOLATION` and `DYNAMIC_DIM` have no coverage. We don't seem to have any dedicated examples of specifying dynamic shapes in `exportdb` (although they are used in some other examples without explanation, to avoid some specialization that would make such examples moot).
* `INVALID_INPUT` is only partly covered. Frankly this is tedious to cover via examples.

Differential Revision: [D49928518](https://our.internmc.facebook.com/intern/diff/D49928518/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110555
Approved by: https://github.com/angelayi, https://github.com/ydwu4
2023-10-05 05:03:04 +00:00