Commit Graph

163 Commits

Author SHA1 Message Date
ydwu4
94a54b89aa [dynamo] Add BACKEND_MATCH guard to detect and recompile when backend changes (#107337)
**Motivation:**
We are trying to make torch.cond use torch.compile automatically so that we can error out when there are side effects in the branches and correctly handle closures.

Before this PR, the following code only produced a warning unless the raise_on_backend_change config was turned on (turning it on gives an error):
```python
def foo():
    ...

# Inside torch.cond, we'd like to do something like
torch.compile(foo, backend="eager", fullgraph=True)(...)
...
# Users may then call torch.compile somewhere else.
# Dynamo will use the cached code of foo for "eager" backend
# but we expect dynamo to recompile with "inductor" backend.
torch.compile(foo, backend="inductor")(...)
```

This PR adds a BACKEND_MATCH guard. Effectively, it implements a per-backend cache. In the above example, the cached code for "eager" won't work for "inductor" due to guard check failures, and the second torch.compile will trigger a recompilation. In the future, it might be useful to have something like a configuration guard that guards against dynamo configuration changes across different compiles (e.g. compiling a function with fullgraph=False and then compiling it again with fullgraph=True).
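
A minimal sketch of the per-backend cache idea (`most_recent_backend` follows the commit's naming; `GuardedBackendCache` as a class and `make_backend_match_guard` are illustrative, not the actual implementation):

```python
# Illustrative sketch: each cached code entry remembers the backend it
# was compiled for; the guard fails once a different backend is active,
# forcing a recompile instead of reusing stale code.
class GuardedBackendCache:
    def __init__(self):
        self.most_recent_backend = None

def make_backend_match_guard(cache, compiled_backend):
    def check():
        return cache.most_recent_backend == compiled_backend
    return check

cache = GuardedBackendCache()
cache.most_recent_backend = "eager"
guard = make_backend_match_guard(cache, "eager")
assert guard()                          # cached "eager" code is reusable
cache.most_recent_backend = "inductor"
assert not guard()                      # guard fails -> recompile
```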

**Implementation:**
1. We add a guarded_backend_cache and check the most_recent_backend against the backend associated with cached code. We also remove the raise_on_backend_change flag.

Note: more lines are printed in the debug log due to the newly added context manager and guard additions.

**Test Plan:**
Removed the original tests that raised on a different backend and added a new test checking that the BACKEND_MATCH guard can guard against backend changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107337
Approved by: https://github.com/jansel
2023-09-14 15:49:30 +00:00
Avik Chaudhuri
47be61e12b untracked inputs in constraints (#109037)
Differential Revision: [D49157009](https://our.internmc.facebook.com/intern/diff/D49157009/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109037
Approved by: https://github.com/zhxchen17
2023-09-12 06:50:01 +00:00
PyTorch MergeBot
56c2386157 Revert "reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)"
This reverts commit d4230e5574.

Reverted https://github.com/pytorch/pytorch/pull/108883 on behalf of https://github.com/huydhn due to Per the discussion thread on D49122208, reverting this change ([comment](https://github.com/pytorch/pytorch/pull/108883#issuecomment-1712707853))
2023-09-10 04:40:02 +00:00
Animesh Jain
d4230e5574 reland [finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108883)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108883
Approved by: https://github.com/voznesenskym, https://github.com/huydhn
2023-09-09 03:12:31 +00:00
Jason Ansel
4965fffeda [dynamo] Move global state guards to C++ (#108624)
This combines a bunch of Python global state guards into a single C++ guard and switches to checking them 100% of the time. It also adds a few new guards for things that change inductor's behavior. Even though we are checking more things, I expect this to be much faster.
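
A hedged Python sketch of the consolidation idea (the real check lives in C++; the exact state sampled here is illustrative):

```python
import torch

# Snapshot the relevant global state once at compile time...
def snapshot_global_state():
    return (
        torch.is_grad_enabled(),
        torch.get_default_dtype(),
        torch.backends.cudnn.allow_tf32,
    )

compiled_state = snapshot_global_state()

def global_state_guard():
    # ...then one cheap comparison replaces many separate Python guards.
    return snapshot_global_state() == compiled_state
```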

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108624
Approved by: https://github.com/anijain2305
2023-09-08 04:07:08 +00:00
PyTorch MergeBot
72f24d0001 Revert "[dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)"
This reverts commit 34bb74c4cf.

Reverted https://github.com/pytorch/pytorch/pull/108528 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it has some nasty merge conflicts after the revert of D48910794. I need to revert this so the conflict could be resolved. Please help rebase this tomorrow and reland the change ([comment](https://github.com/pytorch/pytorch/pull/108528#issuecomment-1711034781))
2023-09-08 03:49:41 +00:00
PyTorch MergeBot
38fcf77a1b Revert "[dynamo] Add BACKEND_MATCH guard to detect and recompile when backend changes (#107337)"
This reverts commit 1a64ec7dd4.

Reverted https://github.com/pytorch/pytorch/pull/107337 on behalf of https://github.com/huydhn due to Sorry for reverting your change but inductor perf smoke test starts to regress after this ([comment](https://github.com/pytorch/pytorch/pull/107337#issuecomment-1710974588))
2023-09-08 02:03:48 +00:00
ydwu4
1a64ec7dd4 [dynamo] Add BACKEND_MATCH guard to detect and recompile when backend changes (#107337)
**Motivation:**
We are trying to make torch.cond use torch.compile automatically so that we can error out when there are side effects in the branches and correctly handle closures.

Before this PR, the following code only produced a warning unless the raise_on_backend_change config was turned on (turning it on gives an error):
```python
def foo():
    ...

# Inside torch.cond, we'd like to do something like
torch.compile(foo, backend="eager", fullgraph=True)(...)
...
# Users may then call torch.compile somewhere else.
# Dynamo will use the cached code of foo for "eager" backend
# but we expect dynamo to recompile with "inductor" backend.
torch.compile(foo, backend="inductor")(...)
```

This PR adds a BACKEND_MATCH guard. Effectively, it implements a per-backend cache. In the above example, the cached code for "eager" won't work for "inductor" due to guard check failures, and the second torch.compile will trigger a recompilation. In the future, it might be useful to have something like a configuration guard that guards against dynamo configuration changes across different compiles (e.g. compiling a function with fullgraph=False and then compiling it again with fullgraph=True).

**Implementation:**
1. We add a guarded_backend_cache and check the most_recent_backend against the backend associated with cached code. We also remove the raise_on_backend_change flag.

2. The newly added context manager and guard add more lines to the debug log, so we raise the upper limit from 50 to 55.

**Test Plan:**
Removed the original tests that raised on a different backend and added a new test checking that the BACKEND_MATCH guard can guard against backend changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107337
Approved by: https://github.com/jansel
2023-09-07 22:45:54 +00:00
Animesh Jain
34bb74c4cf [dynamo][finishing colesbury's PR 100642] Guard on nn.Module dicts and type (#108528)
**This PR is a 99% copy-paste of Sam Gross's** (@colesbury) work at https://github.com/pytorch/pytorch/pull/100642. Copied from there:

--------
The NN_MODULE guard now subsumes guards on Module attributes. The check_fn will fail if module attributes are changed (such as Module.training); if parameters, submodules, or buffers are added or removed; or if fields are changed on the type itself.

This gives up specificity in the guard check -- if any field is changed the check_fn fails -- for faster overall checks.

-----
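
A hedged sketch of this coarse check, with a Python fingerprint standing in for the real dict/type guards:

```python
import torch

def module_fingerprint(mod: torch.nn.Module):
    # One coarse fingerprint instead of per-attribute guards.
    return (
        type(mod),
        tuple(mod.__dict__.keys()),
        tuple(n for n, _ in mod.named_parameters()),
        tuple(n for n, _ in mod.named_buffers()),
        mod.training,
    )

mod = torch.nn.Linear(2, 2)
saved = module_fingerprint(mod)

def check_fn():
    return module_fingerprint(mod) == saved

assert check_fn()
mod.eval()             # any change (even just Module.training) fails
assert not check_fn()  # the coarse check and forces a recompile
```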

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108528
Approved by: https://github.com/ezyang
2023-09-07 01:45:47 +00:00
Flavio Sales Truzzi
cd4f74fb2e [PT2] - Add check for stack (#108012)
Summary:
Add a check for `guard.stack`, whose `None` value was causing exceptions like:

```
torch._dynamo.exc.InternalTorchDynamoError: 'NoneType' object has no attribute 'format'
```

Test Plan: contbuild & OSS CI

Differential Revision: D48709458

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108012
Approved by: https://github.com/anijain2305
2023-08-28 23:30:34 +00:00
Animesh Jain
9d2ffc5dfa [reland][Dynamo] cache_size policy #107496 (#108069)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108069
Approved by: https://github.com/yanboliang
2023-08-28 22:06:54 +00:00
PyTorch MergeBot
b4c6c4da88 Revert "[Dynamo] cache_size policy (#107496)"
This reverts commit 4175a6e944.

Reverted https://github.com/pytorch/pytorch/pull/107496 on behalf of https://github.com/ZainRizvi due to Breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/107496#issuecomment-1693590121))
2023-08-25 16:07:14 +00:00
Animesh Jain
4175a6e944 [Dynamo] cache_size policy (#107496)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107496
Approved by: https://github.com/ezyang
ghstack dependencies: #107645
2023-08-24 21:50:00 +00:00
Animesh Jain
8c62f01cb7 [dynamo][guards] Use dict for storing weakrefs (#107645)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107645
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-08-23 20:52:38 +00:00
Yukio Siraichi
bcede143bd Do not mutate SymNode expression. (#107492)
This PR stops `SymNode` from mutating (i.e. simplifying) its expression. Instead, the
simplification (without mutation) is deferred to the `SymNode.maybe_as_int` method.

```python
# Fake tensors created during tracing:
- FakeTensor(size=(s0,), ...)
- FakeTensor(size=(s1, s2, s3), ...)

# A guard added while tracing:
- Eq(s0, s1 + s2 + s3)

# After this PR, the sizes keep their original expressions; s0 is no
# longer mutated (simplified) into s1 + s2 + s3:
- FakeTensor(size=(s0,), ...)
- FakeTensor(size=(s1, s2, s3), ...)
```

In summary, this PR:
- Replaces `SymNode._expr` by `SymNode.expr`, removing the old property function
    - This makes it so `SymNode` instances never update their expression
- Creates `SymNode.simplified_expr()` method for actually calling `ShapeEnv.replace` on
  its expression. Note that this doesn't update `SymNode.expr`
- Changes how `tensor.size()` gets converted to its Python `torch.Size` type
    - Instead of calling `SymInt::maybe_as_int()` method, we create a new
      `SymInt::is_symbolic()` method for checking whether it is actually a symbolic value
    - This is needed so that when we call `tensor.size()` in the Python side, the returned
      sequence is faithful to the actual data, instead of possibly simplifying it and
      returning an integer
    - 2 files need this modification:
        - _torch/csrc/Size.cpp_: for handling `torch.Tensor.size` Python calls
        - _torch/csrc/utils/pybind.cpp_: for handling `symint.cast()` C++ calls
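
A hedged, self-contained sketch of the deferred simplification (class shapes heavily simplified; only `expr`, `simplified_expr`, and `maybe_as_int` mirror names from the summary above):

```python
import sympy

class ShapeEnv:
    def __init__(self):
        self.replacements = {}          # symbol -> replacement expr

    def replace(self, expr):
        return expr.xreplace(self.replacements)

class SymNode:
    def __init__(self, expr, shape_env):
        self.expr = expr                # never mutated after creation
        self.shape_env = shape_env

    def simplified_expr(self):
        # Simplify via ShapeEnv.replace without touching self.expr.
        return self.shape_env.replace(self.expr)

    def maybe_as_int(self):
        simplified = self.simplified_expr()
        return int(simplified) if simplified.is_Integer else None

env = ShapeEnv()
s0 = sympy.Symbol("s0")
env.replacements[s0] = sympy.Integer(3)
node = SymNode(s0, env)
assert node.expr == s0            # expression stays symbolic
assert node.maybe_as_int() == 3   # simplification deferred to here
```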

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107492
Approved by: https://github.com/ezyang
ghstack dependencies: #107523
2023-08-22 12:38:05 +00:00
Animesh Jain
a506d0ad8f [dynamo] Store originating source in the Guard object (#107634)
Many times, I find myself wanting to know the source for the guard. This PR adds that as a field of guard itself.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107634
Approved by: https://github.com/voznesenskym
ghstack dependencies: #107622
2023-08-22 02:16:31 +00:00
lezcano
612c8a8c84 Guard numpy imports in the dynamo folder (#107299)
Fixes https://github.com/pytorch/pytorch/issues/107228

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107299
Approved by: https://github.com/atalman
2023-08-21 19:07:20 +00:00
Edward Z. Yang
ad07a4bc56 Print per-tensor guard messages for TENSOR_MATCH (#107562)
The new guard messages look like:

```
check_tensor(L['y'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[3], stride=[1])  # _dynamo/variables/builder.py:1237 in wrap_fx_proxy_cls
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107562
Approved by: https://github.com/anijain2305, https://github.com/jansel
ghstack dependencies: #107505, #107516, #107530, #107532
2023-08-21 18:00:00 +00:00
Edward Z. Yang
796ce67229 Single source of truth for guard logging (#107532)
Instead of (poorly) reconstructing the guard list from the guards on OutputGraph, we log them at the horse's mouth: when we actually codegen the guard. This only requires very modest refactoring: as we translate guards into code parts, we also pass the source guard along so we can use it to give stack information.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107532
Approved by: https://github.com/anijain2305
ghstack dependencies: #107505, #107516, #107530
2023-08-21 13:02:12 +00:00
Edward Z. Yang
68b9bf9671 Simplify verbose error guard printing (#107516)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107516
Approved by: https://github.com/anijain2305
ghstack dependencies: #107505
2023-08-20 06:50:27 +00:00
Edward Z. Yang
d6d485fa8c Revamp guard debug logging (#107505)
The new guard printout looks like this:

```
[DEBUG] GUARDS:
[DEBUG]   ___check_type_id(L['name'], 7605632)                          # if name == "special_attr":  # test/dynamo/test_misc.py:1155 in __getattribute__
[DEBUG]   L['name'] == '_backward_pre_hooks'                            # if name == "special_attr":  # test/dynamo/test_misc.py:1155 in __getattribute__
[DEBUG]   ___check_obj_id(L['self'], 139746432564960)                   # return super().__getattribute__(name)  # test/dynamo/test_misc.py:1157 in __getattribute__
[DEBUG]   ___check_obj_id(L['__class__'], 1451499216)                   # return super().__getattribute__(name)  # test/dynamo/test_misc.py:1157 in __getattribute__
[DEBUG]   ___is_grad_enabled()                                          # _dynamo/output_graph.py:346 in init_ambient_guards
[DEBUG]   not ___are_deterministic_algorithms_enabled()                 # _dynamo/output_graph.py:342 in init_ambient_guards
[DEBUG]   ___is_torch_function_enabled()                                # _dynamo/output_graph.py:350 in init_ambient_guards
[DEBUG]   utils_device.CURRENT_DEVICE == None                           # _dynamo/output_graph.py:348 in init_ambient_guards
```

Along with the guards, we also print what line of user code caused the guard to be added, or what line of Dynamo internal code added the guard (if there is no user stack trace, which is typically the case for ambient guards).

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107505
Approved by: https://github.com/mlazos, https://github.com/voznesenskym, https://github.com/anijain2305
2023-08-20 06:50:27 +00:00
Michael Lazos
e0d6072f69 Add API to mark input tensors static for cudagraphs (#107154)
Adds an API to mark a tensor as a static input.
To make this trigger recompiles properly, I'll need to update the tensor match checks to also check for this new attribute.

An additional concern is memory: the tensors will be kept alive, but this matches the current behavior for nn modules and parameters.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107154
Approved by: https://github.com/eellison
2023-08-16 04:38:19 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as a external dependency. This PR pulls all these into core.

In the next commits, I do a number of things in this order
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge, so I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.
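
A hedged usage sketch of what this stack enables (traced through the upstreamed `torch_np` layer; exact supported surface may differ):

```python
import numpy as np
import torch

@torch.compile
def numpy_fn(x: np.ndarray) -> np.ndarray:
    # NumPy calls are traced via the compatibility layer rather than
    # falling back to eager NumPy.
    return 2 * x + np.sin(x)

out = numpy_fn(np.arange(4.0))
```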

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
Edward Z. Yang
91afefb55b Fix some fake mode confusion between inner/outer fake mode in export (#106515)
Fixes https://github.com/pytorch/pytorch/issues/106412

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106515
Approved by: https://github.com/voznesenskym, https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-08-04 15:42:23 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Michael Lazos
1597dd7a54 Report guard failures with recompiles logging (#105500)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105500
Approved by: https://github.com/Chillee, https://github.com/anijain2305
2023-07-19 02:20:44 +00:00
Michael Voznesensky
a6758cb304 Revert "Revert "SetVariable in dynamo (#103205)"" + Fix for improved graph breaks (#105345)
This reverts commit 94b3f9f646.

Fix

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105345
Approved by: https://github.com/atalman
2023-07-17 23:21:30 +00:00
PyTorch MergeBot
94b3f9f646 Revert "SetVariable in dynamo (#103205)"
This reverts commit 82fb5edfc7.

Reverted https://github.com/pytorch/pytorch/pull/103205 on behalf of https://github.com/atalman due to Failing cuda11.8-py3.10-gcc7-sm86 / test (inductor_torchbench_dynamic) with CUDA oom ([comment](https://github.com/pytorch/pytorch/pull/103205#issuecomment-1638115073))
2023-07-17 13:13:47 +00:00
Michael Voznesensky
82fb5edfc7 SetVariable in dynamo (#103205)
Set initial
Fixes https://github.com/pytorch/pytorch/issues/94738

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103205
Approved by: https://github.com/jansel
2023-07-15 02:25:31 +00:00
Michael Lazos
05eea20eb9 [dynamo] Simulate torch function enablement state (#105091)
Part of https://github.com/pytorch/pytorch/issues/93723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105091
Approved by: https://github.com/voznesenskym, https://github.com/anijain2305
2023-07-13 17:42:20 +00:00
PyTorch MergeBot
bfd995f0d6 Revert "Specialize storage_offset - Does not cover automatic dynamic (#104204)"
This reverts commit 803c14490b.

Reverted https://github.com/pytorch/pytorch/pull/104204 on behalf of https://github.com/ezyang due to also due to https://github.com/pytorch/pytorch/issues/104563 ([comment](https://github.com/pytorch/pytorch/pull/104204#issuecomment-1620653507))
2023-07-04 19:41:32 +00:00
Michael Voznesensky
803c14490b Specialize storage_offset - Does not cover automatic dynamic (#104204)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104204
Approved by: https://github.com/wconstab
2023-06-27 05:51:42 +00:00
Michael Voznesensky
e5e9d563c2 Lift user defined attributes into inputs for certain cases (user defined types and tensors) (#103386)
(1) Lazy (converts to dynamo variable on access only)
(2) Uses existing side effect/reconstruct tech
(3) not tensor opinionated

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103386
Approved by: https://github.com/jansel
2023-06-20 23:45:19 +00:00
Michael Voznesensky
aece6705d1 Move locals/globals to output graph, make it easier to access them anywhere (#103456)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103456
Approved by: https://github.com/jansel
2023-06-14 20:04:33 +00:00
Edward Z. Yang
7be2a6228d Delete non-dynamic shapes export special case in guard creation (#103295)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103295
Approved by: https://github.com/voznesenskym
2023-06-10 01:26:06 +00:00
Yukio Siraichi
f72f0119ec Implement CSE for dynamo guards. (#98488)
This PR extracted the CSE part of the code in #89707.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98488
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/anijain2305
2023-05-17 10:47:24 +00:00
Avik Chaudhuri
d41134e2f2 dynamic equality constraint (#99993)
This diff adds support for dynamic equality constraints of the form `dynamic_dim(x, 0) == dynamic_dim(y, 1)`. The process of constraint discovery can already understand equality guards between dimensions and suggests such equality constraints, so this closes the loop on that. Correspondingly we now raise `ConstraintViolation` when we find that such a guard is added on a dynamic dimension and the user did not specify such a constraint. (NOTE: This is distinct from a dynamic dimension being guarded equal to a constant, which is already an error.)
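
A hedged usage sketch (the `dynamic_dim` import path is an assumption for this era of the API):

```python
import torch
from torch._export import dynamic_dim  # assumed location at the time

x = torch.randn(8, 4)
y = torch.randn(4, 8)

# Declare that x's dim 0 must always equal y's dim 1; tracing that
# discovers an unspecified equality guard on dynamic dims would now
# raise ConstraintViolation instead.
constraints = [dynamic_dim(x, 0) == dynamic_dim(y, 1)]
```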

Differential Revision: [D45279437](https://our.internmc.facebook.com/intern/diff/D45279437/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99993
Approved by: https://github.com/tugsbayasgalan
2023-05-05 21:09:18 +00:00
Animesh Jain
8994d9e610 [dynamo] Hide guard_fail_hook behind a flag to improve cache lookup time (+10% DebertaV2) (#100590)
For TorchDynamo eager backend, DebertaV2 speedup improves from 0.77x to 0.87x.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100590
Approved by: https://github.com/voznesenskym, https://github.com/wconstab
2023-05-04 18:52:21 +00:00
Michael Voznesensky
2439090bef Remove special casing for stride/size setup for guards (#100456)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100456
Approved by: https://github.com/ezyang
2023-05-03 01:59:52 +00:00
Michael Voznesensky
a145a3332c Add tensor to fake clone snapshot for immutable source of truth (#100128)
There's a longstanding, well-known mutability bug in dynamo, https://github.com/pytorch/pytorch/issues/93610 (and more issues, but this is the one I had at hand).

Ops that do in place mutation of tensors will mutate their corresponding FakeTensors.

So, for example, if you do `t_` on a tensor, you will reverse its strides. This, in turn, means that the FakeTensor's strides are now also reversed, say, if you are trying to torch.compile:

```
class F(torch.nn.Module):
    def forward(self, x, y):
        x = x.t_()
        y = y.t_()
        return (x + y,)
```

However, we recently introduced accessing the fake_tensor memo/cache to get the symbolic shape values for sizes and strides during guard installation time.

This means that tensors captured with a given size and stride, say, for x above, size (3, 3), stride (3, 1), will get their memo updated to size (3, 3), stride (1, 3). Now, whenever you access this value, it reflects the current state in the tracing, as opposed to the state when we initially started tracing.

This causes us to produce guards that are never valid; for the example above, the guard that `x.stride()[0] == 3`.

The solution is to not allow mutation to affect the fake tensors we use as source of truth here. We can do this by forcing a clone of the fake tensor at builder time, and storing that as the source of truth for our dynamic sizes and strides during guard installation.
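
A hedged illustration of the failure mode and the snapshot fix, using a plain tensor in place of a fake tensor:

```python
import torch

x = torch.randn(3, 4)
# Snapshot at "builder time": a stand-in for cloning the fake tensor.
snapshot = (tuple(x.size()), tuple(x.stride()))

x.t_()  # in-place transpose during tracing mutates sizes and strides

# Guards built from the live tensor would bake in mutated values;
# guards built from the snapshot keep the values from trace start.
assert tuple(x.stride()) == (1, 4)    # live metadata changed
assert snapshot == ((3, 4), (4, 1))   # snapshot did not
```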

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100128
Approved by: https://github.com/ezyang
2023-04-27 23:58:15 +00:00
Michael Voznesensky
96ceae3a7f Use memoized only mode for guard size/stride extraction (#99742)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99742
Approved by: https://github.com/ezyang
2023-04-25 01:05:42 +00:00
Edward Z. Yang
dc1c0924ec Properly parenthesize dynamo_dynamic_indices test (#99823)
I've got the E2E test case which triggered this in https://github.com/pytorch/pytorch/pull/99809

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99823
Approved by: https://github.com/voznesenskym
2023-04-23 22:41:34 +00:00
Michael Voznesensky
4c2892944f Guard static shapes alongside tensors, instead of from shape_env, in dynamic_shapes=True (#99566)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99566
Approved by: https://github.com/ezyang
2023-04-22 16:46:52 +00:00
Michael Voznesensky
0ac0d9d224 Pass locals to enum_repr to correctly make the guard str for enums (#99680)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99680
Approved by: https://github.com/jansel
2023-04-21 07:14:49 +00:00
Edward Z. Yang
e47e8c9d98 Guard on default device (#99551)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99551
Approved by: https://github.com/voznesenskym, https://github.com/mlazos
2023-04-20 17:02:59 +00:00
Edward Z. Yang
6b6dc4418d Warn if guards are added to ShapeEnv after we produced guards (#97820)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97820
Approved by: https://github.com/voznesenskym
2023-04-19 19:23:52 +00:00
Michael Voznesensky
10fbdcf72c Re-PR of 90269 - Force all nn_module associated tensors to be static (#99108)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99108
Approved by: https://github.com/ezyang
2023-04-14 05:53:48 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary:
    Replace _dynamo.config with an object instead of module

    Current usage patterns of setting and reading fields on config will work
    unchanged.

    Only changes needed going forward:
    1. import torch._dynamo.config will not work. However, just doing
       import torch._dynamo is sufficient to access dynamo config
       as torch._dynamo.config.

    2. Files inside the _dynamo folder need to access config via
       from torch._dynamo.config_util import config instead of
       from torch._dynamo import config, because _dynamo/__init__.py
       imports some of those files, which would create a circular import.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Edward Z. Yang
9a8f71f23e Convert logging f-strings to use % format (#98697)
Codemod done with
https://gist.github.com/ezyang/2e8b0463cdc6be278478495b23ff0530 with
assistance from ChatGPT.
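
The shape of the codemod, sketched (deferred % formatting avoids paying the string-formatting cost when the log level is disabled):

```python
import logging
log = logging.getLogger(__name__)

name, count = "foo", 3
# Before: the f-string is formatted eagerly, even if DEBUG is off.
log.debug(f"compiled {name} {count} times")
# After: % formatting is deferred until the record is actually emitted.
log.debug("compiled %s %s times", name, count)
```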

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98697
Approved by: https://github.com/voznesenskym
2023-04-10 12:19:31 +00:00
Yanbo Liang
a5f3468618 [Dynamo] Fix bug when dynamo generate guards for enum type (#98652)
Fixes a Meta-internal use case; actually, I think this is an `enum` bug, so we provide a workaround in dynamo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98652
Approved by: https://github.com/jansel
2023-04-08 04:30:30 +00:00
Edward Z. Yang
69f9bd2323 Don't error if we mark_dynamic without dynamic_shapes on (#98324)
In the terminal state, it won't matter whether you have dynamic_shapes
on or not; mark_dynamic will always work.

Today, it's helpful to make this not error, so I can easily swap
between static and dynamic and run experiments.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98324
Approved by: https://github.com/voznesenskym
2023-04-05 19:40:22 +00:00
knwng
e943b120a3 Fix incorrectly getting the name of OrderedDict's index in dynamo (#96940)
Fixes #96737

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96940
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-04-05 03:53:45 +00:00
Michael Voznesensky
b1e60bfb6a Pass f_locals as a dict rather than kwargs (#98107)
Fixes https://github.com/pytorch/pytorch/issues/97688

One big problem is that instead of printing x < y we now print
`E["x"] < E["y"]` and now all of the tests wobbled and I'm mad.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98107
Approved by: https://github.com/ezyang
2023-04-04 00:30:08 +00:00
Jason Ansel
55afaa46a4 Support functools.partial and itertools.product (#98120)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98120
Approved by: https://github.com/anijain2305
2023-04-03 18:23:25 +00:00
Sam Gross
87f5e92916 [dynamo] Add guards for deterministic algos (#96695)
Inductor now falls back to eager mode for deterministic algos. Add guards in dynamo to check if the deterministic algos mode changes.

See #93537
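
A hedged sketch of such a guard:

```python
import torch

# Captured when the frame is compiled:
compiled_with = torch.are_deterministic_algorithms_enabled()

def deterministic_algos_guard():
    # Recompile if the mode has flipped since compile time.
    return torch.are_deterministic_algorithms_enabled() == compiled_with
```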

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96695
Approved by: https://github.com/ngimel, https://github.com/jansel
2023-03-31 16:28:45 +00:00
Edward Z. Yang
97fc8ea5f4 Run the benchmark suite with dynamic batch only (#97912)
Symbolic shapes compile time on full CI with inductor is horribly long (even though our aot_eager local runs seemed to suggest that the added latency was only 10s per model). To patch over the problem for now, run the benchmark suite with dynamic batch only. This should absolve a lot of sins.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97912
Approved by: https://github.com/janeyx99, https://github.com/desertfire
2023-03-30 18:04:48 +00:00
lantiankaikai
94bae36a1f Fix strip_function_call in GuardBuilder (#97810)
Repro:
From #92670, this addresses one of the bugs for TorchDynamo:

pytest ./generated/test_PeterouZh_CIPS_3D.py -k test_003

Issue:
In GuardBuilder, when parsing argnames with "getattr(a.layers[slice(2)][0]._abc, '0')", it returns "getattr(a" where it is supposed to return "a", thus causing a SyntaxError.

This PR fixes the regex and adds a couple of test cases.
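
A hedged sketch of the intended behavior (the regex here is illustrative, not the exact fix):

```python
import re

def strip_function_call(expr: str) -> str:
    # Unwrap a getattr(...) call, then take the leading identifier.
    expr = re.sub(r"^getattr\((.*)\)$", r"\1", expr)
    return re.match(r"[A-Za-z_]\w*", expr).group(0)

assert strip_function_call("getattr(a.layers[slice(2)][0]._abc, '0')") == "a"
assert strip_function_call("a.b.c") == "a"
```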

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97810
Approved by: https://github.com/yanboliang
2023-03-30 17:46:10 +00:00
Edward Z. Yang
8372c5dc68 Refactor dynamic dims api, stateless internals, higher level export API (#96699)
The purpose of this API is to execute a few large components of work:

1) Refactor all the internals of plumbing dynamic dimension information after dynamo to be stateless
2) Decouple allocation controls around dynamic dimensions from verification
3) For (2), for allocation, create an enum that dictates whether we are in DUCK (default today), STATIC (aka assume_static_default in the past), or DYNAMIC (aka user constrained, do not duck shape); see the sketch after this list
4) For (2), for verification, we separate out the list of dynamic ranges entirely from allocation. This means shape_env does no tracking of what we verify on; instead, it is the caller's job to invoke produce_guards() with the various things they want verified, specifically, with the valid ranges. We do use constrain ranges to refine value ranges when doing analysis.
5) We have decided, therefore, as an extension of (4) to double down on "late" checks versus "eager" checks, primarily because the mechanisms for gathering what actually matters happens during guards, and should be a purview of the caller seeking guards, not the shape env. However, for dynamo, these structures are essentially one and the same.
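
A hedged sketch of the allocation enum from item (3) (the class name is an assumption; the members follow the description):

```python
from enum import Enum, auto

class DimDynamic(Enum):   # name assumed; members per the description
    DUCK = auto()         # default today: duck shaping
    STATIC = auto()       # formerly assume_static_default
    DYNAMIC = auto()      # user constrained; do not duck shape
```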

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96699
Approved by: https://github.com/avikchaudhuri, https://github.com/ezyang
2023-03-29 16:55:49 +00:00
Will Constable
f4ac8e0052 Add dynamo config skip_nnmodule_hook_guards (#97830)
This lets users that are sure they won't use hooks avoid overhead
related to dynamo guards on (assumedly) empty hook dicts on all
nn modules.

Only enable this flag if you are sure you won't change hook-behavior
after compiling.  It is ok to register a hook and then compile, if
you promise never to remove/alter the hook.  It is also ok to
not register a hook and compile, if you never register a hook later.
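
A hedged usage sketch of the new flag:

```python
import torch._dynamo

# Only safe if hook behavior will not change after compiling (see the
# note below).
torch._dynamo.config.skip_nnmodule_hook_guards = True
```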

Note: this is not the best we can do, and hopefully in the future
we can avoid the need for this option following some of these paths
- make guards fast enough to not be an issue when guarding on hook
  dicts
- make a mode where dynamo actually skips tracing __call__ so
  hooks are consistently ignored by compiled programs
- use nnmodule versioning so hook changes can be guarded without
  explicit hook dict guards

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97830
Approved by: https://github.com/jansel
2023-03-29 04:25:27 +00:00
Will Constable
57c13fde18 Test and fix guard fail message in CompileProfiler (#97055)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97055
Approved by: https://github.com/voznesenskym, https://github.com/jansel
2023-03-22 02:17:57 +00:00
Michael Voznesensky
f9ce593267 Extend aot autograd dedup guards to params, stop using positions (#96774)
The purpose of this PR is to remove reliance on argument positions in dedup guards, AND extend the functionality to params.

A version of this PR was stamped previously (https://github.com/pytorch/pytorch/pull/95831) but was kinda gross, because it was based on an underlying PR that did way too much with source names.

This PR leaves most of that alone, in favor of just reusing the same name standardization logic that dynamo module registration does.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96774
Approved by: https://github.com/ezyang
2023-03-21 05:59:33 +00:00
Michael Voznesensky
722c4e59a4 Replace source check with assert (#95640)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95640
Approved by: https://github.com/ezyang
2023-03-19 21:51:59 +00:00
Edward Z. Yang
99efe3ef5a Generate type match guard for torch.Size input (#96421)
I suppose hypothetically, if the user code ends up working
polymorphically over the SizeVariable, in such a way that a tuple would
work, this type match is not necessary.  But we do not carefully test
for this.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96421
Approved by: https://github.com/jansel, https://github.com/voznesenskym
2023-03-12 23:04:55 +00:00
Michael Voznesensky
34a7c79eac Rename func (#95639)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95639
Approved by: https://github.com/ezyang
2023-03-01 23:03:09 +00:00
Michael Voznesensky
1e2e149570 Dynamic dim guards (#95584)
Guards for dynamic dims, essentially authored/co-authored by @ezyang, who triple-checked my (originally faulty) logic. Comments in the code explain the guard decision tree.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95584
Approved by: https://github.com/ezyang
2023-03-01 06:17:41 +00:00
Kazuaki Ishizaki
46385b3e48 Fix typos under torch/_dynamo directory (#95599)
This PR fixes typos in comments and messages of `.py` files under `torch/_dynamo` directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95599
Approved by: https://github.com/ezyang
2023-02-28 03:44:24 +00:00
Michael Voznesensky
9ded087bac During export, generate Python TENSOR_MATCH guards (#94970)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94970
Approved by: https://github.com/ezyang
2023-02-24 05:37:31 +00:00
Will Constable
a12e92d8e4 Support nn.Module forward hooks in torchdynamo (#92125)
Tweak dynamo behavior in 2 places when calling nn.Modules,
to route the call to __call__  instead of .forward(), since
__call__ is the codepath that eager users hit and will dispatch
to hooks correctly.
 (1) inside NNModuleVariable.call_function, which covers the common case
     of calling a module from code dynamo is already tracing
 (2) at the OptimizedModule layer, which is the entrypoint
     into a top-level nn.Module dynamo is about to compile

This exposes a new bug: NNModuleVariable used to special-case calling
module.forward() (which is a method) as a UserFunctionVariable with an extra
'self' arg.  After tracing into module.__call__, there is no longer a special
case for the eventual call into .forward, and it gets wrapped in a
UserDefinedObjectVariable following standard behavior of ._wrap().  UDOV can't be
called, so this broke some tests.

- Fix: add a new special case in _wrap() that treats methods as a UserDefinedMethod
  instead of UserDefinedObjectVariable.  Now, the forward method can be called.

Also, fix NNModuleVar.call_method routing forward back to __call__
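
A hedged demonstration of why routing through `__call__` matters in eager PyTorch:

```python
import torch

mod = torch.nn.Linear(2, 2)
seen = []
mod.register_forward_hook(lambda m, inp, out: seen.append("hook"))

x = torch.randn(1, 2)
mod.forward(x)           # direct .forward() bypasses hooks
assert seen == []
mod(x)                   # __call__ dispatches hooks correctly
assert seen == ["hook"]
```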

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92125
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/voznesenskym
2023-02-24 05:10:29 +00:00
Will Constable
24dd37ef51 Add BOOL_FALSE guard to optimize empty container case (#95248)
There is a fast way to implement a guard for an empty dict, which is to check its bool() value.

However, we can't use this guard in general, since we can only safely apply it at runtime if the runtime value actually is a dict (or another type that works with bool() in the same way). A counterexample is when a tensor is passed instead of a dict, which throws on the bool() operator.

So we can put a type check in the guard, but that is slow enough it defeats the purpose.

Instead, we note that for the case of NNModuleVariables (which are specialized NNModules not unspecialized ones), we already have a hook in place to invalidate the guards if setattr is called.  I am claiming that setattr is the only way that the type of a property on an NNModule could change.  If I'm right, then it's safe to (a) only use this guard for NNModuleVariables, (b) not do a type check inside the guard.
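
A hedged sketch of the fast path:

```python
hooks = {}  # e.g., an (assumedly) empty hook dict on an nn.Module

def bool_false_guard():
    # Cheap emptiness check, with no type check; safe only because the
    # setattr hook on specialized NNModules keeps the value a dict.
    return not hooks

assert bool_false_guard()
hooks["h"] = lambda *args: None
assert not bool_false_guard()
```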

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95248
Approved by: https://github.com/voznesenskym
2023-02-23 21:35:15 +00:00
PyTorch MergeBot
254b161def Revert "During export, generate Python TENSOR_MATCH guards (#94970)"
This reverts commit 5a8092f058.

Reverted https://github.com/pytorch/pytorch/pull/94970 on behalf of https://github.com/voznesenskym due to Clowny comparison bug on edge cases for devices
2023-02-23 17:47:59 +00:00
Michael Voznesensky
5a8092f058 During export, generate Python TENSOR_MATCH guards (#94970)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94970
Approved by: https://github.com/ezyang
2023-02-22 17:28:17 +00:00
PyTorch MergeBot
6ae60b19b7 Revert "During export, generate Python TENSOR_MATCH guards (#94970)"
This reverts commit 5d2eb6d636.

Reverted https://github.com/pytorch/pytorch/pull/94970 on behalf of https://github.com/jeanschmidt due to Requires codev to land internal test changes
2023-02-22 16:49:37 +00:00
Michael Voznesensky
5d2eb6d636 During export, generate Python TENSOR_MATCH guards (#94970)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94970
Approved by: https://github.com/ezyang
2023-02-21 19:12:57 +00:00
Edward Z. Yang
a81cf49d97 Remove dead functions (#94415)
CR from https://github.com/pytorch/pytorch/pull/94307

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94415
Approved by: https://github.com/Skylion007, https://github.com/voznesenskym
2023-02-09 12:37:56 +00:00
Edward Z. Yang
8c835a9e52 Factor out SYMPY_INTERP (#94307)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94307
Approved by: https://github.com/Skylion007, https://github.com/albanD
2023-02-07 19:23:11 +00:00
Michael Voznesensky
60a3b7425d Small refactor of shape guards to allow for 1:1 code_parts (#93894)
By moving guard string assembly into dynamo's default behavior and letting code_parts do the work, we can have much better shape guard failures.

Before this fix, the guard failure in the test would look like:

```
'x.size()[1] == x.size()[0] and x.stride()[0] == x.[264 chars]!= 1' != 'x.size()[0] < 3'
- x.size()[1] == x.size()[0] and x.stride()[0] == x.size()[0] and x.stride()[1] == 1 and x.storage_offset() == 0 and y.size()[0] == x.size()[0] and y.size()[1] == x.size()[0] and y.stride()[0] == x.size()[0] and y.stride()[1] == 1 and y.storage_offset() == 0 and x.size()[0] < 3 and x.size()[0] != 0 and x.size()[0] != 1
+ x.size()[0] < 3
```
now it is
```
"x.size()[0] < 3"
```
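
A hedged sketch of why 1:1 code parts give better failures:

```python
import torch

code_parts = [
    "x.size()[1] == x.size()[0]",
    "x.stride()[1] == 1",
    "x.size()[0] < 3",
]

def check(x):
    scope = {"x": x}
    for part in code_parts:      # evaluate clause by clause...
        if not eval(part, scope):
            return f"guard failed: {part}"  # ...and report only the culprit
    return None

print(check(torch.randn(5, 5)))  # guard failed: x.size()[0] < 3
```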

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93894
Approved by: https://github.com/ezyang
2023-02-05 09:24:12 +00:00
Andrew Gu
f9d2600ce2 [Dynamo] Rename GuardBuilder.guarded_code -> check_fn_manager (#93934)
I was reading Dynamo code to learn and thought to clarify this naming to remove the `TODO`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93934
Approved by: https://github.com/ezyang
2023-02-02 17:20:25 +00:00
Yanbo Liang
304d8dd6c8 [Dynamo] Support enum.Enum type as dict key (#93026)
Fixes a Meta-internal use case of using `enum.Enum` type as a dict key; please refer to the added test case for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93026
Approved by: https://github.com/mlazos
2023-01-29 06:37:10 +00:00
Michael Voznesensky
4ca511c69e Fix positional issues in dedup guards (#93137)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93137
Approved by: https://github.com/bertmaher, https://github.com/wconstab, https://github.com/bdhirsh
2023-01-28 19:21:32 +00:00
Michael Voznesensky
38a4cb765b Torch package support in dynamo (#91821)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91821
Approved by: https://github.com/suo, https://github.com/malfet
2023-01-20 05:03:34 +00:00
PyTorch MergeBot
60fe2f4420 Revert "Torch package support in dynamo (#91821)"
This reverts commit 3726d23219.

Reverted https://github.com/pytorch/pytorch/pull/91821 on behalf of https://github.com/huydhn due to The change causes flakiness on trunk. See https://github.com/pytorch/pytorch/issues/92196#issuecomment-1386368909 for more details
2023-01-18 02:17:25 +00:00
Will Constable
6cfaa92239 Handle tensor default func args when inlining (#90575)
Handle tensor default func/method args when inlining

    Previously, when inlining a function, its default arguments
    were only wrapped with VariableTrackers if non-tensor. Now,
    tensor default args are also handled by adding them to the
    parent InstructionTranslator as an attribute.

    - also patches up a missing source in nnmodule call_function,
      needed to properly guard on a default arg in its methods
    - adds new 'DefaultsSource' type which guards either a `__defaults__`
      or `__kwdefaults__` entry on a function
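
A hedged illustration of what a `__defaults__` guard protects against:

```python
def f(x, scale=2.0):
    return x * scale

# A DefaultsSource-style guard is roughly: f.__defaults__[0] == 2.0
assert f.__defaults__[0] == 2.0
f.__defaults__ = (3.0,)          # changing the default...
assert f.__defaults__[0] != 2.0  # ...would fail the guard -> recompile
```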

Fixes #90361  https://github.com/pytorch/torchdynamo/issues/1968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90575
Approved by: https://github.com/voznesenskym
2023-01-12 05:04:18 +00:00
Yanbo Liang
f40777e4ad [Dynamo] Fix guard bug when np.float used in control flow (#91991)
Fixes 14k github models: https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_Sanster_lama_cleaner.py#L2392

Error
```
File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/guards.py", line 263, in CONSTANT_MATCH
    self.EQUALS_MATCH(guard)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/guards.py", line 197, in EQUALS_MATCH
    assert istype(
AssertionError: float64
```

`np.float` is unspecialized by default, which has a guard on `TYPE_MATCH`. However, it will be baked in when used in control flow, which has a guard on `EQUALS_MATCH`. We should make `EQUALS_MATCH` support `np.float`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91991
Approved by: https://github.com/jansel
2023-01-11 23:16:56 +00:00
Michael Voznesensky
3726d23219 Torch package support in dynamo (#91821)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91821
Approved by: https://github.com/suo, https://github.com/malfet
2023-01-10 06:53:15 +00:00
PyTorch MergeBot
f6c7cf1bf5 Revert "Torch package support in dynamo (#91821)"
This reverts commit eeb3e49ed4.

Reverted https://github.com/pytorch/pytorch/pull/91821 on behalf of https://github.com/malfet due to According to minihud broke misc tests, see eeb3e49ed4
2023-01-09 14:39:14 +00:00
Michael Voznesensky
eeb3e49ed4 Torch package support in dynamo (#91821)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91821
Approved by: https://github.com/suo
2023-01-08 01:46:24 +00:00
Andrew M. James
7cd951c21e Properly guard all numpy usage within dynamo and remove UnspecializedNumpyVariable (#90795)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90795
Approved by: https://github.com/ngimel, https://github.com/cpuhrsch
2023-01-06 22:36:38 +00:00
Edward Z. Yang
f8740db410 Properly resolve source_ref when constructing shape guards (#91058)
Whenever you guard on something, you're supposed to tell GuardBuilder about it, so GuardBuilder knows that it has to actually bind it in scope when it creates the guard function. But shape env guards bypass that mechanism completely. Well, now they don't.

For the most part, this didn't matter in practice, because we usually had a `TENSOR_MATCH` guard floating around that made sure that the guard stayed live. But if we ever eliminate those guards (e.g., because we build it into the shape guard directly; something we'll probably want to do when https://github.com/pytorch/pytorch/pull/89707 goes online) then this will indeed matter.

One complication: some of the shape env guards are on globals. You have to make sure to shunt the usage to the correct guard builder in that case. Maybe it would be better if we refactored things so there is only one GuardBuilder. Not sure.
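
A hedged sketch of the mechanism (heavily simplified):

```python
class GuardBuilder:
    def __init__(self):
        self.scope = {}

    def source_ref(self, name, value):
        # Referencing a source registers it, so the generated guard
        # function actually has the value bound in its scope.
        self.scope[name] = value
        return name

builder = GuardBuilder()
expr = f"{builder.source_ref('s0', 5)} + 1 == 6"
guard = eval(f"lambda: {expr}", builder.scope)
assert guard()
```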

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91058
Approved by: https://github.com/voznesenskym
2022-12-30 05:56:56 +00:00
Edward Z. Yang
bcf15cd93b Store source, not sname, in Symbol (#91057)
I'm going to need this in the follow up PR. Instead of storing only Source.name() in Symbol, I now store a full on Source. Lots of replumbing reoccurs. In particular:

- Move Source to torch._guards to break cycles
- I have to add TensorPropertySource and NegateSource to handle x.size()[0] and -x codegen that I was doing with string manipulation previously
- I tighten up invariants so that I never pass source=None; instead I pass ConstantSource (these are constant sources right) and test for that rather than source being missing. I think this is more parsimonious
- Some mypy wobbles from new imports

I didn't move LocalSource and friends to torch._guards, but I ended up needing to access them in a few places. The main annoyance with moving these is that then I also need to move the bytecode codegen stuff, and that's not so easy to move without bringing in the kitchen sink.
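
A hedged sketch of how source objects replace the earlier string manipulation (class shapes simplified):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalSource:
    local_name: str
    def name(self) -> str:
        return f"L['{self.local_name}']"

@dataclass(frozen=True)
class TensorPropertySource:
    base: object
    prop: str
    idx: int
    def name(self) -> str:
        return f"{self.base.name()}.{self.prop}()[{self.idx}]"

@dataclass(frozen=True)
class NegateSource:
    base: object
    def name(self) -> str:
        return f"-{self.base.name()}"

src = TensorPropertySource(LocalSource("x"), "size", 0)
assert src.name() == "L['x'].size()[0]"
assert NegateSource(src).name() == "-L['x'].size()[0]"
```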

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91057
Approved by: https://github.com/albanD, https://github.com/voznesenskym, https://github.com/zou3519
2022-12-30 05:56:56 +00:00
PyTorch MergeBot
b68fd7e319 Revert "Store source, not sname, in Symbol (#91057)"
This reverts commit 88c581be87.

Reverted https://github.com/pytorch/pytorch/pull/91057 on behalf of https://github.com/atalman due to causing internal build failures
2022-12-21 22:33:15 +00:00
Edward Z. Yang
88c581be87 Store source, not sname, in Symbol (#91057)
I'm going to need this in the follow up PR. Instead of storing only Source.name() in Symbol, I now store a full on Source. Lots of replumbing reoccurs. In particular:

- Move Source to torch._guards to break cycles
- I have to add TensorPropertySource and NegateSource to handle x.size()[0] and -x codegen that I was doing with string manipulation previously
- I tighten up invariants so that I never pass source=None; instead I pass ConstantSource (these are constant sources right) and test for that rather than source being missing. I think this is more parsimonious
- Some mypy wobbles from new imports

I didn't move LocalSource and friends to torch._guards, but I ended up needing to access them in a few places. The main annoyance with moving these is that then I also need to move the bytecode codegen stuff, and that's not so easy to move without bringing in the kitchen sink.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91057
Approved by: https://github.com/albanD, https://github.com/voznesenskym
2022-12-21 04:51:51 +00:00
Edward Z. Yang
57390116e0 Restructure ShapeEnv so it uses GuardBuilder.SHAPE_ENV directly (#91055)
The idea is to make ShapeEnv guards less of a one-off special snowflake, and integrate it more closely with the regular builder infrastructure. But it is not so easy: the shape env code has to live after tensor match code, because we need to know that the values in question are tensors before we start matching on them. So we introduce a new `shape_env_code` field to put the special shape env code, so we can add it to the final constructed code after tensor.

Everything else works the obvious way. There's a new ShapeEnvSource for constructing the singleton SHAPE_ENV guard that drives the shape env guard construction. I added some more docs and also made the printed code for guards include the enclosing lambda for more clarity.
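
A hedged sketch of the ordering constraint:

```python
import torch

code = ["isinstance(L['x'], torch.Tensor)"]  # tensor-match code first
shape_env_code = ["L['x'].size()[0] < 3"]    # shape env code appended after

guard = eval(
    "lambda L: " + " and ".join(code + shape_env_code),
    {"torch": torch},
)
assert guard({"x": torch.randn(2)})
```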

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91055
Approved by: https://github.com/albanD, https://github.com/voznesenskym
2022-12-21 03:50:47 +00:00
Michael Voznesensky
b72caf311d Introduce guardexpr, aot autograd guarding of duplicates into torch._guards (#90955)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90955
Approved by: https://github.com/ezyang
2022-12-18 03:05:47 +00:00
Edward Z. Yang
bbea58d500 Stop using GraphArgs for shape env guard source tracking (#90911)
GraphArgs worked fairly well, but it was still missing sources
sometimes.  Now, we maintain an auxiliary data structure which we
MUST populate whenever we fakeify a tensor / allocate a bare SymInt.
This should guarantee once and for all that every symbol is available.
Should fix swin_base_patch4_window7_224.

While I was at it, I moved fakeification utility back to builder
as it was only used at once call site.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90911
Approved by: https://github.com/voznesenskym
2022-12-16 05:22:56 +00:00
Michael Voznesensky
6c8ef6a4c2 Add tracing context, Integrate dynamo guards into torch._guards (#90647)
As defined here: https://docs.google.com/document/d/1oniZEgAaHE1IMByPRWRKbUHeaW06E2HMfCTCQyMRLek/edit#

This PR creates a new structure, a TracingContext, whose lifecycle matches that of the traced frame. It carries on it a GuardsContext, and eventually, a FakeTensorMode. It is the source of truth of all accumulated guards.

In this PR, we create the structure, and integrate it into dynamo. We do so by mapping OutputGraph's guards structure to its guard structure.
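
A hedged sketch of the structure, with fields as described:

```python
class GuardsContext:
    def __init__(self):
        self.dynamo_guards = set()   # accumulated guards (source of truth)

class TracingContext:
    # Lifecycle matches that of the traced frame.
    def __init__(self):
        self.guards_context = GuardsContext()
        self.fake_mode = None        # a FakeTensorMode, eventually
```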

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90647
Approved by: https://github.com/ezyang
2022-12-14 07:35:32 +00:00
Edward Z. Yang
8fd31ac4da Preserve original GraphArgs for shape guard codegen (#90665)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90665
Approved by: https://github.com/voznesenskym
2022-12-12 02:35:23 +00:00
Michael Voznesensky
11442accc6 Make torch._guards, shuffle structures around for migration (#90636)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90636
Approved by: https://github.com/ezyang
2022-12-11 23:16:07 +00:00
PyTorch MergeBot
15a4c60383 Revert "Make torch._guards, shuffle structures around for migration (#90636)"
This reverts commit 933b6c4eed.

Reverted https://github.com/pytorch/pytorch/pull/90636 on behalf of https://github.com/huydhn due to Breaking lint on master. Please rebase and run lintrunner -a before re-merging the PR
2022-12-11 10:15:47 +00:00
Michael Voznesensky
933b6c4eed Make torch._guards, shuffle structures around for migration (#90636)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90636
Approved by: https://github.com/ezyang
2022-12-11 06:04:17 +00:00