Commit Graph

282 Commits

Wanchao Liang
2ee6b97464 [dtensor] move DTensor to public namespace (#133113)
Moving DTensor to be in the public namespace, to formally add the
documentation page that includes all the public APIs. This includes:

* many path renames and path import fixes
* a dedicated doc page without too much content yet (adding in the next
  PRs)
* To preserve BC for users still using `torch.distributed._tensor`,
  I added a shim script to redirect old path calls to the new module
  (see the sketch below)

BC preservation is evidenced by the fact that all DTensor tests still pass
without changing the public imports, so it's safe to land the changes.
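A minimal sketch of what such an import shim can look like, assuming the new public module lives at `torch.distributed.tensor` (the file path and mechanism here are illustrative, not necessarily what the PR does):

```python
# torch/distributed/_tensor/__init__.py (hypothetical shim)
import importlib
import sys

# Point the old private path at the new public module so existing imports like
# `from torch.distributed._tensor import DTensor` keep resolving.
sys.modules[__name__] = importlib.import_module("torch.distributed.tensor")
```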

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133113
Approved by: https://github.com/XilunWu
ghstack dependencies: #133305, #133306
2024-08-17 05:09:52 +00:00
Edward Z. Yang
90d2593b3e Revert #132806, #132736, #132539, #132487 (#133570)
This reverts commit 25df063f04.
This reverts commit de00c79583.
This reverts commit 419b76c4ac.
This reverts commit bc57d5b6ff.

Differential Revision: [D61335013](https://our.internmc.facebook.com/intern/diff/D61335013)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133570
Approved by: https://github.com/albanD, https://github.com/jansel, https://github.com/anijain2305
2024-08-15 20:54:21 +00:00
Xuehai Pan
24dee99cb7 Populate submodules of torch._C to sys.modules recursively (#132216)
See comment:

e9d1c26275/torch/__init__.py (L938-L950)

This PR recursively sets the submodules in the C extension to `sys.modules` (e.g., `_C._dynamo.eval_frame`).
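A minimal sketch of the idea, assuming each C-extension submodule is reachable as an attribute of its parent (the helper name below is illustrative, not the PR's code):

```python
import sys
import types

def _register_submodules(module: types.ModuleType) -> None:
    # Walk attributes, register each submodule under its dotted name, and recurse,
    # so that e.g. `import torch._C._dynamo.eval_frame` resolves without ImportError.
    for name in dir(module):
        attr = getattr(module, name, None)
        if isinstance(attr, types.ModuleType) and attr.__name__ not in sys.modules:
            sys.modules[attr.__name__] = attr
            _register_submodules(attr)
```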

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132216
Approved by: https://github.com/ezyang
2024-08-08 10:20:25 +00:00
PyTorch MergeBot
ff81ca8e0c Revert "Populate submodules of torch._C to sys.modules recursively (#132216)"
This reverts commit 672ce4610e.

Reverted https://github.com/pytorch/pytorch/pull/132216 on behalf of https://github.com/PaliC due to was breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/132216#issuecomment-2274112397))
2024-08-07 18:45:00 +00:00
Brian Hirsh
e6eee04875 dynamo: use equality guards instead of id guards for Placement/DeviceMesh (#124401)
After talking to @anijain2305, it seems we probably can't land this since it won't work for C++ guards, but we should still be able to do better than ID_MATCH.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124401
Approved by: https://github.com/anijain2305
2024-08-06 17:14:44 +00:00
Animesh Jain
419b76c4ac [dynamo] Reland 132308, 132314, 132318, 132334 - Make builtin nn modules attributes static (#132539)
Relanding 4 PRs ending at https://github.com/pytorch/pytorch/pull/132334

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132539
Approved by: https://github.com/Skylion007, https://github.com/yanboliang, https://github.com/mlazos
2024-08-03 02:08:22 +00:00
PyTorch MergeBot
b8f7019df0 Revert "[dynamo] Track params/buffers and mark them as static (#132334)"
This reverts commit babb249a89.

Reverted https://github.com/pytorch/pytorch/pull/132334 on behalf of https://github.com/anijain2305 due to broke internal tests ([comment](https://github.com/pytorch/pytorch/pull/132334#issuecomment-2265942261))
2024-08-02 18:41:19 +00:00
Animesh Jain
babb249a89 [dynamo] Track params/buffers and mark them as static (#132334)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132334
Approved by: https://github.com/ezyang, https://github.com/mlazos
2024-08-02 08:55:43 +00:00
Xuehai Pan
672ce4610e Populate submodules of torch._C to sys.modules recursively (#132216)
See comment:

e9d1c26275/torch/__init__.py (L938-L950)

This PR recursively sets the submodules in the C extension to `sys.modules` (e.g., `_C._dynamo.eval_frame`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132216
Approved by: https://github.com/ezyang
2024-08-01 12:04:59 +00:00
Animesh Jain
612ea35395 [dynamo] Introduce UnspecializedBuiltinNNModuleSource (#132312)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132312
Approved by: https://github.com/yanboliang
ghstack dependencies: #132302, #132304
2024-08-01 06:21:05 +00:00
Animesh Jain
e772547d70 [dynamo][rename/refactor] Rename guard_source NN_MODULE to SPECIALIZED_NN_MODULE (#132302)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132302
Approved by: https://github.com/yanboliang
2024-08-01 04:35:43 +00:00
Xuehai Pan
e74ba1b34a [BE][Easy][15/19] enforce style for empty lines in import segments in torch/_d*/ (#129767)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129767
Approved by: https://github.com/anijain2305
2024-07-31 21:18:11 +00:00
Animesh Jain
e2b941a1b4 [dynamo] Rename TENSOR_ALIASING to OBJECT_ALIASING. Permit OBJECT_ALIASING for dict guards (#131480)
Fixes https://github.com/pytorch/pytorch/issues/129667

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131480
Approved by: https://github.com/williamwen42
ghstack dependencies: #131347, #131367, #131378, #131389, #131405
2024-07-24 00:06:53 +00:00
Animesh Jain
e49c0acc39 [dynamo] Revert https://github.com/pytorch/pytorch/pull/130416 (#131058)
All the changes brought by the original PR have been addressed in alternative ways in this stack. Explaining why the original PR has to be reverted requires more effort, because there is some bad interaction with export.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131058
Approved by: https://github.com/williamwen42
2024-07-19 17:26:24 +00:00
Animesh Jain
ac76dd606f [dynamo] Alternative way to skip empty hooks guards on inbuilt nn modules (#131057)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131057
Approved by: https://github.com/williamwen42, https://github.com/jansel
ghstack dependencies: #131056
2024-07-19 04:42:38 +00:00
Michael Lazos
470f07c840 Add guard override capability for tensor subclass metadata (#130780)
Fixes https://github.com/pytorch/pytorch/issues/114405

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130780
Approved by: https://github.com/anijain2305, https://github.com/bdhirsh
ghstack dependencies: #130779
2024-07-17 19:13:53 +00:00
Michael Lazos
bea6762c01 Add guards on subclass metadata (#130779)
This PR adds guards in Dynamo which verify the equality of tensor subclass metadata, along with tests verifying the expected recompile behavior. The next PR adds the capability to override the guard behavior so the check can potentially be performed in a less expensive manner.

Toward fixing https://github.com/pytorch/pytorch/issues/114405

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130779
Approved by: https://github.com/anijain2305, https://github.com/bdhirsh
2024-07-17 19:13:52 +00:00
Animesh Jain
1a266def4f [dynamo][unsoundness but very controlled] Skip guards on inbuilt nn module hooks (#130420)
Reduces the guard overhead from 2.1k units to 1k units. Compared to no-inlining (0.4k units), this reduces the slowdown from 5x to 2.5x.

This introduces unsoundness, but only for hooks on inbuilt nn modules (user-defined nn module hooks are fine).

Each builtin nn module adds 4 empty ordered-dict checks to the check_fn. This blows up for models with a large number of builtin nn modules. With this PR, we skip those guards (a sketch of the kind of check being skipped follows). There is no other easy way I can think of right now to control the guard overhead.
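For illustration, these are the kinds of per-module hook dictionaries involved. The message doesn't spell out exactly which four dicts the check_fn verified, so the selection below is an assumption:

```python
import torch

lin = torch.nn.Linear(4, 4)

# Representative hook dicts on a built-in nn.Module (assumed selection); the old
# check_fn guarded that each such dict was still empty.
hook_dicts = [
    lin._forward_pre_hooks,
    lin._forward_hooks,
    lin._backward_pre_hooks,
    lin._backward_hooks,
]
print(all(len(d) == 0 for d in hook_dicts))  # True for an unmodified built-in module
```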

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130420
Approved by: https://github.com/jansel
ghstack dependencies: #130654
2024-07-15 06:19:53 +00:00
Yidi Wu
1cae60a87e Caching attr_proxy for nn_module attribute to fix guard check failure (#130280)
Fixes https://github.com/pytorch/pytorch/issues/129939

Differential Revision: [D59594605](https://our.internmc.facebook.com/intern/diff/D59594605)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130280
Approved by: https://github.com/anijain2305
2024-07-11 18:21:35 +00:00
Xuehai Pan
973037be6a [BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() (#130199)
This PR changes empty collection factory calls to Python literals:

- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`

The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:

```bash
$ python3 -m dis - <<EOS
import collections

d1 = {}
d2 = dict()

dict = collections.OrderedDict
d3 = dict()
EOS
```

```text
  0           0 RESUME                   0

  1           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (collections)
              8 STORE_NAME               0 (collections)

  3          10 BUILD_MAP                0
             12 STORE_NAME               1 (d1)

  4          14 PUSH_NULL
             16 LOAD_NAME                2 (dict)
             18 CALL                     0
             26 STORE_NAME               3 (d2)

  6          28 LOAD_NAME                0 (collections)
             30 LOAD_ATTR                8 (OrderedDict)
             50 STORE_NAME               2 (dict)

  7          52 PUSH_NULL
             54 LOAD_NAME                2 (dict)
             56 CALL                     0
             64 STORE_NAME               5 (d3)
             66 RETURN_CONST             1 (None)
```

The dict literal `{}` needs only one bytecode, `BUILD_MAP`, while the factory call `dict()` needs three: `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example of replacing it with `OrderedDict` above).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
Animesh Jain
f7d7b94017 [dynamo][unspecialized-nn-module] Distinguish between user-defined and builtin nn module (#130416)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130416
Approved by: https://github.com/jansel
ghstack dependencies: #130285, #130368
2024-07-11 14:13:24 +00:00
Animesh Jain
fed8b0055f [dynamo][bugfix] Fix the value for key manager (#130368)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130368
Approved by: https://github.com/jansel
ghstack dependencies: #130285
2024-07-11 14:13:19 +00:00
Animesh Jain
9c612df504 [dynamo][cpp-guards][QOL] Print NO_TENSOR_ALIASING guard once (#130285)
The NO_TENSOR_ALIASING guard lists all tensors. Printing it on every occurrence is ugly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130285
Approved by: https://github.com/jansel
2024-07-11 14:13:14 +00:00
Edward Z. Yang
e836ee1955 Enhancements to recompiles logs (#130043)
----

- We now record on the CacheEntry which compile id populated it, so we can say why a specific frame was rejected
- Add a structured log for recompiles under the artifact name "recompile_reasons". As it stands, it's not terribly structured, but this was the easiest thing I could do to start
- Slightly reformat multi-reason printing; since we only report one guard failure, it seems better to have it as a single line

Example output:

```
V0703 10:34:13.273000 140345997743104 torch/_dynamo/guards.py:2590] [0/1] [__recompiles] Recompiling function f in /data/users/ezyang/a/pytorch/b.py:3
V0703 10:34:13.273000 140345997743104 torch/_dynamo/guards.py:2590] [0/1] [__recompiles]     triggered by the following guard failure(s):
V0703 10:34:13.273000 140345997743104 torch/_dynamo/guards.py:2590] [0/1] [__recompiles]     - 0/0: tensor 'L['x']' size mismatch at index 0. expected 4, actual 5
```
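For reference, recompile logs like the output above can typically be enabled with `TORCH_LOGS="recompiles"` or programmatically (a usage sketch, not part of this PR):

```python
import torch

# Roughly equivalent to running the script with TORCH_LOGS="recompiles"
torch._logging.set_logs(recompiles=True)
```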

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130043
Approved by: https://github.com/anijain2305
2024-07-09 03:40:56 +00:00
Animesh Jain
7ea8a3c9b8 [dynamo] Validate check_fn (#118448)
Fixes - https://github.com/pytorch/pytorch/issues/128090

Tracker issue here - https://github.com/pytorch/pytorch/issues/129937

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118448
Approved by: https://github.com/jansel, https://github.com/ezyang
2024-07-05 18:04:12 +00:00
Joel Schlosser
6897631ceb Guard on inner tensor names for traceable wrapper subclasses (#129618)
Fixes #129601

Background: it's possible that a traceable wrapper subclass will have an optional inner tensor constituent (e.g. NJT's cached min / max sequence lengths). To specify this, the subclass's `__tensor_flatten__()` impl should leave out any unspecified optional inner tensors in the returned list of `attrs`.

This PR guards on the list of inner tensor `attrs` returned in `subclass.__tensor_flatten__()[0]`.
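A hedged sketch of the pattern described above; the subclass and attribute names are illustrative, not taken from the PR:

```python
from typing import Optional

import torch

class CachedTensor(torch.Tensor):
    # Inner tensors; `_cached_max` is optional and may be absent.
    _values: torch.Tensor
    _cached_max: Optional[torch.Tensor]

    def __tensor_flatten__(self):
        attrs = ["_values"]
        if self._cached_max is not None:
            attrs.append("_cached_max")  # listed only when the optional tensor is present
        return attrs, None  # Dynamo now guards on this `attrs` list
```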

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129618
Approved by: https://github.com/anijain2305
2024-06-28 16:30:25 +00:00
Animesh Jain
17d1723aee [dynamo][unspecialized-nn-modules] Remove dead (also incorrect) code (#129316)
This code is unused because we just inline the `.parameters` call. The code was also wrong because side effects only track the first level of mutations: an object might not be marked as mutated if one of its child objects (like a dict) is mutated.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129316
Approved by: https://github.com/jansel
2024-06-23 03:02:27 +00:00
Animesh Jain
c008488b9c [dynamo][guards] Dont run TYPE_MATCH for DICT_LENGTH C++ guard (#129163)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129163
Approved by: https://github.com/williamwen42, https://github.com/jansel
2024-06-21 06:27:19 +00:00
Animesh Jain
f2f4dde2d3 [dynamo] Remove ID_MATCH for FSDPModuleVariable (#129015)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129015
Approved by: https://github.com/yf225
ghstack dependencies: #129098
2024-06-20 19:23:32 +00:00
Animesh Jain
ea47d542ca [dynamo][guards] Remove BOOL_FALSE - not needed after C++ guards (#129098)
PyDict_Size is very fast. Earlier, with Python guards, CPython would go through layers of fluff to finally call PyDict_Size. With C++ guards, it's not needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129098
Approved by: https://github.com/jansel
2024-06-20 14:40:27 +00:00
Will Feng
979edbbe12 [Traceable FSDP2] Dynamo support FSDP2 use_training_state context manager (#127854)
Improve Dynamo to support the FSDP2 `use_training_state()` context manager.

Test command: `pytest -rA test/distributed/_composable/fsdp/test_fully_shard_compile.py::TestFullyShardCompile::test_dynamo_trace_use_training_state`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127854
Approved by: https://github.com/yanboliang
2024-06-16 08:48:52 +00:00
Animesh Jain
7e092a62e6 [dynamo] Support weakref objects (#128533)
Fixes https://github.com/pytorch/pytorch/issues/125720

I was earlier worried that DELETE_* or STORE_* on referent values should result in a graph break, because they could invalidate the weak ref. But then @zou3519 pointed out that weakref invalidation will happen EVENTUALLY; CPython provides no guarantees about when the weakref will be invalidated (even when the user calls del x and x is the last reference).

So any code that relies on del x to invalidate the weakref of x right away is BAD code. CPython provides no guarantees. Therefore we can (ab)use this nuance, and just ignore DELETE_* or STORE_* on the referent objects.

The only corner case is when Dynamo is reconstructing the weakref object. Dynamo will have a hard time being correct here, so we just SKIP_FRAME in such a case. This is rare.

CPython notes:
1) https://docs.python.org/3/library/weakref.html
2) https://docs.python.org/3/reference/datamodel.html#index-2
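A minimal sketch of the kind of weakref usage this enables Dynamo to trace (illustrative; not taken from the PR's tests):

```python
import weakref

import torch

class Holder:
    def __init__(self, scale: float) -> None:
        self.scale = scale

obj = Holder(2.0)
ref = weakref.ref(obj)

@torch.compile(backend="eager")
def fn(x):
    holder = ref()  # dereference the weakref inside the compiled region
    if holder is not None:
        return x * holder.scale
    return x

print(fn(torch.ones(3)))
```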

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128533
Approved by: https://github.com/jansel
2024-06-15 02:16:25 +00:00
Aaron Orenstein
dcfa7702c3 Flip default value for mypy disallow_untyped_defs [1/11] (#127838)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127838
Approved by: https://github.com/oulgen
2024-06-08 18:16:33 +00:00
Animesh Jain
bb6bfd9ad8 [dynamo][compile-time] Cache the child guard managers (#127377)
Reduces compile time of the MobileBertForMaskedLM model from 39 seconds to 26 seconds. This was a regression introduced by #125202. Before that PR, compile time was 24 seconds. The extra two seconds are just because we are going through an enormous number of guards.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127377
Approved by: https://github.com/jansel
2024-05-31 04:23:56 +00:00
Animesh Jain
1876f0fec1 [dynamo][nn module guards] Use TENSOR_MATCH, and not ID_MATCH, for numpy tensors (#126246)
Fixes speech_transformer regression here - https://hud.pytorch.org/benchmark/torchbench/inductor_no_cudagraphs?startTime=Tue%2C%2007%20May%202024%2019%3A22%3A54%20GMT&stopTime=Tue%2C%2014%20May%202024%2019%3A22%3A54%20GMT&granularity=hour&mode=training&dtype=amp&lBranch=main&lCommit=02093b6c6ae1046368e2500881d0bb5880873386&rBranch=main&rCommit=b24ad7eab55eaf660893dddae949fc714e434338

Thanks to @eellison and @bdhirsh for isolating the regression to nn module guards.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126246
Approved by: https://github.com/jansel
ghstack dependencies: #126203
2024-05-16 01:57:59 +00:00
Animesh Jain
90461d4986 [dynamo] Detect monkeypatching on nn module forward method (#126203)
An alternative was https://github.com/pytorch/pytorch/pull/124975. Though it was safer because it added guards for every inlined function, it caused guard overhead of > 20% for a few models. The overhead of this PR is minimal for the common unpatched case.

Fixes an internal issue - [fb.workplace.com/groups/1075192433118967/permalink/1411067766198097](https://fb.workplace.com/groups/1075192433118967/permalink/1411067766198097/)
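An illustrative sketch of the pattern being detected (not the PR's test code); the expectation, per the message, is that the patched forward is noticed rather than the old graph being silently reused:

```python
import torch

mod = torch.nn.Linear(2, 2)
compiled = torch.compile(mod, backend="eager")
x = torch.randn(1, 2)
compiled(x)  # compiles against the original forward

def patched_forward(self, inp):
    return inp * 0  # monkeypatched behavior

# Monkeypatch forward on the instance; Dynamo should detect this on the next call.
mod.forward = patched_forward.__get__(mod, type(mod))
compiled(x)
```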

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126203
Approved by: https://github.com/ezyang
2024-05-15 20:41:13 +00:00
Edward Z. Yang
2ba102f689 Implement native support for float inputs in Dynamo and ShapeEnv (#125325)
The big idea is that floats are treated as Tensors on input/output to the FX graph, but on the inside, we immediately call item() on the synthetic Tensor and record regular float operations on it. Canonicalization to Tensor operations will happen in a standalone FX pass. This behavior is controlled by the `specialize_float` config variable when set to False.

The generated graph looks like this for the test `test_unspec_float_output`:

```
 def forward(self, L_x_: "f32[3]", L_y_: "f32[]"):
     l_x_ = L_x_
     l_y_ = L_y_

     # File: /data/users/ezyang/a/pytorch/test/dynamo/test_unspec.py:511 in f, code: return x + 1, y * 2
     add: "f32[3]" = l_x_ + 1;  l_x_ = None
     item: "Sym(zf0)" = l_y_.item();  l_y_ = None
     mul: "Sym(2*zf0)" = item * 2;  item = None
     scalar_tensor: "f32[]" = torch.scalar_tensor(mul);  mul = None
     return (add, scalar_tensor)
```
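A user-level sketch of this mode, assuming the `specialize_float` flag named above is exposed on `torch._dynamo.config` (the exact config surface is an assumption):

```python
import torch
import torch._dynamo.config as dynamo_config

dynamo_config.specialize_float = False  # assumed flag: trace float inputs symbolically

@torch.compile(backend="eager")
def f(x, y):
    return x + 1, y * 2  # `y` is a plain Python float input

print(f(torch.ones(3), 5.0))
```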

The ingredients:

* **torch/_dynamo/variables/builder.py** When `specialize_float` is False, we wrap float literals with `wrap_symfloat`. This is an unholy mashup of `wrap_symint` and `wrap_unspecialized_primitive`. The overall strategy is that we first generate a tensor argument (because that's what we want to show up into the FX graph), but then immediately call item() on the tensor argument to get a SymNodeVariable, which we will do the rest of the tracing with.  Importantly, this SymNodeVariable is backed with the source of the original float: this means we can guard on the resulting value (something we could NOT do with UnspecializedPythonVariable). This has to be done manually, because if you literally call item() on the tensor, you will end up with an unbacked float. There is a bit of copy paste from wrap_symint and wrap_unspecialized_primitive which we can try to factor out, but this really is its own thing and you should review every line of code in the function.
* **torch/fx/experimental/symbolic_shapes.py** We now can generate guards on float inputs, and these guards are handled inside of ShapeEnv. So we need to be able to allocate (backed!) float symbols, and produce guards for them. Fairly straightforward generalization.
* **torch/_dynamo/codegen.py** I also need to maintain the invariant that there are no float outputs to the FX graph. I chose to do this at codegen time. When we detect a SymNodeVariable on the return stack for a float, we on the fly convert it (via `as_tensor`) to a TensorVariable, which is the true output. We then special case the output bytecode to call item() on it again. The tensor conversion is memoized on SymNodeVariable since we typically run the code generation process twice.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125325
Approved by: https://github.com/lezcano, https://github.com/jansel
2024-05-14 04:10:01 +00:00
Animesh Jain
ae5e2ab92e [dynamo][fsdp] Use Tensor match for FSDP modules (#125827)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125827
Approved by: https://github.com/yf225, https://github.com/jansel
ghstack dependencies: #125828, #125805
2024-05-09 21:26:15 +00:00
ydwu4
461ffaaaf3 [dynamo] support torchbind object input (#124978)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124978
Approved by: https://github.com/jansel
2024-05-07 03:02:00 +00:00
Aaron Gokaslan
1dd42e42c4 [BE]: Try TCH autofixes on torch/ (#125536)
Tries TCH autofixes to see what breaks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125536
Approved by: https://github.com/ezyang
2024-05-05 23:13:59 +00:00
Animesh Jain
5ba777f46e [guards][cpp-guards] Optimize NN module getattr guards (#124522)
Improves the guard overhead of the MobileBert model with nn module guards from 92000 units to 20000 units.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124522
Approved by: https://github.com/jansel
ghstack dependencies: #125439, #125421
2024-05-04 22:08:56 +00:00
Animesh Jain
8706da2bad [dynamo][cpp-guards] Improve recompilation reason logic for NO_TENSOR_ALIASING guard (#125439)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125439
Approved by: https://github.com/williamwen42
2024-05-03 04:49:41 +00:00
Animesh Jain
a13a0a2479 [dynamo][easy] Simple fixes to prepare for nn module guards (#125316)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125316
Approved by: https://github.com/williamwen42
ghstack dependencies: #125275
2024-05-02 12:08:11 +00:00
Edward Z. Yang
da5d2d9b3e Hotfix: restore CPP guard string in structured trace (#125303)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125303
Approved by: https://github.com/albanD
2024-05-02 03:57:19 +00:00
Animesh Jain
e68d65dae2 [dynamo][cpp-guards] Differentiate dict guards wrt to guarding on key order (#124779)
We guard on key order
1) When a key is a non-constant object
2) When we actually need key order - like `.values()`, `.items()`, etc. (see the sketch below)

For dicts/OrderedDicts that do not require key-order guarding, we just rely on the usual `GuardManager + DictGetItemGuardAccessor`. This is faster than going through the `list(d.keys())`-based design for OrderedDicts.
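An illustrative example of case 2, where key order is observable to the compiled code and therefore guarded (the recompile expectation below follows from the message, not from the PR's tests):

```python
import torch

@torch.compile(backend="eager")
def fn(x, d):
    for v in d.values():  # iteration order of the dict is observable here
        x = x + v
    return x

fn(torch.ones(2), {"a": 1.0, "b": 2.0})
fn(torch.ones(2), {"b": 2.0, "a": 1.0})  # different key order -> key-order guard fails, expect a recompile
```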

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124779
Approved by: https://github.com/jansel
2024-04-25 08:20:35 +00:00
Jason Ansel
11e6f84ad8 [dynamo] Graph break on uninitialized nn.Module (#123790)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123790
Approved by: https://github.com/anijain2305
ghstack dependencies: #123700, #123705, #123786
2024-04-12 19:03:13 +00:00
Animesh Jain
b9675e820e [dynamo][cpp-guards] Improve the logs (#123780)
For this program

~~~
@torch.compile(backend="eager")
def fn(x, y, d):
    return x * y * d["foo"] * d["bar"]
~~~

Python logs are

~~~
V0410 15:48:57.778000 140318524949632 torch/_dynamo/guards.py:1785] [0/0] [__guards] GUARDS:
V0410 15:48:57.778000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] ___check_type_id(L['d'], 8833952)                             # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.778000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] len(L['d']) == 2                                              # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] list(L['d'].keys()) == ['foo', 'bar']                         # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] hasattr(L['x'], '_dynamo_dynamic_indices') == False           # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] hasattr(L['y'], '_dynamo_dynamic_indices') == False           # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] ___check_type_id(L['d']['bar'], 8842592)                      # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] L['d']['bar'] == 2                                            # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] ___check_type_id(L['d']['foo'], 8842592)                      # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] L['d']['foo'] == 4                                            # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] utils_device.CURRENT_DEVICE == None                           # _dynamo/output_graph.py:450 in init_ambient_guards
V0410 15:48:57.779000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[4], stride=[1])  # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:48:57.780000 140318524949632 torch/_dynamo/guards.py:1803] [0/0] [__guards] check_tensor(L['y'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[4], stride=[1])  # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
~~~

CPP logs are

~~~
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1792] [0/0] [__guards] GUARDS:
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards]
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] TREE_GUARD_MANAGER:
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] +- RootGuardManager
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None                           # _dynamo/output_graph.py:450 in init_ambient_guards
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | +- DictSubclassGuardManager: source=L['d'], accessed_by=DictGetItemGuardAccessor(d)
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- KeyValueManager pair at index=0
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | +- KeyManager: GuardManager: source=list(L['d'].keys())[0]
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | | +- EQUALS_MATCH: list(L['d'].keys())[0] == 'foo'                               # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | +- ValueManager: GuardManager: source=L['d']['foo']
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | | +- EQUALS_MATCH: L['d']['foo'] == 4                                            # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- KeyValueManager pair at index=1
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | +- KeyManager: GuardManager: source=list(L['d'].keys())[1]
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | | +- EQUALS_MATCH: list(L['d'].keys())[1] == 'bar'                               # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | +- ValueManager: GuardManager: source=L['d']['bar']
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | | | +- EQUALS_MATCH: L['d']['bar'] == 2                                            # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | +- GuardManager: source=L['x'], accessed_by=DictGetItemGuardAccessor(x)
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[4], stride=[1])  # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False           # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['x'], L['y'])
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | +- GuardManager: source=L['y'], accessed_by=DictGetItemGuardAccessor(y)
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['y'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[4], stride=[1])  # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['y'], '_dynamo_dynamic_indices') == False           # return x * y * d["foo"] * d["bar"]  # examples/ord_dicts.py:24 in fn
V0410 15:49:41.607000 140481927914624 torch/_dynamo/guards.py:1769] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['x'], L['y'])
~~~

This info is also present in this gist for better viewing - https://gist.github.com/anijain2305/b418706e4ad4ec2d601530bc24cf8a20

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123780
Approved by: https://github.com/ezyang, https://github.com/jansel
ghstack dependencies: #123773, #123787
2024-04-11 22:23:28 +00:00
Animesh Jain
b0b7aa201c [dynamo][cpp-guards] Introduce DictSubclassGuardManager (#123773)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123773
Approved by: https://github.com/jansel
2024-04-11 22:23:28 +00:00
Animesh Jain
1346ebf12e [dynamo][guards] Delay DUPLICATE_INPUT guard because of incorrect ordering (#123605)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123605
Approved by: https://github.com/jansel
ghstack dependencies: #123606
2024-04-10 07:30:02 +00:00
Animesh Jain
7283c37c98 [dynamo] Keep guards on global function (#123423)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123423
Approved by: https://github.com/jansel
2024-04-09 04:23:11 +00:00