Commit Graph

89 Commits

Author SHA1 Message Date
William Wen
117da58b65 [dynamo 3.11] enable dynamo unittests in 3.11 (#98104)
Enable most dynamo unittests for 3.11. There are a few tests that are skipped due to failures that will be addressed in upcoming PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98104
Approved by: https://github.com/yanboliang, https://github.com/voznesenskym, https://github.com/albanD, https://github.com/jansel, https://github.com/jerryzh168, https://github.com/malfet
2023-04-10 20:04:10 +00:00
Edward Z. Yang
9a8f71f23e Convert logging f-strings to use % format (#98697)
Codemod done with
https://gist.github.com/ezyang/2e8b0463cdc6be278478495b23ff0530 with
assistance from ChatGPT.
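
A hand-written illustration of the pattern the codemod rewrites (not taken from the gist itself):

```python
import logging

log = logging.getLogger(__name__)
n, name = 3, "foo"

# Before: the f-string is formatted eagerly, even when the log level is disabled.
log.info(f"processed {n} items for {name}")

# After: %-style arguments are formatted lazily, only if the record is emitted.
log.info("processed %d items for %s", n, name)
```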

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98697
Approved by: https://github.com/voznesenskym
2023-04-10 12:19:31 +00:00
Michael Voznesensky
e016dec66e Clean up compile reason logic, report only graph break compiles (#98574)
context: https://fb.workplace.com/groups/1075192433118967/posts/1222935648344644/?comment_id=1223002365004639&reply_comment_id=1223501008288108

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98574
Approved by: https://github.com/Chillee, https://github.com/xw285cornell
2023-04-07 17:40:00 +00:00
PyTorch MergeBot
22411b6f02 Revert "[dynamo 3.11] enable dynamo unittests in 3.11 (#98104)"
This reverts commit 0066f3405f.

Reverted https://github.com/pytorch/pytorch/pull/98104 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it is failing on CPU 3.11 test in trunk 0066f3405f.  This is probably a landrace
2023-04-07 00:05:30 +00:00
William Wen
0066f3405f [dynamo 3.11] enable dynamo unittests in 3.11 (#98104)
Enable most dynamo unittests for 3.11. There are a few tests that are skipped due to failures that will be addressed in upcoming PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98104
Approved by: https://github.com/yanboliang, https://github.com/voznesenskym, https://github.com/albanD, https://github.com/jansel, https://github.com/jerryzh168, https://github.com/malfet
2023-04-06 23:15:48 +00:00
Tugsbayasgalan Manlaibaatar
ebeaf8adf1 Add hacky example inputs to dynamo produced graph (#96561)
Executorch currently uses the functorch.functionalize API; as a result, we have to invoke make_fx twice: once to filter out autograd-related stuff (this happens in torchdynamo.export(aten=True)), and once to trace the functionalized version of the graph. The previous PR changed make_fx to receive the fake tensors used in dynamo, but since Executorch invokes the second make_fx directly, we need access to the fake tensors that dynamo used. We cannot call torchdynamo.export again in the second round because we don't have a way to functionalize inside dynamo at the moment, hence this attribute in dynamo for now. Once we move to AOTAutograd functionalization, we won't have to deal with this anymore and I will remove it.

Differential Revision: [D43994692](https://our.internmc.facebook.com/intern/diff/D43994692)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96561
Approved by: https://github.com/zhxchen17, https://github.com/voznesenskym
2023-04-05 20:54:33 +00:00
Tugsbayasgalan Manlaibaatar
75ac6fdcdd Propagate dynamo shape_env to make_fx (#96437)
Currently, when we use the assume_static_by_default flag, dynamo won't produce any symbols for input tensors. But when we pass the dynamo-generated graph on to make_fx via torchdynamo.export(aten_graph=True), there is no way to pass this flag along. We enable this by passing the fake tensors dynamo used directly to make_fx, calling make_fx in "real" mode with dynamo's fake tensors.
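
A minimal sketch of the export path this touches, assuming the early-2023 `torch._dynamo.export` signature:

```python
import torch
import torch._dynamo as dynamo

def fn(x):
    return (x + 1).relu()

# aten_graph=True re-traces the dynamo graph through make_fx; with this change,
# make_fx runs in "real" mode over the fake tensors dynamo already allocated,
# so flags like assume_static_by_default carry over to the second trace.
gm, guards = dynamo.export(fn, torch.randn(3, 4), aten_graph=True)
gm.print_readable()
```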

Note that this is a modified version of https://github.com/pytorch/pytorch/pull/96143.

Differential Revision: [D44561753](https://our.internmc.facebook.com/intern/diff/D44561753)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96437
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-04-04 20:37:30 +00:00
Yanbo Liang
a6bd21d935 [Dynamo] Eagerly initializing Lazy Module to reduce graph breaks (#97946)
Fixes a Meta-internal use case.
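
For context, a lazy module's parameters only materialize on the first call, which is what previously forced a graph break (a minimal illustration, not code from this PR):

```python
import torch
import torch.nn as nn

lazy = nn.LazyLinear(out_features=8)
print(type(lazy.weight).__name__)  # UninitializedParameter

x = torch.randn(2, 4)
lazy(x)  # the first forward infers in_features=4 and materializes the weight
print(lazy.weight.shape)  # torch.Size([8, 4])
```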

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97946
Approved by: https://github.com/wconstab
2023-04-03 22:24:43 +00:00
William Wen
cb4bc8e0f5 [dynamo 3.11] support prefix instructions MAKE_CELL, COPY_FREE_VARS, RETURN_GENERATOR, RESUME (#96506)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96506
Approved by: https://github.com/jansel
2023-03-31 18:16:17 +00:00
PyTorch MergeBot
8e5c5d2023 Revert "Propagate dynamo shape_env to make_fx (#96437)"
This reverts commit 3a22916c7a.

Reverted https://github.com/pytorch/pytorch/pull/96437 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2023-03-29 23:47:59 +00:00
Tugsbayasgalan Manlaibaatar
3a22916c7a Propagate dynamo shape_env to make_fx (#96437)
Currently, when we use the assume_static_by_default flag, dynamo won't produce any symbols for input tensors. But when we pass the dynamo-generated graph on to make_fx via torchdynamo.export(aten_graph=True), there is no way to pass this flag along. We enable this by passing the fake tensors dynamo used directly to make_fx, calling make_fx in "real" mode with dynamo's fake tensors.

Note that this is a modified version of https://github.com/pytorch/pytorch/pull/96143.

Differential Revision: [D43994693](https://our.internmc.facebook.com/intern/diff/D43994693)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96437
Approved by: https://github.com/jansel, https://github.com/ezyang
2023-03-29 22:34:37 +00:00
Edward Z. Yang
8372c5dc68 Refactor dynamic dims api, stateless internals, higher level export API (#96699)
The purpose of this API is to execute a few large components of work:

1) Refactor all the internals that plumb dynamic dimension information after dynamo to be stateless
2) Decouple allocation controls around dynamic dimensions from verification
3) For (2), on the allocation side, create an enum that dictates whether we are in DUCK (the default today), STATIC (aka assume_static_by_default in the past), or DYNAMIC (user constrained, do not duck shape) mode; see the sketch after this list
4) For (2), on the verification side, we separate the list of dynamic ranges entirely from allocation. This means the shape_env does not track what we verify on; instead, it is the caller's job to invoke produce_guards() with the various things they want verified, specifically with the valid ranges. We do use constrained ranges to refine value ranges when doing analysis.
5) As an extension of (4), we have therefore decided to double down on "late" checks versus "eager" checks, primarily because the mechanism for gathering what actually matters happens during guards, and should be the purview of the caller seeking guards, not the shape env. However, for dynamo, these structures are essentially one and the same.
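
A hedged sketch of the allocation enum from point (3); the actual name and members in the PR may differ:

```python
from enum import Enum, auto

class DimDynamic(Enum):
    DUCK = auto()     # default: inputs with equal sizes share ("duck") a symbol
    STATIC = auto()   # formerly assume_static_by_default: specialize to the size
    DYNAMIC = auto()  # user constrained: allocate a fresh symbol, no duck shaping
```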

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96699
Approved by: https://github.com/avikchaudhuri, https://github.com/ezyang
2023-03-29 16:55:49 +00:00
Will Constable
784dd583a6 Automatically register/clear dynamo profiler hooks while profiling (#96199)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96199
Approved by: https://github.com/jansel
2023-03-14 21:19:33 +00:00
gmagogsfm
82d3d053b9 Properly capturing argument names for decorated/wrapped functions (#96557)
`inspect.getfullargspec` does not properly handle functions/methods wrapped by functools.wraps(); as a result, it returns an empty `args` list in the FullArgSpec.

This PR rewrites the logic using `inspect.signature`, which handles functools.wraps() correctly.
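
A self-contained illustration of the failure mode:

```python
import functools
import inspect

def decorate(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@decorate
def f(x, y):
    return x + y

# getfullargspec ignores __wrapped__, so it only sees (*args, **kwargs):
print(inspect.getfullargspec(f).args)         # []
# signature() follows __wrapped__ and recovers the real parameter names:
print(list(inspect.signature(f).parameters))  # ['x', 'y']
```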

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96557
Approved by: https://github.com/jansel
2023-03-12 01:40:06 +00:00
Will Constable
d4f5f9fdb4 Profile dynamo guards (#96119)
Adds profiler start and end callbacks to dynamo's C eval_frame implementation, which can be used to profile a named region for visualization. Currently only one usage is hooked up, profiling the cache lookup (primarily covering guards and the linear search through the linked list).

Example profile taken from toy model:
`python benchmarks/dynamo/distributed.py --toy_model --profile --dynamo aot_eager`
Screenshot: https://user-images.githubusercontent.com/4984825/223225931-b2f6c5a7-505a-4c90-9a03-34982f6dc033.png

Planning to measure overhead in CI; we probably can't afford to check this in enabled by default, so we'll have to evaluate UX options such as `config.profile_dynamo_cache = True` or some other switch.
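
Rough usage sketch; the exact event name the cache-lookup region gets in the trace is an assumption here:

```python
import torch

def fn(x):
    return torch.sin(x) + torch.cos(x)

opt_fn = torch.compile(fn)
x = torch.randn(16)
opt_fn(x)  # warm up so the next call exercises the guard/cache-lookup path

with torch.profiler.profile() as prof:
    opt_fn(x)

# The cache lookup (guard evaluation plus the linked-list search) shows up
# as its own named event alongside the usual op-level events:
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```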

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96119
Approved by: https://github.com/jansel
2023-03-07 16:12:22 +00:00
BowenBao
8ca3c881db Dynamo.export to preserve names of args & kwargs (#95851)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95851
Approved by: https://github.com/jansel
2023-03-07 05:07:08 +00:00
BowenBao
c596504292 Type annotate dynamo.export (#95742)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95742
Approved by: https://github.com/jansel
2023-03-07 05:07:08 +00:00
Michael Voznesensky
22c9896ea4 Use original arg names if possible (#95898)
Use graphargs to recover the original argument names where possible.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95898
Approved by: https://github.com/suo
2023-03-06 19:04:49 +00:00
Edward Z. Yang
d303665d33 Make int unspecialization actually work (#95621)
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.

The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.

* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.
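
A behavioral sketch of what unspecialization buys under the flags described above (the recompile behavior noted in the comments is an assumption):

```python
import torch

def fn(x, n):
    return x * n  # n is a plain Python int

opt = torch.compile(fn, dynamic=True)
x = torch.randn(4)

opt(x, 2)  # n is traced as a SymInt instead of being baked in as a constant
opt(x, 5)  # the same graph can be reused: no recompile for a new int value
opt(x, 1)  # 0 and 1 remain specialized by default, so this may recompile
```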

Fixes https://github.com/pytorch/pytorch/issues/95469

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel, https://github.com/Chillee
2023-03-04 01:22:08 +00:00
Michael Voznesensky
ac07de4a61 Add export docs, improve asserts (#94961)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94961
Approved by: https://github.com/tugsbayasgalan
2023-03-03 23:40:00 +00:00
Tugsbayasgalan Manlaibaatar
dd88954511 Preserve specialize_int_float during export (#95741)
In the next PR, I will error when dynamo tries to add an "implicit" input, so that it doesn't fail during the sanity check.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95741
Approved by: https://github.com/yanboliang
2023-03-01 21:26:16 +00:00
Will Constable
dc10ab15b7 Warn on modification of OptimizedModule.forward (#95673)
Partially addresses #95641

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95673
Approved by: https://github.com/ezyang
2023-02-28 23:21:23 +00:00
Will Constable
6bdef7a5ff Warn on dynamo OptimizedModule.forward() (#95672)
Partially addresses #95641

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95672
Approved by: https://github.com/ezyang
2023-02-28 23:21:03 +00:00
Kazuaki Ishizaki
46385b3e48 Fix typos under torch/_dynamo directory (#95599)
This PR fixes typos in comments and messages of `.py` files under the `torch/_dynamo` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95599
Approved by: https://github.com/ezyang
2023-02-28 03:44:24 +00:00
Tugsbayasgalan Manlaibaatar
454c48b987 Add experimental torch.export prototype (#95070)
This is a WIP PR for adding the torch.export API in OSS. A couple of points:
- I intentionally named it experimental_export so that people don't get confused into thinking this is our official API.
- We don't plan to use the AOTAutograd backend just yet. The reason we have it here is that the functionalization AOTAutograd uses is what we need for export (handling of param/buffer mutation, etc.). In the near future, I will extract the functionalization part and use it on top of make_fx; what we have right now is merely a placeholder.
- The reason we want to do this now is that we want some minimal tests running in OSS so that we can catch regressions earlier.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95070
Approved by: https://github.com/gmagogsfm, https://github.com/zhxchen17
2023-02-28 02:40:19 +00:00
Will Constable
a12e92d8e4 Support nn.Module forward hooks in torchdynamo (#92125)
Tweak dynamo behavior in 2 places when calling nn.Modules,
to route the call to __call__  instead of .forward(), since
__call__ is the codepath that eager users hit and will dispatch
to hooks correctly.
 (1) inside NNModuleVariable.call_function, which covers the common case
     of calling a module from code dynamo is already tracing
 (2) at the OptimizedModule layer, which is the entrypoint
     into a top-level nn.Module dynamo is about to compile

This exposes a new bug: NNModuleVariable used to special-case calling
module.forward() (which is a method) as a UserFunctionVariable with an extra
'self' arg.  After tracing into module.__call__, there is no longer a special
case for the eventual call into .forward, and it gets wrapped in a
UserDefinedObjectVariable following standard behavior of ._wrap().  UDOV can't be
called, so this broke some tests.

- Fix: add a new special case in _wrap() that treats methods as a UserDefinedMethod
  instead of UserDefinedObjectVariable.  Now, the forward method can be called.

Also fixes NNModuleVariable.call_method to route forward back to __call__.
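
The net effect, in a small sketch: hooks registered on a compiled module now fire, because the call routes through __call__:

```python
import torch
import torch.nn as nn

class Double(nn.Module):
    def forward(self, x):
        return x * 2

m = Double()
# Forward hooks dispatch from __call__, which dynamo now routes through:
m.register_forward_hook(lambda mod, inp, out: out + 1)

compiled = torch.compile(m)
print(compiled(torch.ones(3)))  # tensor([3., 3., 3.])
```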

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92125
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/voznesenskym
2023-02-24 05:10:29 +00:00
Edward Z. Yang
ca7eb1bab2 Preserve meta["val"] on export (#95314)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95314
Approved by: https://github.com/yinghai, https://github.com/voznesenskym
2023-02-22 23:24:57 +00:00
William Wen
8928e7bdb8 Raise error on 3.11 dynamo export (#95088)
For https://github.com/pytorch/pytorch/issues/94914. Realized that `dynamo.export` doesn't immediately raise an error when dynamo is trying to run on 3.11/windows.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95088
Approved by: https://github.com/weiwangmeta
2023-02-17 23:33:38 +00:00
William Wen
5cdedab0cc Raise error if torch.compile is called from windows or py 3.11 (#94940)
For https://github.com/pytorch/pytorch/issues/94914
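
The guard amounts to an early version/platform check, roughly like this sketch (the actual messages and call site differ):

```python
import sys

def _check_compile_supported():
    if sys.version_info >= (3, 11):
        raise RuntimeError("torch.compile is not supported on Python 3.11+")
    if sys.platform == "win32":
        raise RuntimeError("torch.compile is not supported on Windows")
```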

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94940
Approved by: https://github.com/albanD
2023-02-16 23:34:52 +00:00
Jason Ansel
4d6a4401f8 Raise warning if torch.compile options change without reset (#94680)
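
To pick up new options cleanly rather than relying on the warning, reset dynamo between configurations (a usage sketch):

```python
import torch
import torch._dynamo as dynamo

def fn(x):
    return x + 1

torch.compile(fn, mode="reduce-overhead")(torch.ones(2))

dynamo.reset()  # clear compiled caches; changing options without this warns
torch.compile(fn, mode="max-autotune")(torch.ones(2))
```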
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94680
Approved by: https://github.com/wconstab, https://github.com/malfet
2023-02-13 20:21:04 +00:00
PyTorch MergeBot
e61d5b9588 Revert "Dynamo Export use fake tensor (#94276)"
This reverts commit 54fa980186.

Reverted https://github.com/pytorch/pytorch/pull/94276 on behalf of https://github.com/jeanschmidt due to break several internal build/test jobs: https://fburl.com/phabricator/1tik7ggb
2023-02-13 09:36:41 +00:00
Michael Voznesensky
e44586a78f Pass input tensor __dict__ along to placeholder nodes (#94080)
```
import torch
import torch.nn as nn

import torch._dynamo.config
import torch._inductor.config

def pre_attention_state_ops(input, mems, state):
    lc_key = state[0]
    lc_val = state[1]
    bar = []
    for i in range(0, 4):
        bar2 = []
        for j in range(0, 3):
            bar2.append(
                lc_key + lc_val + torch.tensor([0.1, 0.25, 0.4, 0.5, 0.1])
            )
        bar.append(bar2)

    return bar

mems = torch.tensor([[[1.8364, 0.2724, -1.4917, -0.4367, 0.8640]]])
state = [
    torch.tensor([[[1.0517, 0.3848, -0.6472, 0.0823, 0.9116]]]),
    torch.tensor([[[1.0517, 0.3848, -0.6472, 0.0823, 0.9116]]]),
]
i = torch.tensor(
    [
        [0.0313, -0.1487, -0.3846, -0.5321],
        [-1.7073, 1.3331, -0.0890, -1.4935],
        [-0.8314, -0.1862, -0.5935, 1.5232],
    ]
)

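# The tags land in each tensor's __dict__; with this PR, dynamo copies that
# __dict__ onto the corresponding placeholder node's meta.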
torch._dynamo.tag(mems, "MEMS")
torch._dynamo.tag(i, "FOO")
torch._dynamo.tag(state[0], "STATE_0")
torch._dynamo.tag(state[1], "HMMM")

exported = torch._dynamo.export(pre_attention_state_ops, i, mems, state)
out_graph = exported[0]

dynamo_result = out_graph(i, mems, state)
nodes = list(out_graph.graph.nodes)
placeholders = [node for node in nodes if node.op == "placeholder"]
for placeholder in placeholders:
    if "tags" in placeholder.meta:
        print("PLACEHOLDER TAGS?", placeholder.meta["tags"])

```

prints

PLACEHOLDER TAGS? ['STATE_0']
PLACEHOLDER TAGS? ['HMMM']

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94080
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-02-10 18:09:41 +00:00
Sherlock Huang
54fa980186 Dynamo Export use fake tensor (#94276)
This is a prerequisite for dynamo.export() to produce graphs with fine-grained dynamic shapes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94276
Approved by: https://github.com/voznesenskym
2023-02-10 01:59:58 +00:00
Jason Ansel
2b0d7e63f0 Move dynamo.optimizations.distributed to backends (#93408)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93408
Approved by: https://github.com/wconstab
2023-02-02 20:42:17 +00:00
Jason Ansel
ee2729890c Refactor dynamo register_backend/BACKENDS (#93389)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93389
Approved by: https://github.com/voznesenskym
2023-02-02 19:41:48 +00:00
Jason Ansel
45eadc2c4d ConfigModule for _{dynamo,inductor}.config (#93252)
This refactors the way dynamo/inductor configs are handled to check for invalid configs and add options like patching and serialization.
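
For example, the patching this enables (a sketch; `verbose` stands in for any real dynamo config option):

```python
import torch._dynamo.config as config

# Invalid option names are now rejected instead of being silently ignored:
# config.no_such_option = True  # raises

# Temporarily override an option for a scoped region:
with config.patch(verbose=True):
    pass  # code here runs with the patched value
# the original value is restored on exit
```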

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93252
Approved by: https://github.com/voznesenskym
2023-02-01 19:38:05 +00:00
Sherlock Huang
36fe31f537 [Reland] Refactor stack_trace preservation for node meta preservation (#90803) (#92400)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90803
Approved by: https://github.com/jerryzh168, https://github.com/albanD
ghstack-source-id: 5848cca08ef5d6f8868f4f79d8bc29711e9a52c2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92400
Approved by: https://github.com/jerryzh168
2023-01-30 23:30:43 +00:00
Thiago Crepaldi
95dfad9d93 Add kwargs support to torch.export() API (#92013)
Fixes [#1997](https://github.com/pytorch/torchdynamo/issues/1997)
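
A usage sketch under this change, assuming `torch._dynamo.export` forwards keyword arguments to the traced callable:

```python
import torch
import torch._dynamo as dynamo

def fn(x, *, scale=1.0):
    return x * scale

# Keyword arguments are now accepted alongside positional example inputs:
gm, guards = dynamo.export(fn, torch.randn(3), scale=2.0)
```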

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92013
Approved by: https://github.com/jansel
2023-01-27 01:58:51 +00:00
PyTorch MergeBot
44132cc4b0 Revert "Add --timing flag, phase timing to @dynamo_timed (#92637)"
This reverts commit 773b513435.

Reverted https://github.com/pytorch/pytorch/pull/92637 on behalf of https://github.com/malfet due to Broke lint
2023-01-20 16:23:20 +00:00
Michael Voznesensky
773b513435 Add --timing flag, phase timing to @dynamo_timed (#92637)
Ex output:
```
 TIMING:
 entire_frame_compile:8.574629999999999
 backend_compile:5.26806
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92637
Approved by: https://github.com/ezyang
2023-01-20 05:01:21 +00:00
Edward Z. Yang
90024436e7 Do not specialize int/float with dynamic=True (#92570)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92570
Approved by: https://github.com/bdhirsh
2023-01-19 16:27:45 +00:00
Jason Ansel
bbce4184be Refactor inductor to use standard BACKENDS dict (#92187)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92187
Approved by: https://github.com/desertfire
2023-01-17 04:05:43 +00:00
PyTorch MergeBot
1a98c3e36c Revert "Add kwargs support to torch.export() API (#92013)"
This reverts commit 890b68281a.

Reverted https://github.com/pytorch/pytorch/pull/92013 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-16 13:03:48 +00:00
Thiago Crepaldi
890b68281a Add kwargs support to torch.export() API (#92013)
Fixes [#1997](https://github.com/pytorch/torchdynamo/issues/1997)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92013
Approved by: https://github.com/jansel
2023-01-13 15:17:26 +00:00
PyTorch MergeBot
498be7ed25 Revert "Refactor stack_trace preservation for node meta preservation (#90803)"
This reverts commit 0f1302eeae.

Reverted https://github.com/pytorch/pytorch/pull/90803 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-10 10:44:28 +00:00
Sherlock Huang
42a63a7ed9 Dynamo.export uses dynamic=True for symbolic tracing (#91899)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91899
Approved by: https://github.com/ezyang
2023-01-10 01:12:22 +00:00
Sherlock Huang
0f1302eeae Refactor stack_trace preservation for node meta preservation (#90803)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90803
Approved by: https://github.com/jerryzh168, https://github.com/albanD
2023-01-09 23:23:27 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
69acc34083 Automatically convert real tensors to fake in dynamo export (#91742)
Summary: We don't care about params/buffers being mutated in dynamo export, so it is safe to always convert them to fake tensors.
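
What "convert to fake" means mechanically, in a minimal sketch using the FakeTensorMode helper:

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

mode = FakeTensorMode()
real = torch.randn(3, 3)
fake = mode.from_tensor(real)  # same shape/dtype/device metadata, no real storage
print(fake.shape, fake.dtype)  # torch.Size([3, 3]) torch.float32
```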

Test Plan: CI

Differential Revision: D42353789

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91742
Approved by: https://github.com/qihqi
2023-01-06 21:34:31 +00:00
Michael Lazos
1accd915a4 Re-enable optimizers (#90709)
Fixes
https://github.com/pytorch/pytorch/issues/90165
https://github.com/pytorch/torchdynamo/issues/328

Re-enables optimizer capture + compilation now that the dynamo slowdowns have been fixed. It shows speedups; numbers to come soon.
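
The pattern this re-enables, roughly (a compiled optimizer-step sketch):

```python
import torch

model = torch.nn.Linear(8, 8)
opt = torch.optim.Adam(model.parameters())

@torch.compile
def step():
    opt.step()

model(torch.randn(4, 8)).sum().backward()
step()           # the optimizer step is captured and compiled
opt.zero_grad()  # standard loop bookkeeping
```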

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90709
Approved by: https://github.com/anijain2305, https://github.com/jansel, https://github.com/yanboliang
2022-12-19 04:07:41 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
f660d62ddc Make dynamo.export preserve user input/output format (#90884)
Currently, dynamo flattens the user inputs, so when users reuse the inputs they traced with, the exported graph wouldn't work, as it expects flat args. This PR changes this behaviour by explicitly wrapping the dynamo-produced graph with the correct user input/output format.
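
An illustration of the preserved structure (a sketch; nested inputs round-trip through the exported graph):

```python
import torch
import torch._dynamo as dynamo

def fn(inputs):
    return {"sum": inputs["a"] + inputs["b"]}

example = {"a": torch.ones(2), "b": torch.ones(2)}
gm, guards = dynamo.export(fn, example)
# The exported callable accepts the same nested format the user traced with:
print(gm(example))  # {'sum': tensor([2., 2.])}
```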

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90884
Approved by: https://github.com/zhxchen17, https://github.com/voznesenskym
2022-12-16 00:57:09 +00:00