Commit Graph

77 Commits

Author SHA1 Message Date
Will Constable
784dd583a6 Automatically register/clear dynamo profiler hooks while profiling (#96199)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96199
Approved by: https://github.com/jansel
2023-03-14 21:19:33 +00:00
gmagogsfm
82d3d053b9 Properly capturing argument names for decorated/wrapped functions (#96557)
`inspect.getfullargspec` does not properly handle functions/methods wrapped by `functools.wraps()`: it inspects the wrapper rather than following `__wrapped__`, so it returns an empty `args` list in the resulting FullArgSpec.

This PR rewrites the logic using `inspect.signature`, which handles functools.wraps() correctly.
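
A minimal repro of the difference (illustrative, not taken from the PR):

```python
import functools
import inspect

def decorate(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@decorate
def add(x, y):
    return x + y

# getfullargspec ignores __wrapped__ and reports the wrapper's own spec:
print(inspect.getfullargspec(add).args)         # []
# signature() follows __wrapped__ and recovers the real parameter names:
print(list(inspect.signature(add).parameters))  # ['x', 'y']
```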

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96557
Approved by: https://github.com/jansel
2023-03-12 01:40:06 +00:00
Will Constable
d4f5f9fdb4 Profile dynamo guards (#96119)
Adds a profiler start and end callback to dynamo's C eval_frame impl, which can be used to profile a region, providing a name for visualization. Currently this only hooks up one usage, profiling cache lookup (primarily covering guards and the linear search through the linked list).

Example profile taken from toy model:
`python benchmarks/dynamo/distributed.py --toy_model --profile --dynamo aot_eager`
(screenshot: https://user-images.githubusercontent.com/4984825/223225931-b2f6c5a7-505a-4c90-9a03-34982f6dc033.png)

Planning to measure overhead in CI; we probably can't afford to check this in enabled by default, so we will have to evaluate UX options such as `config.profile_dynamo_cache = True` or some other switch.
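
For reference, a minimal sketch of how the new region could be captured in a trace (the model, backend, and profiler options here are placeholders, not from the PR):

```python
import torch

@torch.compile(backend="aot_eager")
def f(x):
    return x.sin() + x.cos()

x = torch.randn(8)
f(x)  # first call compiles; subsequent calls go through the dynamo cache lookup
with torch.profiler.profile() as prof:
    f(x)
# the cache-lookup region (guards + linked-list search) shows up as an event
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```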

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96119
Approved by: https://github.com/jansel
2023-03-07 16:12:22 +00:00
BowenBao
8ca3c881db Dynamo.export to preserve names of args & kwargs (#95851)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95851
Approved by: https://github.com/jansel
2023-03-07 05:07:08 +00:00
BowenBao
c596504292 Type annotate dynamo.export (#95742)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95742
Approved by: https://github.com/jansel
2023-03-07 05:07:08 +00:00
Michael Voznesensky
22c9896ea4 Use original arg names if possible (#95898)
Uses graphargs to recover the original argument names where possible.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95898
Approved by: https://github.com/suo
2023-03-06 19:04:49 +00:00
Edward Z. Yang
d303665d33 Make int unspecialization actually work (#95621)
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.

The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.

* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.
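
A hedged sketch of the intended behavior (the backend and exact recompile semantics are assumptions):

```python
import torch

def f(x, n):
    return x + n

compiled = torch.compile(f, dynamic=True)
x = torch.randn(4)
compiled(x, 3)  # n is traced as a SymInt, not burned in as a constant
compiled(x, 7)  # a new value should reuse the graph instead of respecializing
compiled(x, 1)  # 0 and 1 stay specialized, so this may trigger a recompile
```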

Fixes https://github.com/pytorch/pytorch/issues/95469

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel, https://github.com/Chillee
2023-03-04 01:22:08 +00:00
Michael Voznesensky
ac07de4a61 Add export docs, improve asserts (#94961)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94961
Approved by: https://github.com/tugsbayasgalan
2023-03-03 23:40:00 +00:00
Tugsbayasgalan Manlaibaatar
dd88954511 Preserve specialize_int_float during export (#95741)
In the next PR, I will raise an error when dynamo tries to add an "implicit" input, so that it doesn't fail during the sanity check.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95741
Approved by: https://github.com/yanboliang
2023-03-01 21:26:16 +00:00
Will Constable
dc10ab15b7 Warn on modification of OptimizedModule.forward (#95673)
Partially addresses #95641

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95673
Approved by: https://github.com/ezyang
2023-02-28 23:21:23 +00:00
Will Constable
6bdef7a5ff Warn on dynamo OptimizedModule.forward() (#95672)
Partially addresses #95641
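
A sketch of the pattern these two warnings target (the exact warning text and trigger are assumptions):

```python
import torch
import torch.nn as nn

model = torch.compile(nn.Linear(4, 4))  # wraps the module in OptimizedModule
x = torch.randn(2, 4)
model(x)          # supported path: goes through __call__
model.forward(x)  # bypassing __call__ (or reassigning .forward) now warns
```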

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95672
Approved by: https://github.com/ezyang
2023-02-28 23:21:03 +00:00
Kazuaki Ishizaki
46385b3e48 Fix typos under torch/_dynamo directory (#95599)
This PR fixes typos in comments and messages of `.py` files under the `torch/_dynamo` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95599
Approved by: https://github.com/ezyang
2023-02-28 03:44:24 +00:00
Tugsbayasgalan Manlaibaatar
454c48b987 Add experimental torch.export prototype (#95070)
This is a WIP PR for adding a torch.export API in OSS. A couple of points:
- I intentionally named it experimental_export so that people don't get confused thinking this is our official API
- We don't plan to use the AOTAutograd backend just yet. The reason we have it here is that the functionalization AOTAutograd uses is what we need for export (handling of param/buffer mutation, etc.). In the near future, I will extract the functionalization part and use it on top of make_fx. What we have right now is merely a placeholder.
- The reason we want to do this now is that we want some minimal tests running in OSS so that we can catch regressions earlier.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95070
Approved by: https://github.com/gmagogsfm, https://github.com/zhxchen17
2023-02-28 02:40:19 +00:00
Will Constable
a12e92d8e4 Support nn.Module forward hooks in torchdynamo (#92125)
Tweak dynamo behavior in 2 places when calling nn.Modules,
to route the call to __call__  instead of .forward(), since
__call__ is the codepath that eager users hit and will dispatch
to hooks correctly.
 (1) inside NNModuleVariable.call_function, which covers the common case
     of calling a module from code dynamo is already tracing
 (2) at the OptimizedModule layer, which is the entrypoint
     into a top-level nn.Module dynamo is about to compile

This exposes a new bug: NNModuleVariable used to special-case calling
module.forward() (which is a method) as a UserFunctionVariable with an extra
'self' arg.  After tracing into module.__call__, there is no longer a special
case for the eventual call into .forward, and it gets wrapped in a
UserDefinedObjectVariable following standard behavior of ._wrap().  UDOV can't be
called, so this broke some tests.

- Fix: add a new special case in _wrap() that treats methods as a UserDefinedMethod
  instead of UserDefinedObjectVariable.  Now, the forward method can be called.

Also fixes NNModuleVariable.call_method routing forward back to __call__.
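
An illustrative check of the new behavior (a sketch, not from the PR's test suite):

```python
import torch
import torch.nn as nn

mod = nn.Linear(4, 4)

def double_output(module, args, output):
    return output * 2  # observable effect proves the hook actually ran

mod.register_forward_hook(double_output)
compiled = torch.compile(mod)

x = torch.randn(2, 4)
# both paths now route through __call__, so the hook fires in both
torch.testing.assert_close(compiled(x), mod(x))
```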

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92125
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/voznesenskym
2023-02-24 05:10:29 +00:00
Edward Z. Yang
ca7eb1bab2 Preserve meta["val"] on export (#95314)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95314
Approved by: https://github.com/yinghai, https://github.com/voznesenskym
2023-02-22 23:24:57 +00:00
William Wen
8928e7bdb8 Raise error on 3.11 dynamo export (#95088)
For https://github.com/pytorch/pytorch/issues/94914. Realized that `dynamo.export` doesn't immediately raise an error when dynamo runs on Python 3.11 or Windows.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95088
Approved by: https://github.com/weiwangmeta
2023-02-17 23:33:38 +00:00
William Wen
5cdedab0cc Raise error if torch.compile is called from windows or py 3.11 (#94940)
For https://github.com/pytorch/pytorch/issues/94914

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94940
Approved by: https://github.com/albanD
2023-02-16 23:34:52 +00:00
Jason Ansel
4d6a4401f8 Raise warning if torch.compile options change without reset (#94680)
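A sketch of the sequence that should now warn (exact trigger conditions are assumptions):

```python
import torch
import torch._dynamo

def f(x):
    return x + 1

torch.compile(f, mode="reduce-overhead")(torch.randn(3))
# recompiling the same code with different options without a reset now warns:
torch.compile(f, mode="max-autotune")(torch.randn(3))
torch._dynamo.reset()  # clears compiled state so new options apply cleanly
```
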
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94680
Approved by: https://github.com/wconstab, https://github.com/malfet
2023-02-13 20:21:04 +00:00
PyTorch MergeBot
e61d5b9588 Revert "Dynamo Export use fake tensor (#94276)"
This reverts commit 54fa980186.

Reverted https://github.com/pytorch/pytorch/pull/94276 on behalf of https://github.com/jeanschmidt because it broke several internal build/test jobs: https://fburl.com/phabricator/1tik7ggb
2023-02-13 09:36:41 +00:00
Michael Voznesensky
e44586a78f Pass input tensor __dict__ along to placeholder nodes (#94080)
```
import torch
import torch.nn as nn

import torch._dynamo.config
import torch._inductor.config

def pre_attention_state_ops(input, mems, state):
    lc_key = state[0]
    lc_val = state[1]
    bar = []
    for i in range(0, 4):
        bar2 = []
        for j in range(0, 3):
            bar2.append(
                lc_key + lc_val + torch.tensor([0.1, 0.25, 0.4, 0.5, 0.1])
            )
        bar.append(bar2)

    return bar

mems = torch.tensor([[[1.8364, 0.2724, -1.4917, -0.4367, 0.8640]]])
state = [
    torch.tensor([[[1.0517, 0.3848, -0.6472, 0.0823, 0.9116]]]),
    torch.tensor([[[1.0517, 0.3848, -0.6472, 0.0823, 0.9116]]]),
]
i = torch.tensor(
    [
        [0.0313, -0.1487, -0.3846, -0.5321],
        [-1.7073, 1.3331, -0.0890, -1.4935],
        [-0.8314, -0.1862, -0.5935, 1.5232],
    ]
)

torch._dynamo.tag(mems, "MEMS")
torch._dynamo.tag(i, "FOO")
torch._dynamo.tag(state[0], "STATE_0")
torch._dynamo.tag(state[1], "HMMM")

exported = torch._dynamo.export(pre_attention_state_ops, i, mems, state)
out_graph = exported[0]

dynamo_result = out_graph(i, mems, state)
nodes = list(out_graph.graph.nodes)
placeholders = [node for node in nodes if node.op == "placeholder"]
for placeholder in placeholders:
    if "tags" in placeholder.meta:
        print("PLACEHOLDER TAGS?", placeholder.meta["tags"])

```

prints

```
PLACEHOLDER TAGS? ['STATE_0']
PLACEHOLDER TAGS? ['HMMM']
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94080
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-02-10 18:09:41 +00:00
Sherlock Huang
54fa980186 Dynamo Export use fake tensor (#94276)
This is a prerequisite for dynamo.export() to produce graphs with fine-grained dynamic shapes.
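
For context, fake tensors carry shape and dtype metadata without real storage, which is what lets export reason about shapes without real compute; a minimal illustration (not from the PR):

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    x = torch.empty(2, 3)          # a FakeTensor: metadata only, no real data
    y = x @ x.transpose(0, 1)      # shape propagation runs without real compute

print(y.shape)  # torch.Size([2, 2])
```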

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94276
Approved by: https://github.com/voznesenskym
2023-02-10 01:59:58 +00:00
Jason Ansel
2b0d7e63f0 Move dynamo.optimizations.distributed to backends (#93408)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93408
Approved by: https://github.com/wconstab
2023-02-02 20:42:17 +00:00
Jason Ansel
ee2729890c Refactor dynamo register_backend/BACKENDS (#93389)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93389
Approved by: https://github.com/voznesenskym
2023-02-02 19:41:48 +00:00
Jason Ansel
45eadc2c4d ConfigModule for _{dynamo,inductor}.config (#93252)
This refactors the way dynamo/inductor configs are handled to check for invalid configs and add options like patching and serialization.
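
A brief sketch of the resulting surface (the option name here is just an example):

```python
import torch._dynamo.config as dynamo_config

# temporarily override a config value; the old value is restored on exit
with dynamo_config.patch(verbose=True):
    pass  # code here observes dynamo_config.verbose == True

# unknown option names are now rejected instead of silently accepted, e.g.
# dynamo_config.patch(not_a_real_option=1) raises
```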

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93252
Approved by: https://github.com/voznesenskym
2023-02-01 19:38:05 +00:00
Sherlock Huang
36fe31f537 [Reland] Refactor stack_trace preservation for node meta preservation (#90803) (#92400)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90803
Approved by: https://github.com/jerryzh168, https://github.com/albanD
ghstack-source-id: 5848cca08ef5d6f8868f4f79d8bc29711e9a52c2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92400
Approved by: https://github.com/jerryzh168
2023-01-30 23:30:43 +00:00
Thiago Crepaldi
95dfad9d93 Add kwargs support to torch.export() API (#92013)
Fixes [#1997](https://github.com/pytorch/torchdynamo/issues/1997)
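
Sketch of the added calling convention (return shape matches the dynamo.export of this era; details are assumptions):

```python
import torch
import torch._dynamo

def f(x, *, bias):
    return x + bias

graph_module, guards = torch._dynamo.export(f, torch.randn(3), bias=torch.randn(3))
```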

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92013
Approved by: https://github.com/jansel
2023-01-27 01:58:51 +00:00
PyTorch MergeBot
44132cc4b0 Revert "Add --timing flag, phase timing to @dynamo_timed (#92637)"
This reverts commit 773b513435.

Reverted https://github.com/pytorch/pytorch/pull/92637 on behalf of https://github.com/malfet because it broke lint
2023-01-20 16:23:20 +00:00
Michael Voznesensky
773b513435 Add --timing flag, phase timing to @dynamo_timed (#92637)
Ex output:
```
 TIMING:
 entire_frame_compile:8.574629999999999
 backend_compile:5.26806
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92637
Approved by: https://github.com/ezyang
2023-01-20 05:01:21 +00:00
Edward Z. Yang
90024436e7 Do not specialize int/float with dynamic=True (#92570)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92570
Approved by: https://github.com/bdhirsh
2023-01-19 16:27:45 +00:00
Jason Ansel
bbce4184be Refactor inductor to use standard BACKENDS dict (#92187)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92187
Approved by: https://github.com/desertfire
2023-01-17 04:05:43 +00:00
PyTorch MergeBot
1a98c3e36c Revert "Add kwargs support to torch.export() API (#92013)"
This reverts commit 890b68281a.

Reverted https://github.com/pytorch/pytorch/pull/92013 on behalf of https://github.com/DanilBaibak because it broke an internal build
2023-01-16 13:03:48 +00:00
Thiago Crepaldi
890b68281a Add kwargs support to torch.export() API (#92013)
Fixes [#1997](https://github.com/pytorch/torchdynamo/issues/1997)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92013
Approved by: https://github.com/jansel
2023-01-13 15:17:26 +00:00
PyTorch MergeBot
498be7ed25 Revert "Refactor stack_trace preservation for node meta preservation (#90803)"
This reverts commit 0f1302eeae.

Reverted https://github.com/pytorch/pytorch/pull/90803 on behalf of https://github.com/DanilBaibak because it broke an internal build
2023-01-10 10:44:28 +00:00
Sherlock Huang
42a63a7ed9 Dynamo.export uses dynamic=True for symbolic tracing (#91899)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91899
Approved by: https://github.com/ezyang
2023-01-10 01:12:22 +00:00
Sherlock Huang
0f1302eeae Refactor stack_trace preservation for node meta preservation (#90803)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90803
Approved by: https://github.com/jerryzh168, https://github.com/albanD
2023-01-09 23:23:27 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
69acc34083 Automatically convert real tensors to fake in dynamo export (#91742)
Summary: We don't care about params/buffers being mutated in dynamo export, so it is safe to always convert them to fake tensors.

Test Plan: CI

Differential Revision: D42353789

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91742
Approved by: https://github.com/qihqi
2023-01-06 21:34:31 +00:00
Michael Lazos
1accd915a4 Re-enable optimizers (#90709)
Fixes
https://github.com/pytorch/pytorch/issues/90165
https://github.com/pytorch/torchdynamo/issues/328

Re-enables optimizer capture + compilation now that the dynamo slowdowns have been fixed. It also shows speedups; numbers to come soon.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90709
Approved by: https://github.com/anijain2305, https://github.com/jansel, https://github.com/yanboliang
2022-12-19 04:07:41 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
f660d62ddc Make dynamo.export preserve user input/output format (#90884)
Currently, dynamo flattens the user inputs, so when users reuse the inputs they traced with, the exported graph doesn't work because it expects flat args. This PR changes this behaviour by explicitly wrapping the dynamo-produced graph to match the original user input/output format.
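
A sketch of the resulting behavior (details assumed):

```python
import torch
import torch._dynamo

def f(inputs):
    return inputs["a"] + inputs["b"]

example = {"a": torch.randn(3), "b": torch.randn(3)}
graph_module, guards = torch._dynamo.export(f, example)
# the exported callable now accepts the original nested structure directly,
# instead of requiring manually flattened positional args
out = graph_module(example)
```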

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90884
Approved by: https://github.com/zhxchen17, https://github.com/voznesenskym
2022-12-16 00:57:09 +00:00
William Wen
e9dc8cc19b Add torch.compile support to minifier (#90308)
Initial fix for https://github.com/pytorch/torchdynamo/issues/1964.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90308
Approved by: https://github.com/mlazos
2022-12-14 18:24:42 +00:00
Michael Voznesensky
4cdc96fb4f Add hooks structure for passing around user provided hooks, add a new guard_failure_fn (#90371)
This PR introduces a new function we can pass to torch._dynamo.optimize: guard_failure_fn. Usage is shown in this PR and the one stacked on top of it, but the gist is that it emits failed-guard reason strings alongside the offending code. This is useful for tests and debugging, as it gives far finer-grained assertions and control than the compile counter alone.
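
A sketch of the intended usage (the keyword is spelled guard_fail_fn here, which is an assumption about the landed signature):

```python
import torch
import torch._dynamo

failures = []

def on_guard_failure(failure):
    failures.append(failure)  # failed-guard reason string plus offending code

@torch._dynamo.optimize("eager", guard_fail_fn=on_guard_failure)
def f(x):
    return x * 2

f(torch.randn(4))
f(torch.randn(8))  # the static-shape guard fails here, invoking the callback
print(failures)
```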

This is a resubmit of https://github.com/pytorch/pytorch/pull/90129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90371
Approved by: https://github.com/ezyang
2022-12-07 17:51:53 +00:00
Richard Zou
4068c5467d [Reland] Move functorch/_src to torch/_functorch (#88756) (#90091)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90091
Approved by: https://github.com/anijain2305, https://github.com/ezyang
2022-12-03 14:17:15 +00:00
Michael Lazos
c63afb283c Disable dynamo on optimizer lazy initialization (#89902)
Helps with https://github.com/pytorch/torchdynamo/issues/1803

Separate out the group initialization and disable dynamo on it
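
A hedged sketch of the pattern (optimizer internals heavily simplified):

```python
import torch
import torch._dynamo

class ToyOptimizer:  # stand-in for a torch.optim optimizer
    def __init__(self, params):
        self.param_groups = [{"params": list(params)}]

    @torch._dynamo.disable
    def _init_group(self, group):
        # lazy per-parameter state setup that dynamo should not trace
        for p in group["params"]:
            p.grad = torch.zeros_like(p)
```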

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89902
Approved by: https://github.com/soumith, https://github.com/albanD
2022-12-02 01:15:11 +00:00
Edward Z. Yang
99dac4dd48 Type torch._dynamo.guards (#89919)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89919
Approved by: https://github.com/albanD
2022-12-01 13:43:10 +00:00
Michael Lazos
2d32e5dd09 add env/config flag to disable dynamo (#89828)
as title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89828
Approved by: https://github.com/anijain2305
2022-11-30 01:59:44 +00:00
PyTorch MergeBot
218d9c6e09 Revert "Move functorch/_src to torch/_functorch (#88756)"
This reverts commit 52bc5c1cfe.

Reverted https://github.com/pytorch/pytorch/pull/88756 on behalf of https://github.com/clee2000 because it broke imports in tests (52bc5c1cfe, https://github.com/pytorch/pytorch/actions/runs/3574742513/jobs/6010814968), probably a land race
2022-11-29 17:17:11 +00:00
Richard Zou
52bc5c1cfe Move functorch/_src to torch/_functorch (#88756)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88756
Approved by: https://github.com/ezyang
2022-11-29 13:55:42 +00:00
Edward Z. Yang
b589e726d9 Refactor how AOTAutograd backends are defined (#89736)
There was a lot of strangeness in how AOTAutograd backends were previously defined. This refactor replaces the strangeness with something simple and straightforward. The improvements:

- There is no longer a footgun aot_autograd "backend" which doesn't actually work. No more mistyping `torch._dynamo.optimize("aot_autograd")` when you meant "aot_eager"
- Deleted aot_print because it's annoying and anyway there's no uses of it
- Instead of having BOTH the backend Subgraph and AotAutogradStrategy, there is now only an aot_autograd function which takes the kwargs to configure AOTAutograd, and then gives you a compiler function that does AOTAutograd given those kwargs. Easy.
- The primary downside is that we are now eagerly populating all of the kwargs, and that can get us into import cycle shenanigans. Some cycles I resolved directly (e.g., we now no longer manually disable the forward function before passing it to aot_autograd; aot_autograd it does it for us), but for getting inductor decompositions I had to make it take a lambda so I could lazily populate the decomps later.

New code is 130 lines shorter!
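
A sketch of the new shape (the import path is an assumption; the builder moved around in later refactors):

```python
import torch
import torch._dynamo
from torch._dynamo.backends.common import aot_autograd  # path is an assumption

def my_fw_compiler(gm, example_inputs):
    # trivial "compiler": run the traced forward graph as-is
    return gm.forward

my_backend = aot_autograd(fw_compiler=my_fw_compiler)

@torch._dynamo.optimize(my_backend)
def f(x):
    return x.sin() + 1

f(torch.randn(4))
```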

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89736
Approved by: https://github.com/anjali411, https://github.com/albanD
2022-11-28 18:39:12 +00:00
Edward Z. Yang
f45fe7de33 Add mypy checking for a few files in torch/_dynamo (#89731)
It's kind of intractable to enable mypy everywhere at the moment,
because there are a lot of errors, and also mypy is really slow
for some reason.  I just want enough types to explain the public
types for user compiler calls, going through typing the _C.dynamo
bindings along the way.  This is a first step for this.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89731
Approved by: https://github.com/suo
2022-11-28 13:14:06 +00:00
Edward Z. Yang
b04dda4291 Delay verify correctness wrapping to call site. (#89662)
There is only one call site for compiler_fn, so we can safely delay
wrapping verify correctness to here.  This will help later when we
change the backend compiler calling convention to pass fake tensors
(but I need to pass real tensors here.)

This is adapted from voz's changes at https://github.com/pytorch/pytorch/pull/89392
but with less changes to the substantive logic.  I only moved the relevant
inner implementation; there are no changes otherwise.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89662
Approved by: https://github.com/voznesenskym
2022-11-25 20:43:11 +00:00
Nikita Shulga
2de38a0714 Add torch._dynamo to docs (#89510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89510
Approved by: https://github.com/msaroufim
2022-11-23 16:33:13 +00:00