Commit Graph

64 Commits

Author SHA1 Message Date
Will Constable
a12e92d8e4 Support nn.Module forward hooks in torchdynamo (#92125)
Tweak dynamo behavior in two places when calling nn.Modules,
to route the call to __call__ instead of .forward(), since
__call__ is the codepath that eager users hit and the one that
dispatches to hooks correctly.
 (1) inside NNModuleVariable.call_function, which covers the common case
     of calling a module from code dynamo is already tracing
 (2) at the OptimizedModule layer, which is the entrypoint
     into a top-level nn.Module dynamo is about to compile

This exposes a new bug: NNModuleVariable used to special-case calling
module.forward() (which is a method) as a UserFunctionVariable with an extra
'self' arg.  After tracing into module.__call__, there is no longer a special
case for the eventual call into .forward, and it gets wrapped in a
UserDefinedObjectVariable following standard behavior of ._wrap().  UDOV can't be
called, so this broke some tests.

- Fix: add a new special case in _wrap() that treats methods as a UserDefinedMethod
  instead of UserDefinedObjectVariable.  Now, the forward method can be called.

Also, fix NNModuleVar.call_method so that forward routes back to __call__.
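
A minimal eager-mode illustration (not from the PR) of why the routing matters:
forward hooks only fire when a module is invoked through __call__, so dynamo
must trace that path to match eager behavior.

```python
import torch
import torch.nn as nn

mod = nn.Linear(4, 4)
mod.register_forward_hook(lambda m, inp, out: print("hook fired"))

x = torch.randn(2, 4)
mod(x)          # goes through nn.Module.__call__, the hook fires
mod.forward(x)  # bypasses __call__, the hook does not fire
```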

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92125
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/voznesenskym
2023-02-24 05:10:29 +00:00
Edward Z. Yang
ca7eb1bab2 Preserve meta["val"] on export (#95314)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95314
Approved by: https://github.com/yinghai, https://github.com/voznesenskym
2023-02-22 23:24:57 +00:00
William Wen
8928e7bdb8 Raise error on 3.11 dynamo export (#95088)
For https://github.com/pytorch/pytorch/issues/94914. Realized that `dynamo.export` doesn't immediately raise an error when dynamo is trying to run on 3.11/windows.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95088
Approved by: https://github.com/weiwangmeta
2023-02-17 23:33:38 +00:00
William Wen
5cdedab0cc Raise error if torch.compile is called from windows or py 3.11 (#94940)
For https://github.com/pytorch/pytorch/issues/94914

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94940
Approved by: https://github.com/albanD
2023-02-16 23:34:52 +00:00
Jason Ansel
4d6a4401f8 Raise warning if torch.compile options change without reset (#94680)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94680
Approved by: https://github.com/wconstab, https://github.com/malfet
2023-02-13 20:21:04 +00:00
PyTorch MergeBot
e61d5b9588 Revert "Dynamo Export use fake tensor (#94276)"
This reverts commit 54fa980186.

Reverted https://github.com/pytorch/pytorch/pull/94276 on behalf of https://github.com/jeanschmidt due to break several internal build/test jobs: https://fburl.com/phabricator/1tik7ggb
2023-02-13 09:36:41 +00:00
Michael Voznesensky
e44586a78f Pass input tensor __dict__ along to placeholder nodes (#94080)
```
import torch
import torch.nn as nn

import torch._dynamo.config
import torch._inductor.config

def pre_attention_state_ops(input, mems, state):
    lc_key = state[0]
    lc_val = state[1]
    bar = []
    for i in range(0, 4):
        bar2 = []
        for j in range(0, 3):
            bar2.append(
                lc_key + lc_val + torch.tensor([0.1, 0.25, 0.4, 0.5, 0.1])
            )
        bar.append(bar2)

    return bar

mems = torch.tensor([[[1.8364, 0.2724, -1.4917, -0.4367, 0.8640]]])
state = [
    torch.tensor([[[1.0517, 0.3848, -0.6472, 0.0823, 0.9116]]]),
    torch.tensor([[[1.0517, 0.3848, -0.6472, 0.0823, 0.9116]]]),
]
i = torch.tensor(
    [
        [0.0313, -0.1487, -0.3846, -0.5321],
        [-1.7073, 1.3331, -0.0890, -1.4935],
        [-0.8314, -0.1862, -0.5935, 1.5232],
    ]
)

torch._dynamo.tag(mems, "MEMS")
torch._dynamo.tag(i, "FOO")
torch._dynamo.tag(state[0], "STATE_0")
torch._dynamo.tag(state[1], "HMMM")

exported = torch._dynamo.export(pre_attention_state_ops, i, mems, state)
out_graph = exported[0]

dynamo_result = out_graph(i, mems, state)
nodes = list(out_graph.graph.nodes)
placeholders = [node for node in nodes if node.op == "placeholder"]
for placeholder in placeholders:
    if "tags" in placeholder.meta:
        print("PLACEHOLDER TAGS?", placeholder.meta["tags"])

```

prints

PLACEHOLDER TAGS? ['STATE_0']
PLACEHOLDER TAGS? ['HMMM']

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94080
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-02-10 18:09:41 +00:00
Sherlock Huang
54fa980186 Dynamo Export use fake tensor (#94276)
This is a prerequisite for dynamo.export() to produce graphs with fine-grained dynamic shapes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94276
Approved by: https://github.com/voznesenskym
2023-02-10 01:59:58 +00:00
Jason Ansel
2b0d7e63f0 Move dynamo.optimizations.distributed to backends (#93408)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93408
Approved by: https://github.com/wconstab
2023-02-02 20:42:17 +00:00
Jason Ansel
ee2729890c Refactor dynamo register_backend/BACKENDS (#93389)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93389
Approved by: https://github.com/voznesenskym
2023-02-02 19:41:48 +00:00
Jason Ansel
45eadc2c4d ConfigModule for _{dynamo,inductor}.config (#93252)
This refactors the way dynamo/inductor configs are handled to check for invalid configs and add options like patching and serialization.
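
A small usage sketch of the kind of patching this enables; the exact `patch`
signature is an assumption based on the description above, so treat it as
illustrative rather than the settled API.

```python
import torch
import torch._dynamo

# Assumption: the ConfigModule exposes a `patch` helper usable as a context manager.
with torch._dynamo.config.patch(verbose=True):
    torch._dynamo.optimize("eager")(lambda x: x + 1)(torch.randn(2))
```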

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93252
Approved by: https://github.com/voznesenskym
2023-02-01 19:38:05 +00:00
Sherlock Huang
36fe31f537 [Reland] Refactor stack_trace preservation for node meta preservation (#90803) (#92400)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90803
Approved by: https://github.com/jerryzh168, https://github.com/albanD
ghstack-source-id: 5848cca08ef5d6f8868f4f79d8bc29711e9a52c2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92400
Approved by: https://github.com/jerryzh168
2023-01-30 23:30:43 +00:00
Thiago Crepaldi
95dfad9d93 Add kwargs support to torch.export() API (#92013)
Fixes [#1997](https://github.com/pytorch/torchdynamo/issues/1997)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92013
Approved by: https://github.com/jansel
2023-01-27 01:58:51 +00:00
PyTorch MergeBot
44132cc4b0 Revert "Add --timing flag, phase timing to @dynamo_timed (#92637)"
This reverts commit 773b513435.

Reverted https://github.com/pytorch/pytorch/pull/92637 on behalf of https://github.com/malfet due to Broke lint
2023-01-20 16:23:20 +00:00
Michael Voznesensky
773b513435 Add --timing flag, phase timing to @dynamo_timed (#92637)
Example output:
```
 TIMING:
 entire_frame_compile:8.574629999999999
 backend_compile:5.26806
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92637
Approved by: https://github.com/ezyang
2023-01-20 05:01:21 +00:00
Edward Z. Yang
90024436e7 Do not specialize int/float with dynamic=True (#92570)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92570
Approved by: https://github.com/bdhirsh
2023-01-19 16:27:45 +00:00
Jason Ansel
bbce4184be Refactor inductor to use standard BACKENDS dict (#92187)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92187
Approved by: https://github.com/desertfire
2023-01-17 04:05:43 +00:00
PyTorch MergeBot
1a98c3e36c Revert "Add kwargs support to torch.export() API (#92013)"
This reverts commit 890b68281a.

Reverted https://github.com/pytorch/pytorch/pull/92013 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-16 13:03:48 +00:00
Thiago Crepaldi
890b68281a Add kwargs support to torch.export() API (#92013)
Fixes [#1997](https://github.com/pytorch/torchdynamo/issues/1997)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92013
Approved by: https://github.com/jansel
2023-01-13 15:17:26 +00:00
PyTorch MergeBot
498be7ed25 Revert "Refactor stack_trace preservation for node meta preservation (#90803)"
This reverts commit 0f1302eeae.

Reverted https://github.com/pytorch/pytorch/pull/90803 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-10 10:44:28 +00:00
Sherlock Huang
42a63a7ed9 Dynamo.export uses dynamic=True for symbolic tracing (#91899)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91899
Approved by: https://github.com/ezyang
2023-01-10 01:12:22 +00:00
Sherlock Huang
0f1302eeae Refactor stack_trace preservation for node meta preservation (#90803)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90803
Approved by: https://github.com/jerryzh168, https://github.com/albanD
2023-01-09 23:23:27 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
69acc34083 Automatically convert real tensors to fake in dynamo export (#91742)
Summary: We don't care about params/buffers being mutated in dynamo export, so it is safe to always convert them to fake tensors
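
A hedged sketch of the underlying mechanism (a FakeTensorMode converting a real
tensor into a fake one with the same metadata); this is illustrative, not the
exact code path used by export.

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

mode = FakeTensorMode()
real_param = torch.nn.Parameter(torch.randn(4, 4))
fake_param = mode.from_tensor(real_param)  # same shape/dtype/device, no real storage
print(type(fake_param), fake_param.shape)
```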

Test Plan: CI

Differential Revision: D42353789

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91742
Approved by: https://github.com/qihqi
2023-01-06 21:34:31 +00:00
Michael Lazos
1accd915a4 Re-enable optimizers (#90709)
Fixes
https://github.com/pytorch/pytorch/issues/90165
https://github.com/pytorch/torchdynamo/issues/328

Re-enables optimizer capture + compilation now that the dynamo slowdowns have been fixed.

It shows speedups; numbers to come soon.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90709
Approved by: https://github.com/anijain2305, https://github.com/jansel, https://github.com/yanboliang
2022-12-19 04:07:41 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
f660d62ddc Make dynamo.export preserve user input/output format (#90884)
Currently, dynamo flattens the user inputs, so when a user reuses the inputs they traced with, the exported graph wouldn't work because it expects flat args. This PR changes this behaviour by explicitly wrapping the dynamo-produced graph so it accepts and returns the original user input/output format.
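
A minimal sketch of the wrapping idea using torch.utils._pytree; the function
and variable names here are illustrative, not the actual export implementation.

```python
import torch.utils._pytree as pytree

def wrap_with_user_format(flat_graph, out_spec):
    # Accept the user's (possibly nested) inputs, call the flat-arg graph,
    # then restore the original output structure.
    def wrapped(*args, **kwargs):
        flat_args, _ = pytree.tree_flatten((args, kwargs))
        flat_out = flat_graph(*flat_args)
        return pytree.tree_unflatten(list(flat_out), out_spec)
    return wrapped
```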

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90884
Approved by: https://github.com/zhxchen17, https://github.com/voznesenskym
2022-12-16 00:57:09 +00:00
William Wen
e9dc8cc19b Add torch.compile support to minifier (#90308)
Initial fix for https://github.com/pytorch/torchdynamo/issues/1964.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90308
Approved by: https://github.com/mlazos
2022-12-14 18:24:42 +00:00
Michael Voznesensky
4cdc96fb4f Add hooks structure for passing around user provided hooks, add a new guard_failure_fn (#90371)
This PR introduces a new function we can pass to torch._dynamo.optimize - guard_failure_fn. Usage is shown in this PR and the one stacked on top of it, but the gist is that it emits failed-guard reason strings alongside the code that triggered them. This is useful for tests and debugging, as it gives far finer-grained assertions and control than the compile counter alone.
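
A hedged usage sketch based on the description; the keyword name
(guard_failure_fn, taken from this message) and the callback signature are
assumptions, so check the PR for the exact API.

```python
import torch
import torch._dynamo as dynamo

failures = []

def on_guard_failure(failure):
    failures.append(str(failure))  # failed-guard reason alongside the offending code

@dynamo.optimize("eager", guard_failure_fn=on_guard_failure)
def fn(x, flag):
    return x + 1 if flag else x - 1

fn(torch.randn(3), True)
fn(torch.randn(3), False)  # flips the guard on `flag`, triggering a recompile
print(failures)
```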

This is a resubmit of https://github.com/pytorch/pytorch/pull/90129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90371
Approved by: https://github.com/ezyang
2022-12-07 17:51:53 +00:00
Richard Zou
4068c5467d [Reland] Move functorch/_src to torch/_functorch (#88756) (#90091)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90091
Approved by: https://github.com/anijain2305, https://github.com/ezyang
2022-12-03 14:17:15 +00:00
Michael Lazos
c63afb283c Disable dynamo on optimizer lazy initialization (#89902)
Helps with https://github.com/pytorch/torchdynamo/issues/1803

Separate out the group initialization and disable dynamo on it
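
A minimal sketch of the pattern described, assuming torch._dynamo.disable can be
applied as a decorator; the optimizer and method names here are illustrative.

```python
import torch
import torch._dynamo

class MySGD(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3):
        super().__init__(params, dict(lr=lr))

    @torch._dynamo.disable
    def _init_group(self, group):
        # Lazy per-parameter state init that dynamo should not trace.
        for p in group["params"]:
            if "step" not in self.state[p]:
                self.state[p]["step"] = torch.zeros(())

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            self._init_group(group)
            for p in group["params"]:
                if p.grad is not None:
                    p.add_(p.grad, alpha=-group["lr"])
```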

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89902
Approved by: https://github.com/soumith, https://github.com/albanD
2022-12-02 01:15:11 +00:00
Edward Z. Yang
99dac4dd48 Type torch._dynamo.guards (#89919)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89919
Approved by: https://github.com/albanD
2022-12-01 13:43:10 +00:00
Michael Lazos
2d32e5dd09 add env/config flag to disable dynamo (#89828)
as title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89828
Approved by: https://github.com/anijain2305
2022-11-30 01:59:44 +00:00
PyTorch MergeBot
218d9c6e09 Revert "Move functorch/_src to torch/_functorch (#88756)"
This reverts commit 52bc5c1cfe.

Reverted https://github.com/pytorch/pytorch/pull/88756 on behalf of https://github.com/clee2000 due to broke imports in tests 52bc5c1cfe https://github.com/pytorch/pytorch/actions/runs/3574742513/jobs/6010814968 probably a landrace
2022-11-29 17:17:11 +00:00
Richard Zou
52bc5c1cfe Move functorch/_src to torch/_functorch (#88756)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88756
Approved by: https://github.com/ezyang
2022-11-29 13:55:42 +00:00
Edward Z. Yang
b589e726d9 Refactor how AOTAutograd backends are defined (#89736)
There was a lot of strangeness in how AOTAutograd backends were previously defined. This refactor replaces the strangeness with something simple and straightforward. The improvements:

- There is no longer a footgun aot_autograd "backend" which doesn't actually work. No more mistyping `torch._dynamo.optimize("aot_autograd")` when you meant "aot_eager"
- Deleted aot_print because it's annoying and anyway there are no uses of it
- Instead of having BOTH the backend Subgraph and AotAutogradStrategy, there is now only an aot_autograd function which takes the kwargs to configure AOTAutograd, and then gives you a compiler function that does AOTAutograd given those kwargs (see the sketch below). Easy.
- The primary downside is that we are now eagerly populating all of the kwargs, and that can get us into import cycle shenanigans. Some cycles I resolved directly (e.g., we now no longer manually disable the forward function before passing it to aot_autograd; aot_autograd does it for us), but for getting inductor decompositions I had to make it take a lambda so I could lazily populate the decomps later.

New code is 130 lines shorter!
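
A hedged sketch of the new pattern; the import path reflects a later module
layout and the kwarg names are assumptions based on the description above.

```python
import torch
import torch._dynamo
from torch._dynamo.backends.common import aot_autograd  # path is an assumption for this revision

def my_fw_compiler(gm, example_inputs):
    gm.graph.print_tabular()  # inspect the forward graph AOTAutograd hands us
    return gm.forward

my_backend = aot_autograd(fw_compiler=my_fw_compiler)

@torch._dynamo.optimize(my_backend)
def f(x):
    return torch.sin(x) + torch.cos(x)

f(torch.randn(4, requires_grad=True))
```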

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89736
Approved by: https://github.com/anjali411, https://github.com/albanD
2022-11-28 18:39:12 +00:00
Edward Z. Yang
f45fe7de33 Add mypy checking for a few files in torch/_dynamo (#89731)
It's kind of intractable to enable mypy everywhere at the moment,
because there are a lot of errors, and also mypy is really slow
for some reason.  I just want enough types to explain the public
types for user compiler calls, going through typing the _C.dynamo
bindings along the way.  This is a first step for this.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89731
Approved by: https://github.com/suo
2022-11-28 13:14:06 +00:00
Edward Z. Yang
b04dda4291 Delay verify correctness wrapping to call site. (#89662)
There is only one call site for compiler_fn, so we can safely delay
wrapping verify correctness to here.  This will help later when we
change the backend compiler calling convention to pass fake tensors
(but I need to pass real tensors here.)

This is adapted from voz's changes at https://github.com/pytorch/pytorch/pull/89392
but with less changes to the substantive logic.  I only moved the relevant
inner implementation; there are no changes otherwise.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89662
Approved by: https://github.com/voznesenskym
2022-11-25 20:43:11 +00:00
Nikita Shulga
2de38a0714 Add torch._dynamo to docs (#89510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89510
Approved by: https://github.com/msaroufim
2022-11-23 16:33:13 +00:00
Edward Z. Yang
7c811efab7 Add support for dynamic kwarg to torch._dynamo.optimize (#89290)
This is an easier way to enable dynamic shapes for a region.
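
A minimal usage sketch (everything beyond the new kwarg is illustrative):

```python
import torch
import torch._dynamo

@torch._dynamo.optimize("eager", dynamic=True)
def fn(x):
    return x * 2

fn(torch.randn(4))
fn(torch.randn(8))  # with dynamic=True, the new size should not force a fresh specialization
```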

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89290
Approved by: https://github.com/soumith, https://github.com/jansel, https://github.com/voznesenskym
2022-11-19 23:51:02 +00:00
Michael Voznesensky
631baecbcd Add --explain flag to bench (#89316)
TORCHDYNAMO_DYNAMIC_SHAPES=1 AOT_DYNAMIC_SHAPES=1 time python benchmarks/dynamo/torchbench.py  --accuracy --explain  --backend aot_eager --train --only BERT_pytorch

Dynamo produced 76 graphs with 75 graph break and 198 ops

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89316
Approved by: https://github.com/ezyang
2022-11-19 03:35:09 +00:00
Michael Lazos
30c3e5afb0 Disable tracing zero_grad() (#88731)
Tracing through zero_grad() is slow and doesn't provide any benefit.

Helps https://github.com/pytorch/torchdynamo/issues/1803

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88731
Approved by: https://github.com/anijain2305
2022-11-18 07:46:38 +00:00
Mark Saroufim
37c85cf5f2 Add warning if tensor cores are not used (#88844)
Fixes https://github.com/pytorch/torchdynamo/issues/1839

Should I do this for all backends or just inductor?

## Test
On a V100 I got from AWS

```python
from torch._dynamo import optimize
import torch

def fn(x, y):
    a = torch.cos(x)
    b = torch.sin(y)
    return a + b

new_fn = optimize("inductor")(fn)

a = new_fn(torch.Tensor(1), torch.Tensor(1))
print(a)
```

## New logs
```
(sourcetorch) ubuntu@ip-172-31-31-152:~/test$ python test.py
/home/ubuntu/pytorch/torch/_dynamo/eval_frame.py:318: UserWarning: Tensor cores are available but not enabled. Consider setting torch.backends.cuda.matmul.allow_tf32 == True in your python script for speedups
  warnings.warn(
tensor([1.3717])
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88844
Approved by: https://github.com/ngimel, https://github.com/mlazos, https://github.com/anijain2305
2022-11-17 07:24:58 +00:00
Animesh Jain
30d9fb9157 [dynamo][reland] API Support for nn.Module (#89113)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89113
Approved by: https://github.com/ezyang
2022-11-17 02:03:48 +00:00
Colin Taylor
edd2dea859 [torch] [analytics] add dynamo to analytics (#88915)
Summary: as title.

Differential Revision: D41237602

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88915
Approved by: https://github.com/jansel
2022-11-15 20:46:03 +00:00
PyTorch MergeBot
98bcb4acb6 Revert "[reland][dynamo] Better support for nn.Module (#88959)"
This reverts commit e950afc395.

Reverted https://github.com/pytorch/pytorch/pull/88959 on behalf of https://github.com/malfet due to Broke `test_accuracy_issue1`
2022-11-13 16:21:14 +00:00
Animesh Jain
e950afc395 [reland][dynamo] Better support for nn.Module (#88959)
Relanding https://github.com/pytorch/pytorch/pull/88629

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88959
Approved by: https://github.com/msaroufim
2022-11-13 08:19:45 +00:00
PyTorch MergeBot
ae2c668cc0 Revert "[dynamo][api] Better support of torch.nn.Module (#88629)"
This reverts commit c83348597b.

Reverted https://github.com/pytorch/pytorch/pull/88629 on behalf of https://github.com/anijain2305 due to job failing on master https://github.com/pytorch/pytorch/actions/runs/3449914495/jobs/5758267231
2022-11-12 07:52:56 +00:00
Animesh Jain
c83348597b [dynamo][api] Better support of torch.nn.Module (#88629)
This is an API change, so please review carefully.

With this PR, torchdynamo returns an `OptimizedModule` class object, a subclass of `torch.nn.Module`, when asked to optimize a `nn.Module` object. Most of the methods are redirected to the original `nn.Module`, which is installed as `_mod` in the `OptimizedModule`.

This is helpful for many cases

```
mod = MockModule()

opt_mod = torch._dynamo.optimize()(mod)

print(opt_mod) # Works

opt_mod = opt_mod.to(device="cuda")
print(opt_mod) # Works
opt_mod(input) # Triggers recompile if necessary, earlier we were shedding the TorchDynamo wrapper

opt_mod.parameters() # Refers to the original module

```

Topics unclear to me
* I have overridden many methods to raise NotImplementedError. A careful review of those will be good.
* hooks
* For the optimized forward, should we call torchdynamo optimization on `__call__` or `forward`
* What else to test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88629
Approved by: https://github.com/Chillee, https://github.com/jansel, https://github.com/msaroufim
2022-11-12 04:45:17 +00:00
Zhengxu Chen
08b2a251e1 [export] Preserve meta["val"] on placeholders in dynamo.export(). (#88651)
Summary:
Today when we transform the captured graph in the last step of export(aten_graph=True), we construct a new graph which doesn't have all the metadata to be preserved, for example node.meta["val"].
meta["val"] is important for writing passes and analysis on the graph later in the pipeline, so we want to preserve it on placeholder nodes.

Test Plan: test_export.py:test_export_meta_val

Differential Revision: D41110864

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88651
Approved by: https://github.com/tugsbayasgalan, https://github.com/jansel
2022-11-09 01:02:09 +00:00
Michael Suo
c0e6b4329f [dynamo] only error out on nested fx trace if dynamo is optimizing (#88640)
I think this is the final resolution to the issue caused by
https://github.com/pytorch/pytorch/pull/87797. The nvfuser issue that PR
tripped over arose because, even though we're correctly disabling
torchdynamo via a `DisableContext`, the nested fx trace check was still
firing. This PR properly narrows the check so it only fires if we're not disabled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88640
Approved by: https://github.com/yf225
2022-11-08 23:52:21 +00:00
Will Constable
678d038001 Support DDP ignored parameters in DDPOptimizer (#88460)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88460
Approved by: https://github.com/aazzolini
2022-11-04 21:42:15 +00:00