Commit Graph

30 Commits

Edward Z. Yang
f45fe7de33 Add mypy checking for a few files in torch/_dynamo (#89731)
It's kind of intractable to enable mypy everywhere at the moment,
because there are a lot of errors, and also mypy is really slow
for some reason.  I just want enough types to explain the public
types for user compiler calls, typing the _C.dynamo bindings along
the way.  This is a first step toward that.
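
As an illustration of the "public types for user compiler calls" mentioned here, a dynamo backend roughly has the shape below. This is a hypothetical alias for orientation, not the actual annotations added by the PR.

```python
from typing import Callable, List

import torch

# Hypothetical shape of a user compiler (backend) call: it receives an
# fx.GraphModule plus example inputs and returns a compiled callable.
CompilerFn = Callable[[torch.fx.GraphModule, List[torch.Tensor]], Callable]
```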

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89731
Approved by: https://github.com/suo
2022-11-28 13:14:06 +00:00
Edward Z. Yang
b04dda4291 Delay verify correctness wrapping to call site. (#89662)
There is only one call site for compiler_fn, so we can safely delay the
verify-correctness wrapping until that call site.  This will help later when we
change the backend compiler calling convention to pass fake tensors
(but I need to pass real tensors here).
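
A minimal sketch of the wrapping pattern described here, with hypothetical names rather than the actual torch._dynamo internals, and assuming a single-tensor output for brevity:

```python
import torch

def call_compiler_fn(compiler_fn, gm, example_inputs, verify_correctness=False):
    # Wrap at the single call site: only here do we decorate the user's
    # compiler_fn output with a check against the original graph module.
    compiled = compiler_fn(gm, example_inputs)
    if not verify_correctness:
        return compiled

    def checked(*args):
        result = compiled(*args)
        expected = gm(*args)  # reference result from the uncompiled graph
        assert torch.allclose(result, expected), "backend output mismatch"
        return result

    return checked
```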

This is adapted from voz's changes at https://github.com/pytorch/pytorch/pull/89392
but with fewer changes to the substantive logic.  I only moved the relevant
inner implementation; there are no changes otherwise.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89662
Approved by: https://github.com/voznesenskym
2022-11-25 20:43:11 +00:00
Nikita Shulga
2de38a0714 Add torch._dynamo to docs (#89510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89510
Approved by: https://github.com/msaroufim
2022-11-23 16:33:13 +00:00
Edward Z. Yang
7c811efab7 Add support for dynamic kwarg to torch._dynamo.optimize (#89290)
This is an easier way to enable dynamic shapes for a region.
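
A minimal usage sketch, assuming the kwarg is spelled `dynamic` per the title and that `"eager"` is an available backend:

```python
import torch
import torch._dynamo

def fn(x):
    return x * 2

# dynamic=True asks dynamo to trace this region with dynamic shapes, so
# differently sized inputs should not each force a fresh recompile.
opt_fn = torch._dynamo.optimize("eager", dynamic=True)(fn)
opt_fn(torch.randn(3))
opt_fn(torch.randn(5))
```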

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89290
Approved by: https://github.com/soumith, https://github.com/jansel, https://github.com/voznesenskym
2022-11-19 23:51:02 +00:00
Michael Voznesensky
631baecbcd Add --explain flag to bench (#89316)
```
TORCHDYNAMO_DYNAMIC_SHAPES=1 AOT_DYNAMIC_SHAPES=1 time python benchmarks/dynamo/torchbench.py --accuracy --explain --backend aot_eager --train --only BERT_pytorch
```

Dynamo produced 76 graphs with 75 graph break and 198 ops
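
The same counts can be pulled programmatically. A sketch using the `torch._dynamo.explain` helper of this era, whose return shape has varied across versions:

```python
import torch
import torch._dynamo

def fn(x):
    x = x.cos()
    print("this call forces a graph break")  # unsupported inside the graph
    return x.sin()

# In this era, explain() returned a tuple whose first element is the summary.
explanation = torch._dynamo.explain(fn, torch.randn(8))[0]
print(explanation)  # e.g. "Dynamo produced 2 graphs with 1 graph break and ..."
```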

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89316
Approved by: https://github.com/ezyang
2022-11-19 03:35:09 +00:00
Michael Lazos
30c3e5afb0 Disable tracing zero_grad() (#88731)
Tracing through `zero_grad()` is slow and doesn't provide any benefit.
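
The PR bakes the skip into dynamo itself; for illustration, the same effect can be applied by hand to any function with `torch._dynamo.disable` (a sketch, assuming decorator usage is supported in this version):

```python
import torch
import torch._dynamo

# Functions wrapped with torch._dynamo.disable run eagerly instead of being
# traced, which is the effect this PR applies to zero_grad() by default.
@torch._dynamo.disable
def reset_grads(optimizer):
    optimizer.zero_grad()
```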

Helps https://github.com/pytorch/torchdynamo/issues/1803

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88731
Approved by: https://github.com/anijain2305
2022-11-18 07:46:38 +00:00
Mark Saroufim
37c85cf5f2 Add warning if tensor cores are not used (#88844)
Fixes https://github.com/pytorch/torchdynamo/issues/1839

Should I do this for all backends or just inductor?

## Test
On a V100 I got from AWS

```python
from torch._dynamo import optimize
import torch

def fn(x, y):
    a = torch.cos(x)
    b = torch.sin(y)
    return a + b

new_fn = optimize("inductor")(fn)

a = new_fn(torch.Tensor(1), torch.Tensor(1))
print(a)
```

## New logs
```
(sourcetorch) ubuntu@ip-172-31-31-152:~/test$ python test.py
/home/ubuntu/pytorch/torch/_dynamo/eval_frame.py:318: UserWarning: Tensor cores are available but not enabled. Consider setting torch.backends.cuda.matmul.allow_tf32 == True in your python script for speedups
  warnings.warn(
tensor([1.3717])
```
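
What the warning asks for is an assignment (its text says `==`, which is a comparison), i.e.:

```python
import torch

# Enable TF32 on tensor cores for matmuls, as the warning suggests.
torch.backends.cuda.matmul.allow_tf32 = True
```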

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88844
Approved by: https://github.com/ngimel, https://github.com/mlazos, https://github.com/anijain2305
2022-11-17 07:24:58 +00:00
Animesh Jain
30d9fb9157 [dynamo][reland] API Support for nn.Module (#89113)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89113
Approved by: https://github.com/ezyang
2022-11-17 02:03:48 +00:00
Colin Taylor
edd2dea859 [torch] [analytics] add dynamo to analytics (#88915)
Summary: as title.

Differential Revision: D41237602

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88915
Approved by: https://github.com/jansel
2022-11-15 20:46:03 +00:00
PyTorch MergeBot
98bcb4acb6 Revert "[reland][dynamo] Better support for nn.Module (#88959)"
This reverts commit e950afc395.

Reverted https://github.com/pytorch/pytorch/pull/88959 on behalf of https://github.com/malfet due to Broke `test_accuracy_issue1`
2022-11-13 16:21:14 +00:00
Animesh Jain
e950afc395 [reland][dynamo] Better support for nn.Module (#88959)
Relanding https://github.com/pytorch/pytorch/pull/88629

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88959
Approved by: https://github.com/msaroufim
2022-11-13 08:19:45 +00:00
PyTorch MergeBot
ae2c668cc0 Revert "[dynamo][api] Better support of torch.nn.Module (#88629)"
This reverts commit c83348597b.

Reverted https://github.com/pytorch/pytorch/pull/88629 on behalf of https://github.com/anijain2305 due to job failing on master https://github.com/pytorch/pytorch/actions/runs/3449914495/jobs/5758267231
2022-11-12 07:52:56 +00:00
Animesh Jain
c83348597b [dynamo][api] Better support of torch.nn.Module (#88629)
This is an API change, so please review carefully.

With this PR, torchdynamo returns an `OptimizedModule` class object, a subclass of `torch.nn.Module`, when asked to optimize a `nn.Module` object. Most of the methods are redirected to the original `nn.Module`, which is installed as `_mod` in the `OptimizedModule`.

This is helpful in many cases:

```
import torch

class MockModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

mod = MockModule()

opt_mod = torch._dynamo.optimize("eager")(mod)

print(opt_mod)  # Works

opt_mod = opt_mod.to(device="cuda")
print(opt_mod)  # Works

inp = torch.randn(8, device="cuda")
opt_mod(inp)  # Triggers recompile if necessary; earlier we were shedding the TorchDynamo wrapper

opt_mod.parameters()  # Refers to the original module
```
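
For intuition, a highly simplified sketch of the wrapper pattern this describes. It is hypothetical and omits nearly everything the real `OptimizedModule` handles; registering the original module as a submodule is what lets `print`, `.to()`, and `parameters()` reach it naturally:

```python
import torch

class OptimizedModuleSketch(torch.nn.Module):
    def __init__(self, mod, compile_fn):
        super().__init__()
        self._mod = mod  # registered as a submodule, like `_mod` in the PR
        self.forward = compile_fn(mod.forward)  # optimized entry point

    def __getattr__(self, name):
        # Redirect anything we don't define to the original module.
        if name == "_mod":
            return super().__getattr__(name)
        return getattr(self._mod, name)
```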

Topics unclear to me:
* I have overridden many methods to raise `NotImplementedError`. A careful review of those will be good.
* hooks
* For the optimized forward, should we call the torchdynamo optimization on `__call__` or `forward`?
* What else to test?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88629
Approved by: https://github.com/Chillee, https://github.com/jansel, https://github.com/msaroufim
2022-11-12 04:45:17 +00:00
Zhengxu Chen
08b2a251e1 [export] Preserve meta["val"] on placeholders in dynamo.export(). (#88651)
Summary:
Today, when we transform the captured graph in the last step of export(aten_graph=True), we construct a new graph which doesn't carry all the metadata to be preserved, for example node.meta["val"].
meta["val"] is important for writing passes and analyses on the graph later in the pipeline, so we want to preserve it on placeholder nodes.

Test Plan: test_export.py:test_export_meta_val

Differential Revision: D41110864

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88651
Approved by: https://github.com/tugsbayasgalan, https://github.com/jansel
2022-11-09 01:02:09 +00:00
Michael Suo
c0e6b4329f [dynamo] only error out on nested fx trace if dynamo is optimizing (#88640)
I think this is the final resolution to the issue caused by
https://github.com/pytorch/pytorch/pull/87797. The nvfuser issue that PR
tripped over arose because, even though we're correctly disabling
torchdynamo via a `DisableContext`, the nested fx trace check was still
firing. This PR properly narrows the check so it only fires if we're not disabled.
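
The shape of the narrowed condition, with hypothetical names rather than the actual torch._dynamo internals:

```python
def maybe_error_on_nested_fx_trace(is_fx_tracing: bool, dynamo_disabled: bool) -> None:
    # Before this PR the check keyed only on is_fx_tracing; it now also
    # requires that dynamo is actively optimizing (i.e., not disabled).
    if is_fx_tracing and not dynamo_disabled:
        raise RuntimeError("Detected FX tracing of a dynamo-optimized function")
```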

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88640
Approved by: https://github.com/yf225
2022-11-08 23:52:21 +00:00
Will Constable
678d038001 Support DDP ignored parameters in DDPOptimizer (#88460)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88460
Approved by: https://github.com/aazzolini
2022-11-04 21:42:15 +00:00
Michael Suo
923a5e9685 [dynamo] Error when user nests FX with dynamo (#87797)
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a).

Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).
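
An illustration of that no-op escape hatch, assuming the config flag is `torch._dynamo.config.disable` (the name is an assumption, not confirmed by this message):

```python
import torch
import torch._dynamo

torch._dynamo.config.disable = True  # assumed flag: makes optimize() a no-op

def fn(x):
    return x + 1

opt_fn = torch._dynamo.optimize("eager")(fn)
print(opt_fn(torch.ones(3)))  # runs eagerly; dynamo does not trace
```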

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-11-02 17:38:56 +00:00
PyTorch MergeBot
c0761a835b Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
This reverts commit 1da5aeb97b.

Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/ezyang due to breaks nvfuser stack, needs more investigation
2022-10-31 23:49:37 +00:00
Horace He
12dd877395 Fix all references to torchdynamo from the merge (#87731)
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87731
Approved by: https://github.com/yanboliang, https://github.com/ezyang, https://github.com/anijain2305, https://github.com/jansel
2022-10-31 06:51:07 +00:00
Michael Lazos
9691ba2dbd Remove excess exception logging for minifier, cleanup backend failure exception format (#87537)
Fixes https://github.com/pytorch/torchdynamo/issues/1376

Ensures exceptions are printed in only one place, once.

Implements some of the ideas from https://github.com/pytorch/torchdynamo/issues/1754:
- Attaches a field to the exception indicating that it has been minified; a usage message is printed if this field is present
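
A sketch of the tagging pattern in that bullet, with a hypothetical attribute name:

```python
class BackendCompilerFailed(RuntimeError):
    pass

err = BackendCompilerFailed("backend 'inductor' failed")
err.minified = True  # set once the minifier has produced a repro

# Downstream, the usage message is printed only when the marker is present.
if getattr(err, "minified", False):
    print("A minified repro was written; run minifier_launcher.py to reproduce.")
```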

cc @jansel @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @lezcano @fdrocha
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87537
Approved by: https://github.com/anijain2305
2022-10-28 21:33:55 +00:00
Michael Suo
1da5aeb97b [dynamo] Error when user nests FX with dynamo (#87797)
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a).

Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-10-28 04:59:08 +00:00
Michael Suo
d47ffecbe4 [dynamo] relax fake tensor restriction with assume_constant_result (#87895)
This works now because of https://github.com/pytorch/pytorch/pull/87091,
so we don't error out anymore.
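
A usage sketch of the decorator named in the title, assuming it is exposed as `torch._dynamo.assume_constant_result`:

```python
import torch
import torch._dynamo

@torch._dynamo.assume_constant_result
def get_scale():
    # dynamo bakes this call's result into the traced graph as a constant.
    return 2.0

def fn(x):
    return x * get_scale()

opt_fn = torch._dynamo.optimize("eager")(fn)
opt_fn(torch.randn(4))
```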

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87895
Approved by: https://github.com/tugsbayasgalan, https://github.com/voznesenskym
2022-10-28 04:05:06 +00:00
PyTorch MergeBot
cda0d5a57b Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
This reverts commit a485528a7e.

Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/kit1980 due to Broke linux-bionic-py3.7-clang9 / test (dynamo, 2, 2, linux.2xlarge), same error on pull
2022-10-27 21:16:58 +00:00
Michael Suo
a485528a7e [dynamo] Error when user nests FX with dynamo (#87797)
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a).

Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-10-27 17:17:59 +00:00
Michael Lazos
44d7ba7efb Fix debug dir bugs and minifier output directories (#87682)
Fixes https://github.com/pytorch/torchdynamo/issues/1758, https://github.com/pytorch/torchdynamo/issues/1752

- minifier_launcher.py now dumps checkpoints to <cwd>/checkpoints when run
- a single debug directory is created per script invocation, so assertion failures from a missing directory will no longer occur
- torchinductor debug tracing will now correctly dump to the debug directory, since no prior setup is needed (previously, the directory was incorrectly initialized only during dynamo tracing)

cc @jansel @lezcano @fdrocha @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87682
Approved by: https://github.com/ezyang
2022-10-25 21:55:28 +00:00
Michael Suo
e5ceab173a [dynamo] fix explain (#87640)
Another casualty of the core move
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87640
Approved by: https://github.com/voznesenskym
2022-10-24 21:31:38 +00:00
Michael Lazos
8461460d55 Unified debug directory for dynamo/inductor tools (#87438)
Fixes https://github.com/pytorch/torchdynamo/issues/1705
Fixes https://github.com/pytorch/torchdynamo/issues/1383

Adds a debug directory, called `torchdynamo_debug` by default, in the current working directory.
In the debug directory, for each run of dynamo (an enter and exit of optimize), a folder run_<timestamp> is created which contains any minifier/inductor/torchdynamo artifacts under respective folders.

Updated the minifier, record/replay, and inductor tracing to use this directory.

cc @jansel @lezcano @fdrocha @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87438
Approved by: https://github.com/soumith
2022-10-22 03:43:11 +00:00
David Berard
4fd98dfe69 Don't only apply DDP optimizer on forward frames (#87097)
Previously, a check applied the DDP optimizer only on frames named "forward".

But on hf_T5_large, a graph break causes some frames like:

```
<graph break in _shift_right>
<graph break in forward>
```

So instead, apply the DDP optimizer on all frames.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87097
Approved by: https://github.com/wconstab
2022-10-17 21:55:14 +00:00
Jason Ansel
054a2fd6c2 Sync changes from pytorch/torchdynamo (#87013)
This updates to:
6380959be2

Generated with:
https://github.com/pytorch/torchdynamo/blob/main/copy_to_core.sh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87013
Approved by: https://github.com/voznesenskym
2022-10-15 21:00:57 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`
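
The renames, in import terms:

```python
import torch._dynamo    # formerly: import torchdynamo
import torch._inductor  # formerly: import torchinductor
```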

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00