PyTorch MergeBot
98bcb4acb6
Revert "[reland][dynamo] Better support for nn.Module (#88959)"
...
This reverts commit e950afc395.
Reverted https://github.com/pytorch/pytorch/pull/88959 on behalf of https://github.com/malfet due to Broke `test_accuracy_issue1`
2022-11-13 16:21:14 +00:00
Animesh Jain
e950afc395
[reland][dynamo] Better support for nn.Module (#88959)
...
Relanding https://github.com/pytorch/pytorch/pull/88629
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88959
Approved by: https://github.com/msaroufim
2022-11-13 08:19:45 +00:00
PyTorch MergeBot
ae2c668cc0
Revert "[dynamo][api] Better support of torch.nn.Module (#88629)"
...
This reverts commit c83348597b.
Reverted https://github.com/pytorch/pytorch/pull/88629 on behalf of https://github.com/anijain2305 due to job failing on master https://github.com/pytorch/pytorch/actions/runs/3449914495/jobs/5758267231
2022-11-12 07:52:56 +00:00
Animesh Jain
c83348597b
[dynamo][api] Better support of torch.nn.Module (#88629)
...
This is an API change, so please review carefully.
With this PR, torchdynamo returns an `OptimizedModule` class object, a subclass of `torch.nn.Module`, when asked to optimize a `nn.Module` object. Most of the methods are redirected to the original `nn.Module`, which is installed as `_mod` in the `OptimizedModule`.
This is helpful in many cases:
```
mod = MockModule()
opt_mod = torch._dynamo.optimize()(mod)
print(opt_mod) # Works
opt_mod = opt_mod.to(device="cuda")
print(opt_mod) # Works
opt_mod(input) # Triggers recompile if necessary; previously this dropped the TorchDynamo wrapper
opt_mod.parameters() # Refers to the original module
```
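The delegation behavior described above can be sketched as follows. This is a simplified illustration of the wrapper pattern, not the actual `torch._dynamo` implementation; `MockModule` and the class body here are stand-ins:

```python
class OptimizedModule:
    """Illustrative wrapper (not the real torch._dynamo class): forwards
    attribute lookups and calls to the wrapped module."""

    def __init__(self, mod):
        self._mod = mod  # original module installed as _mod

    def __getattr__(self, name):
        # Called only when normal lookup fails, so anything not defined on
        # the wrapper (parameters(), training, ...) refers to the original.
        return getattr(self._mod, name)

    def __call__(self, *args, **kwargs):
        # The real wrapper would run the compiled forward here, recompiling
        # if guards fail; this sketch just calls through.
        return self._mod(*args, **kwargs)


class MockModule:
    def __init__(self):
        self.training = True

    def __call__(self, x):
        return x + 1

    def parameters(self):
        return []


opt_mod = OptimizedModule(MockModule())
print(opt_mod(1))        # call goes through the wrapper -> 2
print(opt_mod.training)  # attribute delegated to the original -> True
```

Because `__getattr__` is only consulted after normal lookup fails, the wrapper can override selected methods while everything else falls through to `_mod`.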
Open questions:
* I have overridden many methods to raise NotImplementedError. A careful review of those would be appreciated.
* How should hooks be handled?
* For the optimized forward, should we apply the TorchDynamo optimization to `__call__` or to `forward`?
* What else should be tested?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88629
Approved by: https://github.com/Chillee, https://github.com/jansel, https://github.com/msaroufim
2022-11-12 04:45:17 +00:00
Zhengxu Chen
08b2a251e1
[export] Preserve meta["val"] on placeholders in dynamo.export(). (#88651)
...
Summary:
Today, when we transform the captured graph in the last step of export(aten_graph=True), we construct a new graph that doesn't carry over all the metadata that should be preserved, for example node.meta["val"].
meta["val"] is important for writing passes and analyses on the graph later in the pipeline, so we want to preserve it on placeholder nodes.
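The preservation step can be sketched without the real `torch.fx` types. The `Node` class and the match-by-position strategy below are illustrative stand-ins, not the actual export code:

```python
class Node:
    """Minimal stand-in for an FX node: an op kind plus a metadata dict."""

    def __init__(self, op, name, meta=None):
        self.op = op
        self.name = name
        self.meta = dict(meta or {})


def copy_placeholder_val(old_nodes, new_nodes):
    # Carry node.meta["val"] from the captured graph's placeholders onto the
    # placeholders of the newly constructed graph, matched by position.
    old_placeholders = [n for n in old_nodes if n.op == "placeholder"]
    new_placeholders = [n for n in new_nodes if n.op == "placeholder"]
    for old, new in zip(old_placeholders, new_placeholders):
        if "val" in old.meta:
            new.meta["val"] = old.meta["val"]


old = [Node("placeholder", "x", {"val": "FakeTensor(shape=(2, 3))"})]
new = [Node("placeholder", "x")]  # rebuilt graph starts with empty meta
copy_placeholder_val(old, new)
print(new[0].meta["val"])
```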
Test Plan: test_export.py:test_export_meta_val
Differential Revision: D41110864
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88651
Approved by: https://github.com/tugsbayasgalan, https://github.com/jansel
2022-11-09 01:02:09 +00:00
Michael Suo
c0e6b4329f
[dynamo] only error out on nested fx trace if dynamo is optimizing (#88640)
...
I think this is the final resolution to the issue caused by
https://github.com/pytorch/pytorch/pull/87797 . The nvfuser failure that PR
tripped was happening because, even though we were correctly disabling
torchdynamo via a `DisableContext`, the nested fx trace check was still
firing. This PR properly narrows the check to fire only when dynamo is not disabled.
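The narrowing can be sketched with a toy model of the state involved. The flag names and the context manager below are illustrative, not the real dynamo internals:

```python
import contextlib

# Toy stand-in for dynamo's global state.
_dynamo_state = {"tracing": False, "disabled": False}


@contextlib.contextmanager
def disable_context():
    # Analogue of the DisableContext mentioned above: dynamo explicitly off.
    _dynamo_state["disabled"] = True
    try:
        yield
    finally:
        _dynamo_state["disabled"] = False


def check_nested_fx_trace():
    # Before the fix, this check fired whenever dynamo was tracing; the fix
    # narrows it to fire only when dynamo has not been explicitly disabled.
    if _dynamo_state["tracing"] and not _dynamo_state["disabled"]:
        raise RuntimeError(
            "fx symbolic trace is not supported while dynamo is optimizing"
        )


_dynamo_state["tracing"] = True
with disable_context():
    check_nested_fx_trace()  # no error: dynamo is explicitly disabled
print("ok")
```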
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88640
Approved by: https://github.com/yf225
2022-11-08 23:52:21 +00:00
Will Constable
678d038001
Support DDP ignored parameters in DDPOptimizer (#88460)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88460
Approved by: https://github.com/aazzolini
2022-11-04 21:42:15 +00:00
Michael Suo
923a5e9685
[dynamo] Error when user nests FX with dynamo (#87797)
...
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a ).
Here, we detect the error early and exit with a nicer message. Also adds a
config option to turn dynamo into a no-op (which we need to unblock internal
enablement).
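The two pieces described above can be sketched like this. `Config.disable` and the message text are illustrative stand-ins, not the actual `torch._dynamo` config knob:

```python
class Config:
    # Stand-in for the no-op config option described above; the real knob
    # name in torch._dynamo's config may differ.
    disable = False


_in_fx_trace = False  # toy flag: are we inside an FX symbolic trace?


def optimize(fn):
    if Config.disable:
        return fn  # no-op mode: hand back the original function untouched
    if _in_fx_trace:
        # Detect the unsupported nesting up front with a clear message,
        # instead of the obscure downstream failure described above.
        raise RuntimeError(
            "Nesting dynamo inside fx.symbolic_trace is not supported"
        )
    return fn  # real dynamo would return a compiled wrapper here


double = lambda x: 2 * x
Config.disable = True
assert optimize(double) is double  # dynamo is fully out of the way
print(optimize(double)(3))
```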
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-11-02 17:38:56 +00:00
PyTorch MergeBot
c0761a835b
Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
...
This reverts commit 1da5aeb97b.
Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/ezyang due to breaks nvfuser stack, needs more investigation
2022-10-31 23:49:37 +00:00
Horace He
12dd877395
Fix all references to torchdynamo from the merge (#87731)
...
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87731
Approved by: https://github.com/yanboliang, https://github.com/ezyang, https://github.com/anijain2305, https://github.com/jansel
2022-10-31 06:51:07 +00:00
Michael Lazos
9691ba2dbd
Remove excess exception logging for minifier, cleanup backend failure exception format (#87537)
...
Fixes https://github.com/pytorch/torchdynamo/issues/1376
Ensures exceptions are printed in only one place, and only once.
Implements some of the ideas from https://github.com/pytorch/torchdynamo/issues/1754
- Attaches a field to the exception indicating that it has been minified; a usage message is printed if this field is present
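The exception-tagging idea can be sketched as follows. The attribute name and usage text are illustrative assumptions, not the actual field used by the minifier:

```python
MINIFIER_USAGE = "Run the generated minifier_launcher.py to reproduce."


def mark_minified(exc):
    # Attach a marker attribute so the single top-level handler knows a
    # minified repro exists (attribute name is illustrative).
    exc.minifier_path = MINIFIER_USAGE
    return exc


def report(exc):
    # The one place where the exception is printed; the usage message is
    # appended only when the marker attribute is present.
    msg = f"{type(exc).__name__}: {exc}"
    if hasattr(exc, "minifier_path"):
        msg += "\n" + exc.minifier_path
    return msg


err = mark_minified(RuntimeError("backend compile failed"))
print(report(err))
```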
cc @jansel @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @lezcano @fdrocha
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87537
Approved by: https://github.com/anijain2305
2022-10-28 21:33:55 +00:00
Michael Suo
1da5aeb97b
[dynamo] Error when user nests FX with dynamo (#87797)
...
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a ).
Here, we detect the error early and exit with a nicer message. Also adds a
config option to turn dynamo into a no-op (which we need to unblock internal
enablement).
cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-10-28 04:59:08 +00:00
Michael Suo
d47ffecbe4
[dynamo] relax fake tensor restriction with assume_constant_result (#87895)
...
This works now because of https://github.com/pytorch/pytorch/pull/87091 ,
so we don't need to error out anymore.
cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87895
Approved by: https://github.com/tugsbayasgalan, https://github.com/voznesenskym
2022-10-28 04:05:06 +00:00
PyTorch MergeBot
cda0d5a57b
Revert "[dynamo] Error when user nests FX with dynamo (#87797)"
...
This reverts commit a485528a7e.
Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/kit1980 due to Broke linux-bionic-py3.7-clang9 / test (dynamo, 2, 2, linux.2xlarge), same error on pull
2022-10-27 21:16:58 +00:00
Michael Suo
a485528a7e
[dynamo] Error when user nests FX with dynamo (#87797)
...
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a ).
Here, we detect the error early and exit with a nicer message. Also adds a
config option to turn dynamo into a no-op (which we need to unblock internal
enablement).
cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225, https://github.com/soumith, https://github.com/jansel
2022-10-27 17:17:59 +00:00
Michael Lazos
44d7ba7efb
Fix debug dir bugs and minifier output directories (#87682)
...
Fixes https://github.com/pytorch/torchdynamo/issues/1758 , https://github.com/pytorch/torchdynamo/issues/1752
- minifier_launcher.py now dumps checkpoints to <cwd>/checkpoints when run
- a single debug directory is created per script invocation, so asserts failing because no directory exists will no longer occur
- torchinductor debug tracing now dumps correctly to the debug directory, since no prior setup is needed (previously the directory was only initialized during dynamo tracing)
cc @jansel @lezcano @fdrocha @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87682
Approved by: https://github.com/ezyang
2022-10-25 21:55:28 +00:00
Michael Suo
e5ceab173a
[dynamo] fix explain (#87640)
...
Another casualty of the core move
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87640
Approved by: https://github.com/voznesenskym
2022-10-24 21:31:38 +00:00
Michael Lazos
8461460d55
Unified debug directory for dynamo/inductor tools (#87438)
...
Fixes https://github.com/pytorch/torchdynamo/issues/1705
Fixes https://github.com/pytorch/torchdynamo/issues/1383
Adds a debug directory, by default called `torchdynamo_debug`, in the current working directory.
In the debug directory, a folder run_<timestamp> is created for each run of dynamo (an enter and exit of optimize); it contains any minifier/inductor/torchdynamo artifacts under respective folders.
Updated the minifier, record replay, and inductor tracing to use this directory.
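The layout can be sketched as follows. The directory and subfolder names follow the description above; the exact timestamp format is an assumption:

```python
import os
import tempfile
import time


def make_run_dir(root="torchdynamo_debug"):
    # One folder per optimize() enter/exit, named run_<timestamp>, holding
    # minifier / inductor / torchdynamo artifacts in separate subfolders.
    run_dir = os.path.join(root, f"run_{time.strftime('%Y_%m_%d_%H_%M_%S')}")
    for sub in ("minifier", "torchinductor", "torchdynamo"):
        os.makedirs(os.path.join(run_dir, sub), exist_ok=True)
    return run_dir


# Demonstrate under a temporary directory instead of the real cwd.
base = tempfile.mkdtemp()
run_dir = make_run_dir(os.path.join(base, "torchdynamo_debug"))
print(sorted(os.listdir(run_dir)))
```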
cc @jansel @lezcano @fdrocha @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87438
Approved by: https://github.com/soumith
2022-10-22 03:43:11 +00:00
David Berard
4fd98dfe69
Don't only apply DDP optimizer on forward frames (#87097)
...
Previously, a check applied the DDP optimizer only to frames named "forward".
But on hf_T5_large, a graph break produces frames like:
```
<graph break in _shift_right>
<graph break in forward>
```
So instead, apply DDP optimizer on all frames.
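The change amounts to dropping a predicate on frame names. The function name and signature below are illustrative, not the actual DDPOptimizer code:

```python
def should_apply_ddp_optimizer(frame_name, forward_only=False):
    # Old behavior: only frames literally named "forward" qualified, which
    # missed graph-break continuation frames like "<graph break in forward>".
    if forward_only:
        return frame_name == "forward"
    # New behavior: apply the DDP optimizer to every compiled frame.
    return True


for name in ["forward", "<graph break in _shift_right>", "<graph break in forward>"]:
    print(name, should_apply_ddp_optimizer(name))
```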
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87097
Approved by: https://github.com/wconstab
2022-10-17 21:55:14 +00:00
Jason Ansel
054a2fd6c2
Sync changes from pytorch/torchdynamo (#87013)
...
This updates to:
6380959be2
Generated with:
https://github.com/pytorch/torchdynamo/blob/main/copy_to_core.sh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87013
Approved by: https://github.com/voznesenskym
2022-10-15 21:00:57 +00:00
Jason Ansel
c7c09722ad
Move TorchDynamo into PyTorch core (#86461)
...
Context:
https://github.com/pytorch/torchdynamo/issues/1588
This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`
This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00