Michael Voznesensky
500ebb2cd6
Fine grained dynamic shape controls ( #94787 )
...
https://docs.google.com/document/d/1aoIyYE8_6cYpWqS25thzVoIiKsT5aaUEOiiPwbIXt8k/edit
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94787
Approved by: https://github.com/ezyang
2023-02-17 22:28:37 +00:00
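A minimal sketch of the per-dimension control described above, assuming the `torch._dynamo.mark_dynamic` API associated with this work (see the linked design doc); the exact call is an assumption:

```python
# Hedged sketch: mark one dimension of an input as dynamic so dynamo
# allocates a symbolic size for it instead of specializing on 8.
import torch
import torch._dynamo

x = torch.randn(8, 16)
torch._dynamo.mark_dynamic(x, 0)  # treat dim 0 as dynamic

@torch.compile
def f(t):
    return t * 2

f(x)  # compiles with a symbolic size for dim 0
```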
PyTorch MergeBot
e0ede1cc30
Revert "Fine grained dynamic shape controls ( #94787 )"
...
This reverts commit 2aa806608b .
Reverted https://github.com/pytorch/pytorch/pull/94787 on behalf of https://github.com/kit1980 due to After this PR, test_autocast_sdpa_dynamic_shapes_static_default started to fail with RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides: https://github.com/pytorch/pytorch/actions/runs/4206176846/jobs/7299657478
2023-02-17 19:52:16 +00:00
Michael Voznesensky
2aa806608b
Fine grained dynamic shape controls ( #94787 )
...
https://docs.google.com/document/d/1aoIyYE8_6cYpWqS25thzVoIiKsT5aaUEOiiPwbIXt8k/edit
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94787
Approved by: https://github.com/ezyang
2023-02-17 17:39:22 +00:00
Jason Ansel
4d6a4401f8
Raise warning if torch.compile options change without reset ( #94680 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94680
Approved by: https://github.com/wconstab , https://github.com/malfet
2023-02-13 20:21:04 +00:00
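For context, a sketch of the pattern the warning targets: recompiling with different options without a reset in between (`fn` here is a hypothetical function):

```python
import torch
import torch._dynamo

def fn(x):
    return x.sin()

f1 = torch.compile(fn, mode="reduce-overhead")
f1(torch.randn(4))

# Without this call, changing compile options now triggers the warning:
torch._dynamo.reset()  # clear cached compilation state first
f2 = torch.compile(fn, mode="max-autotune")
f2(torch.randn(4))
```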
Xiaodong Wang
88e16849db
[pt2] Fix multiple races in log folder ( #93407 )
...
Summary:
There are a few races/permission errors in file creation; fixing:
OSS:
1. caffe2/torch/_dynamo/utils.py, get_debug_dir: multiple processes may conflict on the directory name even though it uses a microsecond timestamp. Add the pid to it.
2. caffe2/torch/_dynamo/config.py: it may not be a correct assumption that we have write permission to the cwd.
Test Plan: sandcastle
Differential Revision: D42905908
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93407
Approved by: https://github.com/soumith , https://github.com/mlazos
2023-02-09 21:10:14 +00:00
Jason Ansel
57d74aae55
Remove torch/_dynamo/optimizations/normalize.py ( #93278 )
...
This file was largely made obsolete by dispatcher level functionalization,
and has been disabled by config.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93278
Approved by: https://github.com/voznesenskym
2023-02-02 02:02:54 +00:00
Edward Z. Yang
ca9ebf9e2b
Delete dynamo_import and inductor_import ( #93851 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93851
Approved by: https://github.com/albanD , https://github.com/jansel
2023-02-02 01:51:29 +00:00
Edward Z. Yang
207399cf5f
Add repro_forward_only for inference debugging ( #93856 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93856
Approved by: https://github.com/williamwen42
2023-02-01 22:03:13 +00:00
Jason Ansel
45eadc2c4d
ConfigModule for _{dynamo,inductor}.config ( #93252 )
...
This refactors the way dynamo/inductor configs are handled to check for invalid configs and add options like patching and serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93252
Approved by: https://github.com/voznesenskym
2023-02-01 19:38:05 +00:00
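A sketch of the patching behavior this refactor enables, assuming the `config.patch` context manager that `ConfigModule` provides:

```python
import torch._dynamo as dynamo

# Temporarily override a config value; the old value is restored on exit.
with dynamo.config.patch(verbose=True):
    ...  # code here sees the patched config

# Invalid configs are now rejected rather than silently accepted:
# dynamo.config.no_such_option = True  # raises an error
```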
Edward Z. Yang
08041c5264
Configurable repro_tolerance for same_two_models ( #93398 )
...
Fixes https://github.com/pytorch/pytorch/issues/93293
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93398
Approved by: https://github.com/SherlockNoMad
2023-02-01 01:41:48 +00:00
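A sketch, assuming the `repro_tolerance` config named in the title controls the accuracy threshold used by the minifier's `same_two_models` comparison (the value shown is illustrative):

```python
import torch._dynamo as dynamo

# Loosen the accuracy threshold for repro/minifier comparisons.
dynamo.config.repro_tolerance = 1e-3
```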
Edward Z. Yang
902b4dba75
Change capture_scalar_outputs to use SymInt/SymFloat rather than Tensor to model scalars ( #93150 )
...
Previously, Dynamo faked support for item() when `capture_scalar_outputs` was True by representing it internally as a Tensor. With dynamic shapes, this is no longer necessary; we can represent it directly as a SymInt/SymFloat. Do so. Doing this requires you to use dynamic shapes; in principle we could support scalar outputs WITHOUT dynamic shapes but I won't do this unless someone hollers for it.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Differential Revision: [D42885775](https://our.internmc.facebook.com/intern/diff/D42885775 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93150
Approved by: https://github.com/voznesenskym
2023-01-31 21:23:23 +00:00
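A sketch of the behavior change (the `dynamic=True` argument is an assumption standing in for enabling dynamic shapes, which the commit says is required):

```python
import torch
import torch._dynamo

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(dynamic=True)
def f(x):
    n = x.sum().item()  # now captured as a SymFloat, not modeled as a Tensor
    return x + n

f(torch.randn(4))
```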
Jason Ansel
53a669869c
Remove checks for refs/prims ( #93250 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93250
Approved by: https://github.com/voznesenskym
2023-01-30 21:42:10 +00:00
Michael Voznesensky
363ca57d02
Remove is_aot_autograd_safe_to_run ( #91927 )
...
This should be alright to remove now, because we:
1) Support LSTM
2) AOTAutograd handles its own mutation detection
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91927
Approved by: https://github.com/Chillee , https://github.com/bdhirsh
2023-01-21 23:54:48 +00:00
William Wen
7bc3467fff
Delete dynamic_propagation config ( #91040 )
...
Per https://github.com/pytorch/torchdynamo/issues/1949
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91040
Approved by: https://github.com/jansel
2022-12-19 22:42:11 +00:00
William Wen
86269852de
Serialize dynamo/inductor config for minifier ( #90501 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90501
Approved by: https://github.com/mlazos
2022-12-14 23:44:06 +00:00
William Wen
34dc34e8a0
Add comment to output_code in dynamo config ( #90333 )
...
Title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90333
Approved by: https://github.com/mlazos
2022-12-12 23:36:01 +00:00
Soumith Chintala
06326a7721
[optim] skip .item calls in all optimizers when compiling with dynamo ( #88173 )
...
@mlazos: skips `item()` calls when compiling with dynamo by defining a helper function `_get_value`, which returns either the result of `.item()` (in eager) or the scalar CPU tensor itself (when compiling with dynamo). This was done because removing `item()` calls outright significantly regresses eager perf. Additionally, `_dispatch_sqrt` calls the appropriate sqrt function (`math.sqrt` or `torch.sqrt`).
Fixes https://github.com/pytorch/torchdynamo/issues/1083
This PR will no longer be needed once symint support is default.
This PR closes all remaining graph breaks in the optimizers (!!)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88173
Approved by: https://github.com/albanD
2022-12-12 17:32:35 +00:00
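A hedged sketch of the helpers described above, using the `is_compiling` helper from the next entry; the names follow the PR text but the bodies are assumptions:

```python
import math
import torch

def _get_value(x):
    # Under dynamo, keep the scalar CPU tensor to avoid graph breaks;
    # in eager, .item() is faster.
    if torch._dynamo.is_compiling():
        return x
    return x.item() if isinstance(x, torch.Tensor) else x

def _dispatch_sqrt(x):
    # Dispatch to the sqrt matching the input type.
    return x.sqrt() if isinstance(x, torch.Tensor) else math.sqrt(x)
```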
Michael Lazos
9c4189f82d
[dynamo] Add is_compiling for dynamo ( #90329 )
...
`is_compiling` returns True during dynamo tracing and False when run in eager
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90329
Approved by: https://github.com/jansel
2022-12-09 20:19:41 +00:00
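A small usage sketch of the new helper:

```python
import torch
import torch._dynamo

def f(x):
    if torch._dynamo.is_compiling():
        return x + 1  # branch taken while dynamo traces
    return x - 1      # branch taken in eager

print(f(torch.ones(2)))                 # eager path
print(torch.compile(f)(torch.ones(2)))  # traced path
```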
Michael Lazos
730e44bbc7
Add logging for aot autograd and unified debug flag ( #88987 )
...
- Adds `log_level` to aot's config
- Outputs log to `<graph_name>_<log_level>.log` in aot_torchinductor subfolder of the debug directory
- Modifies the Inductor debug context to use the graph name when naming the folder instead of the os pid
- Adds `TORCH_COMPILE_DEBUG` flag to enable it, (as well as separate dynamo and inductor logs)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88987
Approved by: https://github.com/Chillee
2022-12-09 17:28:10 +00:00
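A sketch of the unified flag in use; that the env var must be set before importing torch is an assumption:

```python
import os
os.environ["TORCH_COMPILE_DEBUG"] = "1"  # enable dynamo/aot/inductor debug logs

import torch

@torch.compile
def f(x):
    return x.relu()

f(torch.randn(4))  # artifacts land in the debug directory (see #87438 below)
```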
Edward Z. Yang
3d4b92b171
Ensure that we fakeify tensor subclasses when they are initially tracked ( #90009 )
...
The old code didn't actually fakeify traceable tensor subclasses at the
time they are added as a GraphArg to the module; now we do, by ignoring
the subclass during fakeification and relying on Dynamo to simulate
the subclass on top. See comments for more details.
BTW, this codepath is super broken, see filed issues linked on the
inside.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90009
Approved by: https://github.com/wconstab , https://github.com/voznesenskym
2022-12-06 22:36:32 +00:00
William Wen
d224ac7f77
Remove logging.CODE ( #90234 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1932
Discussed with @mlazos: if we still want to separate streams for code logging and the rest of info, we can use a separate logger object with a unique name.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90234
Approved by: https://github.com/ezyang
2022-12-06 22:24:43 +00:00
Eli Uriegas
27ad2605c8
Hotfix to unblock TRT unit tests internally ( #90313 )
...
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Export of [D41778303](https://www.internalfb.com/diff/D41778303 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90313
Approved by: https://github.com/ezyang , https://github.com/malfet
2022-12-06 22:14:37 +00:00
William Wen
ebeecbf833
Dynamo FX graph stack traceback fix ( #87136 )
...
Migration from https://github.com/pytorch/torchdynamo/pull/1655 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-12-06 02:22:16 +00:00
Michael Lazos
2d32e5dd09
add env/config flag to disable dynamo ( #89828 )
...
as title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89828
Approved by: https://github.com/anijain2305
2022-11-30 01:59:44 +00:00
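A sketch of the kill switch, assuming the `TORCHDYNAMO_DISABLE` env var / `torch._dynamo.config.disable` flag is what this PR adds:

```python
import os
os.environ["TORCHDYNAMO_DISABLE"] = "1"  # assumed to be read at import time

import torch

@torch.compile
def f(x):
    return x * 2

f(torch.ones(2))  # runs eagerly; dynamo is a no-op
```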
Will Constable
7860fcc245
Enable DDPOptimizer by default in dynamo ( #88523 )
...
Performance benchmarks on 6 popular models from 1-64 GPUs compiled with
torchinductor show performance gains or parity with eager, and showed
regressions without DDPOptimizer. *Note: resnet50 with small batch size shows a regression with the optimizer, in part because one subgraph fails to compile due to input mutation; this will be fixed.
(hf_Bert, hf_T5_large, hf_T5, hf_GPT2_large, timm_vision_transformer, resnet50)
Correctness checks are implemented in CI (test_dynamo_distributed.py),
via single-gpu benchmark scripts iterating over many models
(benchmarks/dynamo/torchbench.py/timm_models.py/huggingface.py),
and via [multi-gpu benchmark scripts in torchbench](https://github.com/pytorch/benchmark/tree/main/userbenchmark/ddp_experiments).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88523
Approved by: https://github.com/davidberard98
2022-11-29 05:27:06 +00:00
Edward Z. Yang
6904324781
Remove fake_tensor_propagation ( #89646 )
...
You always have to run dynamo with fake tensors.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89646
Approved by: https://github.com/soumith
2022-11-25 03:27:32 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
04169c5b6e
Rewrite assert statement with torch._assert under config ( #88246 )
...
This diff rewrites Python assert statements with torch._assert under a config flag. The resulting graph looks something like:
```
SOURCE CODE:
def f(x):
assert x[0] == 3
return x.cos()
CAPTURED GRAPH:
graph():
%arg0 : [#users=2] = placeholder[target=arg0]
%getitem : [#users=1] = call_function[target=operator.getitem](args = (%arg0, 0), kwargs = {})
%eq : [#users=1] = call_function[target=operator.eq](args = (%getitem, 3), kwargs = {})
%_assert : [#users=0] = call_function[target=torch._assert](args = (%eq, "assertion_error"), kwargs = {})
%cos : [#users=1] = call_method[target=cos](args = (%arg0,), kwargs = {})
return cos
```
Note that this introduces a side effect, as it could error out while executing the graph, but the assertion can be eliminated via DCE if we choose to ignore it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88246
Approved by: https://github.com/jansel
2022-11-17 19:49:31 +00:00
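A sketch of opting into the rewrite, assuming the config flag is named `rewrite_assert_with_torch_assert`:

```python
import torch
import torch._dynamo

torch._dynamo.config.rewrite_assert_with_torch_assert = True

@torch.compile
def f(x):
    assert x[0] == 3  # becomes torch._assert in the captured graph
    return x.cos()

f(torch.tensor([3.0, 1.0]))
```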
PyTorch MergeBot
9d28775c1d
Revert "Rewrite assert statement with torch._assert under config ( #88246 )"
...
This reverts commit 62ba15e10e .
Reverted https://github.com/pytorch/pytorch/pull/88246 on behalf of https://github.com/DanilBaibak due to breaking internal builds
2022-11-16 09:45:49 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
62ba15e10e
Rewrite assert statement with torch._assert under config ( #88246 )
...
This diff rewrites Python assert statements with torch._assert under a config flag. The resulting graph looks something like:
```
SOURCE CODE:
def f(x):
assert x[0] == 3
return x.cos()
CAPTURED GRAPH:
graph():
%arg0 : [#users=2] = placeholder[target=arg0]
%getitem : [#users=1] = call_function[target=operator.getitem](args = (%arg0, 0), kwargs = {})
%eq : [#users=1] = call_function[target=operator.eq](args = (%getitem, 3), kwargs = {})
%_assert : [#users=0] = call_function[target=torch._assert](args = (%eq, "assertion_error"), kwargs = {})
%cos : [#users=1] = call_method[target=cos](args = (%arg0,), kwargs = {})
return cos
```
Note that this introduces a side effect, as it could error out while executing the graph, but the assertion can be eliminated via DCE if we choose to ignore it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88246
Approved by: https://github.com/jansel
2022-11-15 17:14:59 +00:00
Michael Suo
923a5e9685
[dynamo] Error when user nests FX with dynamo ( #87797 )
...
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a ).
Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225 , https://github.com/soumith , https://github.com/jansel
2022-11-02 17:38:56 +00:00
PyTorch MergeBot
c0761a835b
Revert "[dynamo] Error when user nests FX with dynamo ( #87797 )"
...
This reverts commit 1da5aeb97b .
Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/ezyang due to breaks nvfuser stack, needs more investigation
2022-10-31 23:49:37 +00:00
Horace He
12dd877395
Fix all references to torchdynamo from the merge ( #87731 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87731
Approved by: https://github.com/yanboliang , https://github.com/ezyang , https://github.com/anijain2305 , https://github.com/jansel
2022-10-31 06:51:07 +00:00
Michael Suo
1da5aeb97b
[dynamo] Error when user nests FX with dynamo ( #87797 )
...
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a ).
Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225 , https://github.com/soumith , https://github.com/jansel
2022-10-28 04:59:08 +00:00
PyTorch MergeBot
cda0d5a57b
Revert "[dynamo] Error when user nests FX with dynamo ( #87797 )"
...
This reverts commit a485528a7e .
Reverted https://github.com/pytorch/pytorch/pull/87797 on behalf of https://github.com/kit1980 due to Broke linux-bionic-py3.7-clang9 / test (dynamo, 2, 2, linux.2xlarge), same error on pull
2022-10-27 21:16:58 +00:00
Akshit Khurana
b8b1d7be24
[dynamo] Add ao.nn to skipfiles inline allowlist ( #87820 )
...
Summary:
Allow torch.ao.nn module to be inlined
Test Plan:
Tested manually for https://github.com/pytorch/torchdynamo/issues/1737
Differential Revision: [D40768679](https://our.internmc.facebook.com/intern/diff/D40768679 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87820
Approved by: https://github.com/jansel
2022-10-27 18:46:54 +00:00
Michael Suo
a485528a7e
[dynamo] Error when user nests FX with dynamo ( #87797 )
...
Today, this doesn't work and dynamo errors out in a very non-obvious way (see:
https://gist.github.com/suo/dde04830372ab51a4a34ea760f14200a ).
Here, we detect the error early and exit with a nicer msg. Also add a
config option to just no-op dynamo (which we need to unblock internal
enablement).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87797
Approved by: https://github.com/yf225 , https://github.com/soumith , https://github.com/jansel
2022-10-27 17:17:59 +00:00
William Wen
a605a30732
Fix CODE level usage in dynamo config.py ( #87522 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1718 .
Tested by changing `log_level = logging.WARNING` in config.py to `log_level = logging.CODE` and running a test script that doesn't touch `log_level`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87522
Approved by: https://github.com/mlazos
2022-10-25 22:47:54 +00:00
Michael Lazos
8461460d55
Unified debug directory for dynamo/inductor tools ( #87438 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1705
Fixes https://github.com/pytorch/torchdynamo/issues/1383
Adds a debug directory, by default called `torchdynamo_debug`, in the current working directory.
In the debug directory, for each run of dynamo (an enter and exit of optimize), a folder `run_<timestamp>` is created, which contains any minifier/inductor/torchdynamo artifacts under respective folders.
Updated the minifier, record/replay, and inductor tracing to use this directory.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87438
Approved by: https://github.com/soumith
2022-10-22 03:43:11 +00:00
Edward Z. Yang
96691865b9
[dynamo] Unify raise_on_* config to suppress_errors and raise by default ( #87440 )
...
I noticed that a lot of bugs are being suppressed by torchdynamo's default
error suppression, and worse yet, there's no way to unsuppress them. After
discussion with voz and soumith, we decided that we will unify error suppression
into a single option (suppress_errors) and default suppression to False.
If your model used to work and no longer works, try TORCHDYNAMO_SUPPRESS_ERRORS=1
to bring back the old suppression behavior.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87440
Approved by: https://github.com/voznesenskym , https://github.com/albanD
2022-10-21 17:03:29 +00:00
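A sketch of restoring the old behavior, using the option and env var named in the commit:

```python
import os
os.environ["TORCHDYNAMO_SUPPRESS_ERRORS"] = "1"  # set before importing torch

# or equivalently in code:
import torch._dynamo as dynamo
dynamo.config.suppress_errors = True  # fall back to eager on dynamo errors
```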
PyTorch MergeBot
f3cc588d09
Revert "Dynamo FX graph stack traceback fix ( #87136 )"
...
This reverts commit 89e6078bc3 .
Reverted https://github.com/pytorch/pytorch/pull/87136 on behalf of https://github.com/clee2000 due to causing a lot of tests to fail on master even though pr is green
2022-10-19 18:57:24 +00:00
William Wen
89e6078bc3
Dynamo FX graph stack traceback fix ( #87136 )
...
Migration from https://github.com/pytorch/torchdynamo/pull/1655 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-10-19 17:15:43 +00:00
Jason Ansel
d45e99acf5
[dynamo] Put printing graph breaks behind a config option ( #87026 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87026
Approved by: https://github.com/soumith , https://github.com/voznesenskym
2022-10-16 19:53:42 +00:00
Jason Ansel
8f71e8de7e
Sync changes from pytorch/torchdynamo, enable tests ( #86950 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad
Move TorchDynamo into PyTorch core ( #86461 )
...
Context:
https://github.com/pytorch/torchdynamo/issues/1588
This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo ) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`
This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00
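The rename in practice, per the mapping stated in the commit:

```python
# Before the move (standalone repos):
# import torchdynamo
# import torchinductor

# After the move (in PyTorch core):
import torch._dynamo
import torch._inductor
```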