Commit Graph

436 Commits

Ram Rachum
351d73b97f Fix exception causes all over the codebase (#90271)
This is the continuation of #90134 and hopefully the final PR in this series.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
Richard Zou
4068c5467d [Reland] Move functorch/_src to torch/_functorch (#88756) (#90091)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues (see the import sketch
below).
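
For downstream code the practical effect is an import-path change; a hedged sketch (the specific submodule shown is an illustrative assumption):

```python
# Before the move (private internals, illustrative path):
#   from functorch._src.aot_autograd import aot_function
# After the move, the same internals live under torch._functorch:
#   from torch._functorch.aot_autograd import aot_function

# The public functorch API is meant to be unaffected:
from functorch import grad, vmap
```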

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90091
Approved by: https://github.com/anijain2305, https://github.com/ezyang
2022-12-03 14:17:15 +00:00
Edward Z. Yang
8d333761a9 When dealing with dupe arguments, prefer leafifying if possible (#89896)
See code comment for details. I also had to do some extra fixes:

* `run_functionalized_fw_and_collect_metadata` now is able to handle duplicated arguments
* `aot_wrapper_dedupe` now always returns boxed compiled functions
* `aot_wrapper_dedupe` is now applied to inference compiler along with autograd compiler (preexisting)

Fixes https://github.com/pytorch/torchdynamo/issues/1939
Fixes DebertaV2ForQuestionAnswering DebertaForMaskedLM DebertaForQuestionAnswering DebertaV2ForMaskedLM

Repro command:

```
python benchmarks/dynamo/huggingface.py --performance --float32 -dcuda --training --inductor --no-skip --dashboard --only DebertaForQuestionAnswering --cold_start_latency
```
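
For intuition, a minimal sketch of the dedupe mechanics (names and structure here are assumptions, not the actual `aot_wrapper_dedupe` code): figure out which argument positions are duplicates once, compile against the unique arguments, and use a boxed wrapper to drop duplicates at call time.

```python
import torch

def dedupe_indices(flat_args):
    # Keep the first occurrence of each tensor object.
    seen, keep_idx = set(), []
    for i, a in enumerate(flat_args):
        if id(a) not in seen:
            seen.add(id(a))
            keep_idx.append(i)
    return keep_idx

def wrap_deduped(compiled_fn, keep_idx):
    # Boxed convention: the wrapper takes a single list of args.
    def boxed(flat_args):
        return compiled_fn([flat_args[i] for i in keep_idx])
    return boxed

x, y = torch.randn(3), torch.randn(3)
args = [x, x, y]                        # position 1 duplicates position 0
f = wrap_deduped(lambda a: a[0] + a[1], dedupe_indices(args))
print(f(args))                          # compiled fn only ever sees [x, y]
```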

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89896
Approved by: https://github.com/bdhirsh
2022-12-01 13:42:29 +00:00
PyTorch MergeBot
218d9c6e09 Revert "Move functorch/_src to torch/_functorch (#88756)"
This reverts commit 52bc5c1cfe.

Reverted https://github.com/pytorch/pytorch/pull/88756 on behalf of https://github.com/clee2000 due to broken imports in tests (52bc5c1cfe, https://github.com/pytorch/pytorch/actions/runs/3574742513/jobs/6010814968), probably a landrace
2022-11-29 17:17:11 +00:00
Richard Zou
52bc5c1cfe Move functorch/_src to torch/_functorch (#88756)
This will be the last disruptive functorch internals change.

Why are we moving these files?
- As a part of rationalizing functorch we are moving the code in
functorch/_src to torch/_functorch
- This is so that we can offer the functorch APIs as native PyTorch APIs
(coming soon) and resolve some internal build issues.

Why are we moving all of these files at once?
- It's better to break developers all at once rather than many times

Test Plan:
- wait for tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88756
Approved by: https://github.com/ezyang
2022-11-29 13:55:42 +00:00
Brian Hirsh
e20ec44544 fixes for inductor <> batch norm (#89603)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89603
Approved by: https://github.com/albanD
2022-11-29 02:16:52 +00:00
PyTorch MergeBot
6ef702490d Revert "Support set_rng_state with fake tensor (#89642)"
This reverts commit 2f8769d680.

Reverted https://github.com/pytorch/pytorch/pull/89642 on behalf of https://github.com/ezyang due to: elias is right, this is probably wrong
2022-11-28 19:13:33 +00:00
Edward Z. Yang
2f8769d680 Support set_rng_state with fake tensor (#89642)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89642
Approved by: https://github.com/anjali411
2022-11-28 14:49:30 +00:00
Edward Z. Yang
6904324781 Remove fake_tensor_propagation (#89646)
You always have to run dynamo with fake tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89646
Approved by: https://github.com/soumith
2022-11-25 03:27:32 +00:00
Edward Z. Yang
1aa1014b26 xfail maml test, instead of running it without fake tensor prop (#89645)
A previous version of this patch inserted a graph break when torch.tensor fails, but that causes

```
PYTORCH_TEST_WITH_DYNAMO=1 python test/nn/test_embedding.py -k test_embedding_bag_1D_padding_idx_cpu_float32
```

to start failing. Probably another latent bug that needs investigating.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89645
Approved by: https://github.com/albanD
2022-11-25 03:27:32 +00:00
Edward Z. Yang
9b13508ef3 Force test_rng_state to run with fake tensor prop (#89641)
I'm not really sure what desertfire's intended follow-up was
on https://github.com/pytorch/pytorch/pull/87490 because when I remove
the unsupported() call, dynamo tests pass. But the change here is
conservative and I think strictly better than the current situation.
The idea is to force fake tensor prop on for the test, and then just
observe that we are doing a graph break. Clearly, export doesn't work,
so I manually xfail it.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89641
Approved by: https://github.com/anjali411
2022-11-24 22:46:47 +00:00
Edward Z. Yang
c6be06d93a Easy: These tests work with fake_tensor_propagation on (#89640)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89640
Approved by: https://github.com/anjali411, https://github.com/albanD
2022-11-24 22:46:45 +00:00
Edward Z. Yang
fd279fe85b Make pytest work again on test/dynamo (#89631)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89631
Approved by: https://github.com/anjali411
2022-11-24 17:24:25 +00:00
Elias Ellison
a8d6b82167 Fix norm decomp when dtype is passed in (#89508)
Fix for https://github.com/pytorch/torchdynamo/issues/1889. The wrapper was doing a downcast even when the dtype was explicitly passed in.
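
The shape of the fix, as a hedged sketch rather than the actual decomposition: only do the low-precision round-trip when the caller did not pin a dtype.

```python
import torch

def norm_decomp(x, p=2, dtype=None):
    # Explicit dtype: honor it verbatim, with no implicit downcast afterwards.
    if dtype is not None:
        return torch.linalg.vector_norm(x, ord=p, dtype=dtype)
    # Implicit low-precision case: upcast for accuracy, cast back at the end.
    if x.dtype in (torch.half, torch.bfloat16):
        return torch.linalg.vector_norm(x.float(), ord=p).to(x.dtype)
    return torch.linalg.vector_norm(x, ord=p)

x = torch.randn(4).half()
print(norm_decomp(x, dtype=torch.float32).dtype)  # torch.float32, not half
```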

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89508
Approved by: https://github.com/anijain2305
2022-11-23 20:49:09 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
04169c5b6e Rewrite assert statement with torch._assert under config (#88246)
This diff rewrites Python assert statements as torch._assert calls under a config flag. The resulting graph looks something like:
```
SOURCE CODE:
def f(x):
      assert x[0] == 3
      return x.cos()

CAPTURED GRAPH:
graph():
    %arg0 : [#users=2] = placeholder[target=arg0]
    %getitem : [#users=1] = call_function[target=operator.getitem](args = (%arg0, 0), kwargs = {})
    %eq : [#users=1] = call_function[target=operator.eq](args = (%getitem, 3), kwargs = {})
    %_assert : [#users=0] = call_function[target=torch._assert](args = (%eq, "assertion_error"), kwargs = {})
    %cos : [#users=1] = call_method[target=cos](args = (%arg0,), kwargs = {})
    return cos
 ```
Note that this introduces a side effect, as the graph could now error out during execution, but the assertion can be eliminated via DCE if we choose to ignore it.
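
For reference, `torch._assert` is an assert that FX can record as a plain call_function node; a minimal standalone sketch:

```python
import torch

def f(x):
    torch._assert(bool(x[0] == 3), "assertion_error")
    return x.cos()

print(f(torch.tensor([3.0, 1.0])))   # passes
# f(torch.tensor([2.0, 1.0]))        # would raise AssertionError
```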

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88246
Approved by: https://github.com/jansel
2022-11-17 19:49:31 +00:00
PyTorch MergeBot
9d28775c1d Revert "Rewrite assert statement with torch._assert under config (#88246)"
This reverts commit 62ba15e10e.

Reverted https://github.com/pytorch/pytorch/pull/88246 on behalf of https://github.com/DanilBaibak due to breaking internal builds
2022-11-16 09:45:49 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
62ba15e10e Rewrite assert statement with torch._assert under config (#88246)
This diff rewrites Python assert statements as torch._assert calls under a config flag. The resulting graph looks something like:
```
SOURCE CODE:
def f(x):
      assert x[0] == 3
      return x.cos()

CAPTURED GRAPH:
graph():
    %arg0 : [#users=2] = placeholder[target=arg0]
    %getitem : [#users=1] = call_function[target=operator.getitem](args = (%arg0, 0), kwargs = {})
    %eq : [#users=1] = call_function[target=operator.eq](args = (%getitem, 3), kwargs = {})
    %_assert : [#users=0] = call_function[target=torch._assert](args = (%eq, "assertion_error"), kwargs = {})
    %cos : [#users=1] = call_method[target=cos](args = (%arg0,), kwargs = {})
    return cos
 ```
Note that this introduces a side effect, as the graph could now error out during execution, but the assertion can be eliminated via DCE if we choose to ignore it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88246
Approved by: https://github.com/jansel
2022-11-15 17:14:59 +00:00
Edward Z. Yang
cbdb683dc8 Add test that bias gradient is properly tested in same_two_models (#88995)
See
https://github.com/pytorch/pytorch/pull/88629#issuecomment-1313850324
for why this got broken.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88995
Approved by: https://github.com/albanD
2022-11-15 02:55:43 +00:00
Animesh Jain
897d029a73 [reland][dynamo] fixes dict changed during runtime error (#88877)
Reland https://github.com/pytorch/pytorch/pull/87526

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88877
Approved by: https://github.com/ezyang
2022-11-13 16:20:45 +00:00
Animesh Jain
2b12bfce88 [dynamo] Skip frame when graph break in a loop (#88857)
This fixes an excessive recompilation issue in tacotron2 but has a few caveats - https://github.com/pytorch/torchdynamo/issues/330

For tacotron2, the repro is something like this

~~~
        def inner(x):
            return torch.sin(x)

        def fn(x):
            for _ in range(100):
                inner(x)
                torch._dynamo.graph_break()
            return x
~~~

The problem here is that Dynamo installs a TUPLE_ITERATOR_LEN guard whenever a graph break happens. Therefore, we keep on recompiling.

This PR checks if there is a backedge (which also helps with while loops) in the presence of a graph break. If there is one, Dynamo skips processing this frame; Dynamo then gets invoked when inner is called, and we compile only once.

Note that if there were no graph break, we would unroll the original loop and see one graph with 100 sin operations (just as before, so no changes there).

The caveat is that we are skipping the frame, so if we have something like this

~~~
        def fn(x):
            for _ in range(100):
                # 1000s of lines of PyTorch code
                torch._dynamo.graph_break()
            return x
~~~

Dynamo will skip processing this frame and might miss out on the optimization.

Completely open for suggestions. Happy to re-implement if there is a better way to handle this.
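
For intuition, a hedged sketch of backedge detection over raw bytecode (the idea, not necessarily dynamo's exact implementation): any jump whose target offset is at or before the jump itself indicates a loop.

```python
import dis

def has_backedge(code):
    # A backedge is a jump instruction whose target precedes it in the
    # bytecode stream -- the defining feature of for/while loops.
    for inst in dis.get_instructions(code):
        if "JUMP" in inst.opname and isinstance(inst.argval, int):
            if inst.argval <= inst.offset:
                return True
    return False

def looped(x):
    for _ in range(100):
        x = x + 1
    return x

def straight(x):
    return x + 1

print(has_backedge(looped.__code__))    # True  -> skip the frame
print(has_backedge(straight.__code__))  # False -> process normally
```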

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88857
Approved by: https://github.com/jansel, https://github.com/yanboliang
2022-11-13 09:53:38 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
ydwu4
3765621356 torchdynamo support self.modules() for nn_module (#88695)
This PR allows models to call self.modules() during dynamo tracing.
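
A minimal usage sketch of what this enables (module and shapes are illustrative; `optimize` is the dynamo entry point of this era):

```python
import torch
import torch.nn as nn
import torch._dynamo

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        # Iterating self.modules() inside forward is now traceable.
        for m in self.modules():
            if isinstance(m, nn.Linear):
                x = m(x)
        return x

opt_net = torch._dynamo.optimize("eager")(Net())
print(opt_net(torch.randn(2, 4)).shape)
```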

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88695
Approved by: https://github.com/voznesenskym
2022-11-12 20:00:51 +00:00
PyTorch MergeBot
dba887766b Revert "torchdynamo support modules() for nn_module (#88023)"
This reverts commit 96104c7b7e.

Reverted https://github.com/pytorch/pytorch/pull/88023 on behalf of https://github.com/ydwu4 due to [Internal breakages](https://www.internalfb.com/intern/sandcastle/job/9007200067589062/)
2022-11-08 18:37:48 +00:00
Yidi Wu
96104c7b7e torchdynamo support modules() for nn_module (#88023)
Differential Revision: D40820879

This diff allows models to call self.modules() during dynamo tracing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88023
Approved by: https://github.com/tugsbayasgalan, https://github.com/voznesenskym, https://github.com/jansel
2022-11-08 18:22:03 +00:00
Yanbo Liang
bd1ffc6501 [Dynamo] Fix bug: GradMode doesn't carry grad state correctly after graph break (#88537)
Fixes https://github.com/pytorch/torchdynamo/issues/1446
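
A hedged repro sketch of the failure mode (the explicit graph_break forces dynamo to split the function, which is where grad state used to leak):

```python
import torch
import torch._dynamo

@torch._dynamo.optimize("eager")
def fn(x):
    with torch.no_grad():
        y = x + 1
        torch._dynamo.graph_break()
        z = y * 2  # grad must STILL be disabled after the break
    return z

x = torch.randn(3, requires_grad=True)
print(fn(x).requires_grad)  # should print False once fixed
```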

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88537
Approved by: https://github.com/jansel
2022-11-07 18:03:31 +00:00
Animesh Jain
c1dd13fb2f [dynamo] Support compare op for userfunctionvariable (#88372)
Helps reduce graph breaks for one of the training models
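
An illustrative pattern that this kind of support unblocks (a sketch, not the PR's test case):

```python
import torch
import torch._dynamo

def relu(x):
    return torch.relu(x)

@torch._dynamo.optimize("eager")
def fn(x, act):
    # Comparing user function objects no longer has to break the graph.
    if act == relu:
        return act(x) + 1
    return act(x)

print(fn(torch.randn(3), relu))
```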

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88372
Approved by: https://github.com/jansel
2022-11-03 17:05:50 +00:00
PyTorch MergeBot
bf7c996dcb Revert "torchdynamo support modules() for nn_module (#88023)"
This reverts commit eb91e8a534.

Reverted https://github.com/pytorch/pytorch/pull/88023 on behalf of https://github.com/mehtanirav due to [Internal breakages](https://www.internalfb.com/intern/sandcastle/job/13510799692855066/insights)
2022-11-02 22:35:14 +00:00
Yanbo Liang
ccf6b558a4 [Dynamo] UserFunctionVariable supports type & ABCMeta as arguments (#88257)
Fixes https://github.com/pytorch/torchdynamo/issues/1785
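
An illustrative sketch of the pattern (not the PR's test case): passing a type object into a user-defined function inside the compiled region.

```python
import torch
import torch._dynamo

def maybe_cast(cls, x):
    # `cls` is a type object (e.g. `float`) flowing through a user function.
    return x.double() if cls is float else x

@torch._dynamo.optimize("eager")
def fn(x):
    return maybe_cast(float, x)

print(fn(torch.randn(3)).dtype)  # torch.float64
```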

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88257
Approved by: https://github.com/ezyang
2022-11-02 06:58:04 +00:00
Yidi Wu
eb91e8a534 torchdynamo support modules() for nn_module (#88023)
Differential Revision: D40820879

This diff allows models to call self.modules() during dynamo tracing.

cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88023
Approved by: https://github.com/tugsbayasgalan, https://github.com/voznesenskym, https://github.com/jansel
2022-11-01 17:10:45 +00:00
Bin Bao
2c1efe7472 Enable some PyTorch core tests with inductor (#87490)
Summary:
1) Graph break on torch.random.set_rng_state since it blocks running
inductor core tests;
2) Add several inductor-specific skips;
3) Enable several core tests for inductor CI;

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87490
Approved by: https://github.com/eellison
2022-10-26 18:58:33 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and we start passing a shape_env around. This shape_env does not get plumbed down to middle layers / backend yet - it only collects expressions from frontend invocations at the moment. We then translate these expressions into guards at the point where we take other guards installed throughout dynamo - and add them to check_fn.

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#
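
For intuition, a hedged sketch of the expression-to-guard step (guard format and names are assumptions): expressions collected by the shape env are printed against source-level size accesses and folded into check_fn.

```python
import sympy

s0, s1 = sympy.symbols("s0 s1", integer=True, positive=True)

# Expressions a shape env might collect during tracing, e.g.
# "dim 0 of x equals dim 0 of y" and "x has at least 2 rows".
exprs = [sympy.Eq(s0, s1), s0 >= 2]

# Map each symbol back to a source-level size expression.
sources = {"s0": "x.size(0)", "s1": "y.size(0)"}

def expr_to_guard(e):
    code = sympy.pycode(e)
    for sym, src in sources.items():
        code = code.replace(sym, src)
    return code

print([expr_to_guard(e) for e in exprs])
# e.g. ['(x.size(0) == y.size(0))', '(x.size(0) >= 2)']
```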

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Tugsbayasgalan Manlaibaatar
7b5978254f Add named_buffers to torchdynamo nn_module (#87644)
Fixes: https://github.com/pytorch/torchdynamo/issues/1738
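
Usage sketch (illustrative module):

```python
import torch
import torch.nn as nn
import torch._dynamo

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("scale", torch.full((4,), 2.0))

    def forward(self, x):
        # named_buffers() inside the traced region is now supported.
        for _name, buf in self.named_buffers():
            x = x * buf
        return x

opt = torch._dynamo.optimize("eager")(Net())
print(opt(torch.randn(2, 4)))
```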

cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87644
Approved by: https://github.com/jansel
2022-10-25 17:00:56 +00:00
Animesh Jain
e46a8971e6 [dynamo] Support class members in nn modules (#87531)
Fixes https://github.com/pytorch/torchdynamo/issues/1740
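
Illustrative sketch of the pattern that used to break:

```python
import torch
import torch.nn as nn
import torch._dynamo

class Net(nn.Module):
    SCALE = 2.0  # class-level member, not an instance attribute

    def forward(self, x):
        return x * self.SCALE

opt = torch._dynamo.optimize("eager")(Net())
print(opt(torch.randn(3)))
```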

@voznesenskym

cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang @penguinwu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87531
Approved by: https://github.com/jansel
2022-10-24 18:48:49 +00:00
Edward Z. Yang
96691865b9 [dynamo] Unify raise_on_* config to suppress_errors and raise by default (#87440)
I noticed that a lot of bugs are being suppressed by torchdynamo's default
error suppression, and worse yet, there's no way to unsuppress them.  After
discussion with voz and soumith, we decided that we will unify error suppression
into a single option (suppress_errors) and default suppression to False.

If your model used to work and no longer works, try TORCHDYNAMO_SUPPRESS_ERRORS=1
to bring back the old suppression behavior.
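
Both spellings of the switch, as a sketch (the config attribute is assumed to mirror the option name):

```python
# From the shell, per the message above:
#   TORCHDYNAMO_SUPPRESS_ERRORS=1 python train.py

# Or programmatically:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```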

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @jansel @lezcano @fdrocha @mlazos @soumith @voznesenskym @yanboliang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87440
Approved by: https://github.com/voznesenskym, https://github.com/albanD
2022-10-21 17:03:29 +00:00
Jason Ansel
8f71e8de7e Sync changes from pytorch/torchdynamo, enable tests (#86950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
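
Concretely, for anyone tracking the out-of-tree packages, a sketch of the import change:

```python
# Old out-of-tree imports:
#   import torchdynamo
#   import torchinductor

# In-core equivalents after this PR:
import torch._dynamo
import torch._inductor
```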

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00