Commit Graph

25 Commits

Author SHA1 Message Date
Michael Lazos
2d9267ba30 [dynamo] Rewrite addcdiv in dynamo to its constituent ops (#90227)
This avoids a graph break when `value` is used, fixing graph breaks in variants of the Adam and Adagrad optimizers.
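
A minimal sketch of the decomposition the rewrite relies on; the helper name below is illustrative, not the actual dynamo code:

```
import torch

def addcdiv_decomposed(input, tensor1, tensor2, *, value=1.0):
    # addcdiv(input, t1, t2, value=v) == input + v * (t1 / t2),
    # expressed in constituent ops that trace without a graph break
    return input + value * (tensor1 / tensor2)

x = torch.randn(4)
t1, t2 = torch.randn(4), torch.rand(4) + 1.0
assert torch.allclose(torch.addcdiv(x, t1, t2, value=0.5),
                      addcdiv_decomposed(x, t1, t2, value=0.5))
```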

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90227
Approved by: https://github.com/jansel
2022-12-06 05:08:44 +00:00
Edward Z. Yang
962ebe88a2 Assert there are no outstanding side effects before calling cond (#90208)
The current cond implementation is silently incorrect when
there are outstanding side effects, since the locally tracked
side effects are lost when the recursive export call is made.
At least we raise an assert now.
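
A minimal sketch of the guard described above, with hypothetical names rather than dynamo's real internals:

```
# Hypothetical sketch: refuse to recursively export a cond() branch while the
# tracer still has buffered side effects, since the nested trace would drop them.
class PendingSideEffects:
    def __init__(self):
        self.mutations = []      # e.g. list appends, attribute writes seen so far

def export_cond_branch(side_effects, branch_fn, operands):
    assert not side_effects.mutations, (
        "cond() with outstanding side effects is not supported; "
        "the recursive export call would silently lose them"
    )
    return branch_fn(*operands)
```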

I'm working on a refactor of cond which should be able to sidestep
this problem. Maybe.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D41746973](https://our.internmc.facebook.com/intern/diff/D41746973)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90208
Approved by: https://github.com/voznesenskym
2022-12-06 03:53:48 +00:00
William Wen
ebeecbf833 Dynamo FX graph stack traceback fix (#87136)
Migration from https://github.com/pytorch/torchdynamo/pull/1655.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-12-06 02:22:16 +00:00
PyTorch MergeBot
6ef702490d Revert "Support set_rng_state with fake tensor (#89642)"
This reverts commit 2f8769d680.

Reverted https://github.com/pytorch/pytorch/pull/89642 on behalf of https://github.com/ezyang due to Elias is right, this is probably wrong
2022-11-28 19:13:33 +00:00
Edward Z. Yang
2f8769d680 Support set_rng_state with fake tensor (#89642)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89642
Approved by: https://github.com/anjali411
2022-11-28 14:49:30 +00:00
Edward Z. Yang
6904324781 Remove fake_tensor_propagation (#89646)
You always have to run dynamo with fake tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89646
Approved by: https://github.com/soumith
2022-11-25 03:27:32 +00:00
Edward Z. Yang
0c96841a20 Cond capture with fake tensors actually works; don't raise in this case (#89638)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89638
Approved by: https://github.com/anjali411
2022-11-24 22:46:40 +00:00
Yanbo Liang
9eed6b7f9a [Dynamo] Several fixes on TensorVariable & TorchVariable (#89486)
This is a group of bug fixes for [7k github models](https://github.com/pytorch/torchdynamo/issues/1884); it fixes 30+ model tests (a usage sketch follows the list below).
* Support ```tensor.type()```.
* Support ```tensor.get_device()```.
* Support ```torch.nn.functional._Reduction.get_enum```.
* Support ```torch._utils._get_device_index()```.
* Fallback ```tensor.data_ptr()```.
  * ```FakeTensor``` always returns 0
  * When fake tensor propagation is disabled, we ```clone``` the input tensor, so it makes no sense to track the original ```data_ptr```; and this is not a very popular API anyway.
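
A hedged usage sketch of the calls listed above; it assumes the `torch._dynamo.optimize` entry point available in the tree this commit targets:

```
import torch
import torch._dynamo as dynamo

def fn(x):
    kind = x.type()                                              # e.g. "torch.FloatTensor"
    reduction = torch.nn.functional._Reduction.get_enum("mean")  # -> 1
    return x * reduction, kind

compiled = dynamo.optimize("eager")(fn)
out, kind = compiled(torch.randn(3))
```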

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89486
Approved by: https://github.com/jansel
2022-11-23 19:39:45 +00:00
Brian Hirsh
57353c9608 first draft of input mutation handling for aot autograd (#88817)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88817
Approved by: https://github.com/ezyang, https://github.com/wconstab
2022-11-23 19:20:11 +00:00
Michael Lazos
85a87e635c [dynamo] mutable local caching to make dynamo faster at tracing mutation (#89170)
Makes mutation tracking faster to speed up tracing optimizers; helps with https://github.com/pytorch/torchdynamo/issues/1803.

`replace_all` no longer iterates over the entire variable tracker data structure every time a mutation is performed.

Each variable tracker internally keeps a set of contained mutable variable trackers to provide a hint to `replace_all`. This set is populated with a call to `apply` from `__post_init__` in the base `VariableTracker`.
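
An illustrative sketch of that hint, using hypothetical classes rather than dynamo's real `VariableTracker`:

```
from dataclasses import dataclass, field

@dataclass
class Tracker:
    children: list = field(default_factory=list)
    mutable_ids: set = field(default_factory=set)   # ids of mutable trackers contained below

    def __post_init__(self):
        # analogous to the `apply` walk: record which trackers live in this subtree
        for c in self.children:
            self.mutable_ids |= {id(c)} | c.mutable_ids

    def replace_all(self, old, new):
        if id(old) not in self.mutable_ids:
            return self                              # hint: this subtree cannot contain `old`
        self.children = [new if c is old else c.replace_all(old, new)
                         for c in self.children]
        return self
```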

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89170
Approved by: https://github.com/jansel
2022-11-19 01:47:48 +00:00
Yanbo Liang
81a4aeabdf [Dynamo] Support Tensor.nelement & torch.cuda.is_available (#89164)
Fix several errors in [7k github models](https://github.com/pytorch/torchdynamo/issues/1198).
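
A hedged sketch of the now-traceable calls:

```
import torch

def fn(x):
    n = x.nelement()                                   # total number of elements
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return x.reshape(n).to(device)
```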

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89164
Approved by: https://github.com/soumith
2022-11-18 18:43:15 +00:00
Yanbo Liang
848e7240a1 [Dynamo] Add a dummy profiler to avoid activating real profiler (#88930)
See context at https://github.com/pytorch/torchdynamo/issues/1721#issuecomment-1312396059
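
A null-object sketch of the idea; the class below is illustrative, not dynamo's actual implementation:

```
class DummyProfiler:
    # mimics the context-manager surface of torch.profiler.profile so that traced
    # code entering a profiler block does not start real profiling
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def step(self):
        pass
```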

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88930
Approved by: https://github.com/jansel
2022-11-16 19:08:49 +00:00
Yanbo Liang
911a1349dd [Dynamo] Fix torch.is_tensor and torch.overrides.is_tensor_like (#88704)
Fixes error from 7k github models: https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_arashwan_matrixnet.py

Error:
```
AssertionError: torch.* op returned non-Tensor bool call_function <function is_tensor at 0x7fca94d0faf0>

from user code:
   File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_arashwan_matrixnet.py", line 749, in scatter
      return scatter_map(inputs)
   File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_arashwan_matrixnet.py", line 741, in scatter_map
      assert not torch.is_tensor(obj), 'Tensors not supported in scatter.'
```
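
A hedged sketch of the expected behavior after the fix: the checks fold to Python bools during tracing rather than landing in the FX graph as non-Tensor `call_function` nodes.

```
import torch

def fn(obj, x):
    # both predicates should be resolved at trace time, not emitted into the graph
    if torch.is_tensor(obj) or torch.overrides.is_tensor_like(obj):
        return x + 1
    return x - 1
```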

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88704
Approved by: https://github.com/jansel
2022-11-14 22:45:50 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
Yu Guo
a37524085d [torchdynamo] support torch.autograd._profiler_enabled (#88378)
fix https://github.com/pytorch/torchdynamo/issues/1826
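
A hedged usage sketch of code that dynamo can now trace through:

```
import torch

def fn(x):
    if torch.autograd._profiler_enabled():    # private flag; now handled during tracing
        return x + 1
    return x * 2
```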

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88378
Approved by: https://github.com/voznesenskym
2022-11-07 20:36:26 +00:00
Bin Bao
2c1efe7472 Enable some PyTorch core tests with inductor (#87490)
Summary:
1) Graph break on torch.random.set_rng_state since it blocks running
inductor core tests;
2) Add several inductor-specific skips;
3) Enable several core tests for inductor CI;

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87490
Approved by: https://github.com/eellison
2022-10-26 18:58:33 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and we start passing a shape_env around. This shape_env does not get plumbed down to middle layers / backend yet - it only collects expressions from frontend invocations at the moment. We then translate these expressions into guards at the point where we take other guards installed throughout dynamo - and add them to check_fn.
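
A conceptual sketch of that flow, with hypothetical names and a deliberately simplistic guard (equality on the observed size); it assumes sympy is available:

```
import sympy

class ShapeEnv:
    def __init__(self):
        self.guards = []                        # symbolic relations collected during tracing

    def create_symbol(self, name, hint):
        s = sympy.Symbol(name, positive=True, integer=True)
        self.guards.append(sympy.Eq(s, hint))   # simplistic: guard on the size we saw
        return s

    def check_guards(self, sizes):
        # what check_fn would do at call time: evaluate the relations on concrete sizes
        return all(bool(g.subs(sizes)) for g in self.guards)

env = ShapeEnv()
s0 = env.create_symbol("s0", hint=3)
print(env.check_guards({s0: 3}))    # True
print(env.check_guards({s0: 4}))    # False
```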

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
PyTorch MergeBot
f3cc588d09 Revert "Dynamo FX graph stack traceback fix (#87136)"
This reverts commit 89e6078bc3.

Reverted https://github.com/pytorch/pytorch/pull/87136 on behalf of https://github.com/clee2000 due to causing a lot of tests to fail on master even though pr is green
2022-10-19 18:57:24 +00:00
William Wen
89e6078bc3 Dynamo FX graph stack traceback fix (#87136)
Migration from https://github.com/pytorch/torchdynamo/pull/1655.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-10-19 17:15:43 +00:00
Michael Suo
4814270708 [dynamo] Introduce get_real_value API to TensorVariable (#87091)
Right now, example_value is doing two jobs:
- We use it to propagate metadata (e.g. return type, shapes, etc.) throughout the graph
- We use it to satisfy queries for the actual value (e.g. torch.cond, `assume_constant_result`)

This is further complicated by the fact that we have two modes, one where `example_value` is a fake tensor, and one where it is a real tensor (this is the `fake_tensor_propagation` config flag).

This leads to scenarios where we don't support every combination of job + mode, e.g. if `fake_tensor_propagation=False`, `assume_constant_result` is broken.

This is made worse by the fact that "fake tensor mode" is the default and is required if you want dynamic shapes to work.

So, this PR introduces a `get_real_value` API that just runs the graph up to `node` in order to get a concrete value. This API is orthogonal to `example_value`, so it doesn't care about `fake_tensor_propagation`.

When `fake_tensor_propagation=True`: `example_value` is a fake tensor, you must use the `get_real_value` API to get a concrete value. This will be the only configuration in the future.

When `fake_tensor_propagation=False`: `example_value` and `get_real_value` will produce the same value. This is redundant but we will be removing this config soon.

To support this, I introduce a cache for computed real values, to memoize the work involved if we're asking for real values a lot.

I attached this state to `OutputGraph` because it seems to be what historically managed `example_value` lifetimes, but idk.
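
A minimal sketch of the "run the graph up to `node`" idea with a memo cache; the helper is hypothetical and only handles placeholder and call_function nodes:

```
import torch.fx as fx

def get_real_value(node, cache, real_inputs):
    # cache: fx.Node -> concrete value; real_inputs: placeholder target -> real tensor
    if node in cache:
        return cache[node]
    if node.op == "placeholder":
        val = real_inputs[node.target]
    elif node.op == "call_function":
        args = fx.node.map_arg(node.args, lambda n: get_real_value(n, cache, real_inputs))
        kwargs = fx.node.map_arg(node.kwargs, lambda n: get_real_value(n, cache, real_inputs))
        val = node.target(*args, **kwargs)
    else:
        raise NotImplementedError(node.op)
    cache[node] = val
    return val
```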

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87091
Approved by: https://github.com/wconstab
2022-10-17 20:14:43 +00:00
Michael Voznesensky
b8007742c2 [Dynamo] More robust pyop support, module properties as args (#87020)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87020
Approved by: https://github.com/jansel
2022-10-17 19:55:39 +00:00
PyTorch MergeBot
66715767ff Revert "[Dynamo] More robust pyop support, module properties as args (#87020)"
This reverts commit 3c320a5613.

Reverted https://github.com/pytorch/pytorch/pull/87020 on behalf of https://github.com/ZainRizvi due to This appears to have caused two periodic tests to fail
2022-10-17 16:02:49 +00:00
Michael Voznesensky
3c320a5613 [Dynamo] More robust pyop support, module properties as args (#87020)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87020
Approved by: https://github.com/jansel
2022-10-16 02:15:10 +00:00
Jason Ansel
054a2fd6c2 Sync changes from pytorch/torchdynamo (#87013)
This updates to:
6380959be2

Generated with:
https://github.com/pytorch/torchdynamo/blob/main/copy_to_core.sh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87013
Approved by: https://github.com/voznesenskym
2022-10-15 21:00:57 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
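
A hedged sketch of the import change implied by the move, using the `torch._dynamo.optimize` entry point from this era of the tree:

```
import torch
import torch._dynamo as dynamo    # previously: import torchdynamo as dynamo

@dynamo.optimize("eager")
def fn(x):
    return x + 1

fn(torch.randn(2))
```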

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00