Commit Graph

32 Commits

Author SHA1 Message Date
Joel Schlosser
d6dd67a248 Dynamo: Use out-of-place binary ops instead of in-place (#95446)
Fixes issues with things like:
```python
x = 2
x += y.shape[0]
```

resulting in invalid `2 += y.shape[0]` code in the FX graph.

Fix: Whenever dynamic shapes are involved, insert the out-of-place op into the FX graph instead of the in-place op.
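
A rough sketch of what the rewrite amounts to, illustrative only (not the actual Dynamo internals): the traced computation uses the out-of-place `operator.add` and rebinds the local, rather than emitting an `iadd` on the constant.

```python
import operator

def traced_equivalent(y_shape_0):
    x = 2
    # x += y.shape[0]  becomes, in effect:
    x = operator.add(x, y_shape_0)
    return x
```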

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95446
Approved by: https://github.com/ezyang
2023-02-27 02:10:37 +00:00
Angela Yi
ec10d23c51 [dynamo] Fix list contains check (#95092)
Original issue was something like:
```
def func(x):
    assert x.size(-1) in [4, 5, 6], "bad"
    return x + x
```
where the contains check compares a SymInt (`x.size(-1)`) against other integers.
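
A minimal repro of the pattern as a sketch; the `torch.compile` entry point and the `dynamic=True` flag are used here for illustration and are assumptions, not necessarily what the PR's tests use:

```python
import torch

def func(x):
    # x.size(-1) is a SymInt under dynamic shapes; the `in` check must
    # compare it against plain Python ints without graph breaking.
    assert x.size(-1) in [4, 5, 6], "bad"
    return x + x

compiled = torch.compile(func, dynamic=True)
out = compiled(torch.randn(2, 5))
```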

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95092
Approved by: https://github.com/voznesenskym, https://github.com/yanboliang
2023-02-23 18:22:32 +00:00
Yanbo Liang
b5ff41a47a [Dynamo] No graph break on calling dict & collections.OrderedDict() (#95250)
It's common to call `dict()` or `collections.OrderedDict()` inside a `forward` function, so we should not graph break on these calls.

This pattern has been used in many places including:
* The use case in [torchvision](928b05cad3/torchvision/models/_utils.py (L66-L73)).
* It causes ~100 model failures (nopython=True) among the 14k GitHub models.
* It also hits several Meta-internal use cases.
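
A minimal sketch of the pattern in question (hypothetical module, loosely modeled on the torchvision usage above; the `torch.compile` / `fullgraph=True` invocation is an assumption standing in for a nopython-style check):

```python
import collections
import torch
import torch.nn as nn

class FeatureDict(nn.Module):
    # Hypothetical module: building an OrderedDict inside forward() used to
    # force a graph break; after this change it is traced through.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = collections.OrderedDict()
        out["conv"] = self.conv(x)
        out["relu"] = self.relu(out["conv"])
        return out

model = torch.compile(FeatureDict(), fullgraph=True)
_ = model(torch.randn(1, 3, 16, 16))
```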

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95250
Approved by: https://github.com/jansel
2023-02-23 09:03:07 +00:00
William Wen
055a9e45aa [dynamo 3.11] changes to LOAD_GLOBAL and function calls (#94098)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94098
Approved by: https://github.com/albanD
2023-02-21 18:47:30 +00:00
Yanbo Liang
4f257a507c [Dynamo] Support Python builtin sorted function (#94949)
Fixes #94750
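
A small sketch of the kind of usage this enables (illustrative only; not the exact test from the PR, and the compile flags are assumptions):

```python
import torch

def f(x, ks):
    # Sorting a plain Python list inside a compiled function no longer
    # has to fall back to eager.
    order = sorted(ks)
    return x * order[-1]

compiled = torch.compile(f, fullgraph=True)
print(compiled(torch.ones(3), [2, 5, 3]))  # tensor([5., 5., 5.])
```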

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94949
Approved by: https://github.com/jansel, https://github.com/Skylion007
2023-02-16 21:27:11 +00:00
Angela Yi
97510c6d50 Convert operator.not_ to torch.logical_not (#94626)
If the input to `operator.not_` is a tensor, convert the operator to `torch.logical_not`. This allows the following test case to pass; beforehand it resulted in the error `NotImplementedError("local_scalar_dense/item NYI for torch.bool")`.

```python
    def test_export_tensor_bool_not(self):
        def true_fn(x, y):
            return x + y

        def false_fn(x, y):
            return x - y

        def f(x, y):
            return cond(not torch.any(x), true_fn, false_fn, [x, y])
```
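
In effect, the conversion replaces the Python-level boolean negation of a tensor with its tensor-level equivalent; a sketch of the target form (illustrative):

```python
import torch

x = torch.zeros(3, dtype=torch.bool)

# `not torch.any(x)` forces a bool()/item() call on the tensor
# (local_scalar_dense), which is what used to fail during export.
# The exported graph instead carries the tensor op:
pred = torch.logical_not(torch.any(x))
print(pred)  # tensor(True)
```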

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94626
Approved by: https://github.com/voznesenskym
2023-02-14 21:45:48 +00:00
Xuehai Pan
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite calls to the Python built-in class `super()`. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Methods whose body is only a `super()` call are also removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change the semantics are kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Joel Schlosser
dd315e5c06 Dynamo: Support ConstantVariable (comparison_op) SymNodeVariable (#94519)
Expands the generic compare logic to handle SymNodeVariables on the right side of the expression.
Also adds support for `>=`, which it appears was mistakenly left out.
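
A sketch of the shape covered here, with the constant on the left of the comparison (illustrative; the `torch.compile(dynamic=True)` invocation is an assumption):

```python
import torch

def f(x):
    # ConstantVariable <op> SymNodeVariable: the constant 8 is compared
    # against a symbolic size on the right-hand side, including `>=`.
    if 8 >= x.shape[0]:
        return x + 1
    return x - 1

compiled = torch.compile(f, dynamic=True)
print(compiled(torch.zeros(4)))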

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94519
Approved by: https://github.com/jansel
2023-02-09 21:17:17 +00:00
Joel Schlosser
0ce95c3a17 Dynamo: Support min / max over iterables (#94350)
Expands support for built-in `min` and `max` calls beyond the binary case to iterables by reducing over the existing binary logic.
Adds support for:
* lists
* tuples
* list iterators
* vararg min / max - `min(2, 3, 4)`
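
A simplified sketch of the reduction idea (not Dynamo's actual code), plus the user-facing forms listed above:

```python
import functools

def binary_min(a, b):
    # Stand-in for the existing binary min handling.
    return a if a <= b else b

def min_over_iterable(items):
    # Reduce the iterable over the binary logic, as described above.
    return functools.reduce(binary_min, items)

assert min_over_iterable([3, 1, 2]) == 1        # list
assert min_over_iterable((3, 1, 2)) == 1        # tuple
assert min_over_iterable(iter([3, 1, 2])) == 1  # list iterator
assert min(2, 3, 4) == 2                        # vararg form
```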

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94350
Approved by: https://github.com/voznesenskym, https://github.com/ezyang
2023-02-09 00:02:40 +00:00
Michael Voznesensky
bbe33532ae Rename DynamicShapeVariable to SymNodeVariable because that's what it is (#94152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94152
Approved by: https://github.com/ezyang
2023-02-08 10:41:10 +00:00
Michael Voznesensky
b191a5f75f Remove overly strict assert, add test (#94151)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94151
Approved by: https://github.com/ezyang
2023-02-08 02:57:29 +00:00
Joel Schlosser
bf4fe5dddd General in-place binary op support in dynamo (#94203)
Continues the approach taken in #93271, expanding support to in-place binary ops (e.g. `__iadd__`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94203
Approved by: https://github.com/ezyang
2023-02-07 15:12:32 +00:00
Joel Schlosser
f954498edf Dynamo: Fix to unpack ConstantVariable in call_range() (#94202)
Fixes the `pyhpc_turbulent_kinetic_energy` model in torchbench.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94202
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-02-07 15:12:00 +00:00
Jason Ansel
180adf8c18 Fix bug in generic_list_compare (#94156)
https://github.com/pytorch/pytorch/pull/94054 introduced a bug in list
comparisons other than `==`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94156
Approved by: https://github.com/voznesenskym
2023-02-06 19:50:04 +00:00
PyTorch MergeBot
0444b8f560 Revert "Support neg calls to dyn shapes (#94068)"
This reverts commit 9350bcf6ae.

Reverted https://github.com/pytorch/pytorch/pull/94068 on behalf of https://github.com/malfet due to This broke hugging_face shard, see https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_huggin
2023-02-06 17:50:10 +00:00
Michael Voznesensky
9350bcf6ae Support neg calls to dyn shapes (#94068)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94068
Approved by: https://github.com/jansel
2023-02-05 21:38:16 +00:00
Michael Voznesensky
25c0737adc Don't graph break on list[SymInt] comparisons (#94054)
Reland of https://github.com/pytorch/pytorch/pull/92617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94054
Approved by: https://github.com/jansel
2023-02-05 04:47:12 +00:00
Joel Schlosser
dc7bf1a7ea General reversible binary op support (e.g. __add__ / __radd__) in dynamo (#93271)
Generic support for reversible binary op pairs (e.g. `__add__` / `__radd__`) in dynamo.
Adds logic to flip args and try the reverse op when the forward op is unsupported.
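
A simplified sketch of the flip-and-retry idea (plain Python, not the actual Dynamo implementation; the helper name is hypothetical):

```python
def apply_binary_op(op_name, left, right):
    """Try left.__add__(right); if that is unsupported, fall back to
    right.__radd__(left). Purely illustrative of the dispatch order."""
    forward = getattr(left, f"__{op_name}__", None)
    if forward is not None:
        result = forward(right)
        if result is not NotImplemented:
            return result
    reverse = getattr(right, f"__r{op_name}__", None)
    if reverse is not None:
        result = reverse(left)
        if result is not NotImplemented:
            return result
    raise TypeError(f"unsupported operand types for {op_name}")

print(apply_binary_op("add", 2, 3.5))  # 5.5, via float.__radd__
```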

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93271
Approved by: https://github.com/voznesenskym, https://github.com/jansel, https://github.com/ezyang
2023-02-03 19:28:35 +00:00
Yanbo Liang
a6b51448f5 [Dynamo] Supports if condition on user defined object (#90892)
Fixes a Meta-internal use case; see the pattern in the unit test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90892
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-01-26 04:19:32 +00:00
Will Constable
8e2e648f84 Propagate sources in VariableBuilder and add SuperSource (#91729)
**Motivation**
When adding support for default args (#90575), many VariableTrackers with missing sources were encountered. Currently, in a lot of cases it seems OK to skip the source for newly created VariableTrackers (especially during inlining), but that assumption breaks down when inlining functions with default arguments.

**Summary** of changes
- Propagate the `self.source` of the VariableBuilder to the new variables being built; this appears to have been an omission previously.
- Add SuperSource to track usages of super(), so that SuperVariables can support function calls with default args
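
A sketch of the user-facing pattern this unblocks, with a hypothetical module hierarchy (the actual tests may differ): inlining a `super()` call whose target has a default argument.

```python
import torch
import torch.nn as nn

class Base(nn.Module):
    def forward(self, x, scale=2.0):  # default arg on the inlined target
        return x * scale

class Child(Base):
    def forward(self, x):
        # Inlining super().forward without passing the default arg requires
        # Dynamo to track a source for the super() lookup (SuperSource).
        return super().forward(x) + 1

m = torch.compile(Child())
print(m(torch.ones(2)))  # tensor([3., 3.])
```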

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91729
Approved by: https://github.com/ezyang
2023-01-12 05:04:18 +00:00
Andrew M. James
7cd951c21e Properly guard all numpy usage within dynamo and remove UnspecializedNumpyVariable (#90795)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90795
Approved by: https://github.com/ngimel, https://github.com/cpuhrsch
2023-01-06 22:36:38 +00:00
Joel Schlosser
8b55b86dbd Move sym_int and sym_float alongside SymInt / SymFloat in base torch package (#91317)
This PR moves the definitions for:
* `sym_int`
* `sym_ceil` (used only for `sym_int`)
* `sym_floor` (used only for `sym_int`)
* `sym_float`

from `torch/fx/experimental/symbolic_shapes.py` to `torch/__init__.py`, where `SymInt` and `SymFloat` are already defined.

This removes the need for several inline imports and enables proper JIT script gating for #91318. I'm very open to doing this in a better way!
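
After the move, the symbolic conversion helpers live next to `SymInt` / `SymFloat` in the base package; a quick illustration of the import location (the sample values are illustrative):

```python
from torch import sym_int, sym_float, SymInt, SymFloat  # all in the base torch package now

# On plain Python numbers these behave like their non-symbolic counterparts;
# on SymInt/SymFloat inputs they stay symbolic.
print(sym_int(3.7), sym_float(2))  # 3 2.0
```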

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91317
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2022-12-28 16:08:16 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
c8f5c194ca Fix bug in dynamic shapes multiply (#90336)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90336
Approved by: https://github.com/ezyang
2022-12-09 00:59:50 +00:00
William Wen
ebeecbf833 Dynamo FX graph stack traceback fix (#87136)
Migration from https://github.com/pytorch/torchdynamo/pull/1655.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-12-06 02:22:16 +00:00
Yanbo Liang
37e46a5035 [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
Fix bug in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884): https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_clovaai_stargan_v2.py
```
E       TypeError: 'list' object cannot be interpreted as an integer
E
E       from user code:
E          File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_clovaai_stargan_v2.py", line 335, in forward
E           idx = torch.LongTensor(range(y.size(0)))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89322
Approved by: https://github.com/jansel
2022-11-23 19:44:48 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
Yanbo Liang
b1116a5117 [Dynamo] Improve BuiltinVariable log when incorrect arg count happens (#88409)
Fixes https://github.com/pytorch/torchdynamo/issues/1832

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88409
Approved by: https://github.com/mlazos
2022-11-05 00:17:18 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and start passing a `shape_env` around. This `shape_env` does not get plumbed down to the middle layers / backend yet; at the moment it only collects expressions from frontend invocations. We then translate these expressions into guards at the point where we take the other guards installed throughout dynamo, and add them to `check_fn`.
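
A conceptual sketch of what these guards buy: a shape-dependent branch is captured as an expression over the symbolic sizes, and the compiled graph is only reused while that expression holds. The `torch.compile(dynamic=True)` entry point below is the current API and is used here purely for illustration.

```python
import torch

def f(x):
    # The branch condition becomes an expression over a symbolic size;
    # it is installed as a guard rather than burned in as a constant.
    if x.shape[0] > 2:
        return x.sum()
    return x * 2

compiled = torch.compile(f, dynamic=True)
compiled(torch.ones(8))   # traced with shape[0] > 2 guarding this graph
compiled(torch.ones(1))   # guard fails -> recompile for the other branch
```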

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Michael Voznesensky
2fd008ed43 [dynamo] Add support for invoking nn sequential (#87156)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87156
Approved by: https://github.com/jansel
2022-10-20 18:14:40 +00:00
PyTorch MergeBot
f3cc588d09 Revert "Dynamo FX graph stack traceback fix (#87136)"
This reverts commit 89e6078bc3.

Reverted https://github.com/pytorch/pytorch/pull/87136 on behalf of https://github.com/clee2000 due to causing a lot of tests to fail on master even though pr is green
2022-10-19 18:57:24 +00:00
William Wen
89e6078bc3 Dynamo FX graph stack traceback fix (#87136)
Migration from https://github.com/pytorch/torchdynamo/pull/1655.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-10-19 17:15:43 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538
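
In terms of imports, the move amounts to the following (illustrative):

```python
# Before: standalone packages
# import torchdynamo
# import torchinductor

# After this PR: part of PyTorch core
import torch._dynamo as dynamo
import torch._inductor as inductor
```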

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00