Commit Graph

52 Commits

Author SHA1 Message Date
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit accept simple generator expressions, which lets us enable the rule that replaces unnecessary list comprehensions with generators in any/all calls. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
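The point of the C419 rewrite can be shown in plain Python: with a generator expression, `any()` stops at the first truthy element instead of materializing the whole list first (an illustrative sketch, not code from the PR):

```python
def first_hit_count(xs):
    """Return any(x > 0) plus how many elements were actually inspected."""
    seen = 0

    def is_positive(x):
        nonlocal seen
        seen += 1
        return x > 0

    # C419 flags the list form, any([is_positive(x) for x in xs]), which
    # evaluates every element; the generator form short-circuits.
    result = any(is_positive(x) for x in xs)
    return result, seen

# With the generator, only the first element is inspected here:
assert first_hit_count([1, -1, -1]) == (True, 1)
```

The list-comprehension form would report `seen == 3` on the same input, since the full list is built before `any()` ever runs.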

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
Jason Ansel
f4354b2a5e [dynamo] Support dict kwargs constructor (#98660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98660
Approved by: https://github.com/yanboliang
2023-04-20 15:40:00 +00:00
Jason Ansel
47c685def3 [dynamo] Support DELETE_ATTR (#98698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98698
Approved by: https://github.com/yanboliang
2023-04-15 20:31:40 +00:00
Edward Z. Yang
ca735ac856 Don't specialize when indexing by SymInt (#99123)
Fixes https://github.com/pytorch/pytorch/issues/99091

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99123
Approved by: https://github.com/msaroufim
2023-04-14 11:39:43 +00:00
PyTorch MergeBot
629377ea8b Revert "Replace _dynamo.config with an object instead of module (#96455)"
This reverts commit 420104a886.

Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Han Qi
420104a886 Replace _dynamo.config with an object instead of module (#96455)
Summary:
    Replace _dynamo.config with an object instead of a module.

    Current usage patterns of setting and reading fields on config will work
    unchanged.

    Only two changes are needed going forward:
    1. import torch._dynamo.config will no longer work. However, just doing
       import torch._dynamo is sufficient to access dynamo config
       as torch._dynamo.config.

    2. Files inside the _dynamo folder need to access config via
       from torch._dynamo.config_util import config instead of
       from torch._dynamo import config, because _dynamo/__init__.py
       imports some of these files, which would cause a circular import.
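A minimal sketch of the pattern (hypothetical class and field names, not the actual PR code): an object instance exposed under the old attribute name keeps field reads and writes working unchanged, while no longer being an importable submodule:

```python
class DynamoConfig:
    """Illustrative stand-in for the config object; fields are hypothetical."""
    verbose = False
    cache_size_limit = 64

# Exposed as an attribute of the package (torch._dynamo.config in the PR),
# not as a submodule, so `import torch._dynamo.config` stops working.
config = DynamoConfig()

# Existing usage patterns keep working unchanged:
config.verbose = True            # setting a field
limit = config.cache_size_limit  # reading a field
```

Attribute access on an instance and on a module looks identical to callers, which is why reads and writes need no migration.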

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Jason Ansel
0c162adfa8 [dynamo] Support callable() on user defined functions (#98662)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98662
Approved by: https://github.com/yanboliang
2023-04-11 05:43:46 +00:00
Edward Z. Yang
b09722f540 Convert logging f-strings to use % format, part two (#98700)
This hits multi-line logging strings

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98700
Approved by: https://github.com/voznesenskym
2023-04-10 12:19:31 +00:00
Jason Ansel
f4858fa8ef Improve dynamo support for autograd.Function (#98158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2023-04-10 00:33:51 +00:00
Tugsbayasgalan Manlaibaatar
12f340dcd9 Add round as UserError (#98376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98376
Approved by: https://github.com/anijain2305
2023-04-06 19:28:00 +00:00
PyTorch MergeBot
e394f6db5a Revert "Improve dynamo support for autograd.Function (#98158)"
This reverts commit 4716fa2411.

Reverted https://github.com/pytorch/pytorch/pull/98158 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to break the MacOS trunk job 4716fa2411. The signal was missing from the PR because we disabled the MacOS job yesterday due to https://github.com/pytorch/pytorch/issues/98362
2023-04-06 18:15:02 +00:00
Jason Ansel
4716fa2411 Improve dynamo support for autograd.Function (#98158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2023-04-06 16:44:37 +00:00
Tugsbayasgalan Manlaibaatar
37dc47a1ac Make calling type on user defined class UserError (#98366)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98366
Approved by: https://github.com/anijain2305
2023-04-06 05:20:50 +00:00
Michael Voznesensky
ab95b7a05f Support neg calls to dyn shapes (#94068)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94068
Approved by: https://github.com/jansel
2023-04-06 03:33:24 +00:00
Michael Lazos
e6909f6ccc [Dynamo] Fix for tuple construction from tuple iterators (#97862)
Fixes #93405

In short: when calling the builtin function `tuple` on a list variable, we added a list-length guard. This, paired with converting tuple iterators to a `ListIteratorVariable`, resulted in the guard being improperly added.
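A plain-Python sketch of the shape of the bug (not the repro from the issue): an iterator has no stable length to guard on in the first place, even though converting it with `tuple()` is perfectly well-defined:

```python
def build(t):
    it = iter(t)   # a tuple iterator; dynamo modeled these as
                   # ListIteratorVariable, which is where the spurious
                   # list-length guard came from
    return tuple(it)

assert build((1, 2, 3)) == (1, 2, 3)

# Iterators themselves have no len() to guard on:
try:
    len(iter((1, 2, 3)))
    raised = False
except TypeError:
    raised = True
assert raised
```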

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97862
Approved by: https://github.com/yanboliang, https://github.com/jansel
2023-03-29 19:20:05 +00:00
BowenBao
60a68477a6 Bump black version to 23.1.0 (#96578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
Yanbo Liang
12ab4f08b7 [Dynamo] No graph break on namedtuple and potential other functions (#96122)
`collections.namedtuple` caused 40+ `dynamo.export` test failures in the 14k GitHub models suite.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96122
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-03-07 08:00:21 +00:00
Yanbo Liang
6ca286df69 [Dynamo] Support call dict with list/tuple as input (#95928)
Fixes Meta internal use case

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95928
Approved by: https://github.com/jansel
2023-03-04 05:52:33 +00:00
Edward Z. Yang
d303665d33 Make int unspecialization actually work (#95621)
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.

The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.

* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.
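Conceptually, specializing on an int means keying the compile cache on its concrete value, so every new value triggers a recompile; treating it as a SymInt keys the cache on a symbol instead. A toy illustration of that caching difference (not dynamo's actual cache):

```python
compiled = {}

def compile_toy(n, specialize):
    # Specialized: the cache key embeds the concrete value, so every new
    # value recompiles. Unspecialized: one symbolic entry serves all values,
    # except 0 and 1, which dynamo still specializes by default per this PR.
    key = ("int", n) if specialize or n in (0, 1) else ("symint",)
    if key not in compiled:
        compiled[key] = lambda x=n: x * 2
    return compiled[key]

for n in (2, 3, 4):
    compile_toy(n, specialize=True)
assert len(compiled) == 3   # one specialized entry per value

compiled.clear()
for n in (2, 3, 4):
    compile_toy(n, specialize=False)
assert len(compiled) == 1   # one symbolic entry covers them all
```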

Fixes https://github.com/pytorch/pytorch/issues/95469

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel, https://github.com/Chillee
2023-03-04 01:22:08 +00:00
PyTorch MergeBot
33cf62359d Revert "Convert operator.not_ to torch.logical_not (#94626)"
This reverts commit 97510c6d50.

Reverted https://github.com/pytorch/pytorch/pull/94626 on behalf of https://github.com/ezyang due to not correct
2023-02-27 21:50:51 +00:00
Joel Schlosser
d6dd67a248 Dynamo: Use out-of-place binary ops instead of in-place (#95446)
Fixes issues with things like:
```python
x = 2
x += y.shape[0]
```

resulting in invalid `2 += y.shape[0]` code in the FX graph.

Fix: Whenever dynamic shapes are involved, insert the out-of-place op to the FX graph instead of the in-place op.
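In `operator`-module terms (roughly what FX graph nodes call), the fix records the out-of-place op; on values like ints the results agree anyway, so only the recorded graph code changes (an illustrative sketch, not dynamo's code):

```python
import operator

def emit_add(lhs, rhs, dynamic):
    # With dynamic shapes involved, record operator.add rather than
    # operator.iadd, so the generated graph never contains code of the
    # invalid `2 += s` form.
    op = operator.add if dynamic else operator.iadd
    return op(lhs, rhs)

assert emit_add(2, 5, dynamic=True) == 7
# On immutable values the in-place op falls back to out-of-place anyway:
assert emit_add(2, 5, dynamic=False) == 7
```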

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95446
Approved by: https://github.com/ezyang
2023-02-27 02:10:37 +00:00
Angela Yi
ec10d23c51 [dynamo] Fix list contains check (#95092)
Original issue was something like:
```
def func(x):
    assert x.size(-1) in [4, 5, 6], "bad"
    return x + x
```
where the contains check is comparing a symint (x.size(-1)) with other integers.
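List membership is decided element-by-element via `==`, so an object only needs a working `__eq__` for `in` checks to succeed (an illustrative class, not PyTorch's SymInt):

```python
class FakeSymInt:
    """Hypothetical stand-in: compares through its backing hint value."""
    def __init__(self, hint):
        self.hint = hint

    def __eq__(self, other):
        # `s in [4, 5, 6]` evaluates `s == 4`, `s == 5`, ... in turn.
        return self.hint == other

s = FakeSymInt(5)
assert s in [4, 5, 6]
assert FakeSymInt(7) not in [4, 5, 6]
```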

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95092
Approved by: https://github.com/voznesenskym, https://github.com/yanboliang
2023-02-23 18:22:32 +00:00
Yanbo Liang
b5ff41a47a [Dynamo] No graph break on calling dict & collections.OrderedDict() (#95250)
It's common to call `dict()` or `collections.OrderedDict()` inside of a `forward` function, so we should not graph break.

This pattern has been used in many places including:
* The use case in [torchvision](
928b05cad3/torchvision/models/_utils.py (L66-L73)).
* It causes ~100 model failures (nopython=True) in the 14k GitHub models.
* Also it hits several Meta internal use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95250
Approved by: https://github.com/jansel
2023-02-23 09:03:07 +00:00
William Wen
055a9e45aa [dynamo 3.11] changes to LOAD_GLOBAL and function calls (#94098)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94098
Approved by: https://github.com/albanD
2023-02-21 18:47:30 +00:00
Yanbo Liang
4f257a507c [Dynamo] Support Python builtin sorted function (#94949)
Fixes #94750

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94949
Approved by: https://github.com/jansel, https://github.com/Skylion007
2023-02-16 21:27:11 +00:00
Angela Yi
97510c6d50 Convert operator.not_ to torch.logical_not (#94626)
If the input to operator.not_ is a tensor, I want to convert the operator to a torch.logical_not. This allows the following test case to pass. Beforehand it resulted in the error `NotImplementedError("local_scalar_dense/item NYI for torch.bool")`

```python
    def test_export_tensor_bool_not(self):
        def true_fn(x, y):
            return x + y

        def false_fn(x, y):
            return x - y

        def f(x, y):
            return cond(not torch.any(x), true_fn, false_fn, [x, y])
```
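In plain Python, `operator.not_` forces a single truth value through `__bool__`, which is exactly why it fails on a multi-element tensor; a sketch of the distinction with no real tensors involved (the class below is hypothetical):

```python
import operator

class Boolish:
    """Hypothetical tensor-like object exposing a scalar truth value."""
    def __init__(self, value):
        self.value = value

    def __bool__(self):
        return bool(self.value)

# operator.not_ collapses the object to one Python bool via __bool__ ...
assert operator.not_(Boolish(0)) is True
assert operator.not_(Boolish(1)) is False
# ... whereas torch.logical_not stays elementwise on a tensor, avoiding
# the local_scalar_dense/item call that raised before this PR.
```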

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94626
Approved by: https://github.com/voznesenskym
2023-02-14 21:45:48 +00:00
Xuehai Pan
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Joel Schlosser
dd315e5c06 Dynamo: Support ConstantVariable (comparison_op) SymNodeVariable (#94519)
Expands the generic compare logic to handle SymNodeVariables on the right side of the expression.
Also adds support for `>=`, which it appears was mistakenly left out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94519
Approved by: https://github.com/jansel
2023-02-09 21:17:17 +00:00
Joel Schlosser
0ce95c3a17 Dynamo: Support min / max over iterables (#94350)
Expands support for built-in `min` and `max` calls beyond binary to iterables - simply reduce over the existing binary logic.
Adds support for:
* lists
* tuples
* list iterators
* vararg min / max - `min(2, 3, 4)`
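"Reduce over the existing binary logic" can be sketched in plain Python: both the vararg and the iterable forms normalize to one sequence and fold it with the already-supported binary comparison (illustrative, not dynamo's code):

```python
from functools import reduce

def binary_min(a, b):
    # The binary case dynamo already supported.
    return a if a <= b else b

def variadic_min(*args):
    # min(iterable) and min(a, b, c, ...) normalize to one sequence,
    # then reduce with the binary op.
    items = args[0] if len(args) == 1 else args
    return reduce(binary_min, items)

assert variadic_min(2, 3, 4) == 2          # vararg form
assert variadic_min([5, 1, 9]) == 1        # list
assert variadic_min(iter((7, 7, 3))) == 3  # iterator
```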

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94350
Approved by: https://github.com/voznesenskym, https://github.com/ezyang
2023-02-09 00:02:40 +00:00
Michael Voznesensky
bbe33532ae Rename DynamicShapeVariable to SymNodeVariable because that's what it is (#94152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94152
Approved by: https://github.com/ezyang
2023-02-08 10:41:10 +00:00
Michael Voznesensky
b191a5f75f Remove overly strict assert, add test (#94151)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94151
Approved by: https://github.com/ezyang
2023-02-08 02:57:29 +00:00
Joel Schlosser
bf4fe5dddd General in-place binary op support in dynamo (#94203)
Continues the approach taken in #93271, expanding support to in-place binary ops (e.g. `__iadd__`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94203
Approved by: https://github.com/ezyang
2023-02-07 15:12:32 +00:00
Joel Schlosser
f954498edf Dynamo: Fix to unpack ConstantVariable in call_range() (#94202)
Fixes the `pyhpc_turbulent_kinetic_energy` model in torchbench.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94202
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-02-07 15:12:00 +00:00
Jason Ansel
180adf8c18 Fix bug in generic_list_compare (#94156)
https://github.com/pytorch/pytorch/pull/94054 introduced a bug in list
comparisons other than `==`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94156
Approved by: https://github.com/voznesenskym
2023-02-06 19:50:04 +00:00
PyTorch MergeBot
0444b8f560 Revert "Support neg calls to dyn shapes (#94068)"
This reverts commit 9350bcf6ae.

Reverted https://github.com/pytorch/pytorch/pull/94068 on behalf of https://github.com/malfet due to This broke hugging_face shard, see https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_huggin
2023-02-06 17:50:10 +00:00
Michael Voznesensky
9350bcf6ae Support neg calls to dyn shapes (#94068)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94068
Approved by: https://github.com/jansel
2023-02-05 21:38:16 +00:00
Michael Voznesensky
25c0737adc dont graph break on list[SymInt] comparisons (#94054)
Reland of https://github.com/pytorch/pytorch/pull/92617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94054
Approved by: https://github.com/jansel
2023-02-05 04:47:12 +00:00
Joel Schlosser
dc7bf1a7ea General reversible binary op support (e.g. __add__ / __radd__) in dynamo (#93271)
Generic support for reversible binary op pairs (e.g. `__add__` / `__radd__`) in dynamo.
Adds logic to flip args and try the reverse op when the forward op is unsupported.
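The flip-and-retry logic mirrors Python's own binary-op protocol: try the forward method, and on `NotImplemented` retry the reflected method with swapped operands. A minimal sketch under hypothetical names (real Python adds a subclass-priority rule omitted here):

```python
def apply_binary(op_name, left, right):
    """Try left.__op__(right); on NotImplemented, flip to right.__rop__(left)."""
    forward = getattr(type(left), f"__{op_name}__", None)
    if forward is not None:
        result = forward(left, right)
        if result is not NotImplemented:
            return result
    reflected = getattr(type(right), f"__r{op_name}__", None)
    if reflected is not None:
        result = reflected(right, left)
        if result is not NotImplemented:
            return result
    raise TypeError(f"unsupported operands for {op_name}")

class OnlyRAdd:
    def __radd__(self, other):
        return ("radd", other)

assert apply_binary("add", 1, 2) == 3               # forward op succeeds
assert apply_binary("add", 1, OnlyRAdd()) == ("radd", 1)  # falls back to __radd__
```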

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93271
Approved by: https://github.com/voznesenskym, https://github.com/jansel, https://github.com/ezyang
2023-02-03 19:28:35 +00:00
Yanbo Liang
a6b51448f5 [Dynamo] Supports if condition on user defined object (#90892)
Fixes a Meta internal use case; see the pattern in the unit test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90892
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-01-26 04:19:32 +00:00
Will Constable
8e2e648f84 Propagate sources in VariableBuilder and add SuperSource (#91729)
**Motivation**
While adding support for default args (#90575), many VariableTrackers were found to be missing sources. In many cases it seems OK to skip the source for newly created VariableTrackers (especially during inlining), but that assumption breaks down when inlining functions with default arguments.

**Summary** of changes
- Propagate the self.source of the VariableBuilder to the new variables being built, which appears to have been an omission previously
- Add SuperSource to track usages of super(), so that SuperVariables can support function calls with default args

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91729
Approved by: https://github.com/ezyang
2023-01-12 05:04:18 +00:00
Andrew M. James
7cd951c21e Properly guard all numpy usage within dynamo and remove UnspecializedNumpyVariable (#90795)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90795
Approved by: https://github.com/ngimel, https://github.com/cpuhrsch
2023-01-06 22:36:38 +00:00
Joel Schlosser
8b55b86dbd Move sym_int and sym_float alongside SymInt / SymFloat in base torch package (#91317)
This PR moves the definitions for:
* `sym_int`
* `sym_ceil` (used only for `sym_int`)
* `sym_floor` (used only for `sym_int`)
* `sym_float`

from `torch/fx/experimental/symbolic_shapes.py` to `torch/__init__.py`, where `SymInt` and `SymFloat` are already defined.

This removes the need for several in-line imports, and enables proper JIT script gating for #91318. I'm very open to doing this in a better way!
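The relationship between the moved helpers is truncation toward zero: `sym_int` floors non-negative values and ceils negative ones. A plain-`math` sketch of that arithmetic (not the SymInt-aware implementation):

```python
import math

def toy_sym_int(x):
    # Truncation toward zero, expressed via the floor/ceil helpers
    # that sym_int is built on.
    return math.floor(x) if x >= 0 else math.ceil(x)

assert toy_sym_int(2.7) == 2
assert toy_sym_int(-2.7) == -2   # ceil, not floor, for negatives
assert toy_sym_int(3.0) == 3
```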

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91317
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2022-12-28 16:08:16 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
c8f5c194ca Fix bug in dynamic shapes multiply (#90336)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90336
Approved by: https://github.com/ezyang
2022-12-09 00:59:50 +00:00
William Wen
ebeecbf833 Dynamo FX graph stack traceback fix (#87136)
Migration from https://github.com/pytorch/torchdynamo/pull/1655.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87136
Approved by: https://github.com/voznesenskym
2022-12-06 02:22:16 +00:00
Yanbo Liang
37e46a5035 [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
Fix bug in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884): https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_clovaai_stargan_v2.py
```
E       TypeError: 'list' object cannot be interpreted as an integer
E
E       from user code:
E          File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_clovaai_stargan_v2.py", line 335, in forward
E           idx = torch.LongTensor(range(y.size(0)))
```
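The underlying Python behavior: `range` accepts only integers, so `range(y.size(0))` works because `size(0)` is an int, while a list argument raises exactly this error (a plain-Python reproduction, no torch involved):

```python
idx = list(range(3))   # the fixed path: range over an int works
assert idx == [0, 1, 2]

try:
    range([3])         # the buggy path handed range a list
except TypeError as e:
    assert "cannot be interpreted as an integer" in str(e)
```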

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89322
Approved by: https://github.com/jansel
2022-11-23 19:44:48 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
Yanbo Liang
b1116a5117 [Dynamo] Improve BuiltinVariable log when incorrect arg count happens (#88409)
Fixes https://github.com/pytorch/torchdynamo/issues/1832

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88409
Approved by: https://github.com/mlazos
2022-11-05 00:17:18 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and we start passing a shape_env around. This shape_env does not get plumbed down to middle layers / backend yet - it only collects expressions from frontend invocations at the moment. We then translate these expressions into guards at the point where we take other guards installed throughout dynamo - and add them to check_fn.
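The shape of that mechanism: collect symbolic expressions during tracing, then fold them into the single guard function that validates future calls. A toy sketch with plain lambdas standing in for sympy expressions (hypothetical names, not dynamo's ShapeEnv):

```python
class ToyShapeEnv:
    """Hypothetical stand-in: records predicates over a dict of shape symbols."""
    def __init__(self):
        self.exprs = []

    def record(self, expr):
        # e.g. expr = lambda shapes: shapes["s0"] > 1, collected from a
        # frontend invocation during tracing
        self.exprs.append(expr)

    def to_check_fn(self):
        # Translate the collected expressions into one guard, analogous to
        # folding them into check_fn alongside dynamo's other guards.
        return lambda shapes: all(e(shapes) for e in self.exprs)

env = ToyShapeEnv()
env.record(lambda s: s["s0"] > 1)
env.record(lambda s: s["s0"] % 2 == 0)
check_fn = env.to_check_fn()
assert check_fn({"s0": 4}) is True
assert check_fn({"s0": 3}) is False   # fails the evenness guard
```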

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Michael Voznesensky
2fd008ed43 [dynamo] Add support for invoking nn sequential (#87156)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87156
Approved by: https://github.com/jansel
2022-10-20 18:14:40 +00:00
PyTorch MergeBot
f3cc588d09 Revert "Dynamo FX graph stack traceback fix (#87136)"
This reverts commit 89e6078bc3.

Reverted https://github.com/pytorch/pytorch/pull/87136 on behalf of https://github.com/clee2000 due to causing a lot of tests to fail on master even though pr is green
2022-10-19 18:57:24 +00:00