Animesh Jain
2bb83cd45c
[dynamo][ac] Minor refactor for better code organization and a bugfix ( #104276 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104276
Approved by: https://github.com/zou3519
2023-06-29 12:57:59 +00:00
cdzhan
c06bb82ba1
Fix specialization when passing an unspec int into slicing on a Python list ( #104142 )
...
Fixes #103545
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104142
Approved by: https://github.com/malfet , https://github.com/jansel
2023-06-28 13:13:07 +00:00
Tugsbayasgalan Manlaibaatar
d4b85f3031
Support params/buffers inside cond and map ( #102310 )
...
With #102022 , params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.
Differential Revision: [D46746202](https://our.internmc.facebook.com/intern/diff/D46746202 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri , https://github.com/zou3519
2023-06-20 05:33:10 +00:00
Yanbo Liang
1be1f5090e
[Dynamo] Fix broken NNModule comparison ( #103812 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103812
Approved by: https://github.com/msaroufim
2023-06-20 04:01:24 +00:00
Mengwei Liu
96c23fe212
[dynamo][numpy] Add support for builtin functions ( #103457 )
...
In order to be able to run stuff like:
```
def f(x):
    a = x.numpy()
    return a + a
```
This PR adds a branch in `BuiltinVariable` to handle the `NumpyNdarrayVariable` case.
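A minimal usage sketch (assuming dynamo's numpy interop is enabled in the build being used; the backend choice and tensor values are illustrative):
```python
import torch

# Hedged sketch: with BuiltinVariable handling NumpyNdarrayVariable, dynamo
# can trace the `a + a` below instead of graph-breaking on the ndarray.
def f(x):
    a = x.numpy()
    return a + a

compiled_f = torch.compile(f, backend="eager")
print(compiled_f(torch.ones(3)))
```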
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103457
Approved by: https://github.com/ezyang
2023-06-15 09:18:45 +00:00
PyTorch MergeBot
2087d32811
Revert "Support params/buffers inside cond and map ( #102310 )"
...
This reverts commit 766f236bad .
Reverted https://github.com/pytorch/pytorch/pull/102310 on behalf of https://github.com/huydhn due to The test is failing in trunk 766f236bad ([comment](https://github.com/pytorch/pytorch/pull/102310#issuecomment-1592159710 ))
2023-06-15 00:29:20 +00:00
Tugsbayasgalan Manlaibaatar
766f236bad
Support params/buffers inside cond and map ( #102310 )
...
With #102022 , params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri , https://github.com/zou3519
2023-06-14 22:32:33 +00:00
Michael Voznesensky
056bf951bf
Strengthen partially supported invariant of base for chained sources ( #103445 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103445
Approved by: https://github.com/ezyang
2023-06-13 22:44:28 +00:00
Edward Z. Yang
1d40b394e6
Remove getitem dynamic shapes special case ( #103296 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103296
Approved by: https://github.com/voznesenskym
2023-06-10 01:27:22 +00:00
Bin Bao
39bf86ae90
[dynamo] Support OrderedDict constructor with kwargs ( #103192 )
...
Summary: Solves the issue reported in https://github.com/pytorch/pytorch/issues/102878 .
The solution follows the example in https://github.com/pytorch/pytorch/pull/98660 .
It only fixes the standard OrderedDict case; a separate problem remains when a user-defined CustomDict is used.
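A hedged sketch of the now-supported pattern (the function body and backend are illustrative, not from the PR):
```python
import collections
import torch

# Constructing a standard OrderedDict with kwargs inside a compiled function,
# the case from https://github.com/pytorch/pytorch/issues/102878.
@torch.compile(backend="eager")
def f(x):
    d = collections.OrderedDict(a=x + 1, b=x * 2)
    return d["a"] + d["b"]

print(f(torch.randn(4)))
```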
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103192
Approved by: https://github.com/yanboliang
2023-06-08 12:14:21 +00:00
Mark Saroufim
790f5732f6
Fix Graph Break on builtin comparison on NNModule ( #103176 )
...
Fixes https://github.com/pytorch/pytorch/issues/102338
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103176
Approved by: https://github.com/anijain2305
2023-06-07 22:51:43 +00:00
Michael Lazos
2434a205de
Support unary not on lists ( #102210 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102210
Approved by: https://github.com/anijain2305
2023-05-25 02:45:36 +00:00
Animesh Jain
2fa1b563da
[dynamo] Activation checkpoint higher order ops - Reland 101028 ( #101790 )
...
https://github.com/pytorch/pytorch/pull/101028 was reverted due to internal breakage. Relanding.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101790
Approved by: https://github.com/zou3519
2023-05-18 19:09:14 +00:00
PyTorch MergeBot
d0db7d624d
Revert "[dynamo] Activation checkpointing as higher order op ( #101028 )"
...
This reverts commit de15e740a1 .
Reverted https://github.com/pytorch/pytorch/pull/101028 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/101028#issuecomment-1548280970 ))
2023-05-15 17:47:08 +00:00
Animesh Jain
de15e740a1
[dynamo] Activation checkpointing as higher order op ( #101028 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101028
Approved by: https://github.com/voznesenskym , https://github.com/zou3519
2023-05-12 03:17:41 +00:00
Michael Voznesensky
aafc6ce8cc
Produce constant variables in cases where a SymNode is created with a constant ( #100144 )
...
` AOT_DYNAMIC_SHAPES=1 TORCHDYNAMO_DYNAMIC_SHAPES=1 benchmarks/dynamo/huggingface.py --performance --training --amp --backend eager --disable-cudagraphs --device cuda --only AllenaiLongformerBase --explain`
Looks promising!
Goes from:
Dynamo produced 173 graphs covering 2760 ops with 160 graph breaks (14 unique)
To:
Dynamo produced 6 graphs covering 2298 ops with 15 graph breaks (7 unique)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100144
Approved by: https://github.com/ezyang
2023-05-01 21:32:11 +00:00
PyTorch MergeBot
89c43f4108
Revert "Produce constant variables in cases where a SymNode is created with a constant ( #100144 )"
...
This reverts commit d7bdfd3454 .
Reverted https://github.com/pytorch/pytorch/pull/100144 on behalf of https://github.com/ezyang due to ci failure is real ([comment](https://github.com/pytorch/pytorch/pull/100144#issuecomment-1529587039 ))
2023-05-01 11:10:48 +00:00
Michael Voznesensky
d7bdfd3454
Produce constant variables in cases where a SymNode is created with a constant ( #100144 )
...
` AOT_DYNAMIC_SHAPES=1 TORCHDYNAMO_DYNAMIC_SHAPES=1 benchmarks/dynamo/huggingface.py --performance --training --amp --backend eager --disable-cudagraphs --device cuda --only AllenaiLongformerBase --explain`
Looks promising!
Goes from:
Dynamo produced 173 graphs covering 2760 ops with 160 graph breaks (14 unique)
To:
Dynamo produced 6 graphs covering 2298 ops with 15 graph breaks (7 unique)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100144
Approved by: https://github.com/ezyang
2023-04-30 17:13:57 +00:00
Michael Voznesensky
e789de952f
Make sizevar addition work properly ( #100015 )
...
Rm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100015
Approved by: https://github.com/ezyang
2023-04-26 15:59:26 +00:00
Aaron Gokaslan
e2a3817dfd
[BE] Enable C419 rule for any all shortcircuiting ( #99890 )
...
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit accept simple generator expressions, which lets us enable rules that replace unnecessary list comprehensions with generators in any/all calls. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
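For reference, the kind of rewrite the C419 rule enforces, sketched on hypothetical code:
```python
values = [3, -1, 4, 1, -5]

# Before (C419 violation): the list comprehension materializes every element,
# so any() cannot short-circuit at the first True.
has_negative = any([v < 0 for v in values])

# After: a generator expression lets any() stop at the first match.
has_negative = any(v < 0 for v in values)

assert has_negative
```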
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby , https://github.com/kit1980 , https://github.com/malfet
2023-04-25 15:02:13 +00:00
Jason Ansel
f4354b2a5e
[dynamo] Support dict kwargs constructor ( #98660 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98660
Approved by: https://github.com/yanboliang
2023-04-20 15:40:00 +00:00
Jason Ansel
47c685def3
[dynamo] Support DELETE_ATTR ( #98698 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98698
Approved by: https://github.com/yanboliang
2023-04-15 20:31:40 +00:00
Edward Z. Yang
ca735ac856
Don't specialize when indexing by SymInt ( #99123 )
...
Fixes https://github.com/pytorch/pytorch/issues/99091
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99123
Approved by: https://github.com/msaroufim
2023-04-14 11:39:43 +00:00
PyTorch MergeBot
629377ea8b
Revert "Replace _dynamo.config with an object instead of module ( #96455 )"
...
This reverts commit 420104a886 .
Reverted https://github.com/pytorch/pytorch/pull/96455 on behalf of https://github.com/jansel due to BC breaking, was landed prematurely
2023-04-12 15:06:14 +00:00
Han Qi
420104a886
Replace _dynamo.config with an object instead of module ( #96455 )
...
Summary:
Replace _dynamo.config with an object instead of a module.
Current usage patterns of setting and reading fields on config will work unchanged.
Only two changes are needed going forward:
1. `import torch._dynamo.config` will no longer work. However, `import torch._dynamo` is sufficient to access the dynamo config as `torch._dynamo.config`.
2. Files inside the _dynamo folder need to access config via `from torch._dynamo.config_util import config` instead of `from torch._dynamo import config`, because _dynamo/__init__.py imports some of those files, which would create a circular import.
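A rough sketch of the usage described above (note that this change was reverted shortly afterwards, per the revert entry above; `suppress_errors` is used only as an illustrative field):
```python
# External callers: importing torch._dynamo is enough; config is an attribute.
import torch._dynamo

torch._dynamo.config.suppress_errors = True  # illustrative field, set as before

# Inside the _dynamo package itself, per item 2 above, modules would instead use:
# from torch._dynamo.config_util import config
```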
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/williamwen42
2023-04-11 21:23:32 +00:00
Jason Ansel
0c162adfa8
[dynamo] Support callable() on user defined functions ( #98662 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98662
Approved by: https://github.com/yanboliang
2023-04-11 05:43:46 +00:00
Edward Z. Yang
b09722f540
Convert logging f-strings to use % format, part two ( #98700 )
...
This hits multi-line logging strings
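A sketch of the conversion pattern this applies (logger name and message are illustrative):
```python
import logging

log = logging.getLogger(__name__)
count, name = 3, "toy_model"

# Before: the f-string is formatted eagerly, even if the record is filtered out.
log.info(f"compiled {count} graphs for {name}")

# After: %-style arguments are only formatted when the record is actually emitted.
log.info("compiled %d graphs for %s", count, name)
```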
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98700
Approved by: https://github.com/voznesenskym
2023-04-10 12:19:31 +00:00
Jason Ansel
f4858fa8ef
Improve dynamo support for autograd.Function ( #98158 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang , https://github.com/anijain2305
2023-04-10 00:33:51 +00:00
Tugsbayasgalan Manlaibaatar
12f340dcd9
Add round as UserError ( #98376 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98376
Approved by: https://github.com/anijain2305
2023-04-06 19:28:00 +00:00
PyTorch MergeBot
e394f6db5a
Revert "Improve dynamo support for autograd.Function ( #98158 )"
...
This reverts commit 4716fa2411 .
Reverted https://github.com/pytorch/pytorch/pull/98158 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to break the MacOS trunk job 4716fa2411 . The signal was missing from the PR because we disabled MacOS jobs yesterday due to https://github.com/pytorch/pytorch/issues/98362
2023-04-06 18:15:02 +00:00
Jason Ansel
4716fa2411
Improve dynamo support for autograd.Function ( #98158 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98158
Approved by: https://github.com/yanboliang , https://github.com/anijain2305
2023-04-06 16:44:37 +00:00
Tugsbayasgalan Manlaibaatar
37dc47a1ac
Make calling type on a user-defined class a UserError ( #98366 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98366
Approved by: https://github.com/anijain2305
2023-04-06 05:20:50 +00:00
Michael Voznesensky
ab95b7a05f
Support neg calls to dyn shapes ( #94068 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94068
Approved by: https://github.com/jansel
2023-04-06 03:33:24 +00:00
Michael Lazos
e6909f6ccc
[Dynamo] Fix for tuple construction from tuple iterators ( #97862 )
...
Fixes #93405
In short: when calling the builtin `tuple` on a list variable, we added a list-length guard. This, paired with converting tuple iterators to a `ListIteratorVariable`, resulted in the guard being improperly added.
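A hedged sketch of the pattern from #93405 (the function body is illustrative):
```python
import torch

# Calling tuple() on an iterator over a tuple inside a compiled function
# previously picked up a spurious list-length guard.
@torch.compile(backend="eager")
def f(x, items):
    rebuilt = tuple(iter(items))
    return x + len(rebuilt)

print(f(torch.randn(2), (1, 2, 3)))
```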
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97862
Approved by: https://github.com/yanboliang , https://github.com/jansel
2023-03-29 19:20:05 +00:00
BowenBao
60a68477a6
Bump black version to 23.1.0 ( #96578 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
Yanbo Liang
12ab4f08b7
[Dynamo] No graph break on namedtuple and potential other functions ( #96122 )
...
```collections.namedtuple``` caused 40+ ```dynamo.export``` test failures in the 14k GitHub models suite.
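A hedged sketch of the pattern in question (model code and backend are illustrative):
```python
import collections
import torch

Point = collections.namedtuple("Point", ["x", "y"])

# Constructing and unpacking a namedtuple inside the traced function, which
# previously caused dynamo.export graph breaks.
@torch.compile(backend="eager", fullgraph=True)
def f(a, b):
    p = Point(a, b)
    return p.x + p.y

print(f(torch.randn(2), torch.randn(2)))
```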
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96122
Approved by: https://github.com/jansel , https://github.com/mlazos
2023-03-07 08:00:21 +00:00
Yanbo Liang
6ca286df69
[Dynamo] Support call dict with list/tuple as input ( #95928 )
...
Fixes a Meta-internal use case.
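A hedged sketch of the pattern (illustrative values; not the internal use case itself):
```python
import torch

# Calling dict() with a list of key/value pairs inside the compiled function.
@torch.compile(backend="eager")
def f(x):
    d = dict([("scale", 2.0), ("shift", 1.0)])
    return x * d["scale"] + d["shift"]

print(f(torch.randn(3)))
```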
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95928
Approved by: https://github.com/jansel
2023-03-04 05:52:33 +00:00
Edward Z. Yang
d303665d33
Make int unspecialization actually work ( #95621 )
...
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.
The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.
* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.
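A hedged sketch of the user-visible effect of the strategy above (flags and values are illustrative, and the behavior depends on the unspecialization default discussed above):
```python
import torch

# With int unspecialization, a plain Python int argument is traced as a SymInt
# rather than burned into the graph as a constant, so changing its value need
# not force a recompile (0 and 1 still specialize by default).
@torch.compile(backend="eager", dynamic=True)
def f(x, n):
    return x * n

print(f(torch.randn(4), 5))
print(f(torch.randn(4), 7))  # ideally served by the same graph
```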
Fixes https://github.com/pytorch/pytorch/issues/95469
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel , https://github.com/Chillee
2023-03-04 01:22:08 +00:00
PyTorch MergeBot
33cf62359d
Revert "Convert operator.not_ to torch.logical_not ( #94626 )"
...
This reverts commit 97510c6d50 .
Reverted https://github.com/pytorch/pytorch/pull/94626 on behalf of https://github.com/ezyang due to not correct
2023-02-27 21:50:51 +00:00
Joel Schlosser
d6dd67a248
Dynamo: Use out-of-place binary ops instead of in-place ( #95446 )
...
Fixes issues with things like:
```python
x = 2
x += y.shape[0]
```
resulting in invalid `2 += y.shape[0]` code in the FX graph.
Fix: Whenever dynamic shapes are involved, insert the out-of-place op into the FX graph instead of the in-place op.
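A hedged illustration of the fix (backend and flags are illustrative):
```python
import torch

# Under dynamic shapes, the augmented assignment below is traced as the
# out-of-place form x = x + y.shape[0], avoiding the invalid
# `2 += y.shape[0]` previously emitted into the FX graph.
@torch.compile(backend="eager", dynamic=True)
def f(y):
    x = 2
    x += y.shape[0]
    return x + y

print(f(torch.randn(5)))
```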
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95446
Approved by: https://github.com/ezyang
2023-02-27 02:10:37 +00:00
Angela Yi
ec10d23c51
[dynamo] Fix list contains check ( #95092 )
...
The original issue was something like:
```
def func(x):
    assert x.size(-1) in [4, 5, 6], "bad"
    return x + x
```
where the contains check is comparing a symint (x.size(-1)) with other integers.
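A hedged usage sketch of the fixed pattern (compiled form and flags are illustrative):
```python
import torch

# The `in` check compares a (possibly symbolic) size against plain integers.
@torch.compile(backend="eager", dynamic=True)
def func(x):
    assert x.size(-1) in [4, 5, 6], "bad"
    return x + x

print(func(torch.randn(2, 5)))
```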
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95092
Approved by: https://github.com/voznesenskym , https://github.com/yanboliang
2023-02-23 18:22:32 +00:00
Yanbo Liang
b5ff41a47a
[Dynamo] No graph break on calling dict & collections.OrderedDict() ( #95250 )
...
It's common to call ```dict()``` or ```collections.OrderedDict()``` inside of a ```forward``` function, so we should not graph break.
This pattern has been used in many places, including:
* The use case in torchvision ( 928b05cad3/torchvision/models/_utils.py (L66-L73) ).
* It causes ~100 model failures (nopython=True) in the 14k GitHub models.
* Also it hits several Meta-internal use cases.
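A hedged sketch of the torchvision-style pattern referenced above (module and shapes are illustrative):
```python
import collections
import torch
import torch.nn as nn

# Building an OrderedDict of intermediate outputs inside forward(), which
# should no longer cause a graph break.
class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

    def forward(self, x):
        out = collections.OrderedDict()
        out["feat"] = self.layer(x)
        out["act"] = torch.relu(out["feat"])
        return out

model = torch.compile(Wrapper(), backend="eager")
print(model(torch.randn(2, 4))["act"].shape)
```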
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95250
Approved by: https://github.com/jansel
2023-02-23 09:03:07 +00:00
William Wen
055a9e45aa
[dynamo 3.11] changes to LOAD_GLOBAL and function calls ( #94098 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94098
Approved by: https://github.com/albanD
2023-02-21 18:47:30 +00:00
Yanbo Liang
4f257a507c
[Dynamo] Support Python builtin sorted function ( #94949 )
...
Fixes #94750
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94949
Approved by: https://github.com/jansel , https://github.com/Skylion007
2023-02-16 21:27:11 +00:00
Angela Yi
97510c6d50
Convert operator.not_ to torch.logical_not ( #94626 )
...
If the input to operator.not_ is a tensor, I want to convert the operator to torch.logical_not. This allows the following test case to pass; beforehand, it resulted in the error `NotImplementedError("local_scalar_dense/item NYI for torch.bool")`.
```
def test_export_tensor_bool_not(self):
    def true_fn(x, y):
        return x + y

    def false_fn(x, y):
        return x - y

    def f(x, y):
        return cond(not torch.any(x), true_fn, false_fn, [x, y])
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94626
Approved by: https://github.com/voznesenskym
2023-02-14 21:45:48 +00:00
Xuehai Pan
5b1cedacde
[BE] [2/3] Rewrite super() calls in functorch and torch ( #94588 )
...
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.
- #94587
- #94588
- #94592
Also, methods with only a `super()` call are removed:
```diff
 class MyModule(nn.Module):
-    def __init__(self):
-        super().__init__()
-
     def forward(self, ...):
         ...
```
Some cases where the rewrite would change semantics are kept unchanged, e.g.:
f152a79be9/caffe2/python/net_printer.py (L184-L190)
f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang , https://github.com/albanD
2023-02-10 21:16:33 +00:00
Joel Schlosser
dd315e5c06
Dynamo: Support ConstantVariable (comparison_op) SymNodeVariable ( #94519 )
...
Expands the generic compare logic to handle SymNodeVariables on the right side of the expression.
Also adds support for `>=`, which it appears was mistakenly left out.
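A hedged sketch of the comparison shape being handled (flags and values are illustrative):
```python
import torch

# A Python constant on the left, a symbolic size (SymNode) on the right,
# including the newly supported >= operator.
@torch.compile(backend="eager", dynamic=True)
def f(x):
    if 8 >= x.shape[0]:
        return x + 1
    return x - 1

print(f(torch.randn(4)))
```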
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94519
Approved by: https://github.com/jansel
2023-02-09 21:17:17 +00:00
Joel Schlosser
0ce95c3a17
Dynamo: Support min / max over iterables ( #94350 )
...
Expands support for built-in `min` and `max` calls beyond the binary case to iterables by reducing over the existing binary logic (a short sketch follows the list below).
Adds support for:
* lists
* tuples
* list iterators
* vararg min / max, e.g. `min(2, 3, 4)`
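A hedged sketch of the newly supported forms (backend and values are illustrative):
```python
import torch

# min/max over a list, a tuple, a list iterator, and varargs inside a
# compiled function.
@torch.compile(backend="eager")
def f(x, sizes):
    lo = min(sizes)            # list
    hi = max(tuple(sizes))     # tuple
    first = min(iter(sizes))   # list iterator
    k = max(2, 3, 4)           # varargs
    return x * lo + hi + first + k

print(f(torch.randn(3), [5, 2, 9]))
```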
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94350
Approved by: https://github.com/voznesenskym , https://github.com/ezyang
2023-02-09 00:02:40 +00:00
Michael Voznesensky
bbe33532ae
Rename DynamicShapeVariable to SymNodeVariable because that's what it is ( #94152 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94152
Approved by: https://github.com/ezyang
2023-02-08 10:41:10 +00:00
Michael Voznesensky
b191a5f75f
Remove overly strict assert, add test ( #94151 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94151
Approved by: https://github.com/ezyang
2023-02-08 02:57:29 +00:00