Commit Graph

233 Commits

Tugsbayasgalan Manlaibaatar
76b1d44d57 pre_dispatch aot_export (#115188)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115188
Approved by: https://github.com/bdhirsh
2023-12-25 04:51:21 +00:00
Shunting Zhang
99f7e721fe [inductor] make inductor work with new triton compile interface (#115878)
Two recent Triton PRs (https://github.com/openai/triton/pull/2701, https://github.com/openai/triton/pull/2756) changed the interface of triton.compile; this PR adds the necessary changes on the Inductor side to work with both the old and new compile APIs.

There is also some simplification between the compilation call in the subprocess and the one in the main process:
- Previously we passed warm_cache_only=True if the compilation happened in a subprocess, but Triton never uses that argument at the currently pinned commit, so it has been removed.
- Previously we passed compute_capability only if the compilation happened in a subprocess. This PR changes that to always pass compute_capability to triton.compile, whether the compilation happens in the main process or a subprocess.

Updated:
There are more interface changes on the Triton side, e.g.:
- tl.math.{min, max} now require a propagate_nan argument.
- JITFunction.run now requires a warmup argument, which affects the benchmarking phase of matmul max-autotune; on the other hand, JITFunction.run now forbids the stream argument. Simply not passing it when benchmarking matmul Triton kernels works for both the old and new versions of Triton.
- The Triton Autotuner renamed its attributes from 'warmup' to 'num_warmup' and from 'rep' to 'num_rep'. This caused Dynamo to fail to handle Triton Autotuner objects, since Dynamo's TritonKernelVariable makes assumptions about attribute names; some test cases have a model call the Triton Autotuner directly. A compatibility sketch follows below.
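
A minimal compatibility sketch (illustrative, not the actual Inductor code), assuming only the attribute names changed between Triton versions:

```python
# Hypothetical helper: prefer the new Autotuner attribute names
# ('num_warmup'/'num_rep') and fall back to the old ones ('warmup'/'rep').
def get_autotuner_warmup_rep(autotuner):
    warmup = getattr(autotuner, "num_warmup", getattr(autotuner, "warmup", None))
    rep = getattr(autotuner, "num_rep", getattr(autotuner, "rep", None))
    return warmup, rep
```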

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115878
Approved by: https://github.com/jansel
2023-12-22 00:09:29 +00:00
Adnan Akhundov
247f9c3de4 Preserve strides of custom Triton kernel args (#116219)
Summary: Currently, we `clone` (in `torch/_inductor/lowering.py`) every `TensorBox` argument of custom Triton kernels while lowering them to the Inductor IR, during which the stride information of the kernel inputs is lost. This is problematic in the common case where the strides of a `torch.Tensor` argument are passed as scalars to a custom Triton kernel alongside the tensor itself (the underlying Triton code interprets the tensors as raw pointers, so the stride semantics contained in the `torch.Tensor` are lost).

In this PR, we add an extended version of the existing `clone` lowering in `torch/_inductor/lowering.py`, named `clone_preserve_reinterpret_view`, which carries over the `ir.ReinterpretView` layers (if any) from the source `TensorBox` to the cloned one. The rationale behind adding a new function (and switching to it only in `triton_kernel_wrap` for now), as opposed to extending the existing `clone`, is to keep the semantics of the latter untouched, as it is a lowering of `torch.clone` (albeit an incomplete one, as the `memory_format` is currently ignored). Changing the existing `clone` would change its semantics, which is not necessarily desirable in general. Open to suggestions, though.
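
For illustration, a minimal custom Triton kernel (assumed, not from the PR) of the kind the fix targets: the strides arrive as plain scalars and must describe the tensor's actual memory layout, so a stride-dropping clone silently breaks it.

```python
# Requires a CUDA device; names and shapes are illustrative.
import torch
import triton
import triton.language as tl

@triton.jit
def scale_kernel(x_ptr, out_ptr, stride0, stride1, n_cols, BLOCK: tl.constexpr):
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK)
    mask = cols < n_cols
    # The strides are plain integers passed alongside the raw pointer; if a
    # clone changed the real layout, these offsets would read the wrong memory.
    offs = row * stride0 + cols * stride1
    tl.store(out_ptr + offs, tl.load(x_ptr + offs, mask=mask) * 2.0, mask=mask)

x = torch.randn(32, 16, device="cuda").t()   # shape (16, 32), strides (1, 16)
out = torch.empty_like(x)                    # preserves the transposed layout
scale_kernel[(x.shape[0],)](x, out, x.stride(0), x.stride(1), x.shape[1], BLOCK=32)
```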

Test Plan:

```
$ python test/dynamo/test_functions.py -k test_triton_kernel_strided_input
...
----------------------------------------------------------------------
Ran 1 test in 5.568s

OK
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116219
Approved by: https://github.com/jansel
2023-12-21 22:46:32 +00:00
PyTorch MergeBot
0567f71ac6 Revert " pre_dispatch aot_export (#115188)"
This reverts commit a267d67350.

Reverted https://github.com/pytorch/pytorch/pull/115188 on behalf of https://github.com/jeanschmidt due to sadly, it is required to revert this commit in order to revert https://github.com/pytorch/pytorch/pull/115454 ([comment](https://github.com/pytorch/pytorch/pull/115188#issuecomment-1866310014))
2023-12-21 14:03:18 +00:00
Tugsbayasgalan Manlaibaatar
a267d67350 pre_dispatch aot_export (#115188)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115188
Approved by: https://github.com/bdhirsh
2023-12-20 21:36:25 +00:00
Oguz Ulgen
01b979fc9a [Inductor] Fix constant folding and extern kernel mutation tracking bugs (#115908)
This PR fixes two bugs
1) Constant folding a Triton kernel results in the kernel's inputs being returned without any modification, so constant folding is disabled for Triton kernels for now; this needs more investigation.
2) NoneLayout buffers should not be deleted, as they do not exist.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115908
Approved by: https://github.com/aakhundov, https://github.com/jansel
2023-12-19 02:06:50 +00:00
Yanbo Liang
eb3aa424ce [Reland][Dynamo] Added support for math.radians on ints with dynamic shapes (#115477)
Reland #114507

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115477
Approved by: https://github.com/larryliu0820
2023-12-09 08:58:18 +00:00
Oguz Ulgen
c9c4cdf9a9 [AOTAutograd] Do not call ctx.mark_dirty on mutations hidden from autograd (#115324)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115324
Approved by: https://github.com/bdhirsh
2023-12-09 02:23:13 +00:00
rzou
2847045ed9 Set _dynamo.config.capture_func_transforms=False (#115267)
Because not all tests in the Dynamo shard actually run in CI, this
implementation has started to bitrot. Since our plan is to trace
into the functorch implementations instead of constructing a HOP
(which is what capture_func_transforms=True does), let's turn this
config off by default.

Test Plan:
- Tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115267
Approved by: https://github.com/voznesenskym, https://github.com/guilhermeleobas
2023-12-07 18:42:15 +00:00
Yanbo Liang
4620170008 [Dynamo] Revert multiple PRs since they triggered compilation stuck internally (#115126)
Revert the following PRs to mitigate internal compilation stuck:
#113432
#114016
#114507
#114196
#114739
#114669

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115126
Approved by: https://github.com/xush6528
2023-12-05 22:35:37 +00:00
Jason Ansel
fe690f430a [dynamo] Fix dict.get with no default (#115048)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115048
Approved by: https://github.com/eellison, https://github.com/oulgen
ghstack dependencies: #114830, #115047
2023-12-05 01:31:33 +00:00
Xuehai Pan
3fbfa8cd0a [dynamo] support dict.copy() / OrderedDict.copy() / defaultdict.copy() (#115012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115012
Approved by: https://github.com/jansel
ghstack dependencies: #115010, #115011
2023-12-04 01:50:10 +00:00
Xuehai Pan
917a52d2a2 [dynamo] support dict.update(seq2) / OrderedDict.update(seq2) / defaultdict.update(seq2) (#115011)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115011
Approved by: https://github.com/jansel
ghstack dependencies: #115010
2023-12-04 01:50:10 +00:00
Xuehai Pan
2e8ac5ea93 [dynamo] support dict.fromkeys() / OrderedDict.fromkeys() / defaultdict.fromkeys() (#115010)
Add support for `dict.fromkeys`, `OrderedDict.fromkeys`, and `defaultdict.fromkeys`.

Fixes #114963

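A minimal usage sketch (assumed, not from the PR) of the newly supported pattern:

```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x, keys):
    d = dict.fromkeys(keys, 1.0)     # previously unsupported in Dynamo
    return x + sum(d.values())

print(f(torch.ones(3), ("a", "b")))
```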

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115010
Approved by: https://github.com/jansel
2023-12-04 01:49:59 +00:00
Jason Ansel
7b3429d97c Fix error with int+SymBool (#114828)
Fixes #104797

```
  File "/home/jansel/pytorch/torch/_dynamo/utils.py", line 1486, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "/home/jansel/pytorch/torch/_dynamo/utils.py", line 1591, in run_node
    raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
  File "/home/jansel/pytorch/torch/_dynamo/utils.py", line 1570, in run_node
    return node.target(*args, **kwargs)
  File "/home/jansel/conda/envs/pytorch/lib/python3.10/site-packages/einops/packing.py", line 153, in unpack
    n_unknown_composed_axes = sum(x == -1 for x in lengths_of_composed_axes)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function unpack at 0x7f644b962710>(*(FakeTensor(..., device='cuda:0', size=(1, s0*s1, 128)), [(s0, s1)], 'b * c'), **{}):
unsupported operand type(s) for +: 'int' and 'SymBool'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114828
Approved by: https://github.com/lezcano
2023-11-30 18:30:36 +00:00
vfdev
f93ea14309 [dynamo] Added support for math ops on ints with dynamic shapes (#114507)
Fixes #114218

```
import math
import torch

def func(x, a):
    b = math.floor(a + 0.5)
    b = math.radians(a) + b
    y = x + b
    return y

cfunc = torch.compile(func, dynamic=True, fullgraph=True, backend="eager")
x = torch.tensor([0, 1, 2, 3], dtype=torch.float32)
a = 12

out = cfunc(x, a)
```

```
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] TRACED GRAPH
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]  ===== __compiled_fn_0 =====
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]  <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]     def forward(self, L_a_ : torch.SymInt, s1 : torch.SymInt, L_x_ : torch.Tensor):
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         l_a_ = L_a_
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         l_x_ = L_x_
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         # File: check_math_ops.py:7, code: b = math.floor(a + 0.5)
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         add = l_a_ + 0.5
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         floor = math_floor(add);  add = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         # File: /pytorch/torch/_dynamo/polyfill.py:28, code: return math.pi / 180.0 * x
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         mul = 0.017453292519943295 * l_a_;  l_a_ = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         # File: check_math_ops.py:9, code: b = math.radians(a) + b
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         add_1 = mul + floor;  mul = floor = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         # File: check_math_ops.py:13, code: y = x + b
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         y = l_x_ + add_1;  l_x_ = add_1 = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]         return (y,)
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114507
Approved by: https://github.com/lezcano
2023-11-30 14:11:57 +00:00
Jon Chuang
172a103857 [dynamo] strict=True kwarg for zip (#114047)
Fixes https://github.com/pytorch/pytorch/issues/113894

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114047
Approved by: https://github.com/ezyang
2023-11-22 08:48:51 +00:00
Isuru Fernando
e4a88d9581 Convert SymInts to SymFloats with SymPy (#113683)
Fixes #109365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113683
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-11-20 23:35:40 +00:00
Sijia Chen
7afceb9f64 [AOTI] add float support of triton (#114014)
Summary: As the title says.

Test Plan: buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/dynamo:test_dynamo -- --exact 'caffe2/test/dynamo:test_dynamo - test_functions.py::DefaultsTests::test_triton_kernel_None_arg' --print-passing-details

Differential Revision: D51421325

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114014
Approved by: https://github.com/oulgen, https://github.com/aakhundov
2023-11-20 23:03:37 +00:00
PyTorch MergeBot
e3eca4c49f Revert "Convert SymInts to SymFloats with SymPy (#113683)"
This reverts commit 0ec66b3be5.

Reverted https://github.com/pytorch/pytorch/pull/113683 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing in trunk 0ec66b3be5, probably a landrace as this is not failing on your PR ([comment](https://github.com/pytorch/pytorch/pull/113683#issuecomment-1817759130))
2023-11-19 06:09:15 +00:00
Isuru Fernando
0ec66b3be5 Convert SymInts to SymFloats with SymPy (#113683)
Fixes #109365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113683
Approved by: https://github.com/ezyang
2023-11-18 22:18:24 +00:00
Oguz Ulgen
11857e9a64 [Inductor] Allow autotuned argument to be anywhere in the argument list (#114002)
Prior to this PR, autotuned arguments could only be at the back of the argument list. This is an Inductor limitation, not a Triton limitation. Fixing it allows more MRS kernels to use user-defined Triton kernels. An illustrative sketch follows below.
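
An illustrative kernel (assumed, not from the PR) with an autotuned `constexpr` parameter in the middle of the signature; the keyword-argument launch is used because the autotuner injects `BLOCK` from the selected config:

```python
# Requires a CUDA device; names are illustrative.
import torch
import triton
import triton.language as tl

@triton.autotune(
    configs=[triton.Config({"BLOCK": 64}), triton.Config({"BLOCK": 128})],
    key=["n_elements"],
)
@triton.jit
def double_kernel(in_ptr, BLOCK: tl.constexpr, out_ptr, n_elements):
    # BLOCK (the autotuned argument) is not at the back of the argument list.
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    tl.store(out_ptr + offs, tl.load(in_ptr + offs, mask=mask) * 2, mask=mask)

x = torch.randn(1024, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK"]),)
double_kernel[grid](in_ptr=x, out_ptr=out, n_elements=x.numel())
```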

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114002
Approved by: https://github.com/aakhundov
ghstack dependencies: #113967
2023-11-18 18:19:32 +00:00
Oguz Ulgen
e0c3936843 [Inductor] Support ReinterpretView in inductor codegen (#113967)
Adds support for ReinterpretView in Inductor so that jagged MRS kernels can use native Triton kernels.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113967
Approved by: https://github.com/aakhundov
2023-11-18 18:19:32 +00:00
Oguz Ulgen
a450c784da [AotAutograd] Move mutations hidden from autograd in graph (#113454)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113454
Approved by: https://github.com/bdhirsh
2023-11-17 22:47:06 +00:00
vfdev-5
a56af02913 [dynamo] Added support for is_contiguous with dynamic shapes (#113645)
Description:
- Added support for `x.is_contiguous` with dynamic shapes

On `main` the following code is giving a graph break:
```python
import torch

@torch.compile(backend="eager", dynamic=True, fullgraph=True)
def f(x):
    if x.is_contiguous():
        return x
    else:
        return 0

x = torch.randn(13, 14)
f(x)
```
with the error message:
```
  File "pytorch/torch/_dynamo/variables/builder.py", line 1541, in wrap_fx_proxy_cls
    unimplemented(
  File "pytorch/torch/_dynamo/exc.py", line 193, in unimplemented
    raise Unsupported(msg)
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor bool call_method is_contiguous

from user code:
   File "check_is_contig_dynamic_true.py", line 37, in f
    if x.is_contiguous():
```

This PR fixes the issue.
```
TORCH_COMPILE_DEBUG=1 python check_is_contig_dynamic_true.py
[2023-11-14 15:49:04,399] [0/0] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing f check_is_contig_dynamic_true.py:34
[2023-11-14 15:49:04,403] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:34 in f ()
[2023-11-14 15:49:04,403] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG]     @torch.compile(backend="eager", dynamic=True, fullgraph=True)
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:37 in f (f)
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG]         if x.is_contiguous():
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR is_contiguous [LazyVariableTracker()]
[2023-11-14 15:49:04,804] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input L_x_ L['x']
[2023-11-14 15:49:04,805] [0/0] torch._dynamo.variables.builder: [DEBUG] wrap_to_fake L['x'] (5, 4) [<DimDynamic.DUCK: 1>, <DimDynamic.DUCK: 1>] [None, None]
[2023-11-14 15:49:04,839] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s0 L['x'].size()[0]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s1 L['x'].size()[1]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s2 L['x'].stride()[0]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s1 L['x'].stride()[1]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE CALL_FUNCTION 0 [GetAttrVariable(TensorVariable(), is_contiguous)]
[2023-11-14 15:49:04,843] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_JUMP_IF_FALSE 12 [ConstantVariable(bool)]
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:42 in f (f)
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG]             return 0
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 0 []
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [ConstantVariable(int)]
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.convert_frame: [DEBUG] Skipping frame because no content in function call f                     check_is_contig_dynamic_true.py 34
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.convert_frame: [DEBUG] No graph captured with one_graph=True
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] TorchDynamo compilation metrics:
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] Function                           Runtimes (s)
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] -------------------------------  --------------
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] _compile.<locals>.compile_inner          1.2083
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113645
Approved by: https://github.com/lezcano
2023-11-17 12:32:38 +00:00
Brian Hirsh
cebad9867b graph break on intermediate leaves that require grad (#113277)
Fixes https://github.com/pytorch/pytorch/issues/90552. This is a simpler fix that just detects the situation where AOTAutograd can't create a proper backward graph and graph breaks. This was technically a silent correctness issue before.

This PR tries to always graph break when we see a factory function that returns a tensor requiring grad. I check this by seeing whether the op returned a `TensorVariable` in Dynamo and whether one of the input arguments was a `requires_grad=True` kwarg. I think this is high-fidelity enough, and I'm also hoping that this case is uncommon enough that a graph break is reasonable here.

The fix to avoid the graph break in user land is also pretty easy: just instantiate your tensor outside of the compiled region and plumb it in, as in the sketch below.
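
A minimal sketch of the workaround described above (function names are illustrative):

```python
import torch

# Inside the compiled region: the factory call now triggers a graph break.
@torch.compile
def f_breaks(x):
    w = torch.ones(3, requires_grad=True)   # intermediate leaf -> graph break
    return x * w

# The user-land fix: create the leaf outside and plumb it in.
w = torch.ones(3, requires_grad=True)

@torch.compile
def f_ok(x, w):
    return x * w

print(f_ok(torch.randn(3), w))
```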

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113277
Approved by: https://github.com/eellison
ghstack dependencies: #113267, #113416, #113584
2023-11-16 02:47:45 +00:00
PyTorch MergeBot
1e60174891 Revert "[dynamo] Add run_inductor_tests entrypoint (#113278)"
This reverts commit b00311ce9e.

Reverted https://github.com/pytorch/pytorch/pull/113278 on behalf of https://github.com/huydhn due to Sorry for reverting your stack, but it is failing to list test internally with buck2 ([comment](https://github.com/pytorch/pytorch/pull/113278#issuecomment-1811646325))
2023-11-15 01:19:48 +00:00
Ken Jin
70064ac416 [Dynamo] Match closures by code ID (#109427)
Closes https://github.com/pytorch/pytorch/issues/107866

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109427
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-11-12 08:20:14 +00:00
Jason Ansel
b00311ce9e [dynamo] Add run_inductor_tests entrypoint (#113278)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113278
Approved by: https://github.com/yanboliang
2023-11-11 08:54:43 +00:00
Oguz Ulgen
68c4507bc2 [Inductor] Allow None values to be passed in as arguments to triton kernels (#113056)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113056
Approved by: https://github.com/jansel
ghstack dependencies: #112752, #113008, #112801
2023-11-07 05:29:42 +00:00
Oguz Ulgen
bfa717c6a6 [Inductor] Improve reinplace_scatters pass (#112801)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112801
Approved by: https://github.com/Chillee, https://github.com/jansel
ghstack dependencies: #112752, #113008
2023-11-07 05:29:42 +00:00
Oguz Ulgen
f6008be266 Move all triton related testing utils into shared file (#113008)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113008
Approved by: https://github.com/zou3519, https://github.com/jansel
ghstack dependencies: #112752
2023-11-07 05:29:29 +00:00
Oguz Ulgen
dbf44dffc9 [Inductor] Cache generated user defined triton kernels on tensor dtype and non tensor parameters (#112752)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112752
Approved by: https://github.com/jansel
2023-11-07 05:29:16 +00:00
Oguz Ulgen
001573b687 [Inductor] Support one node creating multiple mutations in scheduler (#112547)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112547
Approved by: https://github.com/Chillee
2023-11-03 16:01:31 +00:00
Oguz Ulgen
13d62e28a3 [Inductor] Add Dynamic shape support to user defined triton kernels (#112523)
1) This PR moves the grid-function codegen to the wrapper so that we can use
   IndentBuffers as opposed to manually adding tabs for indentation.
2) In Inductor, it emits the grid function in the body of the kernel call so
   that it can use free symbols from dynamic shapes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112523
Approved by: https://github.com/Chillee
2023-11-02 23:58:50 +00:00
Steven Troxler
17fd4885aa [dynamo] Support custom dict constructor with kwargs (#112513)
Summary:

As of https://github.com/pytorch/pytorch/pull/103192, dynamo
supports code that creates OrderedDict instances using kwargs
for the key-value pairs rather than passing a dict literal.

But custom dicts (for example subclasses of OrderedDict) follow
a different codepath so that we can check for conditions such
as a custom `__init__` that need to force a graph break.

This commit allows kwargs for custom dict constructors: if the
positional args are empty and the class is not also a dataclass
(which is the case that, for example, a
`transformers.modeling_outputs.ModelOutput` instance will wind
up hitting), then the kwargs are treated as the key-value pairs.
A usage sketch follows after the note below.

NOTE: For this to behave 100% correctly, we are relying on
the fact that python dicts behave like ordered dicts so that they
preserve the kwargs' ordering. Technically it is not guaranteed that
future versions of Python will respect this; if that behavior changes
we would need to ensure that dynamo uses OrderedDict for kwargs all
the way down in order to handle special cases like OrderedDict where
the kwargs' ordering does matter.
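
A minimal usage sketch (assumed, not from the PR; `MyDict` is a hypothetical subclass):

```python
import collections
import torch

class MyDict(collections.OrderedDict):
    pass

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    d = MyDict(a=x, b=x + 1)    # kwargs supply the key-value pairs
    return d["a"] + d["b"]

print(f(torch.ones(2)))
```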

Test Plan:

```
pytest test/dynamo/test_functions.py
```

I also verified that the new test fails without the changes to
`dicts.py`.

Reviewers: yanboliang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112513
Approved by: https://github.com/yanboliang
2023-10-31 20:55:38 +00:00
Oguz Ulgen
219763c38d Support calling user defined triton kernels with kernel.run (#112292)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112292
Approved by: https://github.com/jansel
ghstack dependencies: #112290
2023-10-30 17:51:23 +00:00
Oguz Ulgen
1250032c2e [Inductor] Add triton.autotune support for user defined triton kernels with complex grids (#112290)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112290
Approved by: https://github.com/jansel
2023-10-30 17:48:27 +00:00
Oguz Ulgen
c14c4efc0e [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-28 17:30:35 +00:00
PyTorch MergeBot
8d44999183 Revert "[Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)"
This reverts commit dbb31a2984.

Reverted https://github.com/pytorch/pytorch/pull/112228 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing ROCm test in trunk dbb31a2984 ([comment](https://github.com/pytorch/pytorch/pull/112228#issuecomment-1783660326))
2023-10-28 01:51:32 +00:00
Oguz Ulgen
dbb31a2984 [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-27 21:40:22 +00:00
Oguz Ulgen
a29a844938 [Inductor] Support top level constants in user defined triton kernels (#111970)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111970
Approved by: https://github.com/jansel
ghstack dependencies: #111956
2023-10-25 02:43:51 +00:00
Oguz Ulgen
bb550b25c9 [Inductor] Support user defined triton kernels calling other triton kernels and activation functions (#111956)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111956
Approved by: https://github.com/jansel
2023-10-25 02:39:43 +00:00
Oguz Ulgen
ddcf9c050b [Inductor] Support calling user defined kernels with different type of arguments (#111939)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111939
Approved by: https://github.com/jansel, https://github.com/zou3519
ghstack dependencies: #111770, #111808
2023-10-24 19:49:48 +00:00
Jon Chuang
36d34ce951 [dynamo] support comparing LHS constant with tensor (#111492)
Fixes https://github.com/pytorch/pytorch/issues/108582

Depends on https://github.com/pytorch/pytorch/pull/111557 to fix broken integration tests (due to this PR unblocking an in-graph set membership).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111492
Approved by: https://github.com/Skylion007
2023-10-23 19:05:14 +00:00
Oguz Ulgen
2b2b6caf8f [inductor] Implement clone removal for user defined triton kernel via reinplace_scatters (#111627)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111627
Approved by: https://github.com/jansel
ghstack dependencies: #111434
2023-10-22 22:28:00 +00:00
Jon Chuang
c4ab229a82 [dynamo] Implement set.__contains__ for Tensor as object match of FakeTensor (#111738)
Fixes https://github.com/pytorch/pytorch/issues/111556

Dynamo's implementation of `set.__contains__` previously used an `__eq__` match.

But this is wrong when an `__eq__` match does not imply a `__hash__` match, as is the case for `torch.Tensor`, leading to inconsistent results. See: https://github.com/pytorch/pytorch/issues/111542

Hence it is implemented as a Tensor object match, i.e., a proxy node `'example_value'` FakeTensor match; see the eager-mode illustration below.
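
A small eager-mode illustration (not from the PR) of the identity semantics Dynamo must match:

```python
import torch

t1 = torch.zeros(3)
t2 = torch.zeros(3)
s = {t1}
print(t1 in s)   # True: same object
print(t2 in s)   # False: Tensor hashing is id-based, even though t1 == t2 elementwise
```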

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111738
Approved by: https://github.com/lezcano
2023-10-22 17:40:34 +00:00
Oguz Ulgen
977d3bcc46 [Inductor] Support user defined triton kernels in inductor (#111434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111434
Approved by: https://github.com/jansel
2023-10-22 17:04:19 +00:00
Jon Chuang
47eed65481 [dynamo] Add is_ support for Tensors, force get_fake_value to reuse previously computed example_value if available (#111565)
Use FakeTensor id match as equivalent to object identity match

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111565
Approved by: https://github.com/ezyang
2023-10-21 13:56:30 +00:00
Jon Chuang
344fc98991 [dynamo] fix: SetVariable should test Tensor identity based example_value FakeTensor, not fx.Node (#111696)
The FX node changes after an in-place op; the FakeTensor remains the same.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111696
Approved by: https://github.com/ezyang
2023-10-21 08:49:21 +00:00
Jon Chuang
101210e2ce [dynamo] cast single-elem tensors to float and int (#111518)
Fixes https://github.com/pytorch/pytorch/issues/109538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111518
Approved by: https://github.com/ezyang
2023-10-20 22:53:58 +00:00
Jon Chuang
79529ef657 [dynamo] fix graph break when listlike of tensor contains const (#111572)
Fixes https://github.com/pytorch/pytorch/pull/111557#discussion_r1365620968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111572
Approved by: https://github.com/voznesenskym, https://github.com/lezcano
2023-10-19 19:51:28 +00:00
Oguz Ulgen
4e310fd875 [Autograd] Track when mutations are for triton kernels (#111500)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111500
Approved by: https://github.com/bdhirsh
2023-10-19 15:34:34 +00:00
Oguz Ulgen
defa0d3a2d Add a side table for triton kernels to avoid using itertools.partial (#110633)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110633
Approved by: https://github.com/jansel
2023-10-08 02:01:59 +00:00
Yanbo Liang
1b1bc08557 [Dynamo] SizeVariable can be indexed by symint (#110349)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110349
Approved by: https://github.com/williamwen42
2023-10-06 20:48:07 +00:00
PyTorch MergeBot
21019620ee Revert "[Dynamo] SizeVariable can be indexed by symint (#110349)"
This reverts commit 510ec7e3c5.

Reverted https://github.com/pytorch/pytorch/pull/110349 on behalf of https://github.com/PaliC due to breaking internal tests (check diff) ([comment](https://github.com/pytorch/pytorch/pull/110349#issuecomment-1748021641))
2023-10-05 04:42:33 +00:00
Oguz Ulgen
baa9af155e Add more tests for native triton kernels (#110486)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110486
Approved by: https://github.com/jansel
ghstack dependencies: #110403
2023-10-04 18:26:45 +00:00
Oguz Ulgen
f04b1a0d27 [AOTInductor] Implement autograd eager backend for native triton kernels (#110403)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110403
Approved by: https://github.com/zou3519, https://github.com/bdhirsh
2023-10-04 17:56:56 +00:00
Yanbo Liang
510ec7e3c5 [Dynamo] SizeVariable can be indexed by symint (#110349)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110349
Approved by: https://github.com/williamwen42
2023-10-04 03:20:18 +00:00
cdzhan
175b626216 Enable torch.promote_types in Dynamo tracing (#110358)
Fixes #109508

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110358
Approved by: https://github.com/Skylion007
2023-10-02 15:20:36 +00:00
Oguz Ulgen
f7ba3e85e2 [Dynamo] Add functional triton kernel wrapper (#110185)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110185
Approved by: https://github.com/jansel, https://github.com/zou3519, https://github.com/bdhirsh
ghstack dependencies: #109623
2023-09-30 04:20:20 +00:00
Oguz Ulgen
2d50a30d77 [Dynamo] Add native support for Triton Kernels to Dynamo (#109623)
This PR adds native support to Dynamo to detect Triton kernels and
create an FX graph node out of them. AOT eager and inductor modes will
be supported in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109623
Approved by: https://github.com/jansel
2023-09-29 15:49:18 +00:00
Yukio Siraichi
6f48d872d0 Re-land: Break graph on manual_seed. (#109109)
Re-landing: #108647 (old #107594)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109109
Approved by: https://github.com/lezcano
2023-09-28 15:28:40 +00:00
Michael Voznesensky
e4350d6d4e Functools partial support in dynamo (#108846)
The strategy for supporting functools partials is relatively straightforward.

There are 2 cases we need to support:

**1) Functools partials as input**
In this case, we are first seeing the functools partial and it is guaranteed to have a source. As such, the args, keywords, and func of the functools partial are passed through VariableBuilder. As this is the first time we are seeing these objects (as it is an input), we re-enter VariableBuilder with a source referencing the args, keywords, and func as attributes of the input to produce:

- func: A callable VariableTracker (UDF, TorchVariable, etc) depending on the value of `func`
- args: List[VariableTracker] - note, not ListVariableTracker!
- keywords: Dict[str, VariableTracker]

A major benefit of this structure is that it very elegantly matches the args to `call_function`.

We then compose a FunctoolsPartialVariable from the VariableTrackers made above.

**2) Functools partials created within compile**
In this case, we already have all the args as known VTs, and thus just compose a FunctoolsPartialVariable as we do for case (1).

For both (1) and (2), we propagate all guards from the func, args, and keyword VTs to the FunctoolsPartialVariable; a minimal sketch of both cases follows below.
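
A minimal sketch (assumed, not from the PR) exercising both cases:

```python
import functools
import torch

def add(x, y):
    return x + y

inc = functools.partial(add, y=1)            # case 1: partial passed as input

@torch.compile(backend="eager", fullgraph=True)
def f(x, fn):
    g = functools.partial(add, y=2)          # case 2: partial created in-graph
    return fn(x) + g(x)

print(f(torch.ones(2), inc))
```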

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108846
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-09-09 17:25:02 +00:00
PyTorch MergeBot
8caaa4f4cd Revert "Re-land: Break graph on manual_seed. (#108647)"
This reverts commit c887309437.

Reverted https://github.com/pytorch/pytorch/pull/108647 on behalf of https://github.com/huydhn due to Ouch, we are hit again by another internal import error from https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L205-L206 ([comment](https://github.com/pytorch/pytorch/pull/108647#issuecomment-1712230103))
2023-09-08 21:18:00 +00:00
Yukio Siraichi
c887309437 Re-land: Break graph on manual_seed. (#108647)
Trying to re-land #107594.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108647
Approved by: https://github.com/eellison
2023-09-07 12:52:38 +00:00
PyTorch MergeBot
48286d34a4 Revert "Break graph on manual_seed. (#107594)"
This reverts commit 6ad5568cbc.

Reverted https://github.com/pytorch/pytorch/pull/107594 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it has an import issue that breaks internal code ([comment](https://github.com/pytorch/pytorch/pull/107594#issuecomment-1705584405))
2023-09-04 18:00:37 +00:00
Yanbo Liang
9862c7196b [Dynamo] SetVariable supports contains (#108189)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108189
Approved by: https://github.com/voznesenskym
2023-08-31 04:28:49 +00:00
Yukio Siraichi
6ad5568cbc Break graph on manual_seed. (#107594)
Fix: #107187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107594
Approved by: https://github.com/eellison
2023-08-30 17:24:11 +00:00
PyTorch MergeBot
4e47ea5131 Revert "Break graph on manual_seed. (#107594)"
This reverts commit 6c28de2437.

Reverted https://github.com/pytorch/pytorch/pull/107594 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it seems to cause failures in trunk on inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCUDA::test_comprehensive_uniform_cuda_float, likely a landrace ([comment](https://github.com/pytorch/pytorch/pull/107594#issuecomment-1697783965))
2023-08-29 16:38:01 +00:00
Yukio Siraichi
6c28de2437 Break graph on manual_seed. (#107594)
Fix: #107187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107594
Approved by: https://github.com/eellison
2023-08-29 12:59:57 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all of it into core.

In the next commits, I do a number of things in this order:
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend over backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
Yanbo Liang
6560750d08 [Dynamo] Support list indexed by constant tensor (#105509)
Fixes #104092

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105509
Approved by: https://github.com/eellison
2023-07-20 20:14:04 +00:00
kshitij12345
e137ac6c59 [dynamo][torch_np] support linalg, random and fft module (#105320)
Support tracing through `np.linalg` with `torch_np` installed. Will update with other modules if this approach makes sense.

TODO:
* [x] Add test for `fft` and `random`.

Fixes https://github.com/pytorch/pytorch/issues/105269

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105320
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-07-19 11:06:37 +00:00
kshitij12345
671a21926f [torch_np] update test to use ones_like instead of empty_like (#105453)
This test fails locally (probably because deterministic mode is not on by default).

We replace the use of `empty_like` with `ones_like`, as this test doesn't need `empty_like`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105453
Approved by: https://github.com/lezcano
2023-07-18 13:13:11 +00:00
Animesh Jain
88aa51fe85 [dynamo] Support defaults for namedtuples (#105341)
Fixes https://github.com/pytorch/pytorch/issues/103008

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105341
Approved by: https://github.com/jansel
2023-07-17 23:52:57 +00:00
Mengwei Liu
fb376f80a2 [retry][dynamo][numpy] Add support for np.dtype (#105034)
Original PR: #103546

Trying to support numpy function call in dynamo, with numpy dtype as argument.

For example:

```
def fn(x: int):
    return np.empty_like(x, dtype=np.float64)
```

This currently doesn't work because `NumpyVariable` doesn't implement `as_proxy()`. The idea in `as_proxy()` for now is to convert `np.float64` and other np.<dtype> values into `str` and then feed that into the corresponding `torch_np` method. The assumption here is that all `torch_np` methods that take a `dtype` kwarg can also take a `str` as `dtype`; this assumption holds for `numpy`.

For the previous example, we convert `np.float64` to `"float64"` in `as_proxy()` and then feed it into the `torch_np.empty_like()` method.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105034
Approved by: https://github.com/voznesenskym
2023-07-14 21:36:36 +00:00
Animesh Jain
9647a251cb [dynamo] Dataclass variables with default field (#104840)
The main complexity comes from the __init__ function of Dataclass variables, which looks something like this:

```
[2023-07-10 05:01:29,548] torch._dynamo.symbolic_convert: [DEBUG] INLINING <code object __init__ at 0x7f7015154450, file "<string>", line 2>
  3           0 LOAD_FAST                1 (b)
              2 LOAD_FAST                0 (self)
              4 STORE_ATTR               0 (b)

  4           6 LOAD_FAST                2 (named_tensors)
              8 LOAD_DEREF               0 (_HAS_DEFAULT_FACTORY)
             10 IS_OP                    0
             12 POP_JUMP_IF_FALSE       20
             14 LOAD_DEREF               1 (_dflt_named_tensors)
             16 CALL_FUNCTION            0
             18 JUMP_FORWARD             2 (to 22)
        >>   20 LOAD_FAST                2 (named_tensors)
        >>   22 LOAD_FAST                0 (self)
             24 STORE_ATTR               1 (named_tensors)
             26 LOAD_CONST               0 (None)
             28 RETURN_VALUE
```

There are multiple issues:
* The VariableBuilder call in functions.py was wrong: we were calling *options as args.
* We were not setting a source while tracking the new object. This led to no source for the Dataclass variable, which has some new variables in its closures, as seen in the above bytecode.
* There is an IS_OP in the above bytecode, which brings more cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104840
Approved by: https://github.com/jansel
2023-07-13 01:25:57 +00:00
PyTorch MergeBot
f01deb23d5 Revert "[dynamo][numpy] Add support for np.dtype (#103546)"
This reverts commit 0710791929.

Reverted https://github.com/pytorch/pytorch/pull/103546 on behalf of https://github.com/voznesenskym due to Failed on bench, unclear why bench test did not run on CI ([comment](https://github.com/pytorch/pytorch/pull/103546#issuecomment-1631203461))
2023-07-11 17:23:11 +00:00
Mengwei Liu
0710791929 [dynamo][numpy] Add support for np.dtype (#103546)
## Problem

Trying to support numpy function call in dynamo, with numpy dtype as argument.

For example:

```
def fn(x: int):
    return np.empty_like(x, dtype=np.float64)
```

## Solution

This currently doesn't work because `NumpyVariable` doesn't implement `as_proxy()`. The idea in `as_proxy()` for now is to convert `np.float64` and other np.<dtype> values into `torch.dtype` and then feed that into the corresponding `torch_np` method.

For the previous example, we convert `np.float64` to `torch.float64` in `as_proxy()` and then feed it into the `torch_np.empty_like()` method.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103546
Approved by: https://github.com/ezyang
2023-07-11 06:29:15 +00:00
Yukio Siraichi
40b8d10d5e Re-land: Turn translation validation on for tests and accuracy runs by default. (#104467)
Re-landing: #103611

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104467
Approved by: https://github.com/malfet
2023-07-05 19:01:50 +00:00
PyTorch MergeBot
a2a8b4d415 Revert "Turn translation validation on for tests and accuracy runs by default. (#103611)"
This reverts commit e311bed2a8.

Reverted https://github.com/pytorch/pytorch/pull/103611 on behalf of https://github.com/malfet due to Broke inductor tests ([comment](https://github.com/pytorch/pytorch/pull/103611#issuecomment-1614850276))
2023-06-30 15:54:18 +00:00
Yukio Siraichi
e311bed2a8 Turn translation validation on for tests and accuracy runs by default. (#103611)
This PR turns translation validation on by default for tests and accuracy benchmark
runs. It also installs Z3 on CI.

The main changes are:

- Add `--no-translation-validation` as an option in _test/run_tests.py_
    - Set `PYTORCH_TEST_WITH_TV` environment variable
- Add `TEST_WITH_TV` variable in _torch/testing/_internal/common_utils.py_
- Turn translation validation on for accuracy benchmarks in _benchmarks/dynamo/common.py_
- Add Z3 installation on CI scripts

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103611
Approved by: https://github.com/ezyang
2023-06-30 01:32:21 +00:00
Edward Z. Yang
bc6ec97e02 Switch dynamic_shapes to True by default (#103597)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103597
Approved by: https://github.com/voznesenskym
2023-06-15 15:16:20 +00:00
Mengwei Liu
96c23fe212 [dynamo][numpy] Add support for builtin functions (#103457)
In order to be able to run stuff like:
```
def f(x):
    a = x.numpy()
    return a + a
```
This PR adds a branch in `BuiltinVariable` to handle the `NumpyNdarrayVariable` case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103457
Approved by: https://github.com/ezyang
2023-06-15 09:18:45 +00:00
Edward Z. Yang
3596a853b4 Always apply new_empty special case in Dynamo (#103378)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103378
Approved by: https://github.com/anijain2305
2023-06-13 19:49:35 +00:00
Mengwei Liu
2eac8bd2b8 [dynamo][numpy] Support ndarray methods (#97537)
This PR adds universal support for ndarray methods. After #100839, each `NumpyNdarrayVariable` should wrap a `torch.Tensor`. This PR adds a `numpy_method_wrapper`, which converts the `torch.Tensor` to a `torch_np.ndarray` and then calls the numpy ndarray method. We then also try to return a `torch.Tensor` (returning the value as-is if it is not ndarray-like); see the sketch below.
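
A usage sketch (assumed, not from the PR) of the kind of method call this enables:

```python
import torch

@torch.compile(backend="eager")
def f(x):
    a = x.numpy()                 # NumpyNdarrayVariable wrapping a torch.Tensor
    return a.reshape(-1).mean()   # ndarray methods traced via the wrapper

print(f(torch.arange(6.0).reshape(2, 3)))
```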

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97537
Approved by: https://github.com/ezyang
2023-06-12 17:21:31 +00:00
Chien-Chin Huang
fff5daf3ee [Dynamo] Support methods of NamedTuple (#103217)
This PR adds support for calling NamedTuple methods (https://github.com/pytorch/pytorch/issues/91662); a usage sketch follows below.
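
A usage sketch (assumed, not from the PR, and assuming `_replace` is among the supported methods):

```python
from collections import namedtuple
import torch

Point = namedtuple("Point", ["x", "y"])

@torch.compile(backend="eager", fullgraph=True)
def f(p):
    q = p._replace(x=p.x + 1)    # NamedTuple method call inside the graph
    return q.x * q.y

print(f(Point(torch.ones(2), torch.full((2,), 3.0))))
```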

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103217
Approved by: https://github.com/williamwen42
2023-06-10 00:01:40 +00:00
shibo19
c24b61bc20 Enable torch._C._get_privateuse1_backend_name in Dynamo tracing (#103141)
Fixes https://github.com/pytorch/pytorch/issues/103125
torch._C._get_privateuse1_backend_name() used to cause a graph break, so I added it to the allowed functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103141
Approved by: https://github.com/yanboliang
2023-06-09 09:19:33 +00:00
Bin Bao
39bf86ae90 [dynamo] Support OrderedDict constructor with kwargs (#103192)
Summary: Solves an issue reported in https://github.com/pytorch/pytorch/issues/102878.
The solution follows the example in https://github.com/pytorch/pytorch/pull/98660.
It only solves the problem for the standard OrderedDict; there is another
problem if we use a user-defined CustomDict.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103192
Approved by: https://github.com/yanboliang
2023-06-08 12:14:21 +00:00
fduwjj
52e310f7a8 Enable torch.nn.init._calculate_correct_fan in dynamo tracing (#103182)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103182
Approved by: https://github.com/yanboliang
2023-06-08 06:13:49 +00:00
fduwjj
c454534d25 Enable torch.get_autocast_gpu_dtype in Dynamo tracing (#103166)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103166
Approved by: https://github.com/williamwen42, https://github.com/yanboliang
2023-06-07 21:31:45 +00:00
fduwjj
b5021ba981 Enable torch.is_complex in Dynamo tracing (#103154)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103154
Approved by: https://github.com/yanboliang
2023-06-07 20:56:46 +00:00
Mengwei Liu
c304fddf68 [dynamo][numpy] Support graph break for numpy ndarray (#100839)
Issue: #93684

In previous PRs #95849 and #99560, we redirected `numpy.*` and `<tensor>.numpy()` calls to `torch_np.*` methods and attributes by creating `NumpyNdarrayVariable` for those calls.

We need to handle `NumpyNdarrayVariable` when a graph break happens.

This PR does two things:
1. In `codegen.py`, we make sure we can reconstruct the value wrapped by `NumpyNdarrayVariable` as a `torch_np.ndarray` on the stack whenever we recompile the subgraph.
2. In `builder.py`, we can wrap the value as a `NumpyNdarrayVariable` and save it as a graph input.

-----

Starting from commit 6:

## A new design for supporting numpy in dynamo

In short, the core concept doesn't change: we still convert `numpy` API calls to `torch_np` API calls. However, instead of wrapping a `torch_np.ndarray` in `NumpyNdarrayVariable`, the new design wraps a `torch.Tensor`.

The reason for doing this change is because we need to keep `torch.Tensor` everywhere in the captured graph, so that it works well with the backend of dynamo. See discussions in https://github.com/Quansight-Labs/numpy_pytorch_interop/issues/142 for details.

### Flow
This is an example showing how do we think about dynamo working on a simple function:
```python
def f(x: torch.Tensor, y: torch.Tensor):
    a, b = x.numpy(), y.numpy()
    c = np.add(a, b)
    return torch.from_numpy(c)
```
```

              +------------+             +------------+
 torch.Tensor |            |numpy.ndarray|            |
 -------------- .numpy()   --------------|            |
              |            |             |            |             +------------------+
              +------------+             | numpy.add  |numpy.ndarray|                  |torch.Tensor
              +------------+             |            --------------| torch.from_numpy --------------
 torch.Tensor |            |numpy.ndarray|            |             |                  |
 -------------- .numpy()   --------------|            |             +------------------+
              |            |             |            |
              +------------+             +------------+

              +------------+             +----------------+
 torch.Tensor |            |torch.Tensor |                |
 -------------- .detach()  --------------|                |
              |            |             |                |                +----------------+            +------------+
              +------------+             |                |torch_np.ndarray|                |torch.Tensor|            |torch.Tensor
                                         | torch_np.add   -----------------| util.to_tensor -------------| .detach()  --------------
              +------------+             |                |                |                |            |            |
 torch.Tensor |            |torch.Tensor |                |                +----------------+            +------------+
 -------------- .detach()  --------------|                |
              |            |             |                |
              +------------+         |   +----------------+                                   |
                                     |                       wrapper on torch_np.add          |
                                     +--------------------------------------------------------+
```

### Approach

`torch_np` APIs can take both `torch_np.ndarray` and `torch.Tensor`. What we need to do is have a wrapper for these APIs that converts the return value back to `torch.Tensor`. This way only the wrapper shows up in the captured graph, with `torch.Tensor`s as input and `torch.Tensor` as output.

If we have a graph break, or we've traced to the end of the program, we need to inspect all the `NumpyNdarrayVariable`s on the stack and convert them back to `numpy.ndarray`, to make sure the compiled version still behaves the same as the eager version.

### Examples
Here's an example of the graph generated:

```python
def fn(x: np.ndarray, y: np.ndarray):
    a = x.real
    b = y.real
    torch._dynamo.graph_break()
    return np.add(a, 1), np.add(b, 1)
```

Graph generated:

```
[2023-05-16 10:31:48,737] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_0 <eval_with_key>.0
opcode         name            target                                                      args                    kwargs
-------------  --------------  ----------------------------------------------------------  ----------------------  --------
placeholder    l_x_            L_x_                                                        ()                      {}
placeholder    l_y_            L_y_                                                        ()                      {}
call_function  from_numpy      <built-in method from_numpy of type object at 0x12b1fdc80>  (l_x_,)                 {}
call_function  from_numpy_1    <built-in method from_numpy of type object at 0x12b1fdc80>  (l_y_,)                 {}
call_function  attr_wrapper    <function attr_wrapper at 0x12e8693a0>                      (from_numpy, 'real')    {}
call_function  attr_wrapper_1  <function attr_wrapper at 0x12e8693a0>                      (from_numpy_1, 'real')  {}
output         output          output                                                      ((),)                   {}

[2023-05-16 10:31:48,908] torch._dynamo.output_graph.__graph: [DEBUG] TRACED GRAPH
 __compiled_fn_2 <eval_with_key>.1
opcode         name           target                                                      args                             kwargs
-------------  -------------  ----------------------------------------------------------  -------------------------------  --------
placeholder    l_a_           L_a_                                                        ()                               {}
placeholder    l_b_           L_b_                                                        ()                               {}
call_function  from_numpy     <built-in method from_numpy of type object at 0x12b1fdc80>  (l_a_,)                          {}
call_function  from_numpy_1   <built-in method from_numpy of type object at 0x12b1fdc80>  (l_b_,)                          {}
call_function  wrapped_add    <Wrapped function <original add>>                           (from_numpy, 1)                  {}
call_function  wrapped_add_1  <Wrapped function <original add>>                           (from_numpy_1, 1)                {}
output         output         output                                                      ((wrapped_add, wrapped_add_1),)  {}

```
### Changes

* `codegen.py`: reconstruct `numpy.ndarray` from `NumpyNdarrayVariable` by adding bytecode to call `utils.to_numpy_helper()`.
* `output_graph.py`: get rid of legacy code that does exactly what `codegen.py` does, but which only handled the return case and not the graph break case.
* `utils.py`: added helpers to convert `numpy.ndarray` to `torch.Tensor` and vice versa. Also added a wrapper class that takes in a function; in `__call__` it calls the function and converts its output to `torch.Tensor` (or a list of them).
* `builder.py`: add a method to wrap `numpy.ndarray` graph inputs into `NumpyNdarrayVariable`, by calling `torch.from_numpy` in the proxy.
* `misc.py`: `numpy` API calls go into `NumpyVariable`, and we find the function with the same name in the `torch_np` module, then wrap it with the wrapper defined in `utils.py`.
* `tensor.py`, `torch.py`: proxy `tensor.numpy()` to be `torch.detach()` but wrap it with `NumpyNdarrayVariable`. Similarly, `torch.from_numpy()` -> `torch.detach()` but wrapped with `TensorVariable`. In `NumpyNdarrayVariable`, do the similar `torch_np.ndarray` to `torch.Tensor` wrapping for attributes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100839
Approved by: https://github.com/ezyang
2023-06-03 00:54:25 +00:00
Will Constable
e7cc41772d Add dynamo collections.deque support (#102412)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102412
Approved by: https://github.com/jansel, https://github.com/voznesenskym
2023-05-31 03:54:20 +00:00
Will Constable
c06d33ce43 Add dynamo itertools.combinations support (#102379)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102379
Approved by: https://github.com/jansel
2023-05-26 22:48:24 +00:00
Wanchao Liang
d40f4f12f6 [dynamo] add itertools.chain support (#102247)
This PR adds `itertools.chain` support to Dynamo; a usage sketch follows below.
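
A minimal usage sketch (assumed, not from the PR):

```python
import itertools
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(xs, ys):
    total = torch.zeros_like(xs[0])
    for t in itertools.chain(xs, ys):
        total = total + t
    return total

print(f([torch.ones(2)] * 2, [torch.full((2,), 3.0)]))
```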

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102247
Approved by: https://github.com/jansel
2023-05-25 21:26:09 +00:00
Michael Lazos
2434a205de Support unary not on lists (#102210)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102210
Approved by: https://github.com/anijain2305
2023-05-25 02:45:36 +00:00
Michael Lazos
23dbdd900f Full default dict support in dynamo (#102202)
Allows arbitrary default dict factories and construction of a default dict in a compiled function. This is needed for the parameter-grouping function in `torch/utils/_foreach_utils.py` used by the foreach optimizer; a simplified sketch follows below.
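
A minimal sketch (assumed; `group_by_dtype` is a simplified stand-in for the foreach grouping helper):

```python
import collections
import torch

@torch.compile(backend="eager", fullgraph=True)
def group_by_dtype(params):
    groups = collections.defaultdict(list)   # arbitrary default factory
    for p in params:
        groups[p.dtype].append(p)
    return groups

out = group_by_dtype([torch.ones(1), torch.ones(1, dtype=torch.float64)])
print({k: len(v) for k, v in out.items()})
```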

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102202
Approved by: https://github.com/yanboliang
2023-05-25 01:41:38 +00:00
Yanbo Liang
369a256381 [Dynamo] Remove cross import in dynamo unit tests (#100851)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100851
Approved by: https://github.com/jansel
2023-05-11 17:07:25 +00:00
PyTorch MergeBot
d98d95fb9f Revert "[Dynamo] Remove cross import in dynamo unit tests (#100851)"
This reverts commit c4bbeb5b8a.

Reverted https://github.com/pytorch/pytorch/pull/100851 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/100851#issuecomment-1540646623))
2023-05-09 18:30:01 +00:00
Yanbo Liang
c4bbeb5b8a [Dynamo] Remove cross import in dynamo unit tests (#100851)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100851
Approved by: https://github.com/jansel
2023-05-08 20:16:57 +00:00
Larry Liu
f5853342ea [dynamo][numpy] Handle return value being numpy ndarray (#99560)
On top of #95849, this PR handles the special case where the return value is a numpy ndarray.

Consider the following example:

```
def f(x: torch.Tensor) -> np.ndarray:
    a = x.numpy()
    return a.T
```
With the previous PR this would error out, because we translate `a.T` into an access of `torch_np.ndarray.T`, whose result is also a `torch_np.ndarray` rather than the annotated `np.ndarray`.

This PR handles this case by conditionally converting a `torch_np.ndarray` to an `np.ndarray` before returning, to match the original behavior.

The compiled version will be:

```
def f(x):
    ___tmp_0 = __compiled_fn_0(x)
    if isinstance(___tmp_0, torch_np.ndarray):
        return ___tmp_0.tensor.numpy()
    else:
        return ___tmp_0
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99560
Approved by: https://github.com/jansel, https://github.com/yanboliang
2023-04-27 16:18:35 +00:00
Larry Liu
687afeb686 [dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes (#95849)
Issue: #93684

# Problem

Reduce graph breaks when dynamo compiles python functions containing numpy functions and ndarray operations.

# Design (as I know it)

* Use `torch_np.ndarray` (a wrapper around a tensor) to back a `VariableTracker`: `NumpyTensorVariable`.
* Translate all attribute and method calls on ndarray to their `torch_np.ndarray` equivalents.

This PR adds `NumpyTensorVariable` and supports:
1.  tensor to ndarray, ndarray to tensor
2. numpy functions such as numpy.meshgrid()
3. ndarray attributes such as `itemsize`, `stride`

The next PR will handle returning `np.ndarray` and add support for ndarray methods.
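
A hedged sketch of the kind of function these changes aim to trace without graph breaks (using `torch.compile` as the entry point is an assumption for illustration):

```
import numpy as np
import torch

@torch.compile
def f(x: torch.Tensor) -> torch.Tensor:
    a = x.numpy()                        # 1. tensor -> ndarray
    grid_x, grid_y = np.meshgrid(a, a)   # 2. numpy function call
    n = a.itemsize                       # 3. ndarray attribute
    return torch.from_numpy(grid_x) * n  # 1. ndarray -> tensor
```
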
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95849
Approved by: https://github.com/ezyang
2023-04-27 16:18:35 +00:00
Animesh Jain
3dcc7b396c [easy] iterate dict with sorted keys for accuracy checking (#99793)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99793
Approved by: https://github.com/jansel
2023-04-24 21:26:35 +00:00
Jason Ansel
f4354b2a5e [dynamo] Support dict kwargs constructor (#98660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98660
Approved by: https://github.com/yanboliang
2023-04-20 15:40:00 +00:00
Jason Ansel
baa06790f8 Unbreak torch.compile on macos (#99119)
It seems like #96980 made torch.compile() completely ignore the `backend=` arg on macOS, rendering the entire API useless even if the user wasn't using MPS tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99119
Approved by: https://github.com/msaroufim
2023-04-14 15:30:27 +00:00
Jason Ansel
0c162adfa8 [dynamo] Support callable() on user defined functions (#98662)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98662
Approved by: https://github.com/yanboliang
2023-04-11 05:43:46 +00:00
Yanbo Liang
9c7b03d51e [Dynamo] Fix bug of torch.is_floating_point & is_complex (#98393)
Fixes #95192

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98393
Approved by: https://github.com/ezyang
2023-04-05 20:58:16 +00:00
Jason Ansel
55afaa46a4 Support functools.partial and itertools.product (#98120)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98120
Approved by: https://github.com/anijain2305
2023-04-03 18:23:25 +00:00
Mark Saroufim
1b08a01361 Default to aot_eager for torch.compile on MPS (#96980)
Fixes https://github.com/pytorch/pytorch/issues/96976

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96980
Approved by: https://github.com/kulinseth, https://github.com/albanD, https://github.com/ZainRizvi
2023-03-25 14:21:39 +00:00
Yanbo Liang
12ab4f08b7 [Dynamo] No graph break on namedtuple and potential other functions (#96122)
```collections.namedtuple``` caused 40+ ```dynamo.export``` test failures in the 14k github models.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96122
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-03-07 08:00:21 +00:00
Yanbo Liang
6ca286df69 [Dynamo] Support call dict with list/tuple as input (#95928)
Fixes Meta internal use case

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95928
Approved by: https://github.com/jansel
2023-03-04 05:52:33 +00:00
Yanbo Liang
b5ff41a47a [Dynamo] No graph break on calling dict & collections.OrderedDict() (#95250)
It's common to call ```dict()``` or ```collections.OrderedDict()``` inside a ```forward``` function, so we should not graph break.

This pattern has been used in many places including:
* The use case in [torchvision](928b05cad3/torchvision/models/_utils.py (L66-L73)).
* It causes ~100 model failures (nopython=True) in the 14k github models.
* Also it hits several Meta internal use cases.
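
A minimal sketch of the pattern in question; `fullgraph=True` is used here only as an assumption to make any graph break raise instead of silently falling back:

```
import torch
from collections import OrderedDict

class M(torch.nn.Module):
    def forward(self, x):
        out = OrderedDict()           # previously caused a graph break
        out["feat"] = torch.relu(x)
        return out

compiled = torch.compile(M(), fullgraph=True)
print(compiled(torch.randn(3)))
```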

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95250
Approved by: https://github.com/jansel
2023-02-23 09:03:07 +00:00
Yanbo Liang
4f257a507c [Dynamo] Support Python builtin sorted function (#94949)
Fixes #94750

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94949
Approved by: https://github.com/jansel, https://github.com/Skylion007
2023-02-16 21:27:11 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases where the rewrite would change the semantics are kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
David Berard
3e6978172e [dynamo] Handle general tensor attributes with a getattr proxy node (#91840)
**Background:** Before this PR, support in dynamo for tensor attributes (e.g. `x.H`, `x.T`, ...) needed to be implemented individually, one by one. This could potentially lead to errors, e.g. if the implementation in [variables/tensor.py](21c7c7c72f/torch/_dynamo/variables/tensor.py (L160)) differed from the behavior of a direct call to the attribute. For attributes that were not special-cased in tensor.py, dynamo tracing would fail. This PR adds generic support for tensor attributes that return tensors, without needing to handle them specially. (Notably, this covers x.real and x.imag, which previously weren't supported.)

**In this PR:** We directly create a proxy for a `"call_function"` node with `target=getattr` and feed it into wrap_fx_proxy, which produces a TensorVariable for the returned attribute.

This also removes the implementations for H, T, mH, mT, which were broken (previously `torch.relu(x.T)` would fail). They now fall back to this default implementation (under which `torch.relu(x.T)` passes).
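
A minimal FX-only sketch (outside dynamo) of how an attribute access surfaces as a `call_function` node with `target=getattr`; the printed node names below are approximate:

```
import torch
import torch.fx as fx

def f(x):
    return torch.relu(x.T)

gm = fx.symbolic_trace(f)
print(gm.graph)
# x.T appears roughly as:
#   %getattr_1 = call_function[target=builtins.getattr](args = (%x, T))
#   %relu      = call_function[target=torch.relu](args = (%getattr_1,))
```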

**Further context**:

* Ed's original suggestion in [90463](https://github.com/pytorch/pytorch/pull/90463#discussion_r1043398340) was to use `torch.Tensor.H.__get__(x)`. I wasn't able to get this to work; fx compilation fails with `getset_descriptor does not have attribute __module__`. Basically, the `__module__` attribute, which is available on most Python attributes, is not available on `getset_descriptor` objects. (I.e., these are implemented in C++ as attributes on torch.Tensor, so they don't obey some assumptions made by fx.)
* Although tensor attributes and methods (like `x.relu()`) both go through this, this PR should only handle attributes (e.g. see the `"getset_descriptor"` check in variables/tensor.py). Methods are already handled by GetAttrVariable.
* Prior to this PR, we already returned GetAttrVariables for unsupported attrs: the parent caller would catch the NotImplementedError and fall back to returning a GetAttrVariable. But if this GetAttrVariable was ever passed into a torch.\* function (as it could quite possibly be, since most of these attrs are tensors), it would fail because its proxy node would be missing an [example_value](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/utils.py#L1017). So: before, for some tensor x, `x.real` would work fine, but `torch.relu(x.real)` would fail.

**Testing**: added tests in test_misc.py for x.real, x.imag, x.T, x.real.T.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91840
Approved by: https://github.com/ezyang
2023-02-01 22:34:03 +00:00
David Berard
58acab4616 [dynamo] support [tensor].type(torch.FloatTensor) (#93043)
For some tensor x, x.type(torch.FloatTensor) essentially does the same thing as x.to(torch.float). x.type can be called with at least 3 types of input:
* a string "torch.FloatTensor"
* a dtype torch.float
* a tensor type torch.FloatTensor

The third option (torch.FloatTensor) fails in fx, because fx cannot trace torch.FloatTensor objects. So this PR replaces the torch.FloatTensor type with the string "torch.FloatTensor".
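
A small sketch of the three call forms, which are equivalent in eager mode (the final assert is an assumption that holds for a float32 input):

```
import torch

x = torch.randn(3)
a = x.type("torch.FloatTensor")  # string form: fx-traceable
b = x.type(torch.float)          # dtype form: fx-traceable
c = x.type(torch.FloatTensor)    # tensor-type form: fx cannot trace the
                                 # torch.FloatTensor object, so dynamo
                                 # substitutes the string form above
assert a.dtype == b.dtype == c.dtype == torch.float32
```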

Why not fix this in fx? Well, it's possible, but I'm not sure of a nice way to do it. We would want to update [torch.fx.node.BaseArgumentTypes](d88bc38b0c/torch/fx/node.py (L17)) to contain torch.FloatTensor etc. We could hard-code a list of tensor types there (the types vary depending on build type, e.g. whether or not cuda tensors are available), but that's not great in case our hardcoded list differs from the actual list registered by python_tensor.cpp. Another option is to dynamically populate the list of types with `Union[tuple(...)]` and fill the tuple with `torch._tensor_classes` (which is directly populated by python_tensor.cpp), but apparently this breaks most typecheckers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93043
Approved by: https://github.com/jansel
2023-01-27 21:27:13 +00:00
Yanbo Liang
c9ce0e63e8 [Dynamo] Support context wrapping(e.g, torch.no_grad) on nested functions w/o closure (#92922)
Fixes 14k github models: https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_ELEKTRONN_elektronn3.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92922
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-01-26 04:23:35 +00:00
Will Constable
e665f03ad8 Fix dynamo func defaults handling for torch.device, size, dtype (#92880)
Previously, these torch types were not handled in the wrap_bound_arg
handler.

Add a unit test and verify it is fixed.

Fixes #91084

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92880
Approved by: https://github.com/ezyang
2023-01-24 21:50:43 +00:00
Yanbo Liang
0ab4ab9f8d [Dynamo] Fix calling UserDefinedObject.func should pass self object (#92050)
Fixes #90834

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92050
Approved by: https://github.com/jansel
2023-01-21 05:47:01 +00:00
Will Constable
6cfaa92239 Handle tensor default func args when inlining (#90575)
Handle tensor default func/method args when inlining

Previously, when inlining a function, its default arguments were only wrapped with VariableTrackers if non-tensor. Now, tensor default args are also handled by adding them to the parent InstructionTranslator as an attribute (sketched below).

- also patches up a missing source in nnmodule call_function, needed to properly guard on a default arg in its methods
- adds a new 'DefaultsSource' type which guards either a `__defaults__` or `__kwdefaults__` entry on a function
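
A hedged sketch of the situation this handles; the function names and the module-level tensor default are hypothetical:

```
import torch

_DEFAULT_BIAS = torch.ones(3)  # hypothetical tensor default

def add_bias(x, bias=_DEFAULT_BIAS):  # tensor default arg
    return x + bias

@torch.compile
def outer(x):
    # Inlining add_bias must wrap `bias` in a VariableTracker and guard it
    # via a DefaultsSource on add_bias.__defaults__.
    return add_bias(x)
```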

Fixes #90361  https://github.com/pytorch/torchdynamo/issues/1968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90575
Approved by: https://github.com/voznesenskym
2023-01-12 05:04:18 +00:00
Yanbo Liang
490c1cf650 [Dynamo] Support torch.get_default_dtype (#89790)
Fixes https://github.com/pytorch/torchdynamo/issues/1930

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89790
Approved by: https://github.com/soumith
2022-12-19 04:14:11 +00:00
Yanbo Liang
5c133c5744 [Dynamo] Supports two torch.distributed.* functions (#90683)
Fixes Meta internal use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90683
Approved by: https://github.com/jansel
2022-12-13 19:06:38 +00:00
Michael Lazos
76f440f20a [dynamo] Rewrite inplace addcdiv and inplace add (#90330)
Rewrite inplace addcdiv into a div, a mul, and an inplace add to avoid a graph break.
Rewrite inplace add into a mul and an inplace add to avoid a graph break.

Needed to close optimizer graph breaks
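
A hedged sketch of the addcdiv decomposition (this checks the mathematical identity; it is not the literal dynamo rewrite code):

```
import torch

p = torch.zeros(3)
a, b = torch.randn(3), torch.rand(3) + 1.0
v = 0.5

ref = p.clone().addcdiv_(a, b, value=v)      # fused inplace op
rewritten = p.clone().add_(a.div(b).mul(v))  # div + mul + inplace add
assert torch.allclose(ref, rewritten)
```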

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90330
Approved by: https://github.com/jansel
2022-12-08 21:19:23 +00:00
Michael Lazos
2d9267ba30 [dynamo] Rewrite addcdiv in dynamo to its constituent ops (#90227)
This avoids a graph break when the `value` argument is used, which fixes a graph break in variants of the Adam and Adagrad optimizers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90227
Approved by: https://github.com/jansel
2022-12-06 05:08:44 +00:00
Yanbo Liang
3916d729c8 [Dynamo] tensor.type() should return tensor types with CPU and GPU variants (#90021)
Fix errors from [7k github models](https://github.com/pytorch/torchdynamo/issues/1884)
```
Traceback (most recent call last):
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1062, in get_fake_value
    return wrap_fake_exception(
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 739, in wrap_fake_exception
    return fn()
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1063, in <lambda>
    lambda: run_node(tx.output, node, args, kwargs, nnmodule)
  File "/scratch/ybliang/work/repos/pytorch/torch/_dynamo/utils.py", line 1112, in run_node
    raise RuntimeError(
RuntimeError: Failed running call_function <function einsum at 0x7fd8f246a4c0>(*('i,j->ij', FakeTensor(FakeTensor(..., device='meta', size=(4,)), cpu), FakeTensor(FakeTensor(..., device='meta', size=(2,)), cuda:0)), **{}):
Unhandled FakeTensor Device Propagation for aten.mul.Tensor, found two different devices cpu, cuda:0
(scroll up for backtrace)
```

The root cause is that ```tensor.type()``` should return ```torch.cuda.FloatTensor``` rather than ```torch.FloatTensor``` if the tensor is on GPU.
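
A quick eager-mode illustration of the expected behavior (the CUDA branch assumes a GPU is available):

```
import torch

x = torch.randn(2)
print(x.type())             # 'torch.FloatTensor'
if torch.cuda.is_available():
    print(x.cuda().type())  # 'torch.cuda.FloatTensor' - what dynamo must report
```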

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90021
Approved by: https://github.com/jansel
2022-12-02 18:57:43 +00:00
Yanbo Liang
37e46a5035 [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
Fix bug in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884): https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_clovaai_stargan_v2.py
```
E       TypeError: 'list' object cannot be interpreted as an integer
E
E       from user code:
E          File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_clovaai_stargan_v2.py", line 335, in forward
E           idx = torch.LongTensor(range(y.size(0)))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89322
Approved by: https://github.com/jansel
2022-11-23 19:44:48 +00:00
Yanbo Liang
9eed6b7f9a [Dynamo] Several fixes on TensorVariable & TorchVariable (#89486)
This is a group of bug fixes for [7k github models](https://github.com/pytorch/torchdynamo/issues/1884); it fixes 30+ model tests.
* Support ```tensor.type()```.
* Support ```tensor.get_device()```.
* Support ```torch.nn.functional._Reduction.get_enum```.
* Support ```torch._utils._get_device_index()```.
* Fall back on ```tensor.data_ptr()```.
  * ```FakeTensor``` always returns 0.
  * Without fake tensor propagation, we ```clone``` the input tensor, so it makes no sense to track the original ```data_ptr```. And I don't think this is a very popular API.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89486
Approved by: https://github.com/jansel
2022-11-23 19:39:45 +00:00
Akshit Khurana
7006ac6ee5 [Dynamo] Fix Tensor.T trace (#88642)
Summary:

Tensor.T treated T as a GetAttr and didn't propagate "example_value".

Via https://pytorch.org/docs/stable/tensors.html#torch.Tensor.T
> If n is the number of dimensions in x, x.T is equivalent to
> x.permute(n-1, n-2, ..., 0).

Fixes pytorch/torchdynamo#1476

Test Plan:

pytest test/dynamo/test_functions.py::FunctionTests::test_T

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D41130306](https://our.internmc.facebook.com/intern/diff/D41130306)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88642
Approved by: https://github.com/tugsbayasgalan, https://github.com/yanboliang, https://github.com/jansel
2022-11-09 23:44:30 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and start passing a shape_env around. This shape_env does not get plumbed down to middle layers / backends yet; it only collects expressions from frontend invocations at the moment. We then translate these expressions into guards at the point where we take the other guards installed throughout dynamo, and add them to check_fn.

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Jason Ansel
8f71e8de7e Sync changes from pytorch/torchdynamo, enable tests (#86950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00