Animesh Jain
03e058189e
[dynamo] Support dict unpack of MutableMapping objects ( #131961 )
...
Fixes https://github.com/pytorch/pytorch/issues/128067
The basic functionality was already introduced earlier; this change just ensures
that we also support UserDefinedObjectVariable.
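For illustration, a minimal sketch (not from the PR's tests; the class and function names are made up) of the pattern this enables, i.e. `**`-unpacking a user-defined MutableMapping inside compiled code:
```python
# Assumed illustration: a user-defined MutableMapping unpacked with ** inside
# a compiled function; MyMapping and fn are made-up names.
from collections.abc import MutableMapping

import torch


class MyMapping(MutableMapping):
    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


@torch.compile(backend="eager")
def fn(x, m):
    d = {**m}  # dict unpack of a MutableMapping object
    return x + d["scale"]


fn(torch.ones(3), MyMapping({"scale": 2}))
```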
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131961
Approved by: https://github.com/williamwen42 , https://github.com/mlazos , https://github.com/yanboliang
ghstack dependencies: #131827 , #131956
2024-07-30 05:49:58 +00:00
William Wen
b6c1490cc0
[dynamo] make more unpack_var_sequence calls forced ( #132069 )
...
Fixes [T197204962](https://www.internalfb.com/intern/tasks/?t=197204962 ) (example failure: https://www.internalfb.com/intern/testinfra/diagnostics/11540474088277914.281475138576374.1722221031/ )
Added tests contain a simple repro for the observed failure (`test_map_unpack_vars`).
Also fixes https://github.com/pytorch/pytorch/issues/132044
Differential Revision: [D60420335](https://our.internmc.facebook.com/intern/diff/D60420335 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132069
Approved by: https://github.com/anijain2305
2024-07-30 02:30:08 +00:00
Chengji Yao
d47c470f47
[dynamo] implement var_getattr in UserFunctionVariable ( #130413 )
...
This PR addresses the `getattr` of UserFunctionVariable. Although this usage is uncommon, it does appear in [Megatron's code](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/tensor_parallel/layers.py#L635 ).
```python
def linear_with_grad_accumulation_and_async_allreduce(...):
    ....
    if not linear_with_grad_accumulation_and_async_allreduce.warned:
        ....
    ....

linear_with_grad_accumulation_and_async_allreduce.warned = False
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130413
Approved by: https://github.com/yanboliang
2024-07-29 08:29:59 +00:00
Xuehai Pan
918ece4f4d
[BE][Easy][11/19] enforce style for empty lines in import segments in test/dy*/ ( #129762 )
...
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501 . Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129762
Approved by: https://github.com/anijain2305
2024-07-27 17:43:53 +00:00
William Wen
2576dbbc35
[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate ( #131725 )
...
Fixes https://github.com/pytorch/pytorch/issues/112794 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131725
Approved by: https://github.com/anijain2305
ghstack dependencies: #131413 , #131716
2024-07-26 17:17:09 +00:00
William Wen
35b4de32fa
[dynamo] add itertools repeat/count bytecode reconstruction ( #131716 )
...
Also fixes bugs in the count iterator variable implementation.
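For intuition, an assumed sketch (not from the PR's tests) of why reconstruction matters: a live `itertools.count` iterator has to be rebuilt in the resume bytecode after a graph break so iteration can continue:
```python
# Assumed example; torch._dynamo.graph_break() forces the iterator to be
# reconstructed in the bytecode that resumes after the break.
import itertools

import torch


def fn(x):
    counter = itertools.count(start=2, step=3)
    torch._dynamo.graph_break()  # counter must survive across the break
    return x + next(counter)


torch.compile(fn, backend="eager")(torch.zeros(1))
```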
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131716
Approved by: https://github.com/anijain2305
ghstack dependencies: #131413
2024-07-26 17:17:09 +00:00
Yanbo Liang
e76e566cfb
[Dynamo] Support zip_longest ( #131497 )
...
Fixes #121348
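A minimal assumed example (not from the PR's tests) of `itertools.zip_longest` inside a compiled function:
```python
# Assumed example of the newly supported builtin in compiled code.
import itertools

import torch


@torch.compile(backend="eager")
def fn(x):
    pairs = list(itertools.zip_longest([1, 2, 3], [10, 20], fillvalue=0))
    return x + sum(a + b for a, b in pairs)


fn(torch.zeros(1))  # 1+10 + 2+20 + 3+0 = 36
```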
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131497
Approved by: https://github.com/mlazos , https://github.com/jansel , https://github.com/zou3519
2024-07-26 14:06:10 +00:00
William Wen
7d282d8755
[dynamo] add lazy IteratorVariable implementations for map and zip ( #131413 )
...
Fixes https://github.com/pytorch/pytorch/issues/130750 .
Repro of lazy/eager `map` discrepancy without `islice`:
```python
def fn(a, b):
    y = 1

    def f(x):
        nonlocal y
        y += 1
        return x

    l = list(zip([a, b], map(f, [1, 2, 3, 4])))
    return a + y
```
The major change is that we implement `MapVariable` and `ZipVariable` based on `IteratorVariable`. Before, `map` and `zip` were being traced by immediately unpacking the result as a `TupleVariable`, which is wrong in cases such as the example above.
`MapVariable`s are not allowed to be unpacked, while `ZipVariable`s can only be unpacked if all of their iterables can also be unpacked.
We also add new `[has_]force_unpack_var_sequence` methods to `VariableTracker` for the case where it is safe to unpack the entire sequence lazily, e.g., when building a list from a map (i.e. `list(map(f, ...))`).
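For intuition, this is the plain-Python behavior (no Dynamo) that the lazy `MapVariable`/`ZipVariable` implementation has to reproduce:
```python
# Plain-Python behavior the lazy MapVariable must match: zip stops after the
# shorter iterable, so f runs only twice and y ends up as 3, not 5.
def fn(a, b):
    y = 1

    def f(x):
        nonlocal y
        y += 1
        return x

    list(zip([a, b], map(f, [1, 2, 3, 4])))
    return a + y


print(fn(10, 20))  # 13 -- eagerly unpacking map() first would give 15
```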
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131413
Approved by: https://github.com/anijain2305
2024-07-26 10:47:38 +00:00
Yidi Wu
ffc6bf8149
[dynamo] lazily guard and specialize on the symint when used in f-string. ( #131529 )
...
Fixes https://github.com/pytorch/pytorch/issues/103602 .
This PR implements the idea from the issue above: "if someone creates a string and then ends up not using it, we would prefer to NOT have specialized." Specifically, we create a lazy variable tracker instead of a ConstantVariable when handling FORMAT_VALUE, and only when the lazy variable tracker is realized (i.e. it is actually used) do we create the ConstantVariable, so the specialization/guarding happens at realization time.
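An assumed illustration of the pattern this targets (not from the PR's tests): the f-string formats a dynamic size but is never used, so Dynamo should not need to specialize on it:
```python
# Assumed sketch: the f-string below formats a dynamic size but is never used,
# so with this change Dynamo should not guard on x.shape[0] just to build it.
import torch


@torch.compile(dynamic=True, backend="eager")
def fn(x):
    unused_msg = f"first dim is {x.shape[0]}"  # lazily tracked, never realized
    return x * 2


fn(torch.randn(4))
fn(torch.randn(8))  # ideally no recompile triggered by the unused f-string
```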
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131529
Approved by: https://github.com/ezyang
2024-07-25 16:16:34 +00:00
Animesh Jain
e2b941a1b4
[dynamo] Rename TENSOR_ALIASING to OBJECT_ALIASING. Permit OBJECT_ALIASING for dict guards ( #131480 )
...
Fixes https://github.com/pytorch/pytorch/issues/129667
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131480
Approved by: https://github.com/williamwen42
ghstack dependencies: #131347 , #131367 , #131378 , #131389 , #131405
2024-07-24 00:06:53 +00:00
Animesh Jain
6bbef2a06b
[dynamo] Support set on KeysView ( #131389 )
...
Fixes https://github.com/pytorch/pytorch/issues/129664
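A hypothetical one-liner of the newly supported pattern, i.e. building a `set` from a dict KeysView inside compiled code:
```python
# Hypothetical example (names made up): set() applied to dict.keys() in a
# compiled function.
import torch


@torch.compile(backend="eager")
def fn(x, d):
    return x + len(set(d.keys()))


fn(torch.zeros(1), {"a": 1, "b": 2})
```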
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131389
Approved by: https://github.com/mlazos
ghstack dependencies: #131347 , #131367 , #131378
2024-07-23 14:15:26 +00:00
Animesh Jain
e7c5e06772
[dynamo] Support __contains__ on __dict__ on UserDefinedClassVariable ( #131378 )
...
Fixes https://github.com/pytorch/pytorch/issues/129665
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131378
Approved by: https://github.com/mlazos
ghstack dependencies: #131347 , #131367
2024-07-23 14:15:26 +00:00
Animesh Jain
0bc5e26067
[dynamo] Support dict conversion of objects derived from MutableMapping ( #131367 )
...
Fixes - https://github.com/pytorch/pytorch/issues/129662
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131367
Approved by: https://github.com/williamwen42
ghstack dependencies: #131347
2024-07-23 14:15:20 +00:00
Animesh Jain
a944cce5b8
[dynamo] Support if callable on list ( #131347 )
...
Fixes https://github.com/pytorch/pytorch/issues/130720
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131347
Approved by: https://github.com/williamwen42 , https://github.com/mlazos
2024-07-23 14:15:15 +00:00
Alex Dennis
7d4f50de19
dynamo add support for defaultdict(set) ( #130745 )
...
Fixes #130554
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130745
Approved by: https://github.com/Skylion007
2024-07-15 22:23:33 +00:00
PyTorch MergeBot
dff9d68f18
Revert "Fix names conflict when lifting ( #129817 )"
...
This reverts commit 53cf46b8c6 .
Reverted https://github.com/pytorch/pytorch/pull/129817 on behalf of https://github.com/clee2000 due to Failing inductor/test_flex_attention.py https://github.com/pytorch/pytorch/actions/runs/9940532858/job/27478084137 74da2a467f Sorry for the churn, possibly a landrace? ([comment](https://github.com/pytorch/pytorch/pull/129817#issuecomment-2229519886 ))
2024-07-15 22:08:45 +00:00
Zhanghan Wang
53cf46b8c6
Fix names conflict when lifting ( #129817 )
...
## Bug description
When pending args that may need to be lifted [here](58f346c874/torch/_dynamo/output_graph.py (L1866) ) share the same base name, like `contiguous` and `contiguous_1`, the call into [create_graph_input](58f346c874/torch/_dynamo/output_graph.py (L2081) ) can end up creating a name ([here](58f346c874/torch/fx/graph.py (L1008) )) that overwrites an arg to be lifted, resulting in a wrong graph output.
## Reproducing
Below is a reproducible example:
```python
import logging
from typing import List

import torch
from functorch.compile import aot_module_simplified, make_boxed_func


@torch.library.custom_op("mylib::somefunc_forward", mutates_args=())
def somefunc_forward(
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    return torch.ones_like(input_)


@somefunc_forward.register_fake
def _(input_, shape, weight):
    return torch.empty_like(input_)


@torch.library.custom_op("mylib::somefunc_backward", mutates_args=())
def somefunc_backward(
    grad_output: torch.Tensor,
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    print(f"backward.{grad_output.shape=}")
    print(f"backward.{input_.shape=}")
    print(f"backward.{weight.shape=}")
    print(f"backward.{shape=}")
    assert list(weight.shape) == shape
    return torch.ones_like(weight)


@somefunc_backward.register_fake
def _(grad_output, input_, weight, shape):
    return torch.empty_like(weight)


def a_func(grad_output, input_, weight_, shape):
    return torch.ones_like(input_.sum() * weight_)


class SomeFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, normalized_shape):
        ctx.normalized_shape = normalized_shape
        input_ = input.contiguous()
        weight_ = weight.contiguous()
        output = somefunc_forward(input_, weight_, ctx.normalized_shape)
        ctx.save_for_backward(input_, weight_)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input_, weight_ = ctx.saved_tensors
        # grad_weight = a_func(grad_output, input_, weight_, ctx.normalized_shape)
        grad_weight = somefunc_backward(
            grad_output.contiguous(),
            input_,
            weight_,
            ctx.normalized_shape,
        )
        return None, grad_weight, None


class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(7))

    def forward(self, x):
        return SomeFunc.apply(x, self.weight, [7])


model = MyModel()

torch._logging.set_logs(dynamo=logging.DEBUG, aot=logging.DEBUG, graph_code=True)


def aot_print_backend(gm, sample_inputs):
    # Forward compiler capture
    def fw(gm, sample_inputs):
        print(f"----- fw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Backward compiler capture
    def bw(gm, sample_inputs):
        print(f"----- bw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Call AOTAutograd
    gm_forward = aot_module_simplified(
        gm, sample_inputs, fw_compiler=fw, bw_compiler=bw
    )
    return gm_forward


model = torch.compile(
    model,
    backend=aot_print_backend,
    dynamic=False,
)
out = model(torch.rand((128, 4, 7)))
out.mean().backward()
```
I can see log lines showing the calls into `create_graph_input`:
```log
V0629 02:08:46.839914 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous (none)
V0629 02:08:46.839998 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous_1 (none)
```
And the generated backward graph looks like:
```log
class GraphModule(torch.nn.Module):
    def forward(self, function_ctx, somefunc_forward_default: "f32[128, 4, 7]", contiguous: "f32[128, 4, 7]", contiguous_1: "f32[7]"):
        contiguous_1 = contiguous
        contiguous_2 = contiguous_1

        # No stacktrace found for following nodes
        _set_grad_enabled = torch._C._set_grad_enabled(False)

        # File: /Users/bytedance/testtorch/test_custom_op_bug.py:61 in backward, code: grad_output.contiguous(),
        contiguous: "f32[128, 4, 7]" = somefunc_forward_default.contiguous(); somefunc_forward_default = None

        # File: /opt/tiger/pytorch/torch/_library/custom_ops.py:506 in __call__, code: return self._opoverload(*args, **kwargs)
        somefunc_backward_default: "f32[7]" = torch.ops.mylib.somefunc_backward.default(contiguous, contiguous_1, contiguous_2, [7]); contiguous = contiguous_1 = contiguous_2 = None

        # No stacktrace found for following nodes
        _set_grad_enabled_1 = torch._C._set_grad_enabled(True)
        return (None, somefunc_backward_default)
```
The original code of `somefunc_backward` takes as inputs `grad_output`, `input_`, `weight`, and `shape`, where `weight` should have shape `torch.Size([7])`. However, in the graph, `contiguous_1` and `contiguous_2` are assigned from `contiguous`, which leads to the assertion failure I added in `somefunc_backward`.
## Environment
```log
Collecting environment information...
PyTorch version: 2.5.0a0+git0b7e8df
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.9.19 (main, May 6 2024, 14:39:30) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] optree==0.11.0
[pip3] torch==2.5.0a0+git0b7e8df
[pip3] torchgraph==0.0.1
[conda] numpy 2.0.0 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.5.0a0+git0b7e8df dev_0 <develop>
[conda] torchgraph 0.0.1 dev_0 <develop>
```
## How to fix?
I put in a naive fix that adds the potential args to lift into `used_names`. It touches private variables; I will clean that up if this issue makes sense to you.
@zou3519 @oulgen
Co-authored-by: rzou <zou3519@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129817
Approved by: https://github.com/zou3519
2024-07-15 18:49:12 +00:00
PyTorch MergeBot
1e897a0ca4
Revert "Fix names conflict when lifting ( #129817 )"
...
This reverts commit 74da2a467f .
Reverted https://github.com/pytorch/pytorch/pull/129817 on behalf of https://github.com/clee2000 due to broke dynamo/test_inline_inbuilt_nn_modules.py https://github.com/pytorch/pytorch/actions/runs/9940532858/job/27461141919 74da2a467f . Test passed on PR, possibly a landrace? ([comment](https://github.com/pytorch/pytorch/pull/129817#issuecomment-2228993570 ))
2024-07-15 17:09:52 +00:00
Zhanghan Wang
74da2a467f
Fix names conflict when lifting ( #129817 )
...
## Bug description
When pending args that may need to be lifted [here](58f346c874/torch/_dynamo/output_graph.py (L1866) ) share the same base name, like `contiguous` and `contiguous_1`, the call into [create_graph_input](58f346c874/torch/_dynamo/output_graph.py (L2081) ) can end up creating a name ([here](58f346c874/torch/fx/graph.py (L1008) )) that overwrites an arg to be lifted, resulting in a wrong graph output.
## Reproducing
Below is a reproducible example:
```python
import logging
from typing import List

import torch
from functorch.compile import aot_module_simplified, make_boxed_func


@torch.library.custom_op("mylib::somefunc_forward", mutates_args=())
def somefunc_forward(
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    return torch.ones_like(input_)


@somefunc_forward.register_fake
def _(input_, shape, weight):
    return torch.empty_like(input_)


@torch.library.custom_op("mylib::somefunc_backward", mutates_args=())
def somefunc_backward(
    grad_output: torch.Tensor,
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    print(f"backward.{grad_output.shape=}")
    print(f"backward.{input_.shape=}")
    print(f"backward.{weight.shape=}")
    print(f"backward.{shape=}")
    assert list(weight.shape) == shape
    return torch.ones_like(weight)


@somefunc_backward.register_fake
def _(grad_output, input_, weight, shape):
    return torch.empty_like(weight)


def a_func(grad_output, input_, weight_, shape):
    return torch.ones_like(input_.sum() * weight_)


class SomeFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, normalized_shape):
        ctx.normalized_shape = normalized_shape
        input_ = input.contiguous()
        weight_ = weight.contiguous()
        output = somefunc_forward(input_, weight_, ctx.normalized_shape)
        ctx.save_for_backward(input_, weight_)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input_, weight_ = ctx.saved_tensors
        # grad_weight = a_func(grad_output, input_, weight_, ctx.normalized_shape)
        grad_weight = somefunc_backward(
            grad_output.contiguous(),
            input_,
            weight_,
            ctx.normalized_shape,
        )
        return None, grad_weight, None


class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(7))

    def forward(self, x):
        return SomeFunc.apply(x, self.weight, [7])


model = MyModel()

torch._logging.set_logs(dynamo=logging.DEBUG, aot=logging.DEBUG, graph_code=True)


def aot_print_backend(gm, sample_inputs):
    # Forward compiler capture
    def fw(gm, sample_inputs):
        print(f"----- fw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Backward compiler capture
    def bw(gm, sample_inputs):
        print(f"----- bw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Call AOTAutograd
    gm_forward = aot_module_simplified(
        gm, sample_inputs, fw_compiler=fw, bw_compiler=bw
    )
    return gm_forward


model = torch.compile(
    model,
    backend=aot_print_backend,
    dynamic=False,
)
out = model(torch.rand((128, 4, 7)))
out.mean().backward()
```
I can see log lines showing the calls into `create_graph_input`:
```log
V0629 02:08:46.839914 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous (none)
V0629 02:08:46.839998 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous_1 (none)
```
And the generated backward graph looks like:
```log
class GraphModule(torch.nn.Module):
    def forward(self, function_ctx, somefunc_forward_default: "f32[128, 4, 7]", contiguous: "f32[128, 4, 7]", contiguous_1: "f32[7]"):
        contiguous_1 = contiguous
        contiguous_2 = contiguous_1

        # No stacktrace found for following nodes
        _set_grad_enabled = torch._C._set_grad_enabled(False)

        # File: /Users/bytedance/testtorch/test_custom_op_bug.py:61 in backward, code: grad_output.contiguous(),
        contiguous: "f32[128, 4, 7]" = somefunc_forward_default.contiguous(); somefunc_forward_default = None

        # File: /opt/tiger/pytorch/torch/_library/custom_ops.py:506 in __call__, code: return self._opoverload(*args, **kwargs)
        somefunc_backward_default: "f32[7]" = torch.ops.mylib.somefunc_backward.default(contiguous, contiguous_1, contiguous_2, [7]); contiguous = contiguous_1 = contiguous_2 = None

        # No stacktrace found for following nodes
        _set_grad_enabled_1 = torch._C._set_grad_enabled(True)
        return (None, somefunc_backward_default)
```
The original code of `somefunc_backward` takes as inputs `grad_output`, `input_`, `weight`, and `shape`, where `weight` should have shape `torch.Size([7])`. However, in the graph, `contiguous_1` and `contiguous_2` are assigned from `contiguous`, which leads to the assertion failure I added in `somefunc_backward`.
## Environment
```log
Collecting environment information...
PyTorch version: 2.5.0a0+git0b7e8df
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.9.19 (main, May 6 2024, 14:39:30) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] optree==0.11.0
[pip3] torch==2.5.0a0+git0b7e8df
[pip3] torchgraph==0.0.1
[conda] numpy 2.0.0 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.5.0a0+git0b7e8df dev_0 <develop>
[conda] torchgraph 0.0.1 dev_0 <develop>
```
## How to fix?
I put in a naive fix that adds the potential args to lift into `used_names`. It touches private variables; I will clean that up if this issue makes sense to you.
@zou3519 @oulgen
Co-authored-by: rzou <zou3519@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129817
Approved by: https://github.com/zou3519
2024-07-15 13:41:46 +00:00
awayzjj
dcaa111dc8
support intersection by polyfill ( #130672 )
...
Fixes https://github.com/pytorch/pytorch/issues/130557
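A rough sketch of the shape such a polyfill can take (not necessarily the exact code in `torch/_dynamo/polyfill`): express `set.intersection` in terms of operations Dynamo already traces, namely iteration and membership tests.
```python
# Hypothetical polyfill shape for set.intersection; not the actual
# torch/_dynamo implementation.
def set_intersection(a, b):
    return {x for x in a if x in b}


assert set_intersection({1, 2, 3}, {2, 3, 4}) == {2, 3}
```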
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130672
Approved by: https://github.com/anijain2305
2024-07-14 10:44:26 +00:00
Tom Ritchford
b0a597fcb4
Fix #121334 : graph break on constant method call ( #130158 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130158
Approved by: https://github.com/lezcano
2024-07-12 17:34:46 +00:00
Xuehai Pan
973037be6a
[BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() ( #130199 )
...
This PR changes the empty collection factory call to Python literals:
- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`
The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:
```bash
$ python3 -m dis - <<EOS
import collections
d1 = {}
d2 = dict()
dict = collections.OrderedDict
d3 = dict()
EOS
```
```text
  0           0 RESUME                   0

  1           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (collections)
              8 STORE_NAME               0 (collections)

  3          10 BUILD_MAP                0
             12 STORE_NAME               1 (d1)

  4          14 PUSH_NULL
             16 LOAD_NAME                2 (dict)
             18 CALL                     0
             26 STORE_NAME               3 (d2)

  6          28 LOAD_NAME                0 (collections)
             30 LOAD_ATTR                8 (OrderedDict)
             50 STORE_NAME               2 (dict)

  7          52 PUSH_NULL
             54 LOAD_NAME                2 (dict)
             56 CALL                     0
             64 STORE_NAME               5 (d3)
             66 RETURN_CONST             1 (None)
```
The dict literal `{}` only has one bytecode `BUILD_MAP`, while the factory call `dict()` has three `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example of replacing with `OrderedDict` above).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
Animesh Jain
6b5fbc544e
[dynamo] Use polyfill to trace through the attributes of torch.jit.* and lru_cache_wrapper ( #128336 )
...
Earlier we were taking the vt for `obj` and then monkeypatching that `vt.source` to be `obj._torchdynamo_inline`. If one accesses `obj.attr_a`, this would cause problems because Dynamo would then look it up as `obj._torchdynamo_inline.attr_a`. This PR makes it more functional, so that we have different vts for `obj` and `obj._torchdynamo_inline`.
Fixes https://github.com/pytorch/pytorch/issues/93698
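A hypothetical illustration (the `warned` attribute is made up) of why separate vts matter: attributes live on the `lru_cache` wrapper object, not on the function that gets inlined:
```python
# Attribute access on an lru_cache-wrapped callable inside compiled code;
# "warned" is an assumed attribute for illustration only.
import functools

import torch


@functools.lru_cache(maxsize=None)
def helper(n):
    return n + 1


helper.warned = False  # lives on the wrapper, not on the wrapped function


@torch.compile(backend="eager")
def fn(x):
    if not helper.warned:
        return x + helper(1)
    return x


fn(torch.zeros(1))
```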
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128336
Approved by: https://github.com/jansel , https://github.com/yanboliang
ghstack dependencies: #129117
2024-06-21 07:44:44 +00:00
Laith Sakka
4c84af0f5d
Fix indexing and slicing of ranges in dynamo ( #128567 )
...
Fix https://github.com/pytorch/pytorch/issues/128520
Dynamo does not handle range()[binary subscript] or range()[ternary subscript] correctly. Right now it calls
the get_item function, which applies the subscript operation to the list [start, stop, step], which is completely unrelated to what is expected.
In Python, range()[complex subscript] is another range, e.g.:
range(1, 10, 2)[1:4:1] is range(3, 9, 2)
and range(1, 10, 2)[::-1] is range(9, -1, -2)
This diff fixes index and slice operations on ranges.
It mimics the implementation from (https://github.com/python/cpython/blob/main/Objects/rangeobject.c ).
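An assumed sanity check (not from the PR's test suite) that sliced ranges inside a compiled function match eager Python semantics:
```python
# Assumed example: slicing a range in compiled code should behave exactly like
# eager Python.
import torch


@torch.compile(backend="eager")
def fn(x):
    r = range(1, 10, 2)[1:4]  # == range(3, 9, 2) -> 3, 5, 7
    return x + r[0] + r[2]


assert fn(torch.zeros(1)).item() == 10.0
```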
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128567
Approved by: https://github.com/anijain2305
2024-06-14 16:49:49 +00:00
PyTorch MergeBot
48a54146e7
Revert "[dynamo] Support ndarray.dtype attribute access ( #124490 )"
...
This reverts commit 4adee71155 .
Reverted https://github.com/pytorch/pytorch/pull/124490 on behalf of https://github.com/atalman due to Breaks internal builds ([comment](https://github.com/pytorch/pytorch/pull/124490#issuecomment-2152664749 ))
2024-06-06 14:21:29 +00:00
Andrew M. James
4adee71155
[dynamo] Support ndarray.dtype attribute access ( #124490 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124490
Approved by: https://github.com/lezcano
ghstack dependencies: #125717
2024-06-05 17:20:01 +00:00
laithsakka
029af29e6d
support operator.index function ( #127440 )
...
Fix https://github.com/pytorch/pytorch/issues/127426
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127440
Approved by: https://github.com/mlazos
ghstack dependencies: #126444 , #127146 , #127424
2024-05-30 22:44:18 +00:00
Andrew M. James
80a8fc07b2
[dynamo] Handle np.iinfo/finfo/dtype as input ( #124482 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124482
Approved by: https://github.com/lezcano
ghstack dependencies: #124481
2024-05-29 16:00:15 +00:00
Andrew M. James
ade075444f
[dynamo] Support numpy.dtype ( #124481 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124481
Approved by: https://github.com/lezcano
2024-05-29 14:45:14 +00:00
Yanbo Liang
da9bf77f0a
[Dynamo] Support SET_UPDATE ( #126243 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126243
Approved by: https://github.com/anijain2305 , https://github.com/Skylion007 , https://github.com/jansel
2024-05-16 20:05:34 +00:00
Yanbo Liang
f91cae461d
[Dynamo] SizeVariable supports hasattr ( #126222 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126222
Approved by: https://github.com/williamwen42 , https://github.com/anijain2305
2024-05-15 17:16:36 +00:00
Yanbo Liang
51ed4c46cf
[Dynamo] Supports torch._C._is_any_autocast_enabled ( #126196 )
...
Fixes #126026
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126196
Approved by: https://github.com/anijain2305
2024-05-15 03:16:13 +00:00
Yanbo Liang
bdaa9b2981
[Dynamo] Wrap set as SetVariable and support isdisjoint by polyfill ( #126046 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126046
Approved by: https://github.com/anijain2305 , https://github.com/jansel
2024-05-14 04:56:06 +00:00
Edward Z. Yang
ecd62746e3
Also pull size/stride info from example_value ( #125505 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125505
Approved by: https://github.com/jansel
2024-05-05 22:27:46 +00:00
Animesh Jain
1a0b247762
[dynamo] Bug fix for LOAD_GLOBAL and STORE_GLOBAL ( #125002 )
...
Earlier, globals of functions inlined from other files were not handled correctly: we were not tracking mutations on them, and they collided with same-named globals in the parent function, etc. This PR overrides LOAD_GLOBAL/STORE_GLOBAL for the inlining instruction translator and tracks mutations on them separately.
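A hypothetical two-file sketch (module and variable names are made up) of the failure mode described above: a function inlined from another module mutates that module's global, which must not collide with a same-named global in the calling module.
```python
# Two-file sketch, not a single runnable script; it only illustrates the layout.

# --- helper_mod.py ---
counter = 0


def bump():
    global counter
    counter += 1
    return counter


# --- main.py ---
import torch

import helper_mod

counter = 100  # same name, different module


@torch.compile(backend="eager")
def fn(x):
    return x + helper_mod.bump() + counter
```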
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125002
Approved by: https://github.com/jansel
ghstack dependencies: #125097 , #125107
2024-04-28 15:24:17 +00:00
YangQun1
91d565da0c
[dynamo] Add support for tensor's is_complex method ( #124927 )
...
This PR adds support for the tensor is_complex method in Dynamo. Take the following code as an example:
```python
def test_tensor_is_complex(x):
    if x.is_complex():
        return x + 1
    else:
        return x - 1
```
Before this fix, the is_complex() call will cause a graph break "torch.* op returned non-Tensor bool call_method is_complex". After this fix, the graph break can be avoided.
Fixes #122692
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124927
Approved by: https://github.com/ezyang
2024-04-26 18:28:14 +00:00
Yanbo Liang
0d90d4d613
[Dynamo] Fix NamedTuple hasattr bug ( #124531 )
...
Fixes #124402
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124531
Approved by: https://github.com/jansel
2024-04-21 04:36:22 +00:00
Jason Ansel
6bac183dc2
[dynamo] Support numpy.iinfo/finfo ( #123803 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123803
Approved by: https://github.com/anijain2305
ghstack dependencies: #123700 , #123705 , #123786 , #123790
2024-04-12 19:03:13 +00:00
Jason Ansel
6b0ba6bbd3
[dynamo] Improve constant-prop for regex/torch.__version__ ( #123705 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123705
Approved by: https://github.com/anijain2305
ghstack dependencies: #123700
2024-04-12 19:03:13 +00:00
Guilherme Leobas
84658d9c4f
Enable capture_func_transforms by default ( #122211 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122211
Approved by: https://github.com/zou3519
2024-04-05 03:29:11 +00:00
Jason Ansel
2a137f7af1
[dynamo] Support hasattr on UserDefinedClassVariable ( #122564 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122564
Approved by: https://github.com/anijain2305
2024-03-29 17:34:14 +00:00
Jason Ansel
069270db60
[dynamo] Fix list comparison ops ( #122559 )
...
Fixes #122376
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122559
Approved by: https://github.com/Skylion007
2024-03-25 07:03:23 +00:00
Jason Ansel
07caea5c12
[dynamo] Refactor COMPARE_OP and comparison builtins ( #122043 )
...
This removes the duplicate handling of comparison ops between symbolic_convert and builtin and refactors the handling to use the binop infrastructure. This change regresses overheads a bit, but this is fixed in the next PR.
New test skips are variants of `type(e) is np.ndarray` previously falling back to eager.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122043
Approved by: https://github.com/anijain2305
ghstack dependencies: #122039
2024-03-19 04:23:17 +00:00
Aaron Gokaslan
d55d803812
Add operator length hint support ( #121495 )
...
Seemed like an easy operator to squeeze in (`operator.length_hint`, available since Python 3.4). Added a simple test. Partially addresses #116396
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121495
Approved by: https://github.com/albanD
2024-03-08 19:08:33 +00:00
laith sakka
d21c6eb215
Do not wrap output with input device inside _to_copy ( #119868 )
...
Fixes https://github.com/pytorch/pytorch/issues/118790
This diff reverts a small part of the code that was introduced in https://github.com/pytorch/pytorch/pull/104689
The PR above added a comment that "In case of dtype promotion, fake tensor converted into tensor",
but it's not always the case that a dtype conversion turns a fake tensor into a tensor.
When such a conversion does not happen, we get the following error:
```
Creating a new Tensor subclass FakeTensor but the raw Tensor object is already associated to
a python object of type FakeTensor
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119868
Approved by: https://github.com/ezyang , https://github.com/thiagocrepaldi
2024-02-28 01:51:43 +00:00
Yanbo Liang
5a0a964444
[Dynamo] Fix guards for script_if_tracing or lru_cache fn with default args ( #120390 )
...
Fixes #120387
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120390
Approved by: https://github.com/anijain2305
2024-02-26 19:40:14 +00:00
laith sakka
ea8e4fd5ac
Support FunctoolsPartialVariable::get_function, fix NamedTupleVariable::as_proxy and handle call_function in get_fake_values_from_nodes ( #119435 )
...
Partially addresses https://github.com/pytorch/pytorch/issues/118785
This diff fixes three things:
1. Adds get_function to FunctoolsPartialVariable. Note that it is available only if all args are constant; otherwise it throws unimplemented in the call to as_python_constant.
2. NamedTupleVariable takes its args unpacked rather than as a list, e.g. NamedTuple(a, b, c) vs NamedTuple([a, b, c]); fix that by specializing as_proxy.
3. A call to create_arg from within create_proxy changes a Python NamedTuple into a function call node without associating an example value. Updated get_fake_values_from_nodes to handle this case.
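A hypothetical illustration (names made up) of item 1: Dynamo needs the underlying function and bound constants of a `functools.partial` object when it is called inside compiled code:
```python
# Assumed example: calling a functools.partial with constant bound args inside
# a compiled function.
import functools

import torch


def scale(x, factor):
    return x * factor


double = functools.partial(scale, factor=2)


@torch.compile(backend="eager")
def fn(t):
    return double(t) + 1


fn(torch.ones(2))
```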
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119435
Approved by: https://github.com/jansel , https://github.com/anijain2305
ghstack dependencies: #119314
2024-02-13 01:44:08 +00:00
Jason Ansel
74d55b0e63
[dynamo] Support torch.distributed.fsdp._flat_param._same_storage_size ( #119627 )
...
Replaces #117690
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119627
Approved by: https://github.com/Skylion007
2024-02-13 01:27:37 +00:00
laith sakka
c814d8e5c2
Fix handling random() calls encountered inside inlined code. ( #119218 )
...
Fix https://github.com/pytorch/pytorch/issues/118787
In the compiled function, calls to random() are replaced with a single call
to a function that generates all the random values.
The random calls encountered during compilation used to be tracked in a variable
stored inside the instruction translator, so when there were nested translators, the tracked
calls got lost when the inner instruction translator popped out.
This diff fixes that by moving the tracked calls to the output graph, which is shared across the translators generating the same function.
More details about the issue and why this solution was picked are in the GitHub issue above.
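An assumed repro shape for the bug described above (not the original reproducer): random() is called inside a helper that Dynamo inlines into the compiled function, i.e. a nested translator:
```python
# Assumed sketch: random() inside an inlined helper must be tracked on the
# shared output graph, not on the inner instruction translator.
import random

import torch


def helper():
    return random.random()


@torch.compile(backend="eager")
def fn(x):
    return x + helper()


fn(torch.zeros(1))
```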
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119218
Approved by: https://github.com/jansel , https://github.com/anijain2305
2024-02-06 23:48:21 +00:00
Jason Ansel
5e78c4b0f4
[dynamo] Functools partial reconstruct ( #118583 )
...
Replaces #117721
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118583
Approved by: https://github.com/yanboliang
ghstack dependencies: #118901 , #118616
2024-02-06 23:42:43 +00:00
laith sakka
923a7c7572
add test elipsis to dynamo test functions ( #118754 )
...
Add tests to ensure the bug reported in #117563 does not regress.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118754
Approved by: https://github.com/anijain2305
2024-02-01 19:05:01 +00:00
rzou
318e6ff40e
Fix __name__ on a reconstructed NestedUserFunctionVariable ( #118768 )
...
```python
def f():
    def g():
        return ()

    print(g.__name__)


f()
```
The following script should print `g` (with or without torch.compile),
but prints `f.<locals>.g` with torch.compile.
The problem looks like we use the co_qualname when reconstructing the
NestedUserFunctionVariable. I switched this over to use the co_name.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118768
Approved by: https://github.com/yanboliang , https://github.com/jansel
2024-02-01 18:59:01 +00:00
Yanbo Liang
4fc4f5eb06
[Dynamo] Support tensor is not tensor ( #118840 )
...
Fixes Meta internal use case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118840
Approved by: https://github.com/yf225
2024-02-01 07:32:43 +00:00
laith sakka
8455447972
Support builtin callable with object arguments in dynamo ( #118678 )
...
Fix issue #117556
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118678
Approved by: https://github.com/anijain2305
2024-01-31 17:54:08 +00:00
laith sakka
1bf9ddf130
add test_truth ( #118597 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118597
Approved by: https://github.com/anijain2305
2024-01-31 15:10:58 +00:00
ydwu4
fc5cde7579
[dynamo] constant fold torch.cuda.get_device_properties to avoid graph break ( #118422 )
...
Before the PR, we have a graph break for code like this,
```python
def test_get_device_properties_tensor_device(a):
    x = a.to("cuda")
    prop = torch.cuda.get_device_properties(x.device)
    if prop.major == 8:
        return x + prop.multi_processor_count
    return x + prop.max_threads_per_multi_processor
```
This PR constant-folds torch.cuda.get_device_properties, and we get the following Dynamo graph:
```python
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] def forward(self, L_a_ : torch.Tensor):
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] l_a_ = L_a_
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG]
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] # File: /home/yidi/local/pytorch/test/dynamo/test_functions.py:544 in test_get_device_properties_tensor_device, code: x = a.to("cuda")
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] x = l_a_.to('cuda'); l_a_ = None
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG]
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] # File: /home/yidi/local/pytorch/test/dynamo/test_functions.py:547 in test_get_device_properties_tensor_device, code: return x + prop.multi_processor_count
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] add = x + 108; x = None
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG] return (add,)
[2024-01-26 13:28:13,253] [0/0] torch._dynamo.output_graph.__graph: [DEBUG]
```
The signature of get_device_properties is:
```python
def get_device_properties(device: _device_t) -> _CudaDeviceProperties:
```
I think it's safe to constant fold get_device_properties():
1. torch.cuda.get_device_properties(tensor.device): in this case, tensor.device.index is guarded in _check_tensor.
2. torch.cuda.get_device_properties(device_int_id): we don't expect the GPU properties for a particular index to change during a torch.compile run, so it makes sense to specialize the properties for a concrete device_int_id.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118422
Approved by: https://github.com/yanboliang , https://github.com/jansel
2024-01-29 20:26:40 +00:00
ydwu4
5b31516008
[dynamo] inline torch.jit._unwrap_optional ( #118434 )
...
Before this PR, torch.jit._unwrap_optional was in the skipfile list, causing a graph break. Looking at its implementation, it is just a normal Python function [here](ff8e33556e/torch/jit/_script.py (L1681-L1683) ):
```python
def _unwrap_optional(x):
    assert x is not None, "Unwrapping null optional"
    return x
```
We could safely inline it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118434
Approved by: https://github.com/yanboliang
2024-01-27 02:22:14 +00:00
ydwu4
71757093c5
[dynamo] avoid graph break on torch.backends.cuda.matmul.allow_tf32 ( #118236 )
...
Before the PR, we have a graph break for the following test:
```python
def test_cublas_allow_tf32(x):
    if torch.backends.cuda.matmul.allow_tf32:
        return x.sin() + 1
    return x.cos() - 1
```
In this PR, we first add "torch.backends.cuda" to MOD_INLINELIST to trace through the python binding and get the actual call torch._C._get_cublas_allow_tf32, where it's already a TorchInGraphVariable. Because _get_cublas_allow_tf32 is accessing the same variable as at::globalContext().allowTF32CuBLAS(), which is guarded by dynamo as a global state [here](https://github.com/pytorch/pytorch/blob/main/torch/csrc/dynamo/guards.cpp#L443 ), we could safely assume it returns a ConstantVariable during tracing.
After this pr, we get the following graph:
```python
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] def forward(self, L_x_ : torch.Tensor):
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] l_x_ = L_x_
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /home/yidi/local/pytorch/test/dynamo/test_functions.py:515 in test_cublas_allow_tf32, code: return x.cos() - 1
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] cos = l_x_.cos(); l_x_ = None
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] sub = cos - 1; cos = None
[2024-01-24 15:31:01,501] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] return (sub,)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118236
Approved by: https://github.com/yanboliang , https://github.com/anijain2305
2024-01-25 23:40:23 +00:00
ydwu4
fae569b4f2
[dynamo] avoid graph break on tensor.element_size() ( #118229 )
...
Before this PR, for the following code, we have a graph break `torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor int call_method element_size`
```python
import torch


def f(x):
    return x.sin().element_size() + x.sin()


x = torch.randn(2, 2)
torch.compile(f, backend="eager", fullgraph=True)(x)
```
After this PR, we got the following graph, where element_size() is baked in as a constant.
```python
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] def forward(self, L_x_ : torch.Tensor):
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] l_x_ = L_x_
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /home/yidi/local/pytorch/test.py:4 in f, code: return x.sin().element_size() + x.sin()
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] sin = l_x_.sin()
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] sin_1 = l_x_.sin(); l_x_ = None
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] add = 4 + sin_1; sin_1 = None
[2024-01-24 13:49:02,814] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] return (add,)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118229
Approved by: https://github.com/yanboliang , https://github.com/jansel , https://github.com/anijain2305
2024-01-25 22:28:37 +00:00
laith sakka
b47cf4182e
Fix support non tensor inputs to operator.pos function ( #118251 )
...
Fixes #118231
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118251
Approved by: https://github.com/Skylion007 , https://github.com/anijain2305
2024-01-25 20:37:40 +00:00
Animesh Jain
6e4e81a9ef
[dynamo] Extend LazyVariableTracker to tuples ( #117426 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117426
Approved by: https://github.com/lezcano , https://github.com/jansel
2024-01-18 15:51:28 +00:00
lezcano
4ba5318d3f
[dynamo] Add DictView variable tracker ( #108420 )
...
This also starts a comparison pattern where we don't ask variables
what their type is, but what their capabilities are.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108420
Approved by: https://github.com/jansel
ghstack dependencies: #112252 , #117630 , #110524
2024-01-18 09:37:33 +00:00
Aaron Gokaslan
62496ffd0d
[dynamo][easy]: Add support for operator.truth ( #117463 )
...
* This is an old builtin function equivalent to the bool constructor; it is easy enough to add support for.
* I also realized the tests were in the wrong class (the one reserved for testing default args), so I moved them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117463
Approved by: https://github.com/jansel
2024-01-14 19:08:31 +00:00
Aaron Gokaslan
bf27dd6df9
Add dynamo support for operator.abs ( #117442 )
...
Adds a test case for operator.abs and allows constant folding with it. Partially addresses #116396
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117442
Approved by: https://github.com/jansel , https://github.com/malfet
2024-01-13 21:38:55 +00:00
Guilherme Leobas
4f3d698cac
Impl. call_hasattr for BaseUserFunctionVariable ( #116049 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116049
Approved by: https://github.com/zou3519
2024-01-09 22:58:58 +00:00
Aaron Gokaslan
1dd4813328
[BE][dynamo]: Add operator is and is not tests to dynamo tests ( #116397 )
...
Adds tests for an operator that was not unit tested in our test suite, improving coverage. Inspired by looking into https://github.com/pytorch/pytorch/pull/116397 after @XuehaiPan brought up some issues with builtins in #116389
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116397
Approved by: https://github.com/albanD , https://github.com/jansel
2024-01-09 21:13:22 +00:00
Guoliang He
0159e3abbd
[dynamo] add a handler for itertools_chain_from_iterable and test ( #116849 )
...
1. Add a handler for itertools_chain_from_iterable.
2. Add a test for itertools_chain_from_iterable.
Fixes #116463
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116849
Approved by: https://github.com/ezyang
2024-01-05 15:14:18 +00:00
Xuehai Pan
3149e4a667
[dynamo] fix sum() function with start argument ( #116389 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116389
Approved by: https://github.com/Skylion007 , https://github.com/malfet
2023-12-27 20:42:27 +00:00
PyTorch MergeBot
e0e90bc0d4
Revert "[dynamo] fix sum() function with start argument ( #116389 )"
...
This reverts commit 3c9076f070 .
Reverted https://github.com/pytorch/pytorch/pull/116389 on behalf of https://github.com/kit1980 due to Breaks Meta-internal tests, but the issue could have been caught on GitHub ([comment](https://github.com/pytorch/pytorch/pull/116389#issuecomment-1870556927 ))
2023-12-27 19:05:55 +00:00
Oguz Ulgen
8abeacda6f
Refactor user defined triton kernel tests ( #116425 )
...
I will be adding more Triton tests of different types, so I'm moving them to a brand new file. While doing this, I also cleaned up some flake linting opt-outs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116425
Approved by: https://github.com/aakhundov
2023-12-26 23:54:26 +00:00
Xuehai Pan
3c9076f070
[dynamo] fix sum() function with start argument ( #116389 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116389
Approved by: https://github.com/Skylion007
2023-12-26 06:37:55 +00:00
Xuehai Pan
039fbeb016
[dynamo] fix functools.reduce() function with None as initial ( #116398 )
...
The `initial` argument in `functools.reduce` can be `None`.
```python
initial_missing = object()


def reduce(function, iterable, initial=initial_missing, /):
    it = iter(iterable)
    if initial is initial_missing:
        value = next(it)
    else:
        value = initial
    for element in it:
        value = function(value, element)
    return value
```
Reference:
- python/cpython#102759
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116398
Approved by: https://github.com/Skylion007
2023-12-25 21:23:28 +00:00
Tugsbayasgalan Manlaibaatar
76b1d44d57
pre_dispatch aot_export ( #115188 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115188
Approved by: https://github.com/bdhirsh
2023-12-25 04:51:21 +00:00
Shunting Zhang
99f7e721fe
[inductor] make inductor work with new triton compile interface ( #115878 )
...
Two recent Triton PRs (https://github.com/openai/triton/pull/2701 , https://github.com/openai/triton/pull/2756 ) change the interface of triton.compile; this PR adds the necessary changes on the Inductor side to work with both the old and new compile APIs.
There is also some simplification between the compilation call in the subprocess and the one in the main process:
- Previously we passed warm_cache_only=True if the compilation happened in a subprocess, but Triton never uses that argument at the currently used pin, so it was removed.
- Previously we only passed compute_capability if compilation happened in a subprocess. This PR changes that to always pass compute_capability to triton.compile, whether compilation happens in the main process or a subprocess.
Updated:
There are more interface changes on the Triton side, e.g.:
- tl.math.{min, max} now require a propagate_nan argument.
- JITFunction.run now requires a warmup argument. This affects the benchmarking phase of matmul max-autotune; on the other hand, JITFunction.run now forbids the stream argument. Simply not passing it when benchmarking matmul Triton kernels works for both old and new versions of Triton.
- The Triton Autotuner changed attribute names from 'warmup' to 'num_warmup' and from 'rep' to 'num_rep'. This caused Dynamo to fail to handle Triton Autotuner objects, since Dynamo's TritonKernelVariable makes assumptions about attribute names. It is used in some test cases where a model calls the Triton Autotuner directly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115878
Approved by: https://github.com/jansel
2023-12-22 00:09:29 +00:00
Adnan Akhundov
247f9c3de4
Preserve strides of custom Triton kernel args ( #116219 )
...
Summary: Currently, we [`clone`](19207b9183/torch/_inductor/lowering.py (L5273) ) every `TensorBox` argument of custom Triton kernels while lowering them to the Inductor IR, during which the stride information of the kernel inputs is lost. This is problematic in the common case when the strides of a `torch.Tensor` argument are passed as scalars to a custom Triton kernel alongside the tensor itself (due to the underlying Triton code interpreting the tensors as raw pointers, so the contained stride semantics of the `torch.Tensor` is lost).
In this PR, we add an extended version of the existing [`clone` lowering](19207b9183/torch/_inductor/lowering.py (L2289) )---`clone_preserve_reinterpret_view`---which carries over the `ir.ReinterpretVew` layers (if any) from the source `TensorBox` to the cloned one. The rationale behind adding a new function (and switching to it in the `triton_kernel_wrap` only for now) as opposed to extending the existing `clone` is keeping the semantics of the latter untouched, as it is a lowering of `torch.clone` (albeit incomplete, as the `memory_format` is currently ignored). Changing the existing `clone` would change the semantics which is not necessarily desirable in general. Open to suggestions, though.
Test Plan:
```
$ python test/dynamo/test_functions.py -k test_triton_kernel_strided_input
...
----------------------------------------------------------------------
Ran 1 test in 5.568s
OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116219
Approved by: https://github.com/jansel
2023-12-21 22:46:32 +00:00
PyTorch MergeBot
0567f71ac6
Revert " pre_dispatch aot_export ( #115188 )"
...
This reverts commit a267d67350 .
Reverted https://github.com/pytorch/pytorch/pull/115188 on behalf of https://github.com/jeanschmidt due to sadly, it is required to revert this commit in order to revert https://github.com/pytorch/pytorch/pull/115454 ([comment](https://github.com/pytorch/pytorch/pull/115188#issuecomment-1866310014 ))
2023-12-21 14:03:18 +00:00
Tugsbayasgalan Manlaibaatar
a267d67350
pre_dispatch aot_export ( #115188 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115188
Approved by: https://github.com/bdhirsh
2023-12-20 21:36:25 +00:00
Oguz Ulgen
01b979fc9a
[Inductor] Fix constant folding and extern kernel mutation tracking bugs ( #115908 )
...
This PR fixes two bugs:
1) Constant folding a Triton kernel results in the kernel's inputs being returned back without any modification. Disable constant folding for Triton kernels; needs more investigation.
2) NoneLayout buffers should not be deleted, as they do not exist.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115908
Approved by: https://github.com/aakhundov , https://github.com/jansel
2023-12-19 02:06:50 +00:00
Yanbo Liang
eb3aa424ce
[Reland][Dynamo] Added support for math.radians on ints with dynamic shapes ( #115477 )
...
Reland #114507
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115477
Approved by: https://github.com/larryliu0820
2023-12-09 08:58:18 +00:00
Oguz Ulgen
c9c4cdf9a9
[AOTAutograd] Do not call ctx.mark_dirty on mutations hidden from autograd ( #115324 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115324
Approved by: https://github.com/bdhirsh
2023-12-09 02:23:13 +00:00
rzou
2847045ed9
Set _dynamo.config.capture_func_transforms=False ( #115267 )
...
Because not all tests in the Dynamo shard actually run in CI, this
implementation has started to bitrot. Since our plan is to trace
into the functorch implementations instead of constructing a HOP
(which is what capture_func_transforms=True does), let's turn this
config off by default.
Test Plan:
- Tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115267
Approved by: https://github.com/voznesenskym , https://github.com/guilhermeleobas
2023-12-07 18:42:15 +00:00
Yanbo Liang
4620170008
[Dynamo] Revert multiple PRs since they triggered compilation stuck internally ( #115126 )
...
Revert the following PRs to mitigate internal compilation stuck:
#113432
#114016
#114507
#114196
#114739
#114669
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115126
Approved by: https://github.com/xush6528
2023-12-05 22:35:37 +00:00
Jason Ansel
fe690f430a
[dynamo] Fix dict.get with no default ( #115048 )
...
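An assumed illustration of the fixed call form: `dict.get` called with only a key should yield `None` for a missing key instead of erroring during tracing:
```python
# Assumed example (not from the PR's tests) of dict.get with no default.
import torch


@torch.compile(backend="eager")
def fn(x, d):
    return x + 1 if d.get("missing") is None else x


fn(torch.zeros(1), {"present": 1})
```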
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115048
Approved by: https://github.com/eellison , https://github.com/oulgen
ghstack dependencies: #114830 , #115047
2023-12-05 01:31:33 +00:00
Xuehai Pan
3fbfa8cd0a
[dynamo] support dict.copy() / OrderedDict.copy() / defaultdict.copy() ( #115012 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115012
Approved by: https://github.com/jansel
ghstack dependencies: #115010 , #115011
2023-12-04 01:50:10 +00:00
Xuehai Pan
917a52d2a2
[dynamo] support dict.update(seq2) / OrderedDict.update(seq2) / defaultdict.update(seq2) ( #115011 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115011
Approved by: https://github.com/jansel
ghstack dependencies: #115010
2023-12-04 01:50:10 +00:00
Xuehai Pan
2e8ac5ea93
[dynamo] support dict.fromkeys() / OrderedDict.fromkeys() / defaultdict.fromkeys() ( #115010 )
...
Add support for `dict.fromkeys`, `OrderedDict.fromkeys`, and `defaultdict.fromkeys`.
Fixes #114963
- #114963
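A quick assumed check (not from the PR's tests) of the three constructors now supported inside compiled code:
```python
# Assumed example: dict.fromkeys, OrderedDict.fromkeys, and
# defaultdict.fromkeys traced inside a compiled function.
import collections

import torch


@torch.compile(backend="eager")
def fn(x):
    d1 = dict.fromkeys(["a", "b"], 0)
    d2 = collections.OrderedDict.fromkeys(["a", "b"])
    d3 = collections.defaultdict.fromkeys(["a", "b"], 0)
    return x + len(d1) + len(d2) + len(d3)


fn(torch.zeros(1))
```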
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115010
Approved by: https://github.com/jansel
2023-12-04 01:49:59 +00:00
Jason Ansel
7b3429d97c
Fix error with int+SymBool ( #114828 )
...
Fixes #104797
```
File "/home/jansel/pytorch/torch/_dynamo/utils.py", line 1486, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/jansel/pytorch/torch/_dynamo/utils.py", line 1591, in run_node
raise RuntimeError(fn_str + str(e)).with_traceback(e.__traceback__) from e
File "/home/jansel/pytorch/torch/_dynamo/utils.py", line 1570, in run_node
return node.target(*args, **kwargs)
File "/home/jansel/conda/envs/pytorch/lib/python3.10/site-packages/einops/packing.py", line 153, in unpack
n_unknown_composed_axes = sum(x == -1 for x in lengths_of_composed_axes)
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function unpack at 0x7f644b962710>(*(FakeTensor(..., device='cuda:0', size=(1, s0*s1, 128)), [(s0, s1)], 'b * c'), **{}):
unsupported operand type(s) for +: 'int' and 'SymBool'
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114828
Approved by: https://github.com/lezcano
2023-11-30 18:30:36 +00:00
vfdev
f93ea14309
[dynamo] Added support for math ops on ints with dynamic shapes ( #114507 )
...
Fixes #114218
```
import math

import torch


def func(x, a):
    b = math.floor(a + 0.5)
    b = math.radians(a) + b
    y = x + b
    return y


cfunc = torch.compile(func, dynamic=True, fullgraph=True, backend="eager")

x = torch.tensor([0, 1, 2, 3], dtype=torch.float32)
a = 12
out = cfunc(x, a)
```
```
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] TRACED GRAPH
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] ===== __compiled_fn_0 =====
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] <eval_with_key>.0 class GraphModule(torch.nn.Module):
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] def forward(self, L_a_ : torch.SymInt, s1 : torch.SymInt, L_x_ : torch.Tensor):
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] l_a_ = L_a_
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] l_x_ = L_x_
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: check_math_ops.py:7, code: b = math.floor(a + 0.5)
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] add = l_a_ + 0.5
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] floor = math_floor(add); add = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: /pytorch/torch/_dynamo/polyfill.py:28, code: return math.pi / 180.0 * x
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] mul = 0.017453292519943295 * l_a_; l_a_ = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: check_math_ops.py:9, code: b = math.radians(a) + b
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] add_1 = mul + floor; mul = floor = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] # File: check_math_ops.py:13, code: y = x + b
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] y = l_x_ + add_1; l_x_ = add_1 = None
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG] return (y,)
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
[2023-11-29 18:10:08,385] [0/0] torch._dynamo.output_graph.__graph_code: [DEBUG]
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114507
Approved by: https://github.com/lezcano
2023-11-30 14:11:57 +00:00
Jon Chuang
172a103857
[dynamo] strict=True kwarg for zip ( #114047 )
...
Fixes https://github.com/pytorch/pytorch/issues/113894
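A small sketch of the now-traceable call (assumes Python 3.10+, where `zip(..., strict=True)` exists; not taken from the PR's tests):
```python
import torch

@torch.compile(backend="eager")
def fn(xs, ys):
    # strict=True makes zip raise if the inputs differ in length;
    # dynamo now traces the keyword instead of falling back
    return [x + y for x, y in zip(xs, ys, strict=True)]

fn([torch.ones(2), torch.ones(2)], [torch.zeros(2), torch.zeros(2)])
```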
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114047
Approved by: https://github.com/ezyang
2023-11-22 08:48:51 +00:00
Isuru Fernando
e4a88d9581
Convert SymInts to SymFloats with SymPy ( #113683 )
...
Fixes #109365
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113683
Approved by: https://github.com/ezyang , https://github.com/lezcano
2023-11-20 23:35:40 +00:00
Sijia Chen
7afceb9f64
[AOTI] add float support of triton ( #114014 )
...
Summary: As the title
Test Plan: buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/dynamo:test_dynamo -- --exact 'caffe2/test/dynamo:test_dynamo - test_functions.py::DefaultsTests::test_triton_kernel_None_arg' --print-passing-details
Differential Revision: D51421325
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114014
Approved by: https://github.com/oulgen , https://github.com/aakhundov
2023-11-20 23:03:37 +00:00
PyTorch MergeBot
e3eca4c49f
Revert "Convert SymInts to SymFloats with SymPy ( #113683 )"
...
This reverts commit 0ec66b3be5 .
Reverted https://github.com/pytorch/pytorch/pull/113683 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing in trunk 0ec66b3be5 , probably a landrace as this is not failing on your PR ([comment](https://github.com/pytorch/pytorch/pull/113683#issuecomment-1817759130 ))
2023-11-19 06:09:15 +00:00
Isuru Fernando
0ec66b3be5
Convert SymInts to SymFloats with SymPy ( #113683 )
...
Fixes #109365
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113683
Approved by: https://github.com/ezyang
2023-11-18 22:18:24 +00:00
Oguz Ulgen
11857e9a64
[Inductor] Allow autotuned argument to be anywhere in the argument list ( #114002 )
...
Prior to this PR, autotuned arguments could only appear at the end of the argument list. This is an Inductor limitation, not a Triton limitation. Fixing it allows more MRS kernels to use user-defined Triton kernels; a sketch of the newly supported signature shape follows.
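Illustrative only (the kernel and names are hypothetical, not from the PR): a user-defined Triton kernel whose autotuned `BLOCK` constexpr sits in the middle of the signature rather than at the end.
```python
import triton
import triton.language as tl

@triton.autotune(
    configs=[triton.Config({"BLOCK": 64}), triton.Config({"BLOCK": 128})],
    key=["n_elements"],
)
@triton.jit
def add_kernel(x_ptr, BLOCK: tl.constexpr, y_ptr, out_ptr, n_elements):
    # BLOCK is filled in by the autotuner but is not the last parameter
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)
```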
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114002
Approved by: https://github.com/aakhundov
ghstack dependencies: #113967
2023-11-18 18:19:32 +00:00
Oguz Ulgen
e0c3936843
[Inductor] Support ReinterpretView in inductor codegen ( #113967 )
...
Add support for ReinterpretView in Inductor so that jagged MRS kernels can use native Triton kernels.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113967
Approved by: https://github.com/aakhundov
2023-11-18 18:19:32 +00:00
Oguz Ulgen
a450c784da
[AotAutograd] Move mutations hidden from autograd in graph ( #113454 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113454
Approved by: https://github.com/bdhirsh
2023-11-17 22:47:06 +00:00
vfdev-5
a56af02913
[dynamo] Added support for is_contiguous with dynamic shapes ( #113645 )
...
Description:
- Added support for `x.is_contiguous()` with dynamic shapes.
On `main`, the following code produces a graph break:
```python
import torch

@torch.compile(backend="eager", dynamic=True, fullgraph=True)
def f(x):
    if x.is_contiguous():
        return x
    else:
        return 0

x = torch.randn(13, 14)
f(x)
```
with the error message:
```
  File "pytorch/torch/_dynamo/variables/builder.py", line 1541, in wrap_fx_proxy_cls
    unimplemented(
  File "pytorch/torch/_dynamo/exc.py", line 193, in unimplemented
    raise Unsupported(msg)
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor bool call_method is_contiguous

from user code:
  File "check_is_contig_dynamic_true.py", line 37, in f
    if x.is_contiguous():
```
This PR fixes the issue.
```
TORCH_COMPILE_DEBUG=1 python check_is_contig_dynamic_true.py
[2023-11-14 15:49:04,399] [0/0] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing f check_is_contig_dynamic_true.py:34
[2023-11-14 15:49:04,403] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:34 in f ()
[2023-11-14 15:49:04,403] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] @torch.compile(backend="eager", dynamic=True, fullgraph=True)
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:37 in f (f)
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] if x.is_contiguous():
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR is_contiguous [LazyVariableTracker()]
[2023-11-14 15:49:04,804] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input L_x_ L['x']
[2023-11-14 15:49:04,805] [0/0] torch._dynamo.variables.builder: [DEBUG] wrap_to_fake L['x'] (5, 4) [<DimDynamic.DUCK: 1>, <DimDynamic.DUCK: 1>] [None, None]
[2023-11-14 15:49:04,839] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s0 L['x'].size()[0]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s1 L['x'].size()[1]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s2 L['x'].stride()[0]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s1 L['x'].stride()[1]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE CALL_FUNCTION 0 [GetAttrVariable(TensorVariable(), is_contiguous)]
[2023-11-14 15:49:04,843] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_JUMP_IF_FALSE 12 [ConstantVariable(bool)]
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:42 in f (f)
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] return 0
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 0 []
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [ConstantVariable(int)]
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.convert_frame: [DEBUG] Skipping frame because no content in function call f check_is_contig_dynamic_true.py 34
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.convert_frame: [DEBUG] No graph captured with one_graph=True
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] TorchDynamo compilation metrics:
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] Function Runtimes (s)
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] ------------------------------- --------------
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] _compile.<locals>.compile_inner 1.2083
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113645
Approved by: https://github.com/lezcano
2023-11-17 12:32:38 +00:00
Brian Hirsh
cebad9867b
graph break on intermediate leaves that require grad ( #113277 )
...
Fixes https://github.com/pytorch/pytorch/issues/90552 . This is a simpler fix that detects the situation where AOTAutograd can't create a proper backward graph and graph breaks instead. This was technically a silent correctness issue before.
This PR tries to always graph break when we see a factory function that returns a tensor requiring grad. I check this by seeing if the op returned a `TensorVariable` in dynamo, and whether one of the input arguments was a `requires_grad=True` kwarg. I think this is high-fidelity enough, and I'm also hoping that this is uncommon enough that a graph break is reasonable here.
The fix to avoid the graph break in user land is also easy: instantiate your tensor outside of the compiled region and plumb it in, as in the sketch below.
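A hedged illustration of both halves (the backend choice is an assumption; this is not the PR's test code):
```python
import torch

@torch.compile(backend="aot_eager")
def creates_leaf(x):
    # a factory call with requires_grad=True inside the compiled region now
    # graph breaks instead of producing a silently wrong backward
    w = torch.ones(3, requires_grad=True)
    return (x * w).sum()

@torch.compile(backend="aot_eager", fullgraph=True)
def takes_leaf(x, w):
    # workaround: create the leaf outside the compiled region and pass it in
    return (x * w).sum()

x = torch.randn(3)
creates_leaf(x)
takes_leaf(x, torch.ones(3, requires_grad=True))
```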
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113277
Approved by: https://github.com/eellison
ghstack dependencies: #113267 , #113416 , #113584
2023-11-16 02:47:45 +00:00
PyTorch MergeBot
1e60174891
Revert "[dynamo] Add run_inductor_tests entrypoint ( #113278 )"
...
This reverts commit b00311ce9e .
Reverted https://github.com/pytorch/pytorch/pull/113278 on behalf of https://github.com/huydhn due to Sorry for reverting your stack, but it is failing to list test internally with buck2 ([comment](https://github.com/pytorch/pytorch/pull/113278#issuecomment-1811646325 ))
2023-11-15 01:19:48 +00:00
Ken Jin
70064ac416
[Dynamo] Match closures by code ID ( #109427 )
...
Closes https://github.com/pytorch/pytorch/issues/107866
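Roughly the property this relies on (a plain-Python sketch, not the PR's implementation): closures produced by the same `def` are distinct function objects but share one code object, so matching on the code object's identity is stable where matching on the function's identity is not.
```python
def make_closure(k):
    def inner(x):
        return x + k
    return inner

f, g = make_closure(1), make_closure(2)
assert f is not g                 # a new function object per call
assert f.__code__ is g.__code__   # but the same underlying code object
```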
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109427
Approved by: https://github.com/ezyang , https://github.com/jansel
2023-11-12 08:20:14 +00:00