PyTorch MergeBot
30094bedbc
Revert "[dynamo][dicts] Support hasattr on dicts ( #134590 )"
...
This reverts commit d23c0150f3 .
Reverted https://github.com/pytorch/pytorch/pull/134590 on behalf of https://github.com/anijain2305 due to causing trunk CI failures ([comment](https://github.com/pytorch/pytorch/pull/134590#issuecomment-2313705582 ))
2024-08-27 22:52:52 +00:00
Animesh Jain
d23c0150f3
[dynamo][dicts] Support hasattr on dicts ( #134590 )
...
Fixes - https://github.com/pytorch/pytorch/issues/134577
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134590
Approved by: https://github.com/Skylion007
ghstack dependencies: #134039
2024-08-27 20:43:40 +00:00
Yanbo Liang
7868b65c4d
[Dynamo] Support dict.setdefault ( #134083 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134083
Approved by: https://github.com/williamwen42
2024-08-22 01:57:33 +00:00
Animesh Jain
bd0db490bf
[dynamo][set] Fix EQUALS_MATCH guard for constant sets and lists ( #134016 )
...
Fixes https://github.com/pytorch/pytorch/issues/133509
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134016
Approved by: https://github.com/laithsakka , https://github.com/jansel
ghstack dependencies: #133742
2024-08-21 12:41:52 +00:00
Isuru Fernando
e554f71d7e
Implement filter in dynamo ( #131674 )
...
Fixes https://github.com/pytorch/pytorch/issues/128944
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131674
Approved by: https://github.com/amjames , https://github.com/jansel
2024-08-14 14:54:13 +00:00
Yanbo Liang
9de023d44d
[Dynamo] Make torch.Size can be reconstructed by LOAD_CONST ( #133342 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133342
Approved by: https://github.com/mlazos , https://github.com/jansel
2024-08-13 23:18:38 +00:00
xinyu-intel
5ae979ab10
[Dynamo] Support torch.autograd._is_checkpoint_valid ( #132611 )
...
Hi, we got `torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor bool call_function <function _is_checkpoint_valid at 0x7f0b0d22e290>` while tracing the activation [checkpointing function in deepspeed](324ee65cb0/deepspeed/runtime/activation_checkpointing/checkpointing.py (L630) ). Consider adding it to the constant-folding list, similar to https://github.com/pytorch/pytorch/pull/126196
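A minimal sketch of the kind of call site this unblocks (an assumed repro, not taken from the PR or from deepspeed):
```python
# Assumed repro: branching on torch.autograd._is_checkpoint_valid() inside a
# compiled region used to raise "torch.* op returned non-Tensor bool";
# constant-folding it lets tracing continue.
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    if torch.autograd._is_checkpoint_valid():
        return x + 1
    return x - 1

print(fn(torch.ones(2)))
```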
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132611
Approved by: https://github.com/anijain2305 , https://github.com/williamwen42
2024-08-08 04:05:08 +00:00
Animesh Jain
194ec49d27
[dynamo][lists][stable diffusion] Do not add source on list slice ( #132912 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132912
Approved by: https://github.com/williamwen42
ghstack dependencies: #132806 , #132899
2024-08-08 02:23:07 +00:00
William Wen
01cdcbf7c8
[dynamo] revert map/zip iterator related changes ( #132528 )
...
Need to revert due to internal hangs: S437700
This reverts commit b6c1490cc0 .
Revert "[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725 )"
This reverts commit 2576dbbc35 .
Revert "[dynamo] add itertools repeat/count bytecode reconstruction (#131716 )"
This reverts commit 35b4de32fa .
Revert "[dynamo] add lazy IteratorVariable implementations for map and zip (#131413 )"
This reverts commit 7d282d8755 .
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132528
Approved by: https://github.com/ZainRizvi
2024-08-04 18:46:55 +00:00
PyTorch MergeBot
0a25666f92
Revert "[dynamo] revert map/zip iterator related changes ( #132528 )"
...
This reverts commit e81e74ca6c .
Reverted https://github.com/pytorch/pytorch/pull/132528 on behalf of https://github.com/ZainRizvi due to This stack entered a weird state in the diff train. Reverting and relanding to clean the state ([comment](https://github.com/pytorch/pytorch/pull/132528#issuecomment-2267628475 ))
2024-08-04 18:26:09 +00:00
William Wen
e81e74ca6c
[dynamo] revert map/zip iterator related changes ( #132528 )
...
Need to revert due to internal hangs: S437700
This reverts commit b6c1490cc0 .
Revert "[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725 )"
This reverts commit 2576dbbc35 .
Revert "[dynamo] add itertools repeat/count bytecode reconstruction (#131716 )"
This reverts commit 35b4de32fa .
Revert "[dynamo] add lazy IteratorVariable implementations for map and zip (#131413 )"
This reverts commit 7d282d8755 .
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132528
Approved by: https://github.com/ZainRizvi
2024-08-02 19:40:57 +00:00
Oguz Ulgen
920f0426ae
Add None return type to init -- tests rest ( #132376 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132376
Approved by: https://github.com/jamesjwu
ghstack dependencies: #132335 , #132351 , #132352
2024-08-01 15:44:51 +00:00
datagero
bdd7a0322d
[Dynamo] Fix - str handler for UserDefinedObjectVariable ( #130506 )
...
Fixes #130301
Adjusted the `call_str` method to handle `str()` conversion for `UserDefinedObjectVariable`.
Re-submitted from a clean branch due to unrelated test errors.
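A hypothetical repro along these lines (class name and values are illustrative, not from the PR):
```python
# Assumed example: str() on a plain user-defined object inside a compiled
# function goes through the new call_str handling instead of graph-breaking.
import torch

class Config:
    def __str__(self):
        return "config"

cfg = Config()

@torch.compile(backend="eager")
def fn(x):
    return x + len(str(cfg))

print(fn(torch.zeros(1)))  # tensor([6.])
```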
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130506
Approved by: https://github.com/oulgen , https://github.com/anijain2305
2024-07-31 16:39:59 +00:00
Animesh Jain
03e058189e
[dynamo] Support dict unpack of MutableMapping objects ( #131961 )
...
Fixes https://github.com/pytorch/pytorch/issues/128067
The basic functionality was already introduced earlier. This just ensures
that we support UserDefinedObjectVariable.
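A hedged sketch of what this enables (the class and values are made up for illustration):
```python
# Assumed repro: **-unpacking an object derived from collections.abc.MutableMapping
# inside a compiled function.
import torch
from collections.abc import MutableMapping

class MyMap(MutableMapping):
    def __init__(self, data):
        self._d = dict(data)
    def __getitem__(self, key):
        return self._d[key]
    def __setitem__(self, key, value):
        self._d[key] = value
    def __delitem__(self, key):
        del self._d[key]
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)

m = MyMap({"scale": 3})

@torch.compile(backend="eager")
def fn(x):
    d = {**m}  # dict unpack of a MutableMapping-derived object
    return x * d["scale"]

print(fn(torch.ones(2)))  # tensor([3., 3.])
```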
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131961
Approved by: https://github.com/williamwen42 , https://github.com/mlazos , https://github.com/yanboliang
ghstack dependencies: #131827 , #131956
2024-07-30 05:49:58 +00:00
William Wen
b6c1490cc0
[dynamo] make more unpack_var_sequence calls forced ( #132069 )
...
Fixes [T197204962](https://www.internalfb.com/intern/tasks/?t=197204962 ) (example failure: https://www.internalfb.com/intern/testinfra/diagnostics/11540474088277914.281475138576374.1722221031/ )
Added tests contain a simple repro for the observed failure (`test_map_unpack_vars`).
Also fixes https://github.com/pytorch/pytorch/issues/132044
Differential Revision: [D60420335](https://our.internmc.facebook.com/intern/diff/D60420335 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132069
Approved by: https://github.com/anijain2305
2024-07-30 02:30:08 +00:00
Chengji Yao
d47c470f47
[dynamo] implement var_getattr in UserFunctionVariable ( #130413 )
...
This PR addresses `getattr` on a `UserFunctionVariable`. Although this usage is uncommon, it does appear in [Megatron's code](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/tensor_parallel/layers.py#L635 ).
```
def linear_with_grad_accumulation_and_async_allreduce(...):
    ....
    if not linear_with_grad_accumulation_and_async_allreduce.warned:
        ....
    ....

linear_with_grad_accumulation_and_async_allreduce.warned = False
```
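A minimal sketch of the same pattern (backend and names are assumptions for illustration):
```python
# Assumed repro: reading an attribute stored on a plain Python function object
# from inside a compiled region exercises var_getattr on UserFunctionVariable.
import torch

def scaled(x):
    return x * scaled.factor  # attribute access on the function object

scaled.factor = 2

@torch.compile(backend="eager")
def fn(x):
    return scaled(x) + 1

print(fn(torch.ones(3)))  # tensor([3., 3., 3.])
```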
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130413
Approved by: https://github.com/yanboliang
2024-07-29 08:29:59 +00:00
Xuehai Pan
918ece4f4d
[BE][Easy][11/19] enforce style for empty lines in import segments in test/dy*/ ( #129762 )
...
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501 . Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129762
Approved by: https://github.com/anijain2305
2024-07-27 17:43:53 +00:00
William Wen
2576dbbc35
[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate ( #131725 )
...
Fixes https://github.com/pytorch/pytorch/issues/112794 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131725
Approved by: https://github.com/anijain2305
ghstack dependencies: #131413 , #131716
2024-07-26 17:17:09 +00:00
William Wen
35b4de32fa
[dynamo] add itertools repeat/count bytecode reconstruction ( #131716 )
...
Also fix bugs in the count iterator variable implementation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131716
Approved by: https://github.com/anijain2305
ghstack dependencies: #131413
2024-07-26 17:17:09 +00:00
Yanbo Liang
e76e566cfb
[Dynamo] Support zip_longest ( #131497 )
...
Fixes #121348
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131497
Approved by: https://github.com/mlazos , https://github.com/jansel , https://github.com/zou3519
2024-07-26 14:06:10 +00:00
William Wen
7d282d8755
[dynamo] add lazy IteratorVariable implementations for map and zip ( #131413 )
...
Fixes https://github.com/pytorch/pytorch/issues/130750 .
Repro of lazy/eager `map` discrepancy without `islice`:
```python
def fn(a, b):
    y = 1

    def f(x):
        nonlocal y
        y += 1
        return x

    l = list(zip([a, b], map(f, [1, 2, 3, 4])))
    return a + y
```
The major change is that we implement `MapVariable` and `ZipVariable` based on `IteratorVariable`. Before, `map` and `zip` were being traced by immediately unpacking the result as a `TupleVariable`, which is wrong in cases such as the example above.
`MapVariable`s are not allowed to be unpacked, while `ZipVariable`s can only be unpacked if all of their iterables can also be unpacked.
We also add new `[has_]force_unpack_var_sequence` methods to `VariableTracker` for the case where it is safe to unpack the entire sequence lazily, e.g., when building a list from a map (i.e. `list(map(f, ...))`).
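A short usage sketch of the forced-unpack path (assumed behavior, not a test from the PR):
```python
# Assumed example: map stays lazy under compilation, but list() forces the
# sequence to unpack via the new force_unpack_var_sequence path.
import torch

def g(x, xs):
    doubled = map(lambda v: v * 2, xs)  # not unpacked eagerly
    return x + sum(list(doubled))       # list(...) forces unpacking

compiled = torch.compile(g, backend="eager", fullgraph=True)
print(compiled(torch.zeros(1), [1, 2, 3]))  # tensor([12.])
```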
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131413
Approved by: https://github.com/anijain2305
2024-07-26 10:47:38 +00:00
Yidi Wu
ffc6bf8149
[dynamo] lazily guard and specialize on the symint when used in f-string. ( #131529 )
...
Fixes https://github.com/pytorch/pytorch/issues/103602 .
This PR implements the idea from the issue above: "if someone creates a string and then ends up not using it, we would prefer to NOT have specialized." Specifically, we create a lazy variable tracker instead of a ConstantVariable when handling FORMAT_VALUE, and when the lazy variable tracker is realized (i.e. it is actually used), we create a ConstantVariable, so specialization/guarding happens at realization time.
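A hypothetical illustration of the behavior (the guard/recompile outcome is an assumption, not quoted from the PR):
```python
# Assumed example: with dynamic shapes, formatting a symbolic size into an
# f-string that is never used should not specialize/guard on the concrete value.
import torch

@torch.compile(backend="eager", dynamic=True)
def fn(x):
    msg = f"dim0 = {x.shape[0]}"  # built but unused -> lazily tracked, no specialization
    return x * 2

fn(torch.randn(4))
fn(torch.randn(8))  # ideally served by the same dynamic graph, no recompile
```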
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131529
Approved by: https://github.com/ezyang
2024-07-25 16:16:34 +00:00
Animesh Jain
e2b941a1b4
[dynamo] Rename TENSOR_ALIASING to OBJECT_ALIASING. Permit OBJECT_ALIASING for dict guards ( #131480 )
...
Fixes https://github.com/pytorch/pytorch/issues/129667
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131480
Approved by: https://github.com/williamwen42
ghstack dependencies: #131347 , #131367 , #131378 , #131389 , #131405
2024-07-24 00:06:53 +00:00
Animesh Jain
6bbef2a06b
[dynamo] Support set on KeysView ( #131389 )
...
Fixes https://github.com/pytorch/pytorch/issues/129664
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131389
Approved by: https://github.com/mlazos
ghstack dependencies: #131347 , #131367 , #131378
2024-07-23 14:15:26 +00:00
Animesh Jain
e7c5e06772
[dynamo] Support __contains__ on __dict__ on UserDefinedClassVariable ( #131378 )
...
Fixes https://github.com/pytorch/pytorch/issues/129665
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131378
Approved by: https://github.com/mlazos
ghstack dependencies: #131347 , #131367
2024-07-23 14:15:26 +00:00
Animesh Jain
0bc5e26067
[dynamo] Support dict conversion of objects derived from MutableMapping ( #131367 )
...
Fixes - https://github.com/pytorch/pytorch/issues/129662
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131367
Approved by: https://github.com/williamwen42
ghstack dependencies: #131347
2024-07-23 14:15:20 +00:00
Animesh Jain
a944cce5b8
[dynamo] Support if callable on list ( #131347 )
...
Fixes https://github.com/pytorch/pytorch/issues/130720
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131347
Approved by: https://github.com/williamwen42 , https://github.com/mlazos
2024-07-23 14:15:15 +00:00
Alex Dennis
7d4f50de19
dynamo add support for defaultdict(set) ( #130745 )
...
Fixes #130554
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130745
Approved by: https://github.com/Skylion007
2024-07-15 22:23:33 +00:00
PyTorch MergeBot
dff9d68f18
Revert "Fix names conflict when lifting ( #129817 )"
...
This reverts commit 53cf46b8c6 .
Reverted https://github.com/pytorch/pytorch/pull/129817 on behalf of https://github.com/clee2000 due to Failing inductor/test_flex_attention.py https://github.com/pytorch/pytorch/actions/runs/9940532858/job/27478084137 74da2a467f Sorry for the churn, possibly a landrace? ([comment](https://github.com/pytorch/pytorch/pull/129817#issuecomment-2229519886 ))
2024-07-15 22:08:45 +00:00
Zhanghan Wang
53cf46b8c6
Fix names conflict when lifting ( #129817 )
...
## Bug description
When the pending args to be lifted [here](58f346c874/torch/_dynamo/output_graph.py (L1866) ) share the same base name, like `contiguous` and `contiguous_1`, the call into [create_graph_input](58f346c874/torch/_dynamo/output_graph.py (L2081) ) can end up creating a name ([here](58f346c874/torch/fx/graph.py (L1008) )) that overwrites an arg to be lifted, producing a wrong graph output.
## Reproducing
Below is a reproducible example:
```python
import logging
from typing import List

import torch
from functorch.compile import aot_module_simplified, make_boxed_func

@torch.library.custom_op("mylib::somefunc_forward", mutates_args=())
def somefunc_forward(
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    return torch.ones_like(input_)

@somefunc_forward.register_fake
def _(input_, shape, weight):
    return torch.empty_like(input_)

@torch.library.custom_op("mylib::somefunc_backward", mutates_args=())
def somefunc_backward(
    grad_output: torch.Tensor,
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    print(f"backward.{grad_output.shape=}")
    print(f"backward.{input_.shape=}")
    print(f"backward.{weight.shape=}")
    print(f"backward.{shape=}")
    assert list(weight.shape) == shape
    return torch.ones_like(weight)

@somefunc_backward.register_fake
def _(grad_output, input_, weight, shape):
    return torch.empty_like(weight)

def a_func(grad_output, input_, weight_, shape):
    return torch.ones_like(input_.sum() * weight_)

class SomeFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, normalized_shape):
        ctx.normalized_shape = normalized_shape
        input_ = input.contiguous()
        weight_ = weight.contiguous()
        output = somefunc_forward(input_, weight_, ctx.normalized_shape)
        ctx.save_for_backward(input_, weight_)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input_, weight_ = ctx.saved_tensors
        # grad_weight = a_func(grad_output, input_, weight_, ctx.normalized_shape)
        grad_weight = somefunc_backward(
            grad_output.contiguous(),
            input_,
            weight_,
            ctx.normalized_shape,
        )
        return None, grad_weight, None

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(7))

    def forward(self, x):
        return SomeFunc.apply(x, self.weight, [7])

model = MyModel()
torch._logging.set_logs(dynamo=logging.DEBUG, aot=logging.DEBUG, graph_code=True)

def aot_print_backend(gm, sample_inputs):
    # Forward compiler capture
    def fw(gm, sample_inputs):
        print(f"----- fw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Backward compiler capture
    def bw(gm, sample_inputs):
        print(f"----- bw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Call AOTAutograd
    gm_forward = aot_module_simplified(
        gm, sample_inputs, fw_compiler=fw, bw_compiler=bw
    )
    return gm_forward

model = torch.compile(
    model,
    backend=aot_print_backend,
    dynamic=False,
)
out = model(torch.rand((128, 4, 7)))
out.mean().backward()
```
I can see logs showing the calls into `create_graph_input`:
```log
V0629 02:08:46.839914 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous (none)
V0629 02:08:46.839998 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous_1 (none)
```
And the generated backward graph looks like:
```log
class GraphModule(torch.nn.Module):
    def forward(self, function_ctx, somefunc_forward_default: "f32[128, 4, 7]", contiguous: "f32[128, 4, 7]", contiguous_1: "f32[7]"):
        contiguous_1 = contiguous
        contiguous_2 = contiguous_1

        # No stacktrace found for following nodes
        _set_grad_enabled = torch._C._set_grad_enabled(False)

        # File: /Users/bytedance/testtorch/test_custom_op_bug.py:61 in backward, code: grad_output.contiguous(),
        contiguous: "f32[128, 4, 7]" = somefunc_forward_default.contiguous(); somefunc_forward_default = None

        # File: /opt/tiger/pytorch/torch/_library/custom_ops.py:506 in __call__, code: return self._opoverload(*args, **kwargs)
        somefunc_backward_default: "f32[7]" = torch.ops.mylib.somefunc_backward.default(contiguous, contiguous_1, contiguous_2, [7]); contiguous = contiguous_1 = contiguous_2 = None

        # No stacktrace found for following nodes
        _set_grad_enabled_1 = torch._C._set_grad_enabled(True)
        return (None, somefunc_backward_default)
```
The original `somefunc_backward` takes `grad_output`, `input_`, `weight`, and `shape` as inputs, where `weight` should have shape `torch.Size([7])`. However, in the graph, `contiguous_1` and `contiguous_2` are assigned from `contiguous`, which leads to the assertion failure I added in `somefunc_backward`.
## Environment
```log
Collecting environment information...
PyTorch version: 2.5.0a0+git0b7e8df
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.9.19 (main, May 6 2024, 14:39:30) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] optree==0.11.0
[pip3] torch==2.5.0a0+git0b7e8df
[pip3] torchgraph==0.0.1
[conda] numpy 2.0.0 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.5.0a0+git0b7e8df dev_0 <develop>
[conda] torchgraph 0.0.1 dev_0 <develop>
```
## How to fix?
I put in a naive fix that adds the potential args to lift into `used_names`. It touches private variables; I will clean that up if this issue makes sense to you.
@zou3519 @oulgen
Co-authored-by: rzou <zou3519@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129817
Approved by: https://github.com/zou3519
2024-07-15 18:49:12 +00:00
PyTorch MergeBot
1e897a0ca4
Revert "Fix names conflict when lifting ( #129817 )"
...
This reverts commit 74da2a467f .
Reverted https://github.com/pytorch/pytorch/pull/129817 on behalf of https://github.com/clee2000 due to broke dynamo/test_inline_inbuilt_nn_modules.py https://github.com/pytorch/pytorch/actions/runs/9940532858/job/27461141919 74da2a467f . Test passed on PR, possibly a landrace? ([comment](https://github.com/pytorch/pytorch/pull/129817#issuecomment-2228993570 ))
2024-07-15 17:09:52 +00:00
Zhanghan Wang
74da2a467f
Fix names conflict when lifting ( #129817 )
...
## Bug description
When the pending args to be lifted [here](58f346c874/torch/_dynamo/output_graph.py (L1866) ) share the same base name, like `contiguous` and `contiguous_1`, the call into [create_graph_input](58f346c874/torch/_dynamo/output_graph.py (L2081) ) can end up creating a name ([here](58f346c874/torch/fx/graph.py (L1008) )) that overwrites an arg to be lifted, producing a wrong graph output.
## Reproducing
Below is a reproducible example:
```python
import logging
from typing import List

import torch
from functorch.compile import aot_module_simplified, make_boxed_func

@torch.library.custom_op("mylib::somefunc_forward", mutates_args=())
def somefunc_forward(
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    return torch.ones_like(input_)

@somefunc_forward.register_fake
def _(input_, shape, weight):
    return torch.empty_like(input_)

@torch.library.custom_op("mylib::somefunc_backward", mutates_args=())
def somefunc_backward(
    grad_output: torch.Tensor,
    input_: torch.Tensor,
    weight: torch.Tensor,
    shape: List[int],
) -> torch.Tensor:
    print(f"backward.{grad_output.shape=}")
    print(f"backward.{input_.shape=}")
    print(f"backward.{weight.shape=}")
    print(f"backward.{shape=}")
    assert list(weight.shape) == shape
    return torch.ones_like(weight)

@somefunc_backward.register_fake
def _(grad_output, input_, weight, shape):
    return torch.empty_like(weight)

def a_func(grad_output, input_, weight_, shape):
    return torch.ones_like(input_.sum() * weight_)

class SomeFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight, normalized_shape):
        ctx.normalized_shape = normalized_shape
        input_ = input.contiguous()
        weight_ = weight.contiguous()
        output = somefunc_forward(input_, weight_, ctx.normalized_shape)
        ctx.save_for_backward(input_, weight_)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input_, weight_ = ctx.saved_tensors
        # grad_weight = a_func(grad_output, input_, weight_, ctx.normalized_shape)
        grad_weight = somefunc_backward(
            grad_output.contiguous(),
            input_,
            weight_,
            ctx.normalized_shape,
        )
        return None, grad_weight, None

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(7))

    def forward(self, x):
        return SomeFunc.apply(x, self.weight, [7])

model = MyModel()
torch._logging.set_logs(dynamo=logging.DEBUG, aot=logging.DEBUG, graph_code=True)

def aot_print_backend(gm, sample_inputs):
    # Forward compiler capture
    def fw(gm, sample_inputs):
        print(f"----- fw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Backward compiler capture
    def bw(gm, sample_inputs):
        print(f"----- bw")
        gm.print_readable()
        return make_boxed_func(gm.forward)

    # Call AOTAutograd
    gm_forward = aot_module_simplified(
        gm, sample_inputs, fw_compiler=fw, bw_compiler=bw
    )
    return gm_forward

model = torch.compile(
    model,
    backend=aot_print_backend,
    dynamic=False,
)
out = model(torch.rand((128, 4, 7)))
out.mean().backward()
```
I can see logs showing the calls into `create_graph_input`:
```log
V0629 02:08:46.839914 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous (none)
V0629 02:08:46.839998 8200981504 torch/_dynamo/output_graph.py:2042] [0/0] create_graph_input contiguous_1 (none)
```
And the generated backward graph looks like:
```log
class GraphModule(torch.nn.Module):
    def forward(self, function_ctx, somefunc_forward_default: "f32[128, 4, 7]", contiguous: "f32[128, 4, 7]", contiguous_1: "f32[7]"):
        contiguous_1 = contiguous
        contiguous_2 = contiguous_1

        # No stacktrace found for following nodes
        _set_grad_enabled = torch._C._set_grad_enabled(False)

        # File: /Users/bytedance/testtorch/test_custom_op_bug.py:61 in backward, code: grad_output.contiguous(),
        contiguous: "f32[128, 4, 7]" = somefunc_forward_default.contiguous(); somefunc_forward_default = None

        # File: /opt/tiger/pytorch/torch/_library/custom_ops.py:506 in __call__, code: return self._opoverload(*args, **kwargs)
        somefunc_backward_default: "f32[7]" = torch.ops.mylib.somefunc_backward.default(contiguous, contiguous_1, contiguous_2, [7]); contiguous = contiguous_1 = contiguous_2 = None

        # No stacktrace found for following nodes
        _set_grad_enabled_1 = torch._C._set_grad_enabled(True)
        return (None, somefunc_backward_default)
```
The original `somefunc_backward` takes `grad_output`, `input_`, `weight`, and `shape` as inputs, where `weight` should have shape `torch.Size([7])`. However, in the graph, `contiguous_1` and `contiguous_2` are assigned from `contiguous`, which leads to the assertion failure I added in `somefunc_backward`.
## Environment
```log
Collecting environment information...
PyTorch version: 2.5.0a0+git0b7e8df
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.9.19 (main, May 6 2024, 14:39:30) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.0
[pip3] optree==0.11.0
[pip3] torch==2.5.0a0+git0b7e8df
[pip3] torchgraph==0.0.1
[conda] numpy 2.0.0 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.5.0a0+git0b7e8df dev_0 <develop>
[conda] torchgraph 0.0.1 dev_0 <develop>
```
## How to fix?
I put in a naive fix that adds the potential args to lift into `used_names`. It touches private variables; I will clean that up if this issue makes sense to you.
@zou3519 @oulgen
Co-authored-by: rzou <zou3519@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129817
Approved by: https://github.com/zou3519
2024-07-15 13:41:46 +00:00
awayzjj
dcaa111dc8
support intersection by polyfill ( #130672 )
...
Fixes https://github.com/pytorch/pytorch/issues/130557
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130672
Approved by: https://github.com/anijain2305
2024-07-14 10:44:26 +00:00
Tom Ritchford
b0a597fcb4
Fix #121334 : graph break on constant method call ( #130158 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130158
Approved by: https://github.com/lezcano
2024-07-12 17:34:46 +00:00
Xuehai Pan
973037be6a
[BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() ( #130199 )
...
This PR changes empty collection factory calls to Python literals:
- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`
The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:
```bash
$ python3 -m dis - <<EOS
import collections
d1 = {}
d2 = dict()
dict = collections.OrderedDict
d3 = dict()
EOS
```
```text
0 0 RESUME 0
1 2 LOAD_CONST 0 (0)
4 LOAD_CONST 1 (None)
6 IMPORT_NAME 0 (collections)
8 STORE_NAME 0 (collections)
3 10 BUILD_MAP 0
12 STORE_NAME 1 (d1)
4 14 PUSH_NULL
16 LOAD_NAME 2 (dict)
18 CALL 0
26 STORE_NAME 3 (d2)
6 28 LOAD_NAME 0 (collections)
30 LOAD_ATTR 8 (OrderedDict)
50 STORE_NAME 2 (dict)
7 52 PUSH_NULL
54 LOAD_NAME 2 (dict)
56 CALL 0
64 STORE_NAME 5 (d3)
66 RETURN_CONST 1 (None)
```
The dict literal `{}` only has one bytecode `BUILD_MAP`, while the factory call `dict()` has three `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example of replacing with `OrderedDict` above).
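A small runnable illustration of the safety point (an assumed example, mirroring the `OrderedDict` rebinding above):
```python
# Rebinding the name `dict` changes what dict() constructs, while the literal
# {} always builds a plain dict.
from collections import OrderedDict

dict = OrderedDict   # shadow the builtin, as in the dis example above
d_literal = {}       # plain dict, unaffected by the shadowing
d_factory = dict()   # now an OrderedDict
print(type(d_literal).__name__, type(d_factory).__name__)  # dict OrderedDict
```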
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
Animesh Jain
6b5fbc544e
[dynamo] Use polyfill to trace through the attributes of torch.jit.* and lru_cache_wrapper ( #128336 )
...
Earlier we were taking the VT for `obj` and then monkeypatching that `vt.source` to be `obj._torchdynamo_inline`. If one accessed `obj.attr_a`, this caused problems because Dynamo would then look it up under `obj._torchdynamo_inline.attr_a`. This PR makes it more functional, so that we have different VTs for `obj` and `obj._torchdynamo_inline`.
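A hedged sketch of the kind of code this covers (the helper and backend are assumptions, not from the PR):
```python
# Assumed example: calling an lru_cache-wrapped helper from a compiled function;
# Dynamo now traces through the wrapper's attributes (e.g. __wrapped__) instead
# of monkeypatching the wrapper's source.
import functools
import torch

@functools.lru_cache(maxsize=None)
def offset(n: int) -> int:
    return n + 1

@torch.compile(backend="eager")
def fn(x):
    return x + offset(3)

print(fn(torch.ones(2)))  # tensor([5., 5.])
```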
Fixes https://github.com/pytorch/pytorch/issues/93698
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128336
Approved by: https://github.com/jansel , https://github.com/yanboliang
ghstack dependencies: #129117
2024-06-21 07:44:44 +00:00
Laith Sakka
4c84af0f5d
Fix indexing and slicing of ranges in dynamo ( #128567 )
...
Fix https://github.com/pytorch/pytorch/issues/128520
Dynamo does not handle `range()[index]` or `range()[slice]` correctly. Right now it calls
the get_item function, which applies the subscript operation to the list `[start, end, step]`, which is completely unrelated to what is expected.
In Python, slicing a range yields another range, e.g.
`range(1, 10, 2)[1:4:1]` is `range(3, 9, 2)`
and `range(1, 10, 2)[::-1]` is `range(9, -1, -2)`.
This diff fixes index and slice application on range,
mimicking the implementation from https://github.com/python/cpython/blob/main/Objects/rangeobject.c
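A minimal sketch of the expected semantics under compilation (an assumed repro):
```python
# Assumed example: slicing a range inside a compiled function should yield
# another range with eager-Python semantics.
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    r = range(1, 10, 2)[1:4]  # range(3, 9, 2) -> 3, 5, 7
    return x + sum(r)

print(fn(torch.zeros(1)))  # tensor([15.])
```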
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128567
Approved by: https://github.com/anijain2305
2024-06-14 16:49:49 +00:00
PyTorch MergeBot
48a54146e7
Revert "[dynamo] Support ndarray.dtype attribute access ( #124490 )"
...
This reverts commit 4adee71155 .
Reverted https://github.com/pytorch/pytorch/pull/124490 on behalf of https://github.com/atalman due to Breaks internal builds ([comment](https://github.com/pytorch/pytorch/pull/124490#issuecomment-2152664749 ))
2024-06-06 14:21:29 +00:00
Andrew M. James
4adee71155
[dynamo] Support ndarray.dtype attribute access ( #124490 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124490
Approved by: https://github.com/lezcano
ghstack dependencies: #125717
2024-06-05 17:20:01 +00:00
laithsakka
029af29e6d
support operator.index function ( #127440 )
...
Fix https://github.com/pytorch/pytorch/issues/127426
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127440
Approved by: https://github.com/mlazos
ghstack dependencies: #126444 , #127146 , #127424
2024-05-30 22:44:18 +00:00
Andrew M. James
80a8fc07b2
[dynamo] Handle np.iinfo/finfo/dtype as input ( #124482 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124482
Approved by: https://github.com/lezcano
ghstack dependencies: #124481
2024-05-29 16:00:15 +00:00
Andrew M. James
ade075444f
[dynamo] Support numpy.dtype ( #124481 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124481
Approved by: https://github.com/lezcano
2024-05-29 14:45:14 +00:00
Yanbo Liang
da9bf77f0a
[Dynamo] Support SET_UPDATE ( #126243 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126243
Approved by: https://github.com/anijain2305 , https://github.com/Skylion007 , https://github.com/jansel
2024-05-16 20:05:34 +00:00
Yanbo Liang
f91cae461d
[Dynamo] SizeVariable supports hasattr ( #126222 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126222
Approved by: https://github.com/williamwen42 , https://github.com/anijain2305
2024-05-15 17:16:36 +00:00
Yanbo Liang
51ed4c46cf
[Dynamo] Supports torch._C._is_any_autocast_enabled ( #126196 )
...
Fixes #126026
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126196
Approved by: https://github.com/anijain2305
2024-05-15 03:16:13 +00:00
Yanbo Liang
bdaa9b2981
[Dynamo] Wrap set as SetVariable and support isdisjoint by polyfill ( #126046 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126046
Approved by: https://github.com/anijain2305 , https://github.com/jansel
2024-05-14 04:56:06 +00:00
Edward Z. Yang
ecd62746e3
Also pull size/stride info from example_value ( #125505 )
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125505
Approved by: https://github.com/jansel
2024-05-05 22:27:46 +00:00
Animesh Jain
1a0b247762
[dynamo] Bug fix for LOAD_GLOBAL and STORE_GLOBAL ( #125002 )
...
Earlier, globals of functions inlined from other files were not handled correctly: we were not tracking mutations on them, and they collided with same-named globals in the parent function. This PR overrides LOAD_GLOBAL/STORE_GLOBAL for the inline tx and tracks mutations on them separately.
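A hypothetical single-file sketch of the failure mode being fixed (the helper module is materialized on the fly so its globals live in a different module than the caller's; all names are made up):
```python
# Assumed repro: an inlined function from another module mutates its own
# module-level global, which must not collide with a same-named global here.
import sys
import types
import torch

helper = types.ModuleType("helper_module")
exec(
    "counter = 0\n"
    "def bump(x):\n"
    "    global counter\n"
    "    counter += 1\n"
    "    return x + counter\n",
    helper.__dict__,
)
sys.modules["helper_module"] = helper

counter = 100  # same name in this module; must stay independent

@torch.compile(backend="eager")
def fn(x):
    return helper.bump(x) + counter

print(fn(torch.zeros(1)))  # eager semantics: (0 + 1) + 100 = 101
```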
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125002
Approved by: https://github.com/jansel
ghstack dependencies: #125097 , #125107
2024-04-28 15:24:17 +00:00
YangQun1
91d565da0c
[dynamo] Add support for tensor's is_complex method ( #124927 )
...
This PR adds support for the tensor `is_complex()` method in Dynamo. Take the following code as an example:
```python
def test_tensor_is_complex(x):
    if x.is_complex():
        return x + 1
    else:
        return x - 1
```
Before this fix, the `is_complex()` call caused a graph break: "torch.* op returned non-Tensor bool call_method is_complex". After this fix, the graph break is avoided.
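A short usage sketch of the function above (compile backend and inputs are assumed):
```python
# Assumed usage: with the fix, both branches trace without a graph break.
import torch

compiled = torch.compile(test_tensor_is_complex, backend="eager", fullgraph=True)
print(compiled(torch.randn(3)))                      # real input    -> x - 1
print(compiled(torch.randn(3, dtype=torch.cfloat)))  # complex input -> x + 1
```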
Fixes #122692
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124927
Approved by: https://github.com/ezyang
2024-04-26 18:28:14 +00:00
Yanbo Liang
0d90d4d613
[Dynamo] Fix NamedTuple hasattr bug ( #124531 )
...
Fixes #124402
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124531
Approved by: https://github.com/jansel
2024-04-21 04:36:22 +00:00