Aaron Orenstein
57d8278ab9
pickler for GraphModule ( #141659 )
...
Pickling a GraphModule needs some special handling to wrap things that normally can't be pickled. Async compile, however, needs to pass GraphModules across a wire, so we need to be able to serialize them; this adds some helpers to enable that.
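A minimal sketch of the wrapping idea, using pickle's `reducer_override` hook (the helper names here are hypothetical, not the PR's actual API):
```python
import marshal
import pickle
import types


def _rebuild_fn(code_bytes, name):
    # Hypothetical helper: rebuild a function from its marshalled code object.
    return types.FunctionType(marshal.loads(code_bytes), globals(), name)


class WrappingPickler(pickle.Pickler):
    def reducer_override(self, obj):
        # Lambdas and local functions normally can't be pickled by reference;
        # wrap them (closure-free ones, for simplicity) so they can cross the
        # wire and be rebuilt on the other side.
        if isinstance(obj, types.FunctionType) and "<locals>" in obj.__qualname__:
            return (_rebuild_fn, (marshal.dumps(obj.__code__), obj.__name__))
        return NotImplemented  # fall back to normal pickling for everything else
```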
Differential Revision: [D68921318](https://our.internmc.facebook.com/intern/diff/D68921318 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141659
Approved by: https://github.com/jamesjwu
2025-01-31 05:34:28 +00:00
Pian Pawakapan
ffb424eab6
[dynamo/export] call local_scalar_dense when full() value is scalar tensor ( #144999 )
...
Fixes https://github.com/pytorch/pytorch/issues/144907
```
import torch
from torch.export import export

class Foo(torch.nn.Module):
    def forward(self, val):
        return torch.full((80, 2), val, dtype=torch.float32)

export(Foo(), args=(torch.tensor(1),))
```
When we have a `torch.full` call like above, where the fill value is a scalar Tensor and not a scalar value, the FX graph from `_dynamo.export()` contains a single node: the full op. We run into a `PendingUnbackedSymbolNotFound` error, because the `item()` call is implicit; the UnbackedSymInt is extracted but goes directly into the data of the output tensor value, and we're then unable to locate it when we try to compute unbacked bindings.
On the other hand, non-strict export doesn't face this, because an explicit `item()` (i.e. `local_scalar_dense`) node is inserted, and the unbacked binding is directly the example value of that node.
This adds a dynamo handler to imitate what happens in non-strict.
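A rough sketch of what the handler effectively does (illustrative only; the real change lives in dynamo's op handling):
```python
import torch

def full_with_tensor_fill(size, fill_value, **kwargs):
    # If the fill value is a 0-dim tensor, extract it explicitly so the
    # unbacked SymInt is bound to a visible local_scalar_dense node,
    # matching what non-strict export traces.
    if isinstance(fill_value, torch.Tensor):
        fill_value = fill_value.item()  # traces as aten._local_scalar_dense
    return torch.full(size, fill_value, **kwargs)
```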
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144999
Approved by: https://github.com/angelayi
2025-01-31 02:45:43 +00:00
Animesh Jain
e7bb608d02
[dynamo][dicts] Support construction of types.MappingProxyType ( #145994 )
...
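Illustratively, code along these lines can now be compiled (a sketch, not a test from the PR):
```python
import types

import torch

@torch.compile(fullgraph=True)
def f(x, d):
    mp = types.MappingProxyType(d)  # construction of a mappingproxy now traces
    return x * mp["k"]

f(torch.ones(3), {"k": 2})
```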
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145994
Approved by: https://github.com/StrongerXi , https://github.com/jansel
ghstack dependencies: #145986 , #145987
2025-01-31 00:47:31 +00:00
Animesh Jain
4665bc2cc0
[dynamo][functions] Support id on function ( #145987 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145987
Approved by: https://github.com/StrongerXi , https://github.com/jansel , https://github.com/mlazos
ghstack dependencies: #145986
2025-01-31 00:47:23 +00:00
Animesh Jain
56307dc370
[dynamo][dicts] Raise exception on pop ( #145986 )
...
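Presumably this makes the `KeyError` from a failed `pop` observable inside compiled code, e.g. (a sketch, assuming dynamo's exception handling covers it):
```python
import torch

@torch.compile(fullgraph=True)
def f(x, d):
    try:
        d.pop("missing")  # KeyError is raised and handled during tracing
    except KeyError:
        x = x + 1
    return x

f(torch.ones(3), {"k": 2})
```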
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145986
Approved by: https://github.com/Skylion007 , https://github.com/williamwen42 , https://github.com/StrongerXi , https://github.com/jansel
2025-01-31 00:47:13 +00:00
Aaron Orenstein
23695ea002
Fix dynamo use of list[int] in graph break ( #145554 )
...
This reintroduces the change backed out by #145393 and fixes the underlying problem.
Although using a BuiltinVariable was better than nothing when we saw a GenericAlias, it had problems when there was a graph break and we had to reconstruct the original Python code: BuiltinVariable reconstructed it as a simple `list` instead of `list[int]`.
This changes it to use a TypingVariable instead and then teaches TypingVariable how to reconstruct.
Original commit changeset: 77b9193acb23
python test/dynamo/test_repros.py ReproTests.test_graph_break_on_jit_isinstance
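A sketch of the failure mode this fixes (illustrative; the referenced test is the real repro):
```python
import torch

def f(x):
    t = list[int]  # PEP 585 GenericAlias
    torch._dynamo.graph_break()
    # Reconstructing `t` after the break used to produce plain `list`,
    # dropping the [int] subscript; TypingVariable now rebuilds it correctly.
    return x + 1 if t == list[int] else x

torch.compile(f, backend="eager")(torch.ones(3))
```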
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145554
Approved by: https://github.com/anijain2305
ghstack dependencies: #145551 , #145552 , #145553
2025-01-30 22:21:40 +00:00
Aaron Orenstein
fbb076cc45
Fix call to create_load_global ( #145553 )
...
There is no version of create_load_global() that takes three parameters - any use of this function will fail. I think this is probably the correct fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145553
Approved by: https://github.com/anijain2305
ghstack dependencies: #145551 , #145552
2025-01-30 22:21:40 +00:00
Aaron Orenstein
ccbbc88bbb
Turn on mypy for _dynamo/variables/builtin.py ( #145552 )
...
The fact that mypy errors were ignored was hiding several bugs in builtin.py (for example, the previous diff's incorrect override and use of `call_getattr`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145552
Approved by: https://github.com/anijain2305 , https://github.com/Skylion007
ghstack dependencies: #145551
2025-01-30 22:21:32 +00:00
Aaron Orenstein
f3120f6d26
Remove incorrect BuiltinVariable.call_hasattr() ( #145551 )
...
BuiltinVariable.call_hasattr() overrides the base class - but actually behaves differently. The base is `obj.call_hasattr(tx, attr)` but BuiltinVariable's version is `<unused>.call_hasattr(tx, obj, attr)`.
The BuiltinVariable version is used as a pattern from `call_self_handler()` for `BuiltinVariable(hasattr)`. I think the other version is just used for internal `hasattr(obj, name)`, so I renamed that one to `call_obj_hasattr`.
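In sketch form, the mismatch looked roughly like this (simplified signatures, not the exact code):
```python
class VariableTracker:
    def call_obj_hasattr(self, tx, name):
        # answers hasattr(self, name); renamed from call_hasattr
        ...

class BuiltinVariable(VariableTracker):
    def call_hasattr(self, tx, obj, attr):
        # handles a traced hasattr(obj, attr) call, dispatched via
        # call_self_handler() for BuiltinVariable(hasattr)
        ...
```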
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145551
Approved by: https://github.com/anijain2305
2025-01-30 22:21:19 +00:00
clr
d100e9ae74
inductor: Don't throw an internal error when a nn.module is missing a attribute ( #145122 )
...
If an nn.Module getattr call throws, we should make sure that we don't crash with an internal error.
Note that I couldn't figure out how to test this, so advice would be welcome. My best attempt is at https://github.com/pytorch/pytorch/pull/145799 , but it doesn't seem to reproduce the crash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145122
Approved by: https://github.com/jansel
2025-01-30 21:55:29 +00:00
Yidi Wu
7e7341bddd
[hop] fix unbacked_bindings meta for while_loop ( #143559 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143559
Approved by: https://github.com/zou3519
2025-01-30 21:33:09 +00:00
Thomas Bohnstingl
9f9904172d
[scan] scan dim handling in user-facing scan() ( #145179 )
...
This PR introduces handling of the scan dim in the user-facing scan() call. Internally, the scan dim is always shifted to dim 0 and the scan is then performed over that dim.
This is a follow-up PR from https://github.com/bohnstingl/pytorch/pull/3
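A minimal eager-mode sketch of the dim normalization (assumed shapes and semantics; the real scan is a higher-order op):
```python
import torch

def scan(combine_fn, init, xs, dim=0):
    xs = torch.movedim(xs, dim, 0)  # always scan over dim 0 internally
    carry, ys = init, []
    for x in xs:
        carry, y = combine_fn(carry, x)
        ys.append(y)
    return carry, torch.movedim(torch.stack(ys), 0, dim)

# usage: a cumulative sum along dim=1
carry, out = scan(lambda c, x: (c + x, c + x), torch.zeros(3), torch.ones(3, 5), dim=1)
```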
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145179
Approved by: https://github.com/ydwu4
2025-01-30 21:09:07 +00:00
Yidi Wu
a3698ebd5c
[while_loop] specialize when cond_fn return constants ( #144515 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144515
Approved by: https://github.com/zou3519
2025-01-30 19:02:34 +00:00
PyTorch MergeBot
1185b81c51
Revert "[dynamo] Use polyfill to implement comparison operators ( #144485 )"
...
This reverts commit d1f82de2bf .
Reverted https://github.com/pytorch/pytorch/pull/144485 on behalf of https://github.com/huydhn due to This seems to break dynamo tests in trunk after landing ([comment](https://github.com/pytorch/pytorch/pull/144485#issuecomment-2622893294 ))
2025-01-29 21:30:42 +00:00
Animesh Jain
d1f82de2bf
[dynamo] Use polyfill to implement comparison operators ( #144485 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144485
Approved by: https://github.com/jansel
2025-01-29 17:37:40 +00:00
Thomas Bohnstingl
82859f6185
[associative_scan] scan dim handling in user-facing associative_scan() ( #139864 )
...
This PR implements the user-facing dim handling: the scan dim provided by the user is always moved to dim 0, and the associative_scan operation then always operates on dim 0.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139864
Approved by: https://github.com/ydwu4
2025-01-28 23:58:10 +00:00
PyTorch MergeBot
3481c2aec4
Revert "[dynamo] save/restore system random state more carefully ( #145750 )"
...
This reverts commit e3d3f2b22e .
Reverted https://github.com/pytorch/pytorch/pull/145750 on behalf of https://github.com/eellison due to bisected perf regression ([comment](https://github.com/pytorch/pytorch/pull/145750#issuecomment-2620028414 ))
2025-01-28 20:51:07 +00:00
Animesh Jain
6824a4a75d
[dynamo][builtin-skipfiles-cleanup] Remove re ( #145826 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145826
Approved by: https://github.com/zou3519
ghstack dependencies: #145744 , #145753
2025-01-28 16:14:34 +00:00
Animesh Jain
5c5306e8bc
[dynamo][builtin-skiplist-cleanup] Remove weakref ( #145744 )
...
WeakKeyDictionary already works very nicely with the UserDefinedObject VariableTracker.
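For instance, something like this presumably traces through rather than being skipped (illustrative):
```python
import weakref

import torch

class Key:
    pass

k = Key()
wkd = weakref.WeakKeyDictionary()
wkd[k] = 2.0

@torch.compile(backend="eager")
def f(x):
    return x * wkd[k]  # WeakKeyDictionary access handled by the VariableTracker

f(torch.ones(3))
```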
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145744
Approved by: https://github.com/jansel
2025-01-28 07:55:12 +00:00
William Wen
e3d3f2b22e
[dynamo] save/restore system random state more carefully ( #145750 )
...
Reattempt of https://github.com/pytorch/pytorch/pull/145435 since the state of the linked internal diff appears to be messed up.
Note: I have verified that the previously failing internal tests now pass internally.
Differential Revision: [D68723334](https://our.internmc.facebook.com/intern/diff/D68723334 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145750
Approved by: https://github.com/StrongerXi
2025-01-28 01:34:13 +00:00
Ryan Guo
5a4d959cdb
[dynamo] Properly model torch profiler context objects ( #145537 )
...
Prior to this patch, Dynamo conveniently modelled torch profiler context
objects (e.g., `torch.profiler.profile`) as `NullContextVariable`
because `torch.compile` ignores the effect of these profiler contexts.
However, the semantics of these profiler contexts diverge from
`contextlib.nullcontext` in the `__enter__` function, where the former
returns `self` and the latter returns `None`. This causes subtle errors
as observed in #125021 .
This patch adds back a `ProfilerContextVariable`, which addresses the
aforementioned semantic discrepancy.
Fixes #125021 .
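The divergence is easy to see in eager mode (not from the PR, just the documented context-manager semantics):
```python
import contextlib

import torch

with contextlib.nullcontext() as p:
    assert p is None  # nullcontext.__enter__ returns None

with torch.profiler.profile() as p:
    assert p is not None  # profile.__enter__ returns self
```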
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145537
Approved by: https://github.com/zou3519 , https://github.com/williamwen42
2025-01-28 00:03:36 +00:00
PyTorch MergeBot
2de53b3b65
Revert "pickler for GraphModule ( #141659 )"
...
This reverts commit c6ad08357b .
Reverted https://github.com/pytorch/pytorch/pull/141659 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking internally, please take a look at D68694181 for more details. ([comment](https://github.com/pytorch/pytorch/pull/141659#issuecomment-2617045120 ))
2025-01-27 22:39:30 +00:00
Animesh Jain
993b229665
[dynamo][dicts] Fix dict.__new__ bug ( #145723 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145723
Approved by: https://github.com/jansel , https://github.com/StrongerXi
ghstack dependencies: #145519 , #145547 , #145558
2025-01-27 21:42:43 +00:00
Animesh Jain
7e1c7253e9
[dynamo][builtin-skipfile-cleanup] Support tuple.__new__ ( #145558 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145558
Approved by: https://github.com/jansel , https://github.com/StrongerXi
ghstack dependencies: #145519 , #145547
2025-01-27 21:42:43 +00:00
Aaron Orenstein
c6ad08357b
pickler for GraphModule ( #141659 )
...
Pickling a GraphModule needs some special handling to wrap things that normally can't be pickled. Async compile, however, needs to pass GraphModules across a wire, so we need to be able to serialize them; this adds some helpers to enable that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141659
Approved by: https://github.com/jamesjwu
2025-01-26 19:29:13 +00:00
Xuehai Pan
0afdee4c39
[dynamo] raise IndexError when inserting into a full deque ( #139379 )
...
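For reference, the CPython behavior being matched:
```python
from collections import deque

d = deque([1, 2, 3], maxlen=3)
d.insert(1, 99)  # IndexError: deque already at its maximum size
```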
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139379
Approved by: https://github.com/jansel
2025-01-25 18:04:49 +00:00
Yuanhao Ji
cc1ecead07
[Dynamo] Allow format() to handle int ( #144956 )
...
Fixes #144830
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144956
Approved by: https://github.com/jansel
2025-01-25 04:12:45 +00:00
Michael Lazos
8eea554332
[Dynamo] Fix names collisions with foreach decomps ( #145479 )
...
Fixes https://github.com/pytorch/pytorch/issues/138698
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145479
Approved by: https://github.com/yanboliang
2025-01-24 18:46:58 +00:00
Animesh Jain
74cfb4f364
[dynamo][refactor] Move collections.namedtuple out of SkipFunctionVariable ( #145547 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145547
Approved by: https://github.com/zou3519
ghstack dependencies: #145519
2025-01-24 17:39:33 +00:00
Animesh Jain
53fc921ce2
[dynamo][trace-rules-cleanup] Remove functools from the Builtins skiplist ( #145519 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145519
Approved by: https://github.com/yanboliang , https://github.com/zou3519
2025-01-24 06:02:03 +00:00
Animesh Jain
5a18f1e1eb
[dynamo] Support fx map_aggregate ( #145351 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145351
Approved by: https://github.com/zou3519
2025-01-23 03:19:30 +00:00
Aaron Orenstein
1ce533867f
Teach dynamo to handle GenericAlias without a graph break ( #145240 )
...
Dynamo wasn't handling the new PEP585 type annotations:
```
x = list[Foo]
```
Although this worked in py3.9, it caused an `unimplemented` (Unexpected type in sourceless builder) in py3.12.
This fixes it to treat them as a BuiltinVariable.
Fixes #145226
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145240
Approved by: https://github.com/anijain2305
2025-01-22 01:55:51 +00:00
Animesh Jain
19584b28fd
[dynamo][dicts] Consolidate dict(..) construction ( #144342 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144342
Approved by: https://github.com/StrongerXi
2025-01-20 04:42:06 +00:00
Aaron Orenstein
a79100ab11
PEP585 update - torch/_dynamo ( #145105 )
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145105
Approved by: https://github.com/bobrenjc93
2025-01-18 20:47:11 +00:00
Yanbo Liang
43a00d73b3
[Trace Python Dispatcher] Support FuncTorchInterpreter ( #144444 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144444
Approved by: https://github.com/williamwen42 , https://github.com/zou3519
ghstack dependencies: #144439
2025-01-17 02:26:37 +00:00
Yanbo Liang
5d02575aa1
[Trace Python dispatcher] Support torch.DispatchKey & torch.DispatchKeySet ( #144439 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144439
Approved by: https://github.com/zou3519
2025-01-17 02:26:36 +00:00
PyTorch MergeBot
5e6e6200bf
Revert "[dynamo][dicts] Consolidate dict(..) construction ( #144342 )"
...
This reverts commit a54a784b82 .
Reverted https://github.com/pytorch/pytorch/pull/144342 on behalf of https://github.com/kit1980 due to breaking internal builds, see D68125388 ([comment](https://github.com/pytorch/pytorch/pull/144342#issuecomment-2597184167 ))
2025-01-17 00:32:09 +00:00
Colin L. Rice
b88dcb4835
dynamo: Don't crash when tracing a missing attr on a constant. ( #144593 )
...
Without this change, tracing throws an `InternalTorchDynamoError: AttributeError: 'NoneType' object has no attribute 'max'` instead of just skipping the bad call and throwing a normal AttributeError.
There are two questions that I would love reviewer comment on.
1) Is throwing unimplemented the right thing here? or should I throw
something like ObservedAttributeError
2) Do we need to worry about performance with this code? In particular,
should we just catch the exception? Or maybe cache the lookup result?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144593
Approved by: https://github.com/jansel
2025-01-15 20:23:43 +00:00
Simon Fan
898a90c6bb
[dynamo][hop] Introduce FlexAttentionBackwardHighOrderVariable ( #144533 )
...
FIXES https://github.com/pytorch/pytorch/issues/143180
This PR adds a new variable mapping to SourcelessBuilder to represent the flex attention intermediates. The variable proxies a call to the HOP and carries over the graph state (subgraphs represented as UnspecializedNNModuleVariable) to the dynamo output graph. This is safe to do because the nn modules used in flex attention have either been speculated on before, or are outputs of make_fx of the forward.
tlparse of `TestCompiledAutograd.test_flex_attention`: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpiWendk/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
```python
class GraphModule(torch.nn.Module):
    def forward(self, L_inputs_ : list):
        ...
        # File: /data/users/xmfan/core/b/pytorch/torch/_dynamo/compiled_autograd.py:832 in set_node_origin, code: CompiledFunctionBackward0 (NodeCall 1)
        ...
        fw_graph0_0 = self.fw_graph0_0
        joint_graph0_0 = self.joint_graph0_0
        mask_graph0_0 = self.mask_graph0_0
        flex_attention_backward = torch.ops.higher_order.flex_attention_backward(aot0_primals_1, aot0_primals_1, aot0_primals_1, aot0_detach_3, aot0_detach_5, aot0_expand_5, aot0_zeros_1, fw_graph0_0, joint_graph0_0, (1, 1, aot0_ones, aot0_zeros, None, None, aot0__to_copy_1, aot0__to_copy_2, None, None, 1073741824, 1073741824, mask_graph0_0), 0.125, {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}, (), ()); aot0_primals_1 = aot0_detach_3 = aot0_detach_5 = aot0_expand_5 = aot0_zeros_1 = fw_graph0_0 = joint_graph0_0 = aot0_ones = aot0_zeros = aot0__to_copy_1 = aot0__to_copy_2 = mask_graph0_0 = None
        aot0_getitem_4: "bf16[1, 1, s0, s1][s0*s1, s0*s1, s1, 1]cuda:0" = flex_attention_backward[0]
        aot0_getitem_5: "bf16[1, 1, s0, s1][s0*s1, s0*s1, s1, 1]cuda:0" = flex_attention_backward[1]
        aot0_getitem_6: "bf16[1, 1, s0, s1][s0*s1, s0*s1, s1, 1]cuda:0" = flex_attention_backward[2]; flex_attention_backward = None
        ...

    class fw_graph0_0(torch.nn.Module):
        def forward(self, arg0_1: "bf16[][]cuda:0", arg1_1: "i32[][]cuda:0", arg2_1: "i32[][]cuda:0", arg3_1: "i32[][]cuda:0", arg4_1: "i32[][]cuda:0"):
            return arg0_1

    class joint_graph0_0(torch.nn.Module):
        def forward(self, arg0_1: "bf16[][]cuda:0", arg1_1: "i32[][]cuda:0", arg2_1: "i32[][]cuda:0", arg3_1: "i32[][]cuda:0", arg4_1: "i32[][]cuda:0", arg5_1: "bf16[][]cuda:0"):
            return [arg5_1, None, None, None, None]

    class mask_graph0_0(torch.nn.Module):
        def forward(self, arg0_1: "i32[][]cuda:0", arg1_1: "i32[][]cuda:0", arg2_1: "i32[][]cuda:0", arg3_1: "i32[][]cuda:0"):
            # File: /data/users/xmfan/core/b/pytorch/torch/_dynamo/compiled_autograd.py:832 in set_node_origin, code: CompiledFunctionBackward0 (NodeCall 1)
            new_ones: "b8[][]cuda:0" = torch.ops.aten.new_ones.default(arg0_1, [], dtype = torch.bool, device = device(type='cuda', index=0), pin_memory = False); arg0_1 = None
            return new_ones
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144533
Approved by: https://github.com/zou3519
2025-01-15 18:40:57 +00:00
Sujoy Saraswati
7e1c1e65eb
Graph freezing preparation for non-Inductor backends ( #139902 )
...
Enable preparing a module's named parameters and buffers in the tracing context so that non-Inductor backends can implement graph freezing.
Fixes #139272
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139902
Approved by: https://github.com/eellison , https://github.com/masnesral , https://github.com/gujinghui
2025-01-15 11:25:04 +00:00
Animesh Jain
a54a784b82
[dynamo][dicts] Consolidate dict(..) construction ( #144342 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144342
Approved by: https://github.com/StrongerXi
2025-01-13 22:24:56 +00:00
Ryan Guo
4ceca4d60f
[dynamo] Avoid graph break on updates to obj.__dict__ ( #144419 )
...
`obj.__dict__` is handled specially in Dynamo, and prior to this patch we only supported reads and membership checks on that dictionary object. This patch adds support for writes, along with some documentation.
Fixes #143756 .
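A sketch of the kind of write that should now avoid a graph break (illustrative, not a test from the PR):
```python
import torch

class Thing:
    pass

@torch.compile(fullgraph=True)
def f(obj, x):
    obj.__dict__["scale"] = 2  # writing through __dict__ previously graph-broke
    return x * obj.scale

f(Thing(), torch.ones(3))
```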
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144419
Approved by: https://github.com/jansel , https://github.com/anijain2305
2025-01-13 21:04:10 +00:00
Yanbo Liang
3355103233
[Dynamo] Supports autograd.Function forward returns constant ( #144597 )
...
Fixes #144142
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144597
Approved by: https://github.com/jansel
2025-01-12 03:53:10 +00:00
Sam Ginzburg
074aca3ed2
[user triton] add support for @triton.heuristics after @triton.autotune ( #142208 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142208
Approved by: https://github.com/zou3519
2025-01-11 02:18:26 +00:00
PyTorch MergeBot
473b745cb9
Revert "[dynamo] Avoid graph break on updates to obj.__dict__ ( #144419 )"
...
This reverts commit c8595ba7d0 .
Reverted https://github.com/pytorch/pytorch/pull/144419 on behalf of https://github.com/clee2000 due to newly added test fails internally D68004708 ([comment](https://github.com/pytorch/pytorch/pull/144419#issuecomment-2583265412 ))
2025-01-10 16:59:38 +00:00
bobrenjc93
1fe3af2c68
Migrate from Tuple -> tuple in torch/_dynamo ( #144261 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144261
Approved by: https://github.com/aorenste , https://github.com/zou3519
2025-01-10 07:45:57 +00:00
Ryan Guo
c8595ba7d0
[dynamo] Avoid graph break on updates to obj.__dict__ ( #144419 )
...
`obj.__dict__` is handled specially in Dynamo, and prior to this patch we only supported reads and membership checks on that dictionary object. This patch adds support for writes, along with some documentation.
Fixes #143756 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144419
Approved by: https://github.com/jansel , https://github.com/anijain2305
2025-01-10 05:22:04 +00:00
Guilherme Leobas
bf6dd955cd
Fix max(map(...)) ( #142443 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142443
Approved by: https://github.com/zou3519
2025-01-10 01:44:37 +00:00
Animesh Jain
2ac41404a8
[dynamo][dicts] Guarding lazily on dict keys ( #143997 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143997
Approved by: https://github.com/jansel
2025-01-08 03:56:33 +00:00
Animesh Jain
f6488d85a0
[dynamo][user-defined] Remove __getattribute__ checks and add getsetdescriptor ( #144173 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144173
Approved by: https://github.com/jansel
2025-01-05 13:48:15 +00:00