Commit Graph

2184 Commits

Author SHA1 Message Date
David Berard
132babe7e0 [user triton] dynamo support for new host-side TMA API (#155662)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155662
Approved by: https://github.com/aakhundov
ghstack dependencies: #155510
2025-06-12 12:56:23 +00:00
Laith Sakka
0029259bdf Add view_simple as meta function for view, and avoid calling reshape_view_helper. (#154757)
address https://github.com/pytorch/pytorch/issues/153303

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154757
Approved by: https://github.com/bobrenjc93, https://github.com/leslie-fang-intel
2025-06-12 09:58:15 +00:00
Animesh Jain
a9d5157e25 [dynamo] Use BINARY_SUBSCR for pre-graph bytecode for regular dict accesses (#155727)
The vLLM profiler sets `with_stack=True`, which surfaces `dict_getitem` calls in the profile, both inflating the numbers and confusing torch.compile users. This PR keeps `BINARY_SUBSCR` for regular dicts, while using `dict.__getitem__` only for dict subclasses.

Using `BINARY_SUBSCR` is a little bit faster, but not enough to make any major latency improvement.
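
For illustration, here is a minimal standalone sketch (not from the PR) of the bytecode difference between the two access forms:

```python
import dis

def subscript_access(d):
    # Plain subscript: compiles to the BINARY_SUBSCR bytecode discussed above.
    return d["key"]

def method_access(d):
    # Explicit call: an attribute lookup plus a call, which is what shows up
    # as a dict_getitem frame when profiling with with_stack=True.
    return dict.__getitem__(d, "key")

dis.dis(subscript_access)
dis.dis(method_access)
```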

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155727
Approved by: https://github.com/zou3519, https://github.com/StrongerXi, https://github.com/jansel
2025-06-12 04:02:29 +00:00
Oguz Ulgen
d1947a8707 Migrate from lru_cache to cache (#155613)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155613
Approved by: https://github.com/ezyang
ghstack dependencies: #155612
2025-06-11 19:44:18 +00:00
Pian Pawakapan
247f83e0a4 [dynamic shapes] guard individual terms in sym_and; user-code-friendly sym_and/sym_or (#154737)
Previously, when processing `sym_and(a, b, c)`, symbolic shapes wouldn't individually process a, b, and c and store their implications. This would lead to a data-dependent error on individual checks, e.g. we stored `u0 >= 0 & u0 <= 10` but then couldn't figure out `u0 <= 10`.

This handles that, and also makes `sym_and/or` user-code friendly, for testing.
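
A minimal sketch of the kind of check this enables; the `sym_and` import path below is an assumption (the PR only says it becomes user-code friendly):

```python
import torch
from torch.fx.experimental.symbolic_shapes import sym_and  # import path assumed

@torch.compile(fullgraph=True, backend="eager")
def fn(x):
    u0 = x.item()
    # The combined guard is now recorded term-by-term, so the later
    # `u0 <= 10` branch can be resolved instead of hitting a
    # data-dependent error.
    torch._check(sym_and(u0 >= 0, u0 <= 10))
    if u0 <= 10:
        return x + 1
    return x - 1

fn(torch.tensor([5]))
```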

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154737
Approved by: https://github.com/laithsakka
2025-06-11 18:08:06 +00:00
Ryan Guo
07eb374e7e [dynamo] Avoid unnecessary caching source codegen (#155376)
We only need to cache a source (e.g., `x.y.z`) into a temporary local if
it's used multiple times in the codegen; otherwise we'd just be creating
redundant `DUP` and `STORE_FAST tmp_...` instructions, which might
degrade perf and definitely make the generated bytecode harder to read.

Example:
```python
import torch

@torch.compile(backend="eager")
def fn(x, y):
    return x + y

fn(torch.ones(2), torch.ones(1))
```

Original bytecode:
```verbatim
[0/0] [__bytecode]   3           0 RESUME                   0
[0/0] [__bytecode]
[0/0] [__bytecode]   5           2 LOAD_FAST                0 (x)
[0/0] [__bytecode]               4 LOAD_FAST                1 (y)
[0/0] [__bytecode]               6 BINARY_OP                0 (+)
[0/0] [__bytecode]              10 RETURN_VALUE
```

Modified bytecode (before this patch):
```verbatim
[__bytecode]   3           0 RESUME                   0
[__bytecode]               2 LOAD_GLOBAL              1 (NULL + __compiled_fn_1_578c8d9a_2a9b_4d15_bac7_267591cdee32)
[__bytecode]              14 LOAD_FAST                0 (x)
[__bytecode]              16 COPY                     1
[__bytecode]              18 STORE_FAST               3 (tmp_1)
[__bytecode]              20 LOAD_FAST                1 (y)
[__bytecode]              22 COPY                     1
[__bytecode]              24 STORE_FAST               4 (tmp_2)
[__bytecode]              26 PRECALL                  2
[__bytecode]              30 CALL                     2
[__bytecode]              40 STORE_FAST               2 (graph_out_0)
[__bytecode]              42 LOAD_FAST                2 (graph_out_0)
[__bytecode]              44 LOAD_CONST               1 (0)
[__bytecode]              46 BINARY_SUBSCR
[__bytecode]              56 DELETE_FAST              2 (graph_out_0)
[__bytecode]              58 RETURN_VALUE
```

Modified bytecode (after this patch):
```verbatim
[__bytecode]   3           0 RESUME                   0
[__bytecode]               2 LOAD_GLOBAL              1 (NULL + __compiled_fn_1_2c498af2_ce5c_49cb_abba_a0c7489b09ce)
[__bytecode]              14 LOAD_FAST                0 (x)
[__bytecode]              16 LOAD_FAST                1 (y)
[__bytecode]              18 PRECALL                  2
[__bytecode]              22 CALL                     2
[__bytecode]              32 STORE_FAST               2 (graph_out_0)
[__bytecode]              34 LOAD_FAST                2 (graph_out_0)
[__bytecode]              36 LOAD_CONST               1 (0)
[__bytecode]              38 BINARY_SUBSCR
[__bytecode]              48 DELETE_FAST              2 (graph_out_0)
[__bytecode]              50 RETURN_VALUE
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155376
Approved by: https://github.com/williamwen42
2025-06-10 19:38:15 +00:00
Yuanhao Ji
9968c854b6 [Dynamo] Replace unimplemented with unimplemented_v2 in torch/_dynamo/variables/tensor.py (#153146)
Part of #147913

Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/tensor.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153146
Approved by: https://github.com/williamwen42

Co-authored-by: William Wen <william.wen42@gmail.com>
2025-06-09 06:27:50 +00:00
Bob Ren
b981fb6744 Add docblock to torch/_dynamo/variables/builtin.py (#155402)
Add comprehensive module docstring explaining built-in function and type
variable tracking, including handling of Python built-ins, type constructors,
operators, and special constructs during symbolic execution.

Originally generated by Claude but reviewed and edited by me.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155402
Approved by: https://github.com/Skylion007
ghstack dependencies: #155403
2025-06-08 15:24:29 +00:00
Animesh Jain
db491825e0 [invoke_subgraph] Add logging (#155284)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155284
Approved by: https://github.com/zou3519
ghstack dependencies: #155270
2025-06-07 11:31:53 +00:00
Animesh Jain
0f3f59784d [invoke_subgraph] Throw assertion on uncaptured speculate_subgraph (#155270)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155270
Approved by: https://github.com/zou3519
2025-06-07 11:31:53 +00:00
William Wen
81b0b308ca [dynamo] constant fold torch.cuda.is_initialized (#155300)
Fixes https://github.com/pytorch/pytorch/issues/129659
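
A small sketch of the pattern this unblocks (illustrative):

```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    # The call is folded to a constant at trace time, so the branch is
    # resolved without a graph break.
    if torch.cuda.is_initialized():
        return x + 1
    return x - 1

print(fn(torch.ones(2)))
```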

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155300
Approved by: https://github.com/StrongerXi, https://github.com/jansel
2025-06-07 02:21:11 +00:00
Animesh Jain
13ea0f2c0a [dynamo][dynamic] Recompilation hint for nn module integer attributes (#154867)
For a program like this

```python
class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.c = 0

    def forward(self, x):
        self.c += 1
        return x * self.c
```

You can check the recompile reasons at https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpzv9z6Q/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

![image](https://github.com/user-attachments/assets/856a95fd-0533-4abc-a213-1f73ae2cb766)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154867
Approved by: https://github.com/zou3519
2025-06-05 16:37:22 +00:00
drisspg
80703ca332 [FlexAttention] Allow dispatch to SAC for flex (#150080)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150080
Approved by: https://github.com/zou3519
2025-06-05 04:34:27 +00:00
Animesh Jain
c881f2ddf3 [reland][dynamo] Mark a vt unspecialized nn module variable source earlier (#155099)
Reland of https://github.com/pytorch/pytorch/pull/154780

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155099
Approved by: https://github.com/williamwen42
2025-06-04 23:05:36 +00:00
Thomas Bohnstingl
b084e1b81c [HOP] Rework Autograd DispatchKey for scan and map (#153336)
This PR introduces `py_autograd_impl` in place of `DispatchKey.Autograd` for some HOPs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153336
Approved by: https://github.com/ydwu4
2025-06-04 20:54:02 +00:00
sandishkumarhn
e9c31fb86d [torch.compile] handle a custom __delattr__ method correctly (#150899)
Fixes #150765
- handle a custom `__delattr__` method correctly

Test:
```python
import torch

class MyObject:
    def __init__(self, val):
        self.val = val
        # Flag to track deletion attempts instead of using print
        self.deletion_attempted = False

    def __delattr__(self, attr):
        if attr == "val":
            # Set flag instead of printing
            self.deletion_attempted = True
        else:
            super().__delattr__(attr)

@torch.compile(fullgraph=True, backend="eager")
def test(input_tensor):
    instance_a = MyObject(1)
    instance_b = MyObject(2)

    del instance_a.val
    del instance_b.val
    exists_a = hasattr(instance_a, 'val')
    exists_b = hasattr(instance_b, 'val')
    deletion_attempted_a = instance_a.deletion_attempted
    deletion_attempted_b = instance_b.deletion_attempted

    return input_tensor + 1, exists_a, exists_b, deletion_attempted_a, deletion_attempted_b

# Run the test
result = test(torch.ones(1))
print(f"Result tensor: {result[0]}")
print(f"val attribute still exists on instance_a: {result[1]}")
print(f"val attribute still exists on instance_b: {result[2]}")
print(f"Deletion was attempted on instance_a: {result[3]}")
print(f"Deletion was attempted on instance_b: {result[4]}")

```

output:
```
(base) sany@sandishs-Laptop pytorch % python3 test_delattr_fix.py
Result tensor: tensor([2.])
val attribute still exists on instance_a: True
val attribute still exists on instance_b: True
Deletion was attempted on instance_a: True
Deletion was attempted on instance_b: True
```

```
(pytorch-dev) sany@sandishs-Laptop pytorch % python3 -m pytest test/dynamo/test_repros.py::ReproTests::test_delattr_return -v
========================================================= test session starts =========================================================
platform darwin -- Python 3.12.5, pytest-8.3.5, pluggy-1.5.0 -- /Library/Frameworks/Python.framework/Versions/3.12/bin/python3
cachedir: .pytest_cache
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: typeguard-4.3.0
collected 1 item
Running 1 items in this shard

test/dynamo/test_repros.py::ReproTests::test_delattr_return PASSED [0.0659s]                                                    [100%]

========================================================== 1 passed in 1.71s ==========================================================
(pytorch-dev) sany@sandishs-Laptop pytorch %
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150899
Approved by: https://github.com/jansel, https://github.com/StrongerXi
2025-06-04 17:27:20 +00:00
PyTorch MergeBot
a99a01a677 Revert "[dynamo] Mark a vt unspecialized nn module variable source earlier (#154780)"
This reverts commit cc96febb97.

Reverted https://github.com/pytorch/pytorch/pull/154780 on behalf of https://github.com/seemethere due to This fails internal testing, see https://fburl.com/diff/b0yuxk4w ([comment](https://github.com/pytorch/pytorch/pull/154780#issuecomment-2940381691))
2025-06-04 15:03:34 +00:00
PyTorch MergeBot
a0f2544502 Revert "[dynamo][dynamic] Recompilation hint for nn module integer attributes (#154867)"
This reverts commit 6c2f941e25.

Reverted https://github.com/pytorch/pytorch/pull/154867 on behalf of https://github.com/seemethere due to This fails internal testing, see https://fburl.com/diff/b0yuxk4w ([comment](https://github.com/pytorch/pytorch/pull/154780#issuecomment-2940381691))
2025-06-04 15:03:34 +00:00
Animesh Jain
6c2f941e25 [dynamo][dynamic] Recompilation hint for nn module integer attributes (#154867)
For a program like this

```python
class Mod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.c = 0

    def forward(self, x):
        self.c += 1
        return x * self.c
```

You can check the recompile reasons at https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpzv9z6Q/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

![image](https://github.com/user-attachments/assets/856a95fd-0533-4abc-a213-1f73ae2cb766)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154867
Approved by: https://github.com/zou3519
ghstack dependencies: #154780
2025-06-04 00:05:53 +00:00
Animesh Jain
cc96febb97 [dynamo] Mark a vt unspecialized nn module variable source earlier (#154780)
I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154780
Approved by: https://github.com/StrongerXi, https://github.com/jansel
2025-06-03 19:19:47 +00:00
Ryan Guo
6f7694f18f [dynamo] Reconstruct defaultdict properly (#154931)
`DefaultDictVariable` inherited `ConstDictVariable.reconstruct`, causing
dynamo to reconstruct a `DefaultDictVariable` into a plain dict rather than
a defaultdict. This patch fixes that.

Fixes #138412.
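
A minimal sketch of the behavior being fixed (illustrative, not the PR's test):

```python
import collections
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    d = collections.defaultdict(list)
    d["a"].append(x + 1)
    return d

out = fn(torch.ones(2))
# Previously the reconstructed object was a plain dict; with this patch the
# defaultdict subtype (and its default_factory) should be preserved.
assert isinstance(out, collections.defaultdict)
```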

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154931
Approved by: https://github.com/williamwen42, https://github.com/zou3519
ghstack dependencies: #154930
2025-06-03 18:18:40 +00:00
Animesh Jain
1258aac1c2 [dynamo] Upcast torch.Size + tuple to be of size torch.Size (#154830)
Fixes https://github.com/pytorch/pytorch/issues/154432
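
A sketch of the behavior in question (illustrative):

```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    s = x.shape + (1,)  # torch.Size + tuple
    # In eager this concatenation is a torch.Size; with this change dynamo
    # should agree instead of treating it as a plain tuple.
    assert isinstance(s, torch.Size)
    return x.reshape(s)

fn(torch.ones(2, 3))
```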

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154830
Approved by: https://github.com/StrongerXi, https://github.com/Skylion007, https://github.com/williamwen42
2025-06-02 17:57:23 +00:00
Animesh Jain
7368eeba5e [dynamo][guards] Prevent LENGTH guard on nn modules (#154763)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154763
Approved by: https://github.com/williamwen42
2025-05-31 05:32:31 +00:00
Ryan Guo
f9dc20c7a3 [dynamo] Fix syntax error in aot graph from kwarg-less torch.Tensor.[random_|uniform_] calls (#154645)
As title, fixes #151432, see more context in the issue discussion.
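
A repro-style sketch of the pattern that previously generated invalid code (illustrative; `aot_eager` is used only to exercise the AOT graph):

```python
import torch

@torch.compile(backend="aot_eager", fullgraph=True)
def fn(x):
    # A kwarg-less in-place RNG call; per the issue, this used to produce a
    # syntax error in the generated AOT graph code.
    return x.uniform_()

fn(torch.empty(4))
```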

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154645
Approved by: https://github.com/zou3519
2025-05-30 18:50:58 +00:00
Sidharth
3bdceab124 [dynamo] fix: added star operator for graph_break_hints (#154713)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154713
Approved by: https://github.com/zou3519, https://github.com/williamwen42
2025-05-30 17:31:03 +00:00
Ryan Guo
7183f52675 [dynamo] Support namedtuple subclass (#153982)
Fixes #133762. This involves
1. support tuple subclass constructed inside compile region.
2. handle the "fake" global scope associated with NamedTuple-generated
   `__new__`.
3. handle `namedtuple._tuplegetter` more faithfully.
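
A minimal sketch of the kind of program this enables (names are illustrative):

```python
import collections
import torch

class Point(collections.namedtuple("Point", ["x", "y"])):
    # namedtuple subclass with an extra method
    def total(self):
        return self.x + self.y

@torch.compile(backend="eager", fullgraph=True)
def fn(a, b):
    p = Point(a, b)  # tuple subclass constructed inside the compiled region
    return p.total() * p.x

fn(torch.ones(2), torch.ones(2))
```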

Differential Revision: [D75488091](https://our.internmc.facebook.com/intern/diff/D75488091)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153982
Approved by: https://github.com/jansel
ghstack dependencies: #154176
2025-05-30 16:14:37 +00:00
Ryan Guo
8002d22ce3 [dynamo] Trace into descriptor with __set__ (#154176)
As title, this patch basically implements
https://github.com/python/cpython/blob/3.11/Objects/object.c#L1371-L1452,
and makes the `__get__` handling more robust.

I ran into this while fixing #133762.
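
A small sketch of the construct involved, a data descriptor whose `__set__` is now traced into (illustrative):

```python
import torch

class Doubled:
    # Data descriptor: defining __set__ routes attribute writes through it.
    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self._name)

    def __set__(self, obj, value):
        setattr(obj, self._name, value * 2)

class Holder:
    val = Doubled()

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    h = Holder()
    h.val = x  # goes through Doubled.__set__ inside the trace
    return h.val + 1

fn(torch.ones(2))
```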

Differential Revision: [D75488090](https://our.internmc.facebook.com/intern/diff/D75488090)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154176
Approved by: https://github.com/jansel
2025-05-30 16:14:37 +00:00
bobrenjc93
9c06dff1ce [multigraph] use specializations in compile_and_call_fx_graph (#153449)
The goal of this multigraph work is to enable a compiled region that has a single dynamo trace but multiple backend specializations. This work was inspired by vLLM, which does this in a somewhat hacky way: it uses a custom backend to capture a dynamo graph and then manually invokes compile_fx multiple times to get specialized graphs.

There are really two parts to this work:

**The frontend changes:**
1) We introduce an optional kwarg `specialize_on` to mark_{dynamic,unbacked} that takes in a list of specializations. I debated other methods, including specifying specializations via decorators, but ultimately decided this approach was more harmonious. The big issue with decorators is the difficulty of composing well with the rest of the torch.compile ecosystem, including graph breaks, lazy initialization of variable trackers and symbolic variables, etc.
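
A hypothetical sketch of the frontend API described above; the exact form of each entry in `specialize_on` (predicate callables here) is an assumption:

```python
import torch

def fn(x):
    return x * 2

x = torch.randn(16, 8)
# Keep dim 0 dynamic in the generic graph, and additionally compile
# specialized graphs for the listed predicates (hypothetical spelling).
torch._dynamo.mark_dynamic(x, 0, specialize_on=[lambda s: s == 16, lambda s: s % 16 == 0])

compiled = torch.compile(fn)
compiled(x)
```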

**The backend changes (this PR):**
1) We capture the backend_specialization specified in the mark_{dynamic,unbacked} API into a SymbolicContext. See changes in `/_dynamo/variables/builder.py`
2) After we are done dynamo tracing, we will lazily (more on this later) invoke `call_user_compiler` up to N + 1 times for N specializations and 1 generic graph. Under the hood this will call compile_fx, which composes nicely with both Async Compile and AOTAutogradCache. We do this by using a context manager to patch in specialization specific axioms into the ShapeEnv before invoking the user compiler.
3) When we have specializations, we install a lazy specialized dispatch function that checks each specialization and dispatches to the first one that matches. Instead of doing all of the specialization compiles up front, we do the compiles lazily. The first time a specialization is invoked, we will do the compilation and save it in a cache so subsequent invocations are fast. If none of the specializations match, we dispatch to the generic graph. I decided to do this over returning N different GuardedCodes since 1) it doesn't pollute the dynamo cache (e.g. if you have 8 specializations, you would hit the cache limit), and 2) it naturally incorporates the hierarchical lattice structure of the guards since the specializations are always necessarily stricter than the generic region's guards.

I benchmarked this PR stack with #152596 and found around a 50% reduction when dispatching to the specialized regions:

![495269647_576053105510082_9189856138964956774_n](https://github.com/user-attachments/assets/66030fed-d62e-4d87-940f-aa13c99b1a73)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153449
Approved by: https://github.com/zou3519
ghstack dependencies: #153433
2025-05-30 03:19:49 +00:00
William Wen
81b7c96697 [dynamo, nested graph breaks] add skip_frame debugging function (#153773)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153773
Approved by: https://github.com/jansel
ghstack dependencies: #151056, #153510, #153772
2025-05-28 23:29:37 +00:00
bobrenjc93
d865b784e4 Support unbacked whitelist (#154295)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154295
Approved by: https://github.com/angelayi
2025-05-28 23:01:22 +00:00
Sidharth
70539308ac [dynamo] updating gb_type names for uniqueness (#154452)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154452
Approved by: https://github.com/williamwen42
2025-05-28 16:54:10 +00:00
PyTorch MergeBot
e86439ed5b Revert "[dynamo, nested graph breaks] add skip_frame debugging function (#153773)"
This reverts commit aadf9eae63.

Reverted https://github.com/pytorch/pytorch/pull/153773 on behalf of https://github.com/malfet due to Not sure which one, but it broke test_error_messages, see 203b0efd63/1 ([comment](https://github.com/pytorch/pytorch/pull/151056#issuecomment-2916437433))
2025-05-28 13:53:50 +00:00
William Wen
aadf9eae63 [dynamo, nested graph breaks] add skip_frame debugging function (#153773)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153773
Approved by: https://github.com/jansel
ghstack dependencies: #151056, #153510, #153772
2025-05-28 08:54:09 +00:00
Pian Pawakapan
1d9b7dd2d1 [PGO] suggest dynamic whitelist for recompilations (#154189)
suggests `TORCH_COMPILE_DYNAMIC_SOURCES` based on tensor size changes in PGO code state, including parameters.

Closing #153442 which took the dynamo guards approach.
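
A sketch of acting on such a suggestion; the exact source-string format is an assumption here:

```python
import os

# Apply the whitelist suggested by the recompilation message before compiling,
# e.g. marking a particular input and a module attribute as dynamic up front
# (hypothetical source names below).
os.environ["TORCH_COMPILE_DYNAMIC_SOURCES"] = "L['x'],L['self'].c"

import torch

@torch.compile
def fn(x):
    return x + 1

fn(torch.ones(4))
fn(torch.ones(8))  # ideally no size-change recompile for the whitelisted source
```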

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154189
Approved by: https://github.com/bobrenjc93
2025-05-28 07:11:43 +00:00
Sidharth
54f1f29fed [dynamo] dynamic gb_type -> static gb_type (#154435)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154435
Approved by: https://github.com/williamwen42
2025-05-28 03:14:26 +00:00
Ryan Guo
75bbd4989c [dynamo] Support using symint from dispatcher-style tensor subclass (#154130)
Fixes #146932.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154130
Approved by: https://github.com/laithsakka
2025-05-27 19:05:46 +00:00
bobrenjc93
53ecb8159a Introduce statically_known_false (#154291)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154291
Approved by: https://github.com/mengluy0125
2025-05-24 14:23:55 +00:00
William Wen
5bb156a7fd [dynamo] raise observed exception for module attribute errors (#153659)
Fixes https://github.com/pytorch/pytorch/issues/153605
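
A small sketch of the kind of pattern this unblocks (illustrative):

```python
import torch

mod = torch.nn.Linear(2, 2)

@torch.compile(backend="eager", fullgraph=True)
def fn(x):
    # A missing module attribute now surfaces as an observed AttributeError
    # inside the trace, so getattr-with-default behaves as it does in eager.
    return x + getattr(mod, "nonexistent_scale", 0.0)

fn(torch.ones(2))
```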

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153659
Approved by: https://github.com/StrongerXi
2025-05-23 03:56:26 +00:00
Yidi Wu
fc859077a0 [export][cond] support merging constant ints as unbacked symint (#152742)
@pianpwk points out that this will be helpful to address several data-dependent issues in huggingface [models](e23705e557/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py (L332)) with the following pattern:
```python
idx = 0 if u0 else 1
return x[idx]
```
We could preserve the conditional with a cond.
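
A sketch of what that could look like, assuming the branches may return the constant ints that this change merges into an unbacked symint:

```python
import torch

@torch.compile(fullgraph=True, backend="eager")
def fn(x, u):
    u0 = u.item()
    # Preserve the data-dependent choice with cond instead of a Python branch;
    # the constant branch outputs (0 and 1) are merged as an unbacked symint
    # that can then be used for indexing.
    idx = torch.cond(u0 > 0, lambda: 0, lambda: 1, ())
    return x[idx]

fn(torch.randn(3), torch.tensor(1))
```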

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152742
Approved by: https://github.com/zou3519
2025-05-22 17:25:38 +00:00
Sidharth
c1b7dbc52a [dynamo] unimplemented -> unimplemented_v2 in variables/dict.py (#154040)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154040
Approved by: https://github.com/williamwen42, https://github.com/StrongerXi
2025-05-22 06:46:10 +00:00
soulitzer
f2af30fee5 Add a HOP to bypass tracing of a wrapper function while tracing the wrapped function (#153487)
Usage:
```python
import functools

from torch._higher_order_ops.wrap import dynamo_bypassing_wrapper

# Your ordinary function wrapper
def my_hop_fn_impl(fn, *args, k=1, **kwargs):
    def wrapper(*args, **kwargs):
        out = fn(*args, **kwargs)
        if isinstance(out, tuple):
            return (out[0] + k,)
        return out + k

    return wrapper

# Calling `my_hop_fn` instead of the impl directly captures a HOP into the dynamo graph
def my_hop_fn(fn, *args, k=1, **kwargs):
    return dynamo_bypassing_wrapper(
        functools.partial(my_hop_fn_impl, k=k), fn, *args, **kwargs
    )
```

Notes:
- The dynamo-captured graph now stashes arbitrary callable objects (the wrapper_fn) - this is equivalent to what SAC does today with policy_fn.
- The `wrapper_fn` passed to `dynamo_bypassing_wrapper` should have the signature `Callable -> Callable`.
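
A hypothetical usage sketch building on the snippet above (`my_hop_fn` is the helper defined there):

```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def model(x):
    # The wrapper body in my_hop_fn_impl is bypassed by dynamo; only the
    # wrapped callable is traced, and the HOP is captured in the graph.
    return my_hop_fn(lambda t: t.sin(), x, k=2)

model(torch.randn(3))
```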

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153487
Approved by: https://github.com/ydwu4
2025-05-22 04:24:38 +00:00
Ryan Guo
4c6f0fe22f [dynamo] Properly handle torch.script.jit under @staticmethod (#153984)
Fixes #153607.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153984
Approved by: https://github.com/williamwen42
2025-05-21 19:45:06 +00:00
Sidharth
89ebd29fdc [Dynamo] added warning message for tracing lru_cache wrapped functions (#153744)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153744
Approved by: https://github.com/williamwen42
2025-05-20 04:08:29 +00:00
PyTorch MergeBot
75eb2f3ff6 Revert "[Dynamo] added warning message for tracing lru_cache wrapped functions (#153744)"
This reverts commit aac30ef503.

Reverted https://github.com/pytorch/pytorch/pull/153744 on behalf of https://github.com/jeanschmidt due to Need to revert as it is breaking internal signals: [D74935585](https://www.internalfb.com/diff/D74935585) ([comment](https://github.com/pytorch/pytorch/pull/153744#issuecomment-2889187038))
2025-05-18 20:13:00 +00:00
Thomas Bohnstingl
68034198e5 [HOP] Mutation and alias rework (#146658)
This PR reworks the way input mutations and various aliases are checked.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146658
Approved by: https://github.com/ydwu4
2025-05-18 08:05:22 +00:00
Sidharth
aac30ef503 [Dynamo] added warning message for tracing lru_cache wrapped functions (#153744)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153744
Approved by: https://github.com/williamwen42
2025-05-17 00:43:18 +00:00
clr
a952f42bdb dynamo: Log if we're using dynamic shapes via set_feature_usage (#153490)
This makes it extremely clear when a specific model didn't use dynamic shapes but should have (e.g., because it had a bad config option).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153490
Approved by: https://github.com/jansel
2025-05-16 23:59:00 +00:00
Guilherme Leobas
f66a159db5 [Set] Raise TypeError if set is called with the wrong number of arguments (#152990)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152990
Approved by: https://github.com/anijain2305
ghstack dependencies: #150792, #152987, #152988, #152904, #152901, #152902, #152903, #152905, #152906, #152989, #152907, #152908
2025-05-16 14:28:32 +00:00
Guilherme Leobas
5a0ca65555 [Set] Add correct set/frozenset __init__ behavior (#152908)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152908
Approved by: https://github.com/anijain2305
ghstack dependencies: #150792, #152987, #152988, #152904, #152901, #152902, #152903, #152905, #152906, #152989, #152907
2025-05-16 14:28:32 +00:00
Guilherme Leobas
053025494f [Set] Raise KeyError on empty set.pop() (#152907)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152907
Approved by: https://github.com/anijain2305
ghstack dependencies: #150792, #152987, #152988, #152904, #152901, #152902, #152903, #152905, #152906, #152989
2025-05-16 14:28:32 +00:00