Commit Graph

648 Commits

Pian Pawakapan
54f736155b [dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims (#150127)
For reshape/view: removes the fast paths for 0 elements and for checking dimensions to skip. Modifies the loop that accumulates input elements so that it raises a UserError if we run out of dimensions, graph-breaking for compile and erroring out for export.
For infer_size: assumes that if the user passes us an unbacked SymInt, it's probably not -1

Will think about changes in https://docs.google.com/document/d/1WYx6EZwVDXtBnWyrzoecgGWdiK0V3XZKftfpWwQ5i3E/edit?tab=t.0#heading=h.22k54zym11qp in a later PR
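A minimal sketch (not the actual `_reshape_view_helper`/`_infer_size` code) of how `guard_or_false` can be used for the wildcard case, assuming `guard_or_false` is importable from `torch.fx.experimental.symbolic_shapes` as in recent PyTorch; the helper name below is illustrative only:
```python
import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false

def resolve_wildcard(sizes, numel):
    # Illustrative only: find a -1 wildcard without guarding on unbacked dims.
    # guard_or_false(expr) returns False when expr cannot be decided statically,
    # so an unbacked size is assumed not to be the -1 wildcard.
    known, wildcard = 1, None
    for i, s in enumerate(sizes):
        if guard_or_false(s == -1):
            wildcard = i
        else:
            known *= s
    if wildcard is not None:
        sizes = list(sizes)
        sizes[wildcard] = numel // known
    return sizes

print(resolve_wildcard([2, -1], 8))  # [2, 4] with plain ints in eager
```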

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150127
Approved by: https://github.com/laithsakka
2025-04-23 05:42:30 +00:00
angelayi
6a1b820255 [export] Enable symint inputs for AdditionalInputs and ShapesCollection (#151842)
With `AdditionalInputs`, the behavior is the same as with tensors:
```python
class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

additional_inputs = torch.export.AdditionalInputs()
additional_inputs.add((5, 5))
additional_inputs.add((3, 5))
additional_inputs.add((5, 4))
ep = torch.export.export(
    M(), (6, 7), dynamic_shapes=additional_inputs, strict=False
)
```

With `ShapesCollection`, we now need to wrap integer inputs as `_IntWrapper` so that we can have a unique identifier for each integer input.
```python
class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

from torch.export.dynamic_shapes import _IntWrapper

args = (_IntWrapper(5), _IntWrapper(5))
# Or we can do `args = pytree.tree_map_only(int, lambda a: _IntWrapper(a), orig_args)`
shapes_collection = torch.export.ShapesCollection()
shapes_collection[args[0]] = Dim.DYNAMIC
shapes_collection[args[1]] = Dim.DYNAMIC
ep = torch.export.export(
    M(), args, dynamic_shapes=shapes_collection, strict=False
)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151842
Approved by: https://github.com/pianpwk
2025-04-22 22:29:18 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
69ee6a9280 [Sana][HybridCache] Fix bug in detect_attr_assignment (#151824)
Summary: tree_flatten_with_map internally calls the unflatten function with a user-supplied function, but that function was not returning anything, causing the leaves to be None. This is wrong when the constructor is sensitive to this behaviour.
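A minimal sketch of the general pitfall being fixed (illustrative, not the detect_attr_assignment code): a tree-map callback that forgets to return a value silently turns every leaf into None.
```python
import torch.utils._pytree as pytree

tree = {"a": 1, "b": [2, 3]}

def bad_fn(leaf):
    leaf + 1          # forgot to return, so every mapped leaf becomes None

def good_fn(leaf):
    return leaf + 1

print(pytree.tree_map(bad_fn, tree))   # {'a': None, 'b': [None, None]}
print(pytree.tree_map(good_fn, tree))  # {'a': 2, 'b': [3, 4]}
```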

Test Plan: CI

Differential Revision: D73388529

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151824
Approved by: https://github.com/bdhirsh
2025-04-22 19:39:50 +00:00
Tugsbayasgalan Manlaibaatar
2c27597d6a Infra for handling builtin ops (min, max, math.pow) (#151348)
Reapply of https://github.com/pytorch/pytorch/pull/150003

Differential Revision: [D73050801](https://our.internmc.facebook.com/intern/diff/D73050801/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151348
Approved by: https://github.com/zhxchen17
ghstack dependencies: #151347
2025-04-22 17:20:09 +00:00
PyTorch MergeBot
4504910843 Revert "[ez] Make relaxed constraint error message more user friendly (#151407)"
This reverts commit e0f05229e9.

Reverted https://github.com/pytorch/pytorch/pull/151407 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking internally (see D73198095). To validate your fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts. ([comment](https://github.com/pytorch/pytorch/pull/151407#issuecomment-2821819654))
2025-04-22 16:12:42 +00:00
PyTorch MergeBot
e76c0b159a Revert "[dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims (#150127)"
This reverts commit a02eae8142.

Reverted https://github.com/pytorch/pytorch/pull/150127 on behalf of https://github.com/malfet due to Caused TestDynamoTimed.test_dynamo_timed to fail on macOS, see https://github.com/pytorch/pytorch/actions/runs/14584536979/job/40908019050 ([comment](https://github.com/pytorch/pytorch/pull/150127#issuecomment-2820081721))
2025-04-22 05:05:50 +00:00
Pian Pawakapan
a02eae8142 [dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims (#150127)
For reshape/view: removes the fast paths for 0 elements and for checking dimensions to skip. Modifies the loop that accumulates input elements so that it raises a UserError if we run out of dimensions, graph-breaking for compile and erroring out for export.
For infer_size: assumes that if the user passes us an unbacked SymInt, it's probably not -1

Will think about changes in https://docs.google.com/document/d/1WYx6EZwVDXtBnWyrzoecgGWdiK0V3XZKftfpWwQ5i3E/edit?tab=t.0#heading=h.22k54zym11qp in a later PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150127
Approved by: https://github.com/laithsakka
2025-04-22 01:14:15 +00:00
Tugsbayasgalan Manlaibaatar
adf5f38eae Don't specialize min/max (#151347)
address https://github.com/pytorch/pytorch/issues/149635
Differential Revision: [D73041489](https://our.internmc.facebook.com/intern/diff/D73041489/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151347
Approved by: https://github.com/bobrenjc93
2025-04-19 00:11:15 +00:00
PyTorch MergeBot
97d97aef24 Revert "[dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims (#150127)"
This reverts commit 1dd2033c0a.

Reverted https://github.com/pytorch/pytorch/pull/150127 on behalf of https://github.com/clee2000 due to maybe caused export test to fail? export/test_draft_export.py::TestDraftExport::test_masked_linear [GH job link](https://github.com/pytorch/pytorch/actions/runs/14538768138/job/40794985504) [HUD commit link](1dd2033c0a), bad TD ([comment](https://github.com/pytorch/pytorch/pull/150127#issuecomment-2816232086))
2025-04-18 21:38:47 +00:00
PyTorch MergeBot
1b267a58a1 Revert "[export] allow partially specifying keys for dynamic shapes dict spec (#151597)"
This reverts commit c8240e3492.

Reverted https://github.com/pytorch/pytorch/pull/151597 on behalf of https://github.com/clee2000 due to broke some export test export/test_converter.py::TestConverter::test_aten_len [GH job link](https://github.com/pytorch/pytorch/actions/runs/14538615968/job/40792673415) [HUD commit link](c8240e3492), bad TD ([comment](https://github.com/pytorch/pytorch/pull/151597#issuecomment-2816127271))
2025-04-18 20:17:44 +00:00
Pian Pawakapan
1dd2033c0a [dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims (#150127)
For reshape/view: removes the fast paths for 0 elements and for checking dimensions to skip. Modifies the loop that accumulates input elements so that it raises a UserError if we run out of dimensions, graph-breaking for compile and erroring out for export.
For infer_size: assumes that if the user passes us an unbacked SymInt, it's probably not -1

Will think about changes in https://docs.google.com/document/d/1WYx6EZwVDXtBnWyrzoecgGWdiK0V3XZKftfpWwQ5i3E/edit?tab=t.0#heading=h.22k54zym11qp in a later PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150127
Approved by: https://github.com/laithsakka
2025-04-18 17:05:11 +00:00
Pian Pawakapan
c8240e3492 [export] allow partially specifying keys for dynamic shapes dict spec (#151597)
Fixes #148564

Should help with exporting HF-style models, so users don't have to specify 100 Nones
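As a hedged illustration of the intended usage (the module and input names here are made up), a dict spec can now list only the inputs that need dynamism, with unspecified keys presumably treated as static:
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, input_ids, attention_mask):
        return input_ids * 2, attention_mask + 1

args = (torch.zeros(2, 8, dtype=torch.long), torch.ones(2, 8))
# Only the input we care about needs a key; previously every other input
# had to be spelled out (typically as None).
ep = export(M(), args, dynamic_shapes={"input_ids": {0: Dim.DYNAMIC}})
```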

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151597
Approved by: https://github.com/angelayi
2025-04-18 16:53:01 +00:00
Tugsbayasgalan Manlaibaatar
eb1f85a2a0 Support C++ statically_known_true (#151346)
Differential Revision: [D73040543](https://our.internmc.facebook.com/intern/diff/D73040543/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151346
Approved by: https://github.com/laithsakka
2025-04-18 06:42:12 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
c2a202169d Fix implicit state dict modification (#151436)
Summary: Previously we were modifying ep.state_dict while running decompositions, which we shouldn't do.
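A minimal sketch of the invariant this fix restores, assuming the public torch.export API: run_decompositions() should return a new program and leave the original ep.state_dict untouched.
```python
import torch
from torch.export import export

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 3)

    def forward(self, x):
        return self.linear(x)

ep = export(M(), (torch.randn(2, 3),))
before = dict(ep.state_dict)                    # shallow snapshot of the original
core_ep = ep.run_decompositions()               # returns a new ExportedProgram
assert before.keys() == ep.state_dict.keys()    # no keys added or removed
assert all(before[k] is ep.state_dict[k] for k in before)  # no entries replaced
```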

Test Plan: CI

Fixes: https://github.com/pytorch/pytorch/issues/151366

Differential Revision: D73102315

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151436
Approved by: https://github.com/angelayi
2025-04-18 00:58:55 +00:00
bobrenjc93
e0f05229e9 [ez] Make relaxed constraint error message more user friendly (#151407)
Fixes #151356

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151407
Approved by: https://github.com/Skylion007
2025-04-17 06:43:10 +00:00
PyTorch MergeBot
a582f04608 Revert "[ez] Make relaxed constraint error message more user friendly (#151407)"
This reverts commit bc934f57d7.

Reverted https://github.com/pytorch/pytorch/pull/151407 on behalf of https://github.com/izaitsevfb due to breaks export tests ([comment](https://github.com/pytorch/pytorch/pull/151407#issuecomment-2810716135))
2025-04-16 20:40:22 +00:00
bobrenjc93
bc934f57d7 [ez] Make relaxed constraint error message more user friendly (#151407)
Fixes #151356

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151407
Approved by: https://github.com/Skylion007
2025-04-16 17:00:06 +00:00
Shangdi Yu
83d88d128d [reland] Make export._trace._WrapperModule work in strict mode (#146919) (#151264)
Summary:

as title

`export._trace._WrapperModule` is used to wrap functions into a Module so we can export the function.

We add `export._wrapper_utils` to `dynamo`'s `MOD_INLINELIST` so dynamo traces into `_WrapperModule`
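For illustration, the general pattern looks roughly like the sketch below, using a plain user-defined wrapper in place of the private `_WrapperModule`:
```python
import torch
from torch.export import export

def my_fn(x, y):
    return torch.relu(x) + y

class FnWrapper(torch.nn.Module):  # stand-in for export._trace._WrapperModule
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

# With this fix, such a wrapper can be traced in strict mode as well.
ep = export(FnWrapper(my_fn), (torch.randn(3), torch.randn(3)), strict=True)
```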

Fixes https://github.com/pytorch/pytorch/issues/146867

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test:test_export -- -r wrapper_module
```

Differential Revision: D72986826

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151264
Approved by: https://github.com/angelayi
2025-04-15 18:35:34 +00:00
sandishkumarhn
61f127aac5 [Export] fix automatically convert instances of _check(u>=0) to check_is_size() (#148844)
Fixes #148826

Understanding:

1. PyTorch should automatically convert instances of _check(u>=0) to check_is_size()
2. The export mechanism should suggest using check_is_size() instead of _check(u>=0) when applicable

Changes made:
1. Added a helper function to detect non-negative checks: is_non_negative_check
2. Modified the suggestion logic in _suggest_torch_checks to detect and handle non-negative checks (see the sketch below)
3. Added unit tests: test_is_non_negative_check_function, test_suggest_torch_checks_with_non_negative_check, and test_suggest_torch_checks_with_regular_check
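As a hedged sketch of the two checks involved (using torch._check and torch._check_is_size, which exist in current PyTorch), applied to a data-dependent value produced via .item():
```python
import torch

def make_buffer(lengths: torch.Tensor) -> torch.Tensor:
    u = lengths.sum().item()      # unbacked SymInt under export/compile
    # Two spellings of the same intent:
    torch._check(u >= 0)          # plain non-negativity check
    torch._check_is_size(u)       # preferred: also marks u as a valid size
    return torch.zeros(u)

print(make_buffer(torch.tensor([2, 3])))  # 5 zeros in eager
```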

unit tests:

(base) sany@sandishs-Laptop pytorch % pytest test/export/test_export.py::TestExport::test_suggest_torch_checks_with_non_negative_check
=================================== test session starts ==================
platform darwin -- Python 3.9.19, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, anyio-4.6.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard

test/export/test_export.py .                                                                                           [100%]

======================== 1 passed in 1.67s =======================
(base) sany@sandishs-Laptop pytorch % pytest test/export/test_export.py::TestExport::test_suggest_torch_checks_with_regular_check
======================= test session starts =================
platform darwin -- Python 3.9.19, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, anyio-4.6.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard

test/export/test_export.py .                                                                                           [100%]

================================= 1 passed in 1.61s ================
(base) sany@sandishs-Laptop pytorch % pytest test/export/test_export.py::TestExport::test_is_non_negative_check_function
================================ test session starts =============
platform darwin -- Python 3.9.19, pytest-7.3.2, pluggy-1.5.0
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: xdoctest-1.1.0, cpp-2.3.0, flakefinder-1.1.0, anyio-4.6.0, rerunfailures-14.0, hypothesis-5.35.1, xdist-3.3.1, subtests-0.13.1, typeguard-4.3.0
collected 1 item
Running 1 items in this shard

test/export/test_export.py .                                                                                           [100%]

======================= 1 passed in 1.62s =========================
(base) sany@sandishs-Laptop pytorch %

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148844
Approved by: https://github.com/laithsakka
2025-04-15 17:41:11 +00:00
PyTorch MergeBot
4a47dd9b3f Revert "[map] always turn on dynamo for map (#150962)"
This reverts commit a72d56cb6b.

Reverted https://github.com/pytorch/pytorch/pull/150962 on behalf of https://github.com/Camyll due to breaking internal builds {SHORT_REASON} ([comment](https://github.com/pytorch/pytorch/pull/150962#issuecomment-2803006282))
2025-04-14 21:09:22 +00:00
Pian Pawakapan
6dddd6520d [dynamic shapes] add sym_and, sym_or (#150456)
This has been pretty helpful for the size-oblivious rewrite. Wanted the variadic args version to avoid `sym_or(a, sym_or(b, sym_or(c, d)))` in favor of `sym_or(a, b, c, d)`. Happy to change this to ban the 1-arg version.

This is better than plain and/or because the whole symbolic expression gets preserved, and if we guard on it or defer as a runtime assert, we preserve all branches.
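A small sketch of the variadic form, assuming the helpers are exposed as torch.sym_and / torch.sym_or (they degrade to plain bool logic on non-symbolic inputs):
```python
import torch

def check_valid(u0, u1, u2):
    # One symbolic expression covering all branches, instead of nesting
    # sym_or(a, sym_or(b, c)) or using plain Python `or` (which guards eagerly).
    torch._check(torch.sym_or(u0 == 1, u1 == 1, u2 == 1))
    torch._check(torch.sym_and(u0 >= 0, u1 >= 0, u2 >= 0))

check_valid(1, 4, 7)  # with plain ints the checks are just asserts in eager
```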

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150456
Approved by: https://github.com/laithsakka
2025-04-14 18:18:06 +00:00
PyTorch MergeBot
2f899f07aa Revert "Make export._trace._WrapperModule work in strict mode (#146919)"
This reverts commit dad5e5e262.

Reverted https://github.com/pytorch/pytorch/pull/146919 on behalf of https://github.com/malfet due to Broke lint, see https://github.com/pytorch/pytorch/actions/runs/14415686353/job/40431799827 ([comment](https://github.com/pytorch/pytorch/pull/146919#issuecomment-2798446930))
2025-04-12 04:12:36 +00:00
Shangdi Yu
dad5e5e262 Make export._trace._WrapperModule work in strict mode (#146919)
Summary:
as title

`export._trace._WrapperModule` is used to wrap functions into a Module so we can export the function.

We add `export._wrapper_utils` to `dynamo`'s `MOD_INLINELIST` so dynamo traces into `_WrapperModule`

Fixes https://github.com/pytorch/pytorch/issues/146867

Test Plan:
```
 buck run fbcode//mode/dev-nosan //caffe2/test:test_export -- -r wrapper_module
```

Differential Revision: D69434316

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146919
Approved by: https://github.com/angelayi
2025-04-12 03:22:08 +00:00
Yidi Wu
a72d56cb6b [map] always turn on dynamo for map (#150962)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150962
Approved by: https://github.com/zou3519
2025-04-11 23:28:06 +00:00
William Wen
183bca41de [dynamo] unimplemented -> unimplemented_v2 in variables/builder.py (#151044)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151044
Approved by: https://github.com/anijain2305, https://github.com/zou3519
2025-04-11 09:07:01 +00:00
xinan.lin
6f6ff8837a [Inductor UT][Break XPU] Fix UTs for XPU broken by community. (#150830)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150830
Approved by: https://github.com/anmyachev, https://github.com/desertfire, https://github.com/jansel
ghstack dependencies: #149862
2025-04-11 04:30:46 +00:00
Pian Pawakapan
389cd15265 [export] check tuple length mismatch for dynamic_shapes spec (#150976)
Summary: we weren't checking this before

Test Plan: test_export

Differential Revision: D72761995

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150976
Approved by: https://github.com/angelayi
2025-04-10 19:08:43 +00:00
angelayi
e6969c1bd8 [export] Symint support (nonstrict, Dim.DYNAMIC) (#150198)
Fixes https://github.com/pytorch/pytorch/issues/113682, but only in the non-strict export case. Also, we only support Dim.DYNAMIC/AUTO, not named Dims

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150198
Approved by: https://github.com/pianpwk
2025-04-10 15:06:23 +00:00
Shangdi Yu
92e81cf41a Add real_tensor to the FakeTensor in node.meta["val"] (#150948)
Summary: We need real_tensor on the FakeTensor in node.meta["val"] in order to aot_compile the draft exported programs. Otherwise, we cannot propagate real tensors even when fake_mode.propagate_real_tensors = True.

This also fixes real tensor propagation in `run_decomposition()`.

Test Plan:
```
 buck2 run @mode/dev-nosan  caffe2/test:test_export -- -r test_dedup_data_dependent_failure
```

Differential Revision: D72732714

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150948
Approved by: https://github.com/angelayi
2025-04-10 00:11:46 +00:00
Sherlock Huang
81f60f3880 Expand allowed_getattr_types_for_subgm to torch.Tensor (#150867)
Summary:
As titled.

A regular weight has the type torch.nn.parameter.Parameter, while buffers and tensor constants have the type torch.Tensor.

Both types are valid.

Test Plan: CI

Differential Revision: D72657275

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150867
Approved by: https://github.com/zhxchen17
2025-04-09 11:01:45 +00:00
Pian Pawakapan
f8b53f4a75 [export] raise when Dim.DYNAMIC 0/1 specializes (#150716)
Previously we didn't catch this; mark_dynamic() just doesn't allocate a symbol for it

Differential Revision: D72486930

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150716
Approved by: https://github.com/angelayi
2025-04-07 18:58:42 +00:00
PyTorch MergeBot
c93e34d7b5 Revert "bound sympy accuracy (#150383)"
This reverts commit 1bc2b2b12a.

Reverted https://github.com/pytorch/pytorch/pull/150383 on behalf of https://github.com/laithsakka due to big regression ([comment](https://github.com/pytorch/pytorch/pull/150383#issuecomment-2779227548))
2025-04-04 16:26:00 +00:00
Avik Chaudhuri
1bc2b2b12a bound sympy accuracy (#150383)
Differential Revision: D72215735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150383
Approved by: https://github.com/pianpwk
2025-04-04 00:15:32 +00:00
Pian Pawakapan
90ddb33141 [export] specialize for aten.to (#149235)
Changes decomposition behavior of `aten.to` to respect the aliasing/non-aliasing behavior in eager, and to specialize to the input/conversion dtype & device.

Before change: we always decompose `aten.to` into `_to_copy`, regardless of aliasing behavior. This leads us to ban mutations on the result of `_to_copy` when aliased, since we can't guarantee correct program semantics. This meant users had to explicitly call `.clone()` before mutating. In the special cases where we don’t ban mutations (e.g. dtype conversion), we add runtime assertions on the input & conversion dtype/devices in the decomposed program (see https://github.com/pytorch/pytorch/pull/142420).

After change: we decompose to the aliasing/non-aliasing behavior that matches eager, allowing mutations in all cases. We also add dtype/device assertions for all `aten.to` ops, starting in the pre-dispatch graph, basically specializing the program to the dtype/devices.
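A hedged illustration of the user-visible difference (the module here is made up): when `.to()` is a no-op and therefore aliases its input in eager, the exported program now permits mutating the result instead of requiring an explicit `.clone()` first, at the cost of specializing on the input dtype/device.
```python
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x):
        y = x.to(torch.float32)  # x is already float32, so this aliases in eager
        y.add_(1)                # mutating the alias no longer requires .clone()
        return y

# The resulting program is specialized to float32 inputs (asserted at runtime).
ep = export(M(), (torch.randn(3, dtype=torch.float32),))
```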

Differential Revision: D71229547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149235
Approved by: https://github.com/tugsbayasgalan
2025-04-03 05:20:10 +00:00
Avik Chaudhuri
1017927c83 multidimensional slicing (#150104)
Differential Revision: D71962884

Fixes #150057

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150104
Approved by: https://github.com/angelayi
2025-04-02 20:57:16 +00:00
Yidi Wu
22030efb64 expect fail scan test in sigmoid (#150475)
Summary: as titled.

Test Plan: see modified test.

Differential Revision: D72271976

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150475
Approved by: https://github.com/zhxchen17
2025-04-02 19:56:50 +00:00
angelayi
60fe0922f6 [pytree] Register normal class to register_dataclass (#147752)
Fixes https://github.com/pytorch/pytorch/pull/147532#discussion_r1964365330

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147752
Approved by: https://github.com/zou3519
2025-04-01 23:28:20 +00:00
Avik Chaudhuri
b70d105c77 infer dynamic shapes through additional inputs (#150144)
Summary:
Instead of explicitly specifying dynamic shapes, it is possible to infer them from additional example inputs. Together with the example inputs provided to export, we can basically make any varying dim dynamic and keep any fixed dim static. This should be useful for prod scenarios that have access to tests and/or profiling data, yet are somewhat removed from the model authoring process.

However this alone is not satisfactory: the exported program by design has only one graph, representing one path through the model, and we cannot necessarily guarantee that this graph works for the additional example inputs because different guards might have been created if we had exported with them instead (corresponding to different traced paths). However, checking that the additional example inputs satisfy the guards created by the original export should be sufficient for generalization.

Now, while we don't preserve all guards in the exported program, we do check a subset of them as part of input matching. So we add a verification step at the end of export when such additional example inputs are provided. This should be enough for now.

Test Plan: added test (positive and negative cases)

Differential Revision: D72001771

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150144
Approved by: https://github.com/bobrenjc93
2025-04-01 21:13:39 +00:00
Pian Pawakapan
925fd4aa2e [export] min/max ranges for dim hints (#149590)
Differential Revision: D71522032

Adds min/max ranges to Dim.AUTO/DYNAMIC/STATIC, so users can do `Dim.AUTO(min=2, max=2048)`.
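Following the syntax given above, a minimal usage sketch (the module and sizes are illustrative):
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

# The batch dim is discovered automatically but constrained to [2, 2048].
dynamic_shapes = {"x": {0: Dim.AUTO(min=2, max=2048)}}
ep = export(M(), (torch.randn(16, 8),), dynamic_shapes=dynamic_shapes)
```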

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149590
Approved by: https://github.com/tugsbayasgalan
2025-03-31 21:32:20 +00:00
Pian Pawakapan
103bf64a3c [export] refactor _Dim into Dim (#149891)
Summary: forward fix T218515233

Test Plan: test_export

Differential Revision: D71769231

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149891
Approved by: https://github.com/jingsh, https://github.com/angelayi
2025-03-28 06:19:03 +00:00
bobrenjc93
f649ee73ce Use source hashing to generate consistent symbolic ids (#149665)
This PR was inspired by internal models that were cache missing due to PGO. At a high level the problem looks as follows

Run 1, Invocation 1: We do static compile, save some example values in PGO/automatic dynamic

Run 1, Invocation 2: We detect varying inputs, do dynamic compile, get a dynamic graph and save to PGO. Crucially, what we save to PGO is a superset of what is actually dynamic. If we notice an input was varying, we mark it as dynamic in PGO even if later on that value gets specialized. When a value gets specialized, we actually remove the symbol from the graph. This results in an interesting conundrum where, although we are producing the same isomorphic graph, PGO makes the second run cache miss. Let's see how.

Run 2, Invocation 1: We fetch the PGO, over-mark things as dynamic, get a fx graph, look it up in the cache and... whoops! cache miss! This is because of the aforementioned behavior where the PGO profile will cause us to over-allocate symbols. In practice this means we end up saving a graph in cache with symbols x:s1, y:s3 and on second attempt we cache miss with x:s1, y:s6 where symbols s3,s4,s5 were all optimistically marked dynamic by PGO and subsequently specialized.

We solve this problem by hashing the source names. This ensures a somewhat stable assignment. To prevent catastrophic symbol collisions, we use linear probing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149665
Approved by: https://github.com/Mingming-Ding, https://github.com/laithsakka
2025-03-28 05:36:32 +00:00
Tugsbayasgalan Manlaibaatar
c49315e645 Improve attr mismatch msg (#149576)
Differential Revision: [D71513041](https://our.internmc.facebook.com/intern/diff/D71513041)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149576
Approved by: https://github.com/avikchaudhuri
2025-03-28 05:10:56 +00:00
Avik Chaudhuri
21bcbbfb5e fix range constraints for expr (#150103)
During tracing it is possible for a `s1: VR[2, inf]` to be replaced by a `s0: VR[3, inf]` (note smaller range) by the shape env. But after export, unfortunately we'd previously record `range_constraints[s0] = VR[2, inf]` (note larger range), which is incorrect.

This is because we'd map `s1.node.expr` (`s0`) to the `var_to_range` of `s1.node._expr` (`s1`) when creating `range_constraints`. The comment surrounding this code suggests this predated `bound_sympy`, but now we can do better.

For users, this means that when using `Dim.DYNAMIC` previously they wouldn't get input constraints checked sufficiently, now they do (shifting errors early).

Differential Revision: D71962694

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150103
Approved by: https://github.com/zhxchen17
2025-03-27 22:11:39 +00:00
PyTorch MergeBot
af7719a2fa Revert "Use source hashing to generate consistent symbolic ids (#149665)"
This reverts commit 1f92348dc6.

Reverted https://github.com/pytorch/pytorch/pull/149665 on behalf of https://github.com/malfet due to Broke trunk, see 6eb3c2e282/1 ([comment](https://github.com/pytorch/pytorch/pull/149665#issuecomment-2758578187))
2025-03-27 16:02:27 +00:00
bobrenjc93
1f92348dc6 Use source hashing to generate consistent symbolic ids (#149665)
This PR was inspired by internal models that were cache missing due to PGO. At a high level the problem looks as follows

Run 1, Invocation 1: We do static compile, save some example values in PGO/automatic dynamic

Run 1, Invocation 2: We detect varying inputs, do dynamic compile, get a dynamic graph and save to PGO. Crucially, what we save to PGO is a superset of what is actually dynamic. If we notice an input was varying, we mark it as dynamic in PGO even if later on that value gets specialized. When a value gets specialized, we actually remove the symbol from the graph. This results in an interesting conundrum where, although we are producing the same isomorphic graph, PGO makes the second run cache miss. Let's see how.

Run 2, Invocation 1: We fetch the PGO, over-mark things as dynamic, get a fx graph, look it up in the cache and... whoops! cache miss! This is because of the aforementioned behavior where the PGO profile will cause us to over-allocate symbols. In practice this means we end up saving a graph in cache with symbols x:s1, y:s3 and on second attempt we cache miss with x:s1, y:s6 where symbols s3,s4,s5 were all optimistically marked dynamic by PGO and subsequently specialized.

We solve this problem by hashing the source names. This ensures a somewhat stable assignment. To prevent catastrophic symbol collisions, we use linear probing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149665
Approved by: https://github.com/Mingming-Ding, https://github.com/laithsakka
2025-03-27 03:39:27 +00:00
Yidi Wu
b2b9aaf0ad Fix non-strict export doesn't turn on dynamo for hop (#149903)
Somehow torch._dynamo.is_compiling was changed to torch.compiler.is_compiling(), which also checks whether we're exporting. This was not caught by CI because we don't have an export test for scan.

Changed to torch.compiler.is_dynamo_compiling() and added a test.

Edit: piggybacking the re-tracing support in this PR. The related code is in combine_fn_is_normalized.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149903
Approved by: https://github.com/zou3519
2025-03-27 02:38:05 +00:00
Scott Wolchok
dc39e673e2 Remove aten.elu core ATen decomp because it is now core ATen (#149780)
Per @larryliu0820.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149780
Approved by: https://github.com/larryliu0820
2025-03-25 01:59:57 +00:00
Zhengxu Chen
84684e9397 [sigmoid] Fix scalar resolution for Scalar_mode aten ops. (#149755)
Summary: For Scalar variant resolution, we didn't handle a corner case of "Tensor_mode" variant (from aten::div). Adding the missing case to the graph pass.

Test Plan: buck test mode/opt caffe2/test:test_export -- -r test_operator_aten_tensor_mode_variant_cpp_runtime

Differential Revision: D71638433

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149755
Approved by: https://github.com/yushangdi
2025-03-25 01:17:36 +00:00
Zhengxu Chen
490ce7e67c [sigmoid] Support _operator.neg/truediv (#149754)
Summary: adding operator.truediv and operator.neg support to the runtime

Test Plan: buck run mode/opt caffe2/test:test_export -- -r test_sym_float_operators_cpp_runtime_nonstrict

Differential Revision: D71637267

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149754
Approved by: https://github.com/pianpwk
2025-03-24 22:15:25 +00:00
Yidi Wu
0a0a73a9a9 [cond] don't trace fw and bw graph in autograd key (#148930)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148930
Approved by: https://github.com/zou3519
2025-03-24 17:07:29 +00:00