Commit Graph

4261 Commits

Author SHA1 Message Date
William Wen
3ac5a499dd [dynamo] add dynamo disable reasons to codebase (#150440)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150440
Approved by: https://github.com/jansel, https://github.com/zou3519
ghstack dependencies: #150341
2025-04-02 04:26:48 +00:00
William Wen
25eff6e991 [dynamo] add reason field to torch.compiler.disable (#150341)
Implements https://github.com/pytorch/pytorch/issues/146445

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150341
Approved by: https://github.com/zou3519, https://github.com/jansel
2025-04-02 04:26:48 +00:00
Michael Lazos
0d44a8aea1 [Hierarchical Compile] Apply deduplication after output node creation (#150306)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150306
Approved by: https://github.com/anijain2305
ghstack dependencies: #150303, #150304, #150305
2025-04-01 20:54:18 +00:00
Michael Lazos
8740ffa760 [Hierarchical Compile] Add cycle detection to graph region expansion (#150305)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150305
Approved by: https://github.com/anijain2305
ghstack dependencies: #150303, #150304
2025-04-01 20:54:18 +00:00
Michael Lazos
a2300aff94 [Hierarchical Compile] Add cycle detection function for debug (#150304)
Remove print

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150304
Approved by: https://github.com/anijain2305
ghstack dependencies: #150303
2025-04-01 20:54:10 +00:00
Michael Lazos
99fd96c10b [Hierarchical Compile] Remove spammy debug log (#150303)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150303
Approved by: https://github.com/williamwen42
2025-04-01 20:54:03 +00:00
Will Feng
b0c560ef2a [dynamo][hooks] use wrap_top_frame config for functions (#150209)
When torch.compile is applied to a module via `mod.compile(...)`, it's equivalent to `torch.compile(mod._call_impl)`, which takes a different path than `OptimizedModule`. This PR ensures that the `wrap_top_frame` config can also take effect for the `torch.compile(mod._call_impl)` use case.
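
A minimal sketch of the two entry points, assuming a plain `nn.Module` (the module itself is illustrative):

```python
import torch

mod = torch.nn.Linear(4, 4)

# Path 1: torch.compile(mod) returns an OptimizedModule wrapper.
opt_mod = torch.compile(mod)

# Path 2: mod.compile() compiles mod._call_impl in place; with this PR,
# the wrap_top_frame config also takes effect on this path.
mod.compile()
```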

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150209
Approved by: https://github.com/anijain2305
2025-04-01 17:41:23 +00:00
Nikita Shulga
428234bc28 [MPSInductor] torch.complex128 is unsupported on MPS (#150386)
Same as torch.float64

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150386
Approved by: https://github.com/dcci
ghstack dependencies: #150382
2025-04-01 15:19:10 +00:00
Xuehai Pan
a10b765bf1 [pytree] add APIs to determine a class is a namedtuple or PyStructSequence (#113257)
Changes in this PR:

1. Add `is_structseq` and `is_structseq_class` functions to determine whether an object or a class is a PyStructSequence.
2. Add a generic class `structseq` which can be used as the registration key for PyStructSequence types, like `namedtuple` for named tuple types.
3. Change `is_namedtuple` to accept subclasses of namedtuple. Before this PR, only namedtuple classes created directly by `collections.namedtuple` or `typing.NamedTuple` were namedtuple classes, while their subclasses were not. This PR makes `is_namedtuple` return true for subclasses of namedtuple classes as well (see the sketch below).

Resolves #75982. New tests are included in this PR.

- #75982
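
A hedged sketch of the new behavior, assuming the functions are exposed from `torch.utils._pytree` under the names given above:

```python
import torch
import torch.utils._pytree as pytree
from collections import namedtuple

# torch.return_types.sort is a PyStructSequence type.
ret = torch.randn(3).sort()
assert pytree.is_structseq(ret)              # instance check
assert pytree.is_structseq_class(type(ret))  # class check

# Subclasses of a namedtuple class now also count as namedtuples.
Point = namedtuple("Point", ["x", "y"])

class Point3D(Point):
    pass

assert pytree.is_namedtuple(Point3D(1, 2))
```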

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113257
Approved by: https://github.com/zou3519
2025-04-01 10:40:43 +00:00
Prajesh Praveen Anchalia
48e9ffc873 Unify on dynamo_compile as the overall wait counter (#150293)
Summary:
dynamo_compile has, for the most part, been accounting for compile time, except autotuning.

all_compilation_types had earlier been injected on fx_codegen_and_compile, which was incorrect.

Add autotuning to dynamo and deprecate the all_compilation_types counter.

Differential Revision: D72145447

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150293
Approved by: https://github.com/masnesral, https://github.com/jamesjwu
2025-04-01 08:55:51 +00:00
William Wen
790d459f85 [dynamo] add error message for unsupported LOAD_BUILD_CLASS (#150323)
Improved error message for https://github.com/pytorch/pytorch/issues/128942

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150323
Approved by: https://github.com/jansel, https://github.com/zou3519
2025-04-01 05:03:50 +00:00
angelayi
5e34758cef [invoke_subgraph] Support unbacked (#149298)
Differential Revision: [D71420641](https://our.internmc.facebook.com/intern/diff/D71420641)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149298
Approved by: https://github.com/zou3519
2025-03-31 17:25:09 +00:00
Prajesh Praveen Anchalia
005c9b2f4f Fix _Waitcounter decorator and add backward pass wait counter (#150235)
Summary:
This logs a wait counter for the backward compile and fixes weirdness with nested context managers.

The old wait counters added through dynamo_timed were never created with the nesting issue, so they are unaffected. I am also changing the key nomenclature from `pytorch.dynamo_timed` to `pytorch.wait_counter`; we want to use a single nomenclature to make the keys easy to find.

Reviewed By: jamesjwu

Differential Revision: D72032055

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150235
Approved by: https://github.com/jamesjwu, https://github.com/masnesral
2025-03-30 05:20:12 +00:00
Michael Lazos
d2c0c65ea1 [Dynamo] Add debug linting option for graph dedupe (#150053)
As title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150053
Approved by: https://github.com/StrongerXi, https://github.com/anijain2305
2025-03-28 14:27:09 +00:00
Yuanhao Ji
d4da0e955e [Dynamo] Fix is_compile_supported() when device_type contains device index (#147837)
Fixes #147826

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147837
Approved by: https://github.com/anijain2305
2025-03-28 07:16:29 +00:00
Animesh Jain
c9ebf517c2 [dynamo][invoke_subgraph] Input aliasing and mutation check in Dynamo (#148953)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148953
Approved by: https://github.com/zou3519
ghstack dependencies: #149087, #149667, #150036
2025-03-28 03:50:07 +00:00
Simon Fan
748252378d [ca] introduce RuntimeState to support c++ hooks via graph breaks (#149987)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149987
Approved by: https://github.com/jansel
ghstack dependencies: #149647, #149709, #149651, #149897
2025-03-27 05:05:34 +00:00
Simon Fan
dcb378cff2 [ca] support anomaly mode nan checks with different semantics than eager (#149897)
see note in code

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149897
Approved by: https://github.com/jansel
ghstack dependencies: #149647, #149709, #149651
2025-03-27 05:05:34 +00:00
Yidi Wu
b2b9aaf0ad Fix non-strict export doesn't turn on dynamo for hop (#149903)
Somehow torch._dynamo.is_compiling was changed to torch.compiler.is_compiling(), which also returns true when we're exporting. This was not caught by CI because we don't have an export test for scan.

Changed it to torch.compiler.is_dynamo_compiling() and added a test.

edit: piggybacked the re-tracing support onto this PR. The related code is in combine_fn_is_normalized.
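
For reference, the distinction between the two predicates (per the `torch.compiler` docs):

```python
import torch

# True when a graph is being traced for torch.compile *or* torch.export:
torch.compiler.is_compiling()

# True only when TorchDynamo is tracing, which is the narrower check the
# scan HOP actually needs:
torch.compiler.is_dynamo_compiling()
```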

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149903
Approved by: https://github.com/zou3519
2025-03-27 02:38:05 +00:00
Lucas Kabela
d1ff3ff675 [Bugfix] Add handling for buffer overrides (#149882)
Fixes #139167

This PR:
* uses `named_buffers` to mark static
* Checks that `named_buffers` is of the expected type (callable, iterator) before trying to iterate over it; if not, we skip this pass

These changes fix the previous errors causing dynamo to crash (as shown in the issue above)

### Unit Test
```
python test/dynamo/test_buffers_override.py
```

Results in:
```
.
----------------------------------------------------------------------
Ran 2 tests in 5.344s

OK
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149882
Approved by: https://github.com/anijain2305
2025-03-25 20:12:43 +00:00
Ryan Guo
1c98dc3664 [dynamo] Fix handling of setattr with some tensor attributes (#149791)
We weren't handling `setattr(tensor_obj, "real", 42)` correctly, because
the attribute is a `GetSetDescriptorType` that has special setter logic.
See the added test and comments for more explanation.

This patch makes it so that we graph break in those cases, rather than
resulting in silent incorrectness.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149791
Approved by: https://github.com/mlazos
ghstack dependencies: #149481
2025-03-25 18:57:56 +00:00
angelayi
84ae056d82 [invoke_subgraph] Support pending unbacked symint (#149297)
The "PendingUnbackedSymbolNotFound" error is when an unbacked symbol is created within a piece of code, but this symbol never appears in any of the outputs. I believe the original intention is to help catch incorrectly written meta kernels, where users might've unintentionally created an unbacked symbol but never used it anywhere, but in our case this is intentional. An example is the following test case:

```python
    def test_pending_unbacked(self):
        class M(torch.nn.Module):
            @mark_compile_region
            def gn(self, x):
                u = x[0].item()
                return x * u

            def forward(self, x):
                for _ in range(4):
                    x = self.gn(x)
                return x

        torch._dynamo.config.capture_scalar_outputs = True
        torch.compile(M())(torch.randn(8))
```

This fails with the error:
```
torch._dynamo.exc.InternalTorchDynamoError: PendingUnbackedSymbolNotFound: Pending unbacked symbols {zuf1} not in returned outputs (FakeTensor(..., size=(8,)),) .
```

In this case, creating the unbacked symbol is intentional, so we can bypass this using `fake_mode.shape_env.ignore_fresh_unbacked_symbols()`.
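
A rough sketch of the bypass, assuming a standalone `FakeTensorMode` with a `ShapeEnv` (the surrounding setup is illustrative, not the actual invoke_subgraph code):

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from torch.fx.experimental.symbolic_shapes import ShapeEnv

fake_mode = FakeTensorMode(shape_env=ShapeEnv())
with fake_mode:
    x = torch.randn(8)
    # Unbacked symbols allocated inside this block are dropped from the
    # shape env's pending list, so the post-trace check won't raise
    # PendingUnbackedSymbolNotFound for them.
    with fake_mode.shape_env.ignore_fresh_unbacked_symbols():
        u = x[0].item()  # creates a fresh unbacked SymFloat (zuf*)
```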

Differential Revision: [D71298926](https://our.internmc.facebook.com/intern/diff/D71298926)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149297
Approved by: https://github.com/zou3519
ghstack dependencies: #149296
2025-03-25 16:42:58 +00:00
Michael Lazos
a89bdc0565 [Hierarchical Compilation] Handle origin nodes without children (#149685)
Bug discovered running Hierarchical Compilation on HF.

I don't have a smaller repro for this unfortunately.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149685
Approved by: https://github.com/williamwen42, https://github.com/anijain2305
2025-03-25 07:27:11 +00:00
Ryan Guo
ae6158500a [dynamo] fix calling torch function on newly constructed tensor subclass (#149481)
This patch updates the existing `test_return_..._subclass` tests in
`test/dynamo/test_subclasses.py` so that they end up invoking the
`__torch_function__` method of the newly constructed tensor subclass
instances.

This exposes a bug in `TensorVariable.method_as_subclass`, where it
forgot to grab the `__func__` out of `__torch_function__`, which led to
an error down the line.

This patch fixes `TensorVariable.method_as_subclass` by centralizing how
we extract and wrap torch function, in `build_torch_function_fn`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149481
Approved by: https://github.com/jansel
2025-03-24 21:07:41 +00:00
Kirill Goltsman
f12969421e [DYNAMO] [BUG FIX] correct casting to boolean for TORCH_COMPILE_DISABLE (#149852)
Fixes #149840

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149852
Approved by: https://github.com/jingsh
2025-03-24 20:50:44 +00:00
Simon Fan
1e5a561c13 [ca] fix accumulate grad polyfill when different strides between param and grad (#149651)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149651
Approved by: https://github.com/jansel
ghstack dependencies: #149647, #149709
2025-03-24 19:06:45 +00:00
Simon Fan
754875e237 [ca] API comments and support dynamic shapes via configs (#149709)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149709
Approved by: https://github.com/jansel
ghstack dependencies: #149647
2025-03-24 19:06:45 +00:00
bobrenjc93
60f31f551e Only print dde partial fx graph for export (#149831)
Lazos correctly pointed out this doesn't make sense for compile, since
we graph break in compile; it results in tons of unwanted user log
spew. We do want this in export, though, since it has drastically reduced
the support load for DDEs. This PR does the refactor to keep it in
export but remove it from compile.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149831
Approved by: https://github.com/mlazos
2025-03-24 17:46:18 +00:00
William Wen
6608d4e3e9 [dynamo] keep chained exceptions in user-facing tracebacks (#149676)
This preserves graph breaks in the case that one graph break directly causes another, e.g. graph breaks in generic context managers.

```python
import torch

class CtxMgr:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        pass

@torch.compile(backend="eager", fullgraph=True)
def fn():
    with CtxMgr():
        with CtxMgr():
            pass
        with CtxMgr():
            with CtxMgr():
                pass
            torch._dynamo.graph_break()

fn()
```

Output:
```
torch._dynamo.exc.Unsupported: Call to `torch._dynamo.graph_break()`
  Explanation: User-inserted graph break. Message: None
  Hint: Remove the `torch._dynamo.graph_break()` call.

  Developer debug context: Called `torch._dynamo.graph_break()` with args `[]`, kwargs `{}`

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/users/williamwen/pytorch/playground.py", line 23, in <module>
    fn()
  File "/data/users/williamwen/pytorch/torch/_dynamo/eval_frame.py", line 664, in _fn
    raise e.with_traceback(None) from e.__cause__
torch._dynamo.exc.Unsupported: Graph break under GenericContextWrappingVariable
  Explanation: Attempted to graph break in an active context manager(s) that doesn't support graph breaking.
  Hint: Move the offending context manager(s) to outside the compiled region.
  Hint: This graph break may have been caused by an earlier graph break. Resolving the earlier graph break may resolve this one.

  Developer debug context: Active generic context managers: [GenericContextWrappingVariable(CtxMgr), GenericContextWrappingVariable(CtxMgr)]

from user code:
   File "/data/users/williamwen/pytorch/playground.py", line 20, in fn
    torch._dynamo.graph_break()

Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```

Note in particular that both graph breaks (torch._dynamo.graph_break and graph break in context manager) are present in the logs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149676
Approved by: https://github.com/jansel, https://github.com/zou3519, https://github.com/anijain2305
2025-03-24 17:36:13 +00:00
Yidi Wu
0a0a73a9a9 [cond] don't trace fw and bw graph in autograd key (#148930)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148930
Approved by: https://github.com/zou3519
2025-03-24 17:07:29 +00:00
fzyzcjy
85027ef74a Super tiny fix typo (#149109)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149109
Approved by: https://github.com/malfet
2025-03-23 03:02:53 +00:00
Animesh Jain
6bbe8dbd63 [dynamo][hooks] config to wrap the top frame in a wrapper (#149758)
This should be done by default but there are too many issues. This PR is a
workaround.

https://github.com/pytorch/pytorch/issues/117584

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149758
Approved by: https://github.com/yf225
ghstack dependencies: #149712
2025-03-22 07:17:01 +00:00
bobrenjc93
621c801f78 fix dynamic float when dynamic=True (#149564)
Fixes https://github.com/pytorch/pytorch/issues/149406#issuecomment-2738111733. Previously we would only make floats dynamic via automatic dynamic; now, if you set dynamic=True, we will make the floats dynamic on the first compile.
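
A sketch of the intended behavior change:

```python
import torch

@torch.compile(dynamic=True)
def scale(x, alpha: float):
    return x * alpha

x = torch.randn(4)
scale(x, 0.5)  # alpha is made dynamic on the first compile...
scale(x, 0.7)  # ...so a new float value should not force a recompile
```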

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149564
Approved by: https://github.com/laithsakka
2025-03-22 05:58:59 +00:00
Animesh Jain
d320af0663 [dynamo] Ensure placeholder name is not an intermediate node name (#149712)
Fixes https://fb.workplace.com/groups/1075192433118967/permalink/1615671879071017/

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149712
Approved by: https://github.com/zou3519
2025-03-21 22:24:45 +00:00
Michael Lazos
34743678b9 [Dynamo] Cleanup state management for ctx managers (#149689)
Removes state indirection for ctx managers. This isn't needed anymore since VTs are mutable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149689
Approved by: https://github.com/StrongerXi
2025-03-21 07:18:33 +00:00
Hollow Man
0692301e25 Catch OSError in general when writing files (#149464)
Redundant exception types in `except (PermissionError, OSError):`.  Write `except OSError:`, which catches exactly the same exceptions.

https://github.com/pytorch/pytorch/actions/runs/13935844871/job/39141062991

When hipifying files or writing cProfile files, catching PermissionError is not enough: the file may be located in a place that is not writable at all, or other OS errors may happen while writing.

This fix makes the code more robust.
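
The fix relies on `PermissionError` being a subclass of `OSError`; a minimal illustration (the path is hypothetical):

```python
path, text = "/sys/readonly-example.txt", "hello"  # hypothetical unwritable target
try:
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
except OSError as e:  # also catches PermissionError, [Errno 30] EROFS, etc.
    print(f"skipping write: {e}")
```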

Example error log:
```log
  File "deepspeed/ops/adam/fused_adam.py", line 94, in __init__
    fused_adam_cuda = FusedAdamBuilder().load()
                      ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "deepspeed/ops/op_builder/builder.py", line 540, in load
    return self.jit_load(verbose)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "deepspeed/ops/op_builder/builder.py", line 587, in jit_load
    op_module = load(name=self.name,
                ^^^^^^^^^^^^^^^^^^^^
  File "torch/utils/cpp_extension.py", line 1597, in load
    return _jit_compile(
           ^^^^^^^^^^^^^
  File "torch/utils/cpp_extension.py", line 2031, in _jit_compile
    hipify_result = hipify_python.hipify(
                    ^^^^^^^^^^^^^^^^^^^^^
  File "torch/utils/hipify/hipify_python.py", line 1167, in hipify
    preprocess_file_and_save_result(output_directory, filepath, all_files, header_include_dirs,
  File "torch/utils/hipify/hipify_python.py", line 213, in preprocess_file_and_save_result
    result = preprocessor(output_directory, filepath, all_files, header_include_dirs, stats,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "torch/utils/hipify/hipify_python.py", line 940, in preprocessor
    output_source = RE_QUOTE_HEADER.sub(mk_repl('#include "{0}"', True), output_source)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "torch/utils/hipify/hipify_python.py", line 919, in repl
    preprocess_file_and_save_result(output_directory,
  File "torch/utils/hipify/hipify_python.py", line 213, in preprocess_file_and_save_result
    result = preprocessor(output_directory, filepath, all_files, header_include_dirs, stats,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "torch/utils/hipify/hipify_python.py", line 986, in preprocessor
    with clean_ctx.open(fout_path, 'w', encoding='utf-8') as fout:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "torch/utils/hipify/hipify_python.py", line 123, in open
    return open(fn, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 30] Read-only file system: 'deepspeed/ops/csrc/adam/multi_tensor_apply_hip.cuh'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149464
Approved by: https://github.com/janeyx99
2025-03-21 02:42:50 +00:00
Michael Lazos
1d3c50fcc5 [Dynamo] Support the torch._C.DisableTorchFunction ctx manager (#149491)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149491
Approved by: https://github.com/StrongerXi
ghstack dependencies: #149489, #149490
2025-03-20 22:19:55 +00:00
Michael Lazos
ce5adc5c05 [Dynamo] add support for torch._C._is_torch_function_all_disabled (#149490)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149490
Approved by: https://github.com/StrongerXi
ghstack dependencies: #149489
2025-03-20 22:19:55 +00:00
Michael Lazos
f64c361860 [Dynamo] Refactor DisableTorchFunction ctx manager (#149489)
Refactors the DisableTorchFunction ctx manager to properly model the eager code (no args to the context manager).
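
A minimal sketch of the eager API being modeled, combined with the check from #149490:

```python
import torch

# The eager context manager takes no arguments:
with torch._C.DisableTorchFunction():
    # All __torch_function__ overrides are disabled in here.
    assert torch._C._is_torch_function_all_disabled()
```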

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149489
Approved by: https://github.com/StrongerXi
2025-03-20 22:19:55 +00:00
Guilherme Leobas
406d464d97 Add is_batchedtensor to dynamo builder (#149541)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149541
Approved by: https://github.com/zou3519
2025-03-20 20:46:15 +00:00
Guilherme Leobas
18435945af Set __context__/__cause__ when generator raise StopIteration (#148765)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148765
Approved by: https://github.com/zou3519
ghstack dependencies: #146505
2025-03-20 19:59:30 +00:00
Guilherme Leobas
44e6464914 Allow setting attribute to NestedUserFunctionVariable (#146505)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146505
Approved by: https://github.com/zou3519
2025-03-20 19:59:30 +00:00
William Wen
6285a71aba [dynamo] fix bug where non-recursive disable modifies the original function (#148896)
Fixes https://github.com/pytorch/pytorch/issues/148787.

We fix this by:
- Wrapping the original function instead of directly modifying it
- When we detect that the previous frame is the non-recursive disable wrapper, we skip tracing this frame (the non-recursive disable wrapper will always be skipped, so that frame will be present in the traceback)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148896
Approved by: https://github.com/jansel
2025-03-20 18:33:54 +00:00
Shuai Yang
00a2c68f67 Fix a typo "trochrec" to "torchrec" (#149542)
Summary: As titled, the path is incorrect due to the typo

Test Plan: CI

Differential Revision: D71490709

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149542
Approved by: https://github.com/williamwen42
2025-03-20 10:14:23 +00:00
William Wen
a66a9581da [dynamo] support Python 3.13t (#149549)
A few bug fixes to get Dynamo mostly working with 3.13 nogil. Dynamo encounters internal CPython assert errors in older versions of 3.13. The fix has been landed on [CPython's 3.13 branch](https://github.com/python/cpython/tree/3.13) and will be included in 3.13.3 (https://peps.python.org/pep-0719/ - April 8). If you wish to try `torch.compile` on the latest 3.13 branch, you can comment out the error checking (i.e. 70b6cd4e11/torch/__init__.py (L2535) and 70b6cd4e11/torch/_dynamo/eval_frame.py (L899)).

We will work on getting PyTorch CI up for Dynamo/dynamo-wrapped/inductor once 3.13.3 is available.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149549
Approved by: https://github.com/jansel
2025-03-20 09:49:27 +00:00
Sam Larsen
1e30192b19 [logging] Add python version to dynamo_compile table (#149419)
Summary: This adds a version field like the following: `3.10.9+fb (3.10:1dd9be6, May  4 2022, 01:23:45) [Clang 15.0.7 (mononoke://mononoke.internal.tfbnw.net/fbsource 5d1601b0eed7426ac`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149419
Approved by: https://github.com/c00w
2025-03-20 01:48:34 +00:00
Simon Fan
f123f2c077 [ca] fix dce for side-effects (#149336)
The AOT backward could have contained side-effectful ops, so we can't DCE them. Have CA also call the default fx.Node.is_impure, which will cover some of the existing cases.
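
A minimal illustration of the default FX behavior the message refers to (a standalone sketch, not CA's internal code):

```python
import torch
import torch.fx as fx

def f(x):
    torch._assert(x.shape[0] > 0, "non-empty")  # side-effectful: is_impure() is True
    y = x + 1                                   # unused pure node: removed by DCE
    return x

gm = fx.symbolic_trace(f)
gm.graph.eliminate_dead_code()  # consults Node.is_impure(); the assert survives
gm.recompile()
print(gm.code)
```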

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149336
Approved by: https://github.com/jansel
2025-03-19 05:56:47 +00:00
Animesh Jain
a3c286677b [compile] Switch off inference mode during compilation (#149321)
PR does following
* Turns `inference_mode` off and uses `no_grad` for `convert_frame` if `inference_mode` is on globally.
* Turns off `inference_mode` for fake tensor prop. This ensures that converting a real inference tensor to a fake tensor removes the inference-ness.
* Graph breaks on `is_inference` and `is_inference_mode_enabled` (see the sketch below).
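
A sketch of the user-visible effect under the changes above (backend choice is illustrative):

```python
import torch

def fn(x):
    if torch.is_inference_mode_enabled():  # now a graph break inside compiled code
        return x + 1
    return x + 2

x = torch.randn(4)
with torch.inference_mode():
    # Compilation itself now runs with inference_mode off (under no_grad),
    # so fake tensor prop doesn't bake inference-ness into the graph.
    out = torch.compile(fn, backend="eager")(x)
```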

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149321
Approved by: https://github.com/jansel, https://github.com/zou3519
2025-03-19 02:45:27 +00:00
Thomas Bohnstingl
cd5c13d8f0 [hop] Rework the check of Metadata in the functionalization key (#148789)
This PR is a more cosmetic rework of the metadata check performed by some HOPs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148789
Approved by: https://github.com/ydwu4
2025-03-18 20:30:59 +00:00
Animesh Jain
f9a787224c [dynamo][guards][serialization] Dont use ID_MATCH guard for bool and None (#149228)
Doing this removes the need to collect `id` and therefore facilitates serialization. It also improves readability of recompilations; earlier, the recompile message would just show the `id`.
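
A hedged illustration of the guard change; the exact guard strings below are assumptions:

```python
# For a bool input L['flag']:
#
#   Before (ID_MATCH):       ___check_obj_id(L['flag'], 94263825837472)
#   After  (CONSTANT_MATCH): L['flag'] == True
#
# Dropping the raw id() makes guards serializable and recompile
# messages readable.
```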

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149228
Approved by: https://github.com/jansel
2025-03-18 01:25:37 +00:00