Commit Graph

483 Commits

Author SHA1 Message Date
Rachel Guo
cad0727fe1 Rename the provenance tracing artifact name for kernel <-> post_grad nodes mapping (#154046)
Summary:
Context:

Recently we've added support for a couple more kernel types besides Inductor-generated Triton kernels, such as cpu cpp kernels and extern kernels.

The name that appears in the tlparse Chrome link can be confusing to users.

Rename from

`inductor_triton_kernel_to_post_grad_nodes.json`

to `inductor_generated_kernel_to_post_grad_nodes.json`

Test Plan: CI

Differential Revision: D75159042

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154046
Approved by: https://github.com/yushangdi
2025-05-22 19:20:56 +00:00
Gabriel Ferns
254293b777 Add flag _metrics_log_runtime to disable runtime metric logging by default (#153506)
https://github.com/pytorch/pytorch/pull/152708 expanded support of `get_estimated_runtime` to many more types of `SchedulerNodes`. This caused an increase in compile time because we're always calling `get_estimated_runtime` to populate the metrics table. This PR adds a flag for this logging, which reduces the instruction count by 8%. Long term, we should probably merge metrics.py with TORCH_LOGS/tlparse (suggestion from @xmfan).

Update: added support for TORCH_LOGS for the metrics logging.
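
As a rough illustration of the gating idea (the flag location and helper below are assumptions, not the PR's actual code):

```python
# Hypothetical sketch: only pay for get_estimated_runtime() when the flag is on.
import torch._inductor.config as inductor_config

def maybe_record_runtime(node, metrics_table: list) -> None:
    # Assumed flag location; defaults to off so building the metrics table stays cheap.
    if not getattr(inductor_config, "_metrics_log_runtime", False):
        return
    metrics_table.append((node.get_name(), node.get_estimated_runtime()))
```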

Test Plan:
mm_loop.py and many existing tests cover.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153506
Approved by: https://github.com/eellison
2025-05-22 01:02:11 +00:00
PaulZhang12
a7c01d7f13 [Inductor] Subgraph check output strides (#153755)
Make sure the output strides of the subgraph are consistent with the original gm. Without checking strides, the subgraph could produce NaNs when a reinterpret_tensor was applied to a subgraph output that was not itself contiguous.
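
A minimal sketch of the kind of check this describes, assuming we can compare example outputs of the original gm with the subgraph's outputs (the helper name is hypothetical):

```python
import torch

def check_output_strides(original_outputs, subgraph_outputs):
    # Hypothetical helper: the subgraph must preserve the original output strides,
    # otherwise a reinterpret_tensor view over a non-contiguous output reads garbage/NaNs.
    for i, (orig, sub) in enumerate(zip(original_outputs, subgraph_outputs)):
        if orig.size() != sub.size() or orig.stride() != sub.stride():
            raise AssertionError(
                f"subgraph output {i}: expected size/stride "
                f"{tuple(orig.size())}/{orig.stride()}, got {tuple(sub.size())}/{sub.stride()}"
            )
```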

Differential Revision: [D74691119](https://our.internmc.facebook.com/intern/diff/D74691119/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153755
Approved by: https://github.com/eellison
ghstack dependencies: #153754
2025-05-20 16:07:18 +00:00
Menglu Yu
701e22112d [PT2][Optimus][Observability] Refactor the logging to avoid excessive tlparse log (#153584)
Summary: context: https://fb.workplace.com/groups/943185660584207/permalink/1215335930035844/

Test Plan:
before: aps-aps-ig_v4_2t_2_make_baseline_30batch-735703723-f735706162

tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-aps-ig_v4_2t_2_make_baseline_30batch-735703723-f735706162/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000&fbclid=IwZXh0bgNhZW0CMTEAAR575JfJZUtE7kQCqzIZVCYomv1q03JzuMFVok8qDA_FuGC8oZ6rhhb2EziSQA_aem_abITQJZQP45t51_r-J-cFw

Differential Revision: D74776025

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153584
Approved by: https://github.com/jamesjwu
2025-05-19 22:57:29 +00:00
PyTorch MergeBot
3443627e07 Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)"
This reverts commit 4f4ecc583e.

Reverted https://github.com/pytorch/pytorch/pull/153473 on behalf of https://github.com/jeanschmidt due to seems to have broken internal signals, @albanD may I count on you to help the author merge his PR? D74837988 ([comment](https://github.com/pytorch/pytorch/pull/153473#issuecomment-2886017075))
2025-05-16 08:29:26 +00:00
Aaron Gokaslan
4f4ecc583e [BE]: Enable RUFF TRY400 rule - log.exception (#153473)
Change logging.error to logging.exception to log additional information when relevant. A few logging.error calls have slipped into try/except blocks since my last cleanup, and the rule is now stabilized, so I am enabling it codebase-wide. I have NOQA'd much of our custom exception stack-trace handling for RPC and distributed, and fixed a few call sites depending on whether we immediately re-raised the exception or failed to print exception information where it could be useful.
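
For reference, the pattern the rule enforces looks like this (standard-library behavior; the function is only illustrative):

```python
import logging

log = logging.getLogger(__name__)

def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        # TRY400: logging.exception logs at ERROR level *and* records the traceback;
        # log.error(...) here would silently drop the stack trace.
        log.exception("failed to read config from %s", path)
        return None
```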

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153473
Approved by: https://github.com/albanD, https://github.com/cyyever
2025-05-15 13:36:59 +00:00
clr
85f97b5a8c compile_fx: make a compile event that corresponds to the fx_compile waitcounter (#152983)
This is a pretty minor change, but by having exact correspondence, we can
easily confirm data differences between perfetto and wait counters

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152983
Approved by: https://github.com/jansel, https://github.com/masnesral
2025-05-14 01:54:42 +00:00
Animesh Jain
7fdd754136 [compile-time traces] Profile large missing gaps in compile time (#151256)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151256
Approved by: https://github.com/bdhirsh, https://github.com/masnesral, https://github.com/zou3519, https://github.com/jansel
2025-05-13 14:44:51 +00:00
PyTorch MergeBot
01bb249978 Revert "has_triton: Use the device interface for detecting Triton availability (#139171)"
This reverts commit 48bfe9afc7.

Reverted https://github.com/pytorch/pytorch/pull/139171 on behalf of https://github.com/masnesral due to Performance regression for huggingface ([comment](https://github.com/pytorch/pytorch/pull/139171#issuecomment-2868939790))
2025-05-10 14:46:23 +00:00
George White
48bfe9afc7 has_triton: Use the device interface for detecting Triton availability (#139171)
This PR replaces the global `has_triton()` check that was previously used to detect Triton availability.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139171
Approved by: https://github.com/jansel, https://github.com/shink
2025-05-07 12:23:10 +00:00
Aaron Orenstein
7a0781eaad Improve cache key graph printing performance (#151928)
Teach the graph printer to allow overriding how SymTypes (`SymInt`, `SymFloat`, `SymBool`) are printed, and then use that hook to reuse the fast SymNode printing from `torch._inductor.utils.sympy_str()` so that computing the cache key is faster.
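
A minimal sketch of the idea, not the PR's exact hook (`value.node.expr` access is an assumption about how the symbolic expression is reached):

```python
import torch

def cache_key_repr(value):
    # Route symbolic scalars through a cheap string form of the underlying sympy
    # expression instead of the default printing path; the PR reuses
    # torch._inductor.utils.sympy_str for this, str() is only a stand-in here.
    if isinstance(value, (torch.SymInt, torch.SymFloat, torch.SymBool)):
        return str(value.node.expr)
    return repr(value)
```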

On my computer the repro from #151823 goes from 480s -> 80s (still terrible... but better).

Fixes #151823

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151928
Approved by: https://github.com/laithsakka
2025-05-06 17:39:53 +00:00
Jovian Anthony Jaison
5d36485b4a Log aot and idx waitcounters. (#152444)
Summary:
Added for create_aot_dispatcher_function and compile_fx_inner.

Note:
Log wait counters flag is already set for:
1. async_compile.precompile
2. remote_fx_graph_cache_get
3. remote_fx_graph_cache_put

Test Plan: contbuild

Differential Revision: D73866124

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152444
Approved by: https://github.com/ppanchalia, https://github.com/masnesral
2025-05-06 16:21:58 +00:00
Animesh Jain
cc254eaa7c [inductor][refactor] Refactor the fetching of subgraph names (#152770)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152770
Approved by: https://github.com/jansel, https://github.com/zou3519
ghstack dependencies: #152772
2025-05-06 02:55:34 +00:00
PyTorch MergeBot
172a7c942e Revert "Log aot and idx waitcounters. (#152444)"
This reverts commit ea9ea02959.

Reverted https://github.com/pytorch/pytorch/pull/152444 on behalf of https://github.com/jovianjaison due to needs a fix ([comment](https://github.com/pytorch/pytorch/pull/152444#issuecomment-2851905261))
2025-05-05 18:11:37 +00:00
Jovian Anthony Jaison
ea9ea02959 Log aot and idx waitcounters. (#152444)
Summary:
Added for create_aot_dispatcher_function and compile_fx_inner.

Note:
Log wait counters flag is already set for:
1. async_compile.precompile
2. remote_fx_graph_cache_get
3. remote_fx_graph_cache_put

Test Plan: contbuild

Differential Revision: D73866124

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152444
Approved by: https://github.com/ppanchalia, https://github.com/masnesral
2025-05-05 17:35:29 +00:00
Laith Sakka
38a9a8b7f7 Fix: Consider input defined unbacked during inductor codegen for runtime asserts (#152231)
When we use mark_unbacked, the graph has unbacked input SymInts. Right now, deferred runtime assertions that use those symbols are never generated.

This PR changes that: in the forward graph we now consider input-defined unbacked symbols and generate the corresponding runtime assertions. We still ignore them for backward, which is not ideal.

The way we generate runtime assertions is by emitting them once all the unbacked symbols they use have been seen.

We previously skipped placeholders because, for backward, we have a wacky approach where we ignore input-defined unbacked symbols, assume that assertions using them were already emitted in forward, and try to emit all other runtime assertions again; see [Note [Backwards runtime asserts]].

Doing that, we end up emitting only the runtime assertions that depend on things defined solely in backward, but we could miss checks that span inputs defined in both forward and backward (e.g., one symbol defined in forward and passed as input to backward, and another defined in backward). This is not ideal; a better approach could be something like https://github.com/pytorch/pytorch/pull/151919, but it requires more work.
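
A small example of the scenario as described above (the `mark_unbacked` call signature is an assumption):

```python
import torch

def f(x):
    # A data-dependent constraint on the unbacked input size; with this PR the
    # corresponding deferred runtime assert is now emitted in the forward graph.
    torch._check(x.size(0) >= 2)
    return x.sum()

x = torch.randn(8)
torch._dynamo.mark_unbacked(x, 0)  # treat dim 0 as an unbacked SymInt (signature assumed)
out = torch.compile(f, fullgraph=True)(x)
```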

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152231
Approved by: https://github.com/aorenste
2025-05-02 07:01:48 +00:00
henrylhtsang
1845df05c6 [inductor][BE] Add more debug logs for why fx graph cache doesn't happen (#152487)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152487
Approved by: https://github.com/Skylion007, https://github.com/eellison
2025-05-01 17:25:28 +00:00
Blaine Burton Rister
7c63ddd817 [Inductor] Wrapper code refactors to prepare for FX codegen (#152391)
This PR contains some refactors from https://github.com/pytorch/pytorch/pull/146942, which help to enable Wrapper FX codegen:
1. Remove `OutputLine`, which is unused.
2. Add an attribute to the backend classes specifying whether they support caching.
3. Before compiling a graph, query the registered backends and check whether caching is supported.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152391
Approved by: https://github.com/jansel
2025-05-01 09:14:55 +00:00
PyTorch MergeBot
49a72011cc Revert "[inductor][BE] Add more debug logs for why fx graph cache doesn't happen (#152487)"
This reverts commit 76331657d2.

Reverted https://github.com/pytorch/pytorch/pull/152487 on behalf of https://github.com/malfet due to And it broke those tests, not sure why signal was ignored ([comment](https://github.com/pytorch/pytorch/pull/152487#issuecomment-2843333471))
2025-04-30 21:35:17 +00:00
rzou
22ecaeb145 [standalone_compile] fix dynamic shapes with config_patches (#152462)
compile_fx with config_patches goes down another path where we need to
propagate the kwarg...

Test Plan:
- updated test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152462
Approved by: https://github.com/oulgen
2025-04-30 21:02:14 +00:00
henrylhtsang
76331657d2 [inductor][BE] Add more debug logs for why fx graph cache doesn't happen (#152487)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152487
Approved by: https://github.com/Skylion007, https://github.com/eellison
2025-04-30 20:05:21 +00:00
Brian Hirsh
4a63cab624 [cudagraphs] Fix issue in collecting static_input_idxs (#152287)
related to https://github.com/pytorch/pytorch/issues/152275

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152287
Approved by: https://github.com/bdhirsh, https://github.com/eellison

Co-authored-by: Brian Hirsh <hirsheybar@fb.com>
2025-04-30 03:24:05 +00:00
PyTorch MergeBot
a6d19fcfac Revert "[cudagraphs] Fix issue in collecting static_input_idxs (#152287)"
This reverts commit 75a564608a.

Reverted https://github.com/pytorch/pytorch/pull/152287 on behalf of https://github.com/wdvr due to causing ao failures - discussed with author ([comment](https://github.com/pytorch/pytorch/pull/152287#issuecomment-2837686127))
2025-04-29 06:57:06 +00:00
Animesh Jain
75a564608a [cudagraphs] Fix issue in collecting static_input_idxs (#152287)
related to https://github.com/pytorch/pytorch/issues/152275

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152287
Approved by: https://github.com/bdhirsh, https://github.com/eellison
2025-04-28 23:07:52 +00:00
Animesh Jain
ddff3d4f6b [inductor][invoke_subgraph] Run joint graph passes for inference (#152062)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152062
Approved by: https://github.com/eellison
ghstack dependencies: #151409, #151633, #151477, #151957, #151961
2025-04-28 20:42:55 +00:00
Boyuan Feng
99b6c426a9 [Graph Partition] fix extra reference in runner.partitions to cudagraphify functions (#152066)
When a CompiledFxGraph is deallocated, its cudagraphified fn (i.e., `current_callable`) is expected to be deallocated as well. Without graph partition, this is true since the cudagraphified fn is referenced only by compiled_fx_graph.current_callable.

However, with graph partition, runner.partitions holds the cudagraphified fns while compiled_fx_graph.current_callable holds runner.call. Thus the cudagraphified fns may not be deallocated when the CompiledFxGraph is deallocated. This leads to errors in several unit tests (e.g., test_unaligned_static_input_no_cudagraphs and test_unaligned_static_input_non_trees).

In this PR, we also clean up runner.partitions when the CompiledFxGraph is deallocated. This fixes the issue.
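
A rough sketch of the lifetime fix, with hypothetical class names standing in for the real runner and compiled-graph objects:

```python
import weakref

class PartitionRunner:
    def __init__(self, partitions):
        # Strong references to the cudagraphified per-partition callables.
        self.partitions = list(partitions)

    def call(self, inputs):
        for fn in self.partitions:
            inputs = fn(inputs)
        return inputs

class CompiledGraph:
    def __init__(self, runner):
        self.current_callable = runner.call
        # Mirror the fix: when this graph object goes away, drop the runner's
        # references to the cudagraphified fns so they can be freed too.
        self._cleanup = weakref.finalize(self, runner.partitions.clear)
```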

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152066
Approved by: https://github.com/eellison
2025-04-28 20:38:26 +00:00
Shangdi Yu
85bfaf8cc5 Package const folded graph's cubin file (#152145)
Summary: We need to package the const-folded graph's cubin file into the final .pt2 package.

Fix https://github.com/pytorch/pytorch/issues/152067

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_constant_folding_cuda
```

Differential Revision: D73626480

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152145
Approved by: https://github.com/henrylhtsang, https://github.com/desertfire
2025-04-25 18:38:32 +00:00
rzou
596296fb0b [standalone_compile] Dynamic shape handling (#151788)
standalone_compile needs to get dynamic shape information from
somewhere. We add a new `dynamic_shapes` argument with three options:

1. from the passed-in graph (dynamic="from_graph"). This is the default.
2. from the example inputs, thereby specializing on them. (dynamic="from_example_inputs")
3. from the current tracing context (dynamic="from_tracing_context")

Options 1 and 3 are not exactly the same. Option 2 can also be used for more advanced things (e.g., to specialize on one input but not the other).

Most of this PR is tests.
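
A hedged usage sketch of these options (import path, keyword name, and the traced-module stand-in are assumptions based on this description):

```python
import torch
from torch._inductor import standalone_compile  # import path assumed

def fn(x):
    return torch.relu(x) + 1

example_inputs = [torch.randn(4, 8)]
gm = torch.fx.symbolic_trace(fn)  # stand-in for however the graph was captured

compiled = standalone_compile(gm, example_inputs, dynamic_shapes="from_graph")
# or: dynamic_shapes="from_example_inputs"   (specialize on the example inputs)
# or: dynamic_shapes="from_tracing_context"  (reuse the surrounding tracing context)
```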

Test Plan:
- a lot of new tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151788
Approved by: https://github.com/oulgen
2025-04-22 20:17:24 +00:00
Shangdi Yu
264e8fb151 More fix for aot_export_module name collision during unlifting (#151684)
Summary: Also check the module's named buffers and parameters when resolving name collisions.

Test Plan:
```
buck2 run mode/dev-nosan caffe2/test/inductor:test_aot_inductor -- -r aoti_constant_tensor_name_collision
```

Differential Revision: D73264885

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151684
Approved by: https://github.com/angelayi
2025-04-22 16:59:33 +00:00
Lu Fang
b3b1616560 Add explict type info in the try-catch for dynamo logging (#151733)
Differential Revision: D73295871

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151733
Approved by: https://github.com/hl475
2025-04-21 23:29:10 +00:00
PyTorch MergeBot
fd04c79878 Revert "[aot autograd][logging] Profile large missing gaps in compile time tracing (#151256)"
This reverts commit 8e373592c8.

Reverted https://github.com/pytorch/pytorch/pull/151256 on behalf of https://github.com/Camyll due to breaking internal tests, cannot import ([comment](https://github.com/pytorch/pytorch/pull/151256#issuecomment-2819244186))
2025-04-21 18:49:23 +00:00
Oguz Ulgen
ef64beb232 Include post grad gm and fx runnable in cache artifacts for tlparse (#151469)
Fixed #151462

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151469
Approved by: https://github.com/bdhirsh
2025-04-17 17:14:13 +00:00
Shangdi Yu
cd1db55817 Fix tensor_constant name collision in aot_export_module (#151123)
Summary:
When we have an exported program that looks like this:

```
ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, b__tensor_constant0: "f32[1]", ... c_lifted_tensor_0: "i64[925]", …. , tupleized_input_0_0: "f32[10, 2139]",

            clone: "i64[925]" = torch.ops.aten.clone.default(c_lifted_tensor_0);  c_lifted_tensor_0 = None

            index_select: "f32[10, 925]" = torch.ops.aten.index_select.default(tupleized_input_0_0, 1, clone);  clone = None
```

The graph after `aot_export_module` can have a name collision: notice that the `_tensor_constant0` arg of `clone` below is different from the `_tensor_constant0` in the input module above.

```
def forward(self):
        arg9_1: "f32[10, 2139]"

        _tensor_constant0: "f32[1]" = self._tensor_constant0 # this should be int64, conflicted with the original _tensor_constant0, had a clone on this constant before lifting

        index: "f32[10, 925]" = torch.ops.aten.index.Tensor(arg9_1, [None, _tensor_constant0]);  _tensor_constant0 = None
```

This caused the `tensors used as indices must binary, int...` AOTI error on the PT2I dashboard because we later used `clone` as an index.

We hit this error because we created a new `_tensor_constant0` [here](https://github.com/pytorch/pytorch/blob/main/torch/fx/_symbolic_trace.py#L403-L412), and that new `_tensor_constant0` overrides the original `_tensor_constant0` on the input Module in `_unlift_graph`. The `arg` for `clone` is created in `create_proxy` in `proxy.py`.

To fix this, we do a graph pass before we unlift the graph inputs to avoid the name collision (see the sketch below).
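
A rough sketch of the renaming idea (the pass name and inputs are hypothetical; the real fix runs on the traced graph before `_unlift_graph`):

```python
import torch.fx as fx

def dedupe_constant_attr_names(gm: fx.GraphModule, lifted_constant_names):
    # Hypothetical pass: pick fresh attribute names for lifted constants so they
    # cannot shadow existing _tensor_constant* buffers/params on the input module.
    taken = set(dict(gm.named_buffers())) | set(dict(gm.named_parameters())) | set(vars(gm))
    renames = {}
    i = 0
    for name in lifted_constant_names:
        new_name = name
        while new_name in taken:
            new_name = f"_tensor_constant{i}"
            i += 1
        taken.add(new_name)
        renames[name] = new_name
    return renames
```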

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r aot_compile_constant_folding

buck2 run mode/dev-nosan caffe2/test/inductor:test_aot_inductor -- -r aoti_constant_tensor_name_collision
```

Differential Revision: D72761937

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151123
Approved by: https://github.com/tugsbayasgalan, https://github.com/jingsh
2025-04-17 01:52:21 +00:00
Animesh Jain
8e373592c8 [aot autograd][logging] Profile large missing gaps in compile time tracing (#151256)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151256
Approved by: https://github.com/bdhirsh, https://github.com/masnesral
ghstack dependencies: #151330
2025-04-16 20:37:08 +00:00
Animesh Jain
c58b3f6be3 [invoke_subgraph][inductor] Run pre and post grad passes on invoke_subgraph (#151330)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151330
Approved by: https://github.com/eellison, https://github.com/zou3519
2025-04-16 20:37:01 +00:00
Oguz Ulgen
3cf0e2d8ec Add inductor standalone_compile API (#150670)
This PR adds a standalone_compile API that does precompilation via caching, to support the vLLM use case in the short term while we work on the longer-term precompilation solution.

```
standalone_compile(gm, example_inputs, options) -> CompiledArtifact
CompiledArtifact.save(path, format: binary|unpacked = binary)
CompiledArtifact.load(path, format: binary|unpacked = binary)
```
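
A hedged round-trip sketch of the API above (the import path, `CompiledArtifact` export, and format strings are assumptions based on this message):

```python
import torch
from torch._inductor import standalone_compile, CompiledArtifact  # assumed exports

def fn(x):
    return torch.sin(x) + 1

example_inputs = [torch.randn(8)]
gm = torch.fx.symbolic_trace(fn)  # stand-in for the caller's GraphModule

artifact = standalone_compile(gm, example_inputs, options={})
artifact.save("compiled_artifact.bin", format="binary")   # single-file artifact
restored = CompiledArtifact.load("compiled_artifact.bin", format="binary")
```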

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150670
Approved by: https://github.com/jamesjwu, https://github.com/zou3519
2025-04-15 23:38:15 +00:00
PyTorch MergeBot
74f6bc28a7 Revert "Add inductor standalone_compile API (#150670)"
This reverts commit c9aef50898.

Reverted https://github.com/pytorch/pytorch/pull/150670 on behalf of https://github.com/Camyll due to breaking internal builds with torch module not found error ([comment](https://github.com/pytorch/pytorch/pull/150670#issuecomment-2806975267))
2025-04-15 17:35:59 +00:00
Oguz Ulgen
c9aef50898 Add inductor standalone_compile API (#150670)
This PR adds a standalone_compile API that does precompilation via caching, to support the vLLM use case in the short term while we work on the longer-term precompilation solution.

```
standalone_compile(gm, example_inputs, options) -> CompiledArtifact
CompiledArtifact.save(path, format: binary|unpacked = binary)
CompiledArtifact.load(path, format: binary|unpacked = binary)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150670
Approved by: https://github.com/jamesjwu, https://github.com/zou3519
2025-04-14 22:00:09 +00:00
PyTorch MergeBot
24b3ab9255 Revert "Add inductor standalone_compile API (#150670)"
This reverts commit bbc5fe8504.

Reverted https://github.com/pytorch/pytorch/pull/150670 on behalf of https://github.com/albanD due to Broke profiler test ([comment](https://github.com/pytorch/pytorch/pull/150670#issuecomment-2802067144))
2025-04-14 15:22:33 +00:00
Oguz Ulgen
bbc5fe8504 Add inductor standalone_compile API (#150670)
This PR adds a standalone_compile API that does precompilation via caching, to support the vLLM use case in the short term while we work on the longer-term precompilation solution.

```
standalone_compile(gm, example_inputs, options) -> CompiledArtifact
CompiledArtifact.save(path, format: binary|unpacked = binary)
CompiledArtifact.load(path, format: binary|unpacked = binary)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150670
Approved by: https://github.com/jamesjwu, https://github.com/zou3519
2025-04-14 07:07:10 +00:00
James Wu
f1364431f0 Add debug_lines of FXGraphCacheKey to AOTAutogradCacheEntry (#150594)
Previously we didn't save debug_lines because it seemed pretty large, but compared to the size of FXGraphCache entries it's still pretty small. So let's add it to AOTAutogradCache for easier debuggability.

Differential Revision: [D72361611](https://our.internmc.facebook.com/intern/diff/D72361611/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150594
Approved by: https://github.com/oulgen
2025-04-11 15:24:13 +00:00
Burak Turk
38bec787fa cleanup JK for duplicate pt2 compile callbacks prevention (#148704)
Summary: This diff cleans up the JK we used for enabling the `add pt2 callbacks for backward pass and prevent duplicate callbacks` feature.

Differential Revision: D70643543

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148704
Approved by: https://github.com/mlazos
2025-04-11 15:17:06 +00:00
Gabriel Ferns
24f50653c8 fix bug in logging code (#150518)
Fixes https://github.com/pytorch/pytorch/issues/150379

```python
>>> key = "aten._int_mm_1_2_3"
>>> m, n, k = key.split("_")[-3:]
>>> m, n, k
('1', '2', '3')
>>> name = "_".join(key.split("_")[:-3])
>>> name
'aten._int_mm'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150518
Approved by: https://github.com/xmfan
2025-04-02 23:39:06 +00:00
James Wu
b03c42109c Proactively remove CompiledTritonKernels before loading from cache/starting inductor compile (#150453)
We're still running into this issue intermittently and it's hard to debug, so I thought a more aggressive cache-clearing strategy may fix it as a stopgap until we can statically launch CUDA kernels and avoid some of this.

Differential Revision: [D72257973](https://our.internmc.facebook.com/intern/diff/D72257973/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150453
Approved by: https://github.com/oulgen
2025-04-02 20:08:32 +00:00
Prajesh Praveen Anchalia
48e9ffc873 Unify on dynamo_compile as the overall wait counter (#150293)
Summary:
dynamo_compile has, for the most part, been accounting for all compile time except autotuning.

all_compilation_types had earlier been injected at fx_codegen_and_compile, which was incorrect.

Add autotuning to dynamo_compile and deprecate the all_compilation_types counter.

Differential Revision: D72145447

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150293
Approved by: https://github.com/masnesral, https://github.com/jamesjwu
2025-04-01 08:55:51 +00:00
Shangdi Yu
cc58ecceea Move dump location to avoid dumping twice (#150219)
Summary:
If we put the dumping code in codegen, we might get a separate node_mapping dump for the constant folded graph (https://github.com/pytorch/pytorch/blob/main/torch/_inductor/compile_fx.py#L1119).

We move it into compile_fx.py so there's only one node_mapping dump.

Test Plan: CI

Reviewed By: YUNQIUGUO

Differential Revision: D72068715

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150219
Approved by: https://github.com/YUNQIUGUO
2025-03-30 03:35:38 +00:00
Colin L. Rice
2f785ab208 dynamo_compile: Log all compilation time under all_compilation_types (#149664)
This counter is designed to include all compilation PyTorch does (triton + dynamo_compile). However, it wasn't including all of dynamo compilation, since it was put in at the fx_codegen_and_compile spot.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149664
Approved by: https://github.com/masnesral
2025-03-28 02:27:48 +00:00
Shangdi Yu
0ed0b7fa96 [aoti] Better error message when torchbind object is used as a graph input in AOTI (#149965)
Summary: Give an explicit error when a torchbind object is used as an input to AOTI.

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r test_torchbind_input
```

Differential Revision: D69490915

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149965
Approved by: https://github.com/desertfire
2025-03-27 18:48:55 +00:00
PyTorch MergeBot
ab9ca6b31f Revert "[inductor] Fix mm logging for torch._scaled_.mm (#149967)"
This reverts commit 661d74bf44.

Reverted https://github.com/pytorch/pytorch/pull/149967 on behalf of https://github.com/malfet due to This broke ROCM testing, see 45b11730f1/1 ([comment](https://github.com/pytorch/pytorch/pull/149967#issuecomment-2753149024))
2025-03-26 03:29:59 +00:00
Rachel Guo
661d74bf44 [inductor] Fix mm logging for torch._scaled_.mm (#149967)
Summary:
This PR is just a recreation of the original PR: https://github.com/pytorch/pytorch/pull/149769

Fix mm logging for the `torch._scaled_mm` op, whose name breaks the original brittle underscore-parsing assumptions.

Test Plan: CI

Differential Revision: D71828732

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149967
Approved by: https://github.com/vkuzo
2025-03-25 23:38:35 +00:00