Commit Graph

86852 Commits

Michael Lazos
253059356f [Cutlass] Implement EVT example tensor creation (#150904)
This PR implements a translation layer from inductor IR to "example tensors", the expected arguments of the EVT tracer. These tensors store the name, shape, stride, and dtype of a tensor and allow an AST-based Python parser to generate the EVT C++.
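A minimal sketch of what such an example-tensor record might hold (the class name and fields here are illustrative, not the actual Inductor/Cutlass API):
```python
# Illustrative sketch only; the real EVT example-tensor type in Inductor
# differs. It records just enough metadata for the EVT tracer to work with.
from dataclasses import dataclass

import torch

@dataclass(frozen=True)
class ExampleTensor:
    name: str                 # buffer name from the inductor IR node
    shape: tuple[int, ...]    # sizes
    stride: tuple[int, ...]   # strides
    dtype: torch.dtype

acc = ExampleTensor("acc", (128, 256), (256, 1), torch.float16)
```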

Updates to example tensor creation.

Previously merged:
* https://github.com/pytorch/pytorch/pull/150903
* https://github.com/pytorch/pytorch/pull/150346
* https://github.com/pytorch/pytorch/pull/150345
* https://github.com/pytorch/pytorch/pull/150344

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150904
Approved by: https://github.com/eellison
2025-04-23 03:26:56 +00:00
Oguz Ulgen
cd021d048e Fix circular imports (#151939)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151939
Approved by: https://github.com/jamesjwu
2025-04-23 02:53:32 +00:00
Pian Pawakapan
13339ce086 [dynamic shapes] bound_sympy for size-oblivious min/max reasoning (#151242)
Differential Revision: D72978020

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151242
Approved by: https://github.com/bobrenjc93
2025-04-23 02:14:05 +00:00
Shunting Zhang
74074fe8d8 [inductor] handle offset in ReinterpretView for alignment (#151859)
Fix https://github.com/pytorch/pytorch/issues/151589

It's interesting that the Q4_K dequantization example in the referenced GH issue does not crash even when Inductor passes Triton the wrong alignment information. I dug into this a bit. The main reason is that there are two things in Triton that decide the vectorization size:
1. alignment
2. the max number of contiguous elements a thread needs to process

Here is the triton code that decides vectorization size [link](c5fed8e1ca/third_party/nvidia/lib/TritonNVIDIAGPUToLLVM/LoadStoreOpToLLVM.cpp (L147-L157)), and here is the triton code that considers contiguity for vectorization [link](c5fed8e1ca/lib/Analysis/AxisInfo.cpp (L1250-L1269))

When Inductor wrongly tells Triton that an unaligned tensor is aligned, Triton may still not vectorize (or not fully vectorize) because of the second restriction.

Check this test:
```
    @parametrize(
        "size",
        (
            128,
            1024,
            1024 * 1024,
        ),
    )
    def test_slice_view_dtype(self, size):
        offset = 1

        def f(x):
            return x[2:].view(dtype=torch.float32) + 1

        x = torch.randn((size + offset) * 2, dtype=torch.bfloat16, device=self.device)
        self.common(f, (x,), reference_in_float=False)
```

Before the fix, Inductor would tell Triton that the output of the aten.view.dtype op is aligned even though it's not. That tensor is then passed to the Triton kernel for the aten.add. Triton makes different vectorization decisions depending on the tensor size:
1. when size = 128, Triton picks ld.global.b32 to load data from global memory
2. when size = 1024, Triton uses ld.global.v2.b32
3. when size = 1024 * 1024, Triton uses ld.global.v4.b32

So whether wrong alignment metadata causes an issue depends on whether Triton picks vectorized instructions, which in turn depends on the Triton config (block size) decided by Inductor and on Triton's internal logic (how elements are assigned to each thread). We'd better make sure Inductor always generates correct metadata so that such hidden issues do not turn into crashes later.
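For intuition, here's a quick way to see the misalignment the test above exercises (a sketch; assumes the base allocation is at least 16-byte aligned, which PyTorch allocators typically guarantee):
```python
import torch

x = torch.randn(258, dtype=torch.bfloat16)   # 2-byte elements
y = x[2:].view(dtype=torch.float32)          # starts 4 bytes into x's storage

print(y.data_ptr() - x.data_ptr())  # 4
# Aligned for scalar 4-byte loads, but not for the 16-byte ld.global.v4.b32
# vectorized loads Triton may emit when told the pointer is 16B-aligned.
print(y.data_ptr() % 16)            # 4, not 0, given a 16B-aligned base
```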

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151859
Approved by: https://github.com/jansel, https://github.com/eellison
ghstack dependencies: #151841
2025-04-23 01:50:49 +00:00
leslie-fang-intel
68a7501dab [Inductor][CPP] Fix Codegen Issue when Parallel Reduction under the vectorization (#151887)
**Summary**
Fixes [#151290](https://github.com/pytorch/pytorch/issues/151290) and [#151523](https://github.com/pytorch/pytorch/issues/151523), which are regressions introduced by [#144020](https://github.com/pytorch/pytorch/pull/144020). That PR enabled parallelization at the inner loop level.

However, a currently unsupported case arises when parallel reduction occurs under the vectorization loop level, specifically in patterns like:
```
for vec_loop_level:
    do_parallel_reduction
```
In such cases, a temporary buffer `tmp_acc_array` is allocated for the tail scalar kernel, and another temporary buffer named `tmp_acc_array` is also defined for the parallel reduction. This results in a conflict due to the overlapping temporary buffers. This PR disables the problematic case to avoid the conflict until proper support is implemented.

**Test Plan**
```
python test/inductor/test_flex_attention.py -k test_make_block_mask_cpu
python test/inductor/test_cpu_repro.py -k test_parallel_reduction_vectorization
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151887
Approved by: https://github.com/jansel
2025-04-23 00:41:14 +00:00
Nikita Shulga
015b526a2a [MPSInductor] Warn-cast double as floats (#151963)
To support sqrt over dynamic shapes, i.e. make something like:
```python
torch.compile(dynamic=True)(lambda x: x * math.sqrt(x.size(0)))
```
compilable into
```metal
// Source node to ATen node mapping:
// Graph fragment:
//   %scalar_tensor_default : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (%arg0_1,), kwargs = {})
//   %convert_element_type_default : [num_users=1] = call_function[target=torch.ops.prims.convert_element_type.default](args = (%scalar_tensor_default, torch.float64), kwargs = {})
//   %sqrt_default : [num_users=1] = call_function[target=torch.ops.aten.sqrt.default](args = (%convert_element_type_default,), kwargs = {})
//   %convert_element_type_default_1 : [num_users=1] = call_function[target=torch.ops.prims.convert_element_type.default](args = (%sqrt_default, torch.float32), kwargs = {})
//   %mul_tensor : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%arg1_1, %convert_element_type_default_1), kwargs = {})
 kernel void generated_kernel(
     device float* out_ptr0,
     constant float* in_ptr0,
     constant long& ks0,
     uint xindex [[thread_position_in_grid]]
 ) {
     int x0 = xindex;
     auto tmp0 = in_ptr0[x0];
     auto tmp1 = ks0;
     auto tmp2 = static_cast<float>(tmp1);
     auto tmp3 = metal::sqrt(tmp2);
     auto tmp4 = static_cast<float>(tmp3);
     auto tmp5 = tmp0 * tmp4;
     out_ptr0[x0] = static_cast<float>(tmp5);
 }
```

TODO:
 - Figure out if this could be tweaked in fx-passes, but overhead is probably too high

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151963
Approved by: https://github.com/dcci
ghstack dependencies: #151869, #151871, #151872
2025-04-23 00:30:45 +00:00
Davide Italiano
49b7ffbb15 [MPS] Implement _print_Trunc_to_Int (#151964)
Fixes `test_device_assert_mps`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151964
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-04-23 00:30:00 +00:00
PyTorch MergeBot
72f711e200 Revert "[inductor] Change minimum number of SMs to 60 to let Ada use Triton GEMM backend (#150888)"
This reverts commit 8d81806211.

Reverted https://github.com/pytorch/pytorch/pull/150888 on behalf of https://github.com/henrylhtsang due to Revert because this change isn't needed ([comment](https://github.com/pytorch/pytorch/pull/150888#issuecomment-2822768377))
2025-04-23 00:26:49 +00:00
Syed Tousif Ahmed
334aab0dea Updates NCCLConfig with QOS variable (#151821)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151821
Approved by: https://github.com/kwen2501
2025-04-23 00:03:49 +00:00
Scott Wolchok
aa61707a56 Fix extra heap allocation in Source constructor (#151800)
This was a sneaky one: the StringCordView default constructor allocates.

Differential Revision: [D73129448](https://our.internmc.facebook.com/intern/diff/D73129448/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151800
Approved by: https://github.com/malfet, https://github.com/cyyever, https://github.com/Skylion007
ghstack dependencies: #151682
2025-04-22 23:36:06 +00:00
Riley Dulin
cd576fdce5 [torch][fx] Add support for EXIR dialect overload ops in normalize_function (#143689)
Summary:
I had a minor annoyance when debugging graphs using EXIR dialect ops:
all the function normalization went away. For functions with more than 5 arguments,
some of which are just simple bools and ints, it's very helpful to have
the kwarg names attached.

Enhance `normalize_target` to handle EdgeOpOverload targets. To avoid
a circular dependency on Executorch from pytorch core, I just use a `hasattr`
check for "_op". This only happens if the target is not already a recognized
torch function.
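As a minimal sketch, the duck-typing check is roughly the following (hypothetical helper; the real `normalize_target` logic differs in detail):
```python
def _unwrap_edge_op(target):
    # EdgeOpOverload wraps an ATen OpOverload in its `_op` attribute.
    # Checking via hasattr avoids importing ExecuTorch from pytorch core.
    if hasattr(target, "_op"):
        return target._op
    return target
```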

Also, I noticed that the new `fx.Node.normalized_arguments` function
didn't forward an important kwarg to `normalize_target`, so I fixed that too.

Test Plan: Tested with FxGraphDrawer and an fx Graph containing EXIR nodes.

Differential Revision: D67545909

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143689
Approved by: https://github.com/angelayi
2025-04-22 23:36:02 +00:00
Scott Wolchok
4f8adde5ce Speed up OperatorEntry construction by avoiding updateDispatchTableFull_ (#151682)
The purpose of the updateDispatchTableFull_ call is, according to the comment, just to pick up fallback kernels if there are any. We can implement that directly and more efficiently.

Differential Revision: [D73129447](https://our.internmc.facebook.com/intern/diff/D73129447/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151682
Approved by: https://github.com/Skylion007, https://github.com/malfet, https://github.com/bdhirsh
2025-04-22 23:35:53 +00:00
Keshav Kolur
c98340e268 [autodeps2] Replace third-party/pyyaml with third-party/pypi/pyyaml (#151668)
Summary: We should use the pypi version.

Test Plan: CI

Differential Revision: D73211869

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151668
Approved by: https://github.com/Skylion007
2025-04-22 23:27:13 +00:00
angelayi
f4ac9a160d [fx] Filter stacktrace (#151029)
Filter the stack trace so that the stack trace recorded on nodes when using fx.Tracer looks nicer. I just copied the filtering we have in [proxy_tensor.py](6720d23969/torch/fx/experimental/proxy_tensor.py (L1903-L1931)).

Previously the stacktrace looked like:
```
File "/data/users/angelayi/pytorch/moo.py", line 3964, in <module>
    run_tests()
  File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 1342, in run_tests
    unittest.main(argv=argv)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/main.py", line 101, in __init__
    self.runTests()
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/main.py", line 271, in runTests
    self.result = testRunner.run(self.test)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/runner.py", line 184, in run
    test(result)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 122, in run
    test(result)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 84, in __call__
    return self.run(*args, **kwds)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/suite.py", line 122, in run
    test(result)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 650, in __call__
    return self.run(*args, **kwds)
  File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 3324, in run
    self._run_custom(
  File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 3296, in _run_custom
    super_run(result=result)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 591, in run
    self._callTestMethod(testMethod)
  File "/home/angelayi/.conda/envs/pytorch-3.10/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
    method()
  File "/data/users/angelayi/pytorch/torch/testing/_internal/common_utils.py", line 3156, in wrapper
    method(*args, **kwargs)
  File "/data/users/angelayi/pytorch/moo.py", line 1495, in test_stack_trace
    gm = torch.fx.GraphModule(m, tracer.trace(m))
  File "/data/users/angelayi/pytorch/torch/fx/_symbolic_trace.py", line 837, in trace
    (self.create_arg(fn(*args)),),
  File "/data/users/angelayi/pytorch/moo.py", line 1485, in forward
    x = x * 2
  File "/data/users/angelayi/pytorch/torch/fx/proxy.py", line 716, in impl
    return tracer.create_proxy("call_function", target, args, kwargs)
  File "/data/users/angelayi/pytorch/torch/fx/proxy.py", line 248, in create_proxy
    proxy.node.stack_trace = "".join(CapturedTraceback.extract().format())
```
Now it looks like:
```
File "/data/users/angelayi/pytorch/moo.py", line 1485, in forward
    x = x * 2
```
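A rough sketch of the kind of frame filtering this copies (illustrative; the actual filter in proxy_tensor.py differs):
```python
import traceback

# Illustrative prefixes; the real filter list in proxy_tensor.py differs.
_INTERNAL_MARKERS = ("/torch/fx/", "/torch/testing/", "/unittest/")

def _user_frames(stack):
    return [f for f in stack
            if not any(m in f.filename for m in _INTERNAL_MARKERS)]

stack_trace = "".join(
    traceback.format_list(_user_frames(traceback.extract_stack()))
)
```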

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151029
Approved by: https://github.com/jfix71, https://github.com/zou3519, https://github.com/jingsh
2025-04-22 22:50:36 +00:00
Amandeep Chhabra
a7ccd96bbf logging start of torch elastic workers. (#150849)
Summary:
We would like to log the start of the workers. It will help make the logging complete.

Test Plan:
unit tests

https://www.internalfb.com/intern/testinfra/testrun/6473924724652056

e2e tests
https://www.internalfb.com/mlhub/pipelines/runs/mast/f712311762-27449483648-TrainingApplication_V403K?job_attempt=0&version=0&tab=execution_details&env=PRODUCTION

Reviewed By: tnykiel

Differential Revision: D72297314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150849
Approved by: https://github.com/d4l3k, https://github.com/kiukchung
2025-04-22 22:35:06 +00:00
angelayi
6a1b820255 [export] Enable symint inputs for AdditionalInputs and ShapesCollection (#151842)
With `AdditionalInputs`, the behavior is the same as with tensors:
```python
class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

additional_inputs = torch.export.AdditionalInputs()
additional_inputs.add((5, 5))
additional_inputs.add((3, 5))
additional_inputs.add((5, 4))
ep = torch.export.export(
    M(), (6, 7), dynamic_shapes=additional_inputs, strict=False
)
```

With `ShapesCollection`, we now need to wrap integer inputs as `_IntWrapper` so that we can have a unique identifier for each integer input.
```python
class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

from torch.export import Dim
from torch.export.dynamic_shapes import _IntWrapper

args = (_IntWrapper(5), _IntWrapper(5))
# Or we can do `args = pytree.tree_map_only(int, lambda a: _IntWrapper(a), orig_args)`
shapes_collection = torch.export.ShapesCollection()
shapes_collection[args[0]] = Dim.DYNAMIC
shapes_collection[args[1]] = Dim.DYNAMIC
ep = torch.export.export(
    M(), args, dynamic_shapes=shapes_collection, strict=False
)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151842
Approved by: https://github.com/pianpwk
2025-04-22 22:29:18 +00:00
atalman
43de9b75c3 Remove mention of magma-cuda in readme.md, refactor magma_conda install (#147476)
Related to https://github.com/pytorch/pytorch/issues/138506: we migrated the magma-cuda build from Anaconda to AWS.
The last version of magma-cuda published was 12.6: https://anaconda.org/pytorch/magma-cuda126

Here is the PR that moved from anaconda to tarball: https://github.com/pytorch/pytorch/pull/140417

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147476
Approved by: https://github.com/albanD
2025-04-22 22:08:49 +00:00
Nikita Shulga
c0b70f94e2 [Testing] Enable test_mutations_loop_fusion_mps (#151872)
By testing it against float32 rather than double dtype

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151872
Approved by: https://github.com/Skylion007, https://github.com/dcci, https://github.com/jansel
ghstack dependencies: #151869, #151871
2025-04-22 22:00:16 +00:00
Nikita Shulga
2f851ac8f8 [MPSInductor] Implement atomic_add store mode (#151871)
Which fixes `GPUTests.test_index_put2_mps`, `GPUTests.test__unsafe_masked_index_put_accumulate_mps`, and a dozen scatter/gather tests that rely on the atomic_add store mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151871
Approved by: https://github.com/jansel, https://github.com/dcci
ghstack dependencies: #151869
2025-04-22 22:00:16 +00:00
Nikita Shulga
3aecf2dc52 [MPS] Extend index_put to half precision floats (#151869)
By reusing `c10/metal/atomic.h`. This also fixes `GPUTests.test_index_put_fallback[12]_mps`, which is unrolled by Inductor, so there is no need for dedicated atomic_add support.

TODOs:
 - Get rid of the indexing kernel and compute indices directly when the kernel is run
 - Simulate atomic_add for int64 types as a series of int32 atomic-add-and-fetch operations
 - Set up tolerances correctly to pass float16/bfloat16 tests (as CPU always takes the sequential strategy)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151869
Approved by: https://github.com/Skylion007, https://github.com/dcci
2025-04-22 22:00:08 +00:00
Prachi Gupta
b8f4dc5a9f [ROCm] opportunistic fastatomics for ReduceAdd operations for MI300 GPUs (#146264)
In this approach, we catch any lanes within a wave that are doing fastatomics to the same destination address and compute the sum on the CU. This leads to a 3x improvement in scatter_add performance and a 2x improvement in index_select.
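As a plain-Python illustration of the idea (conceptual only; the real change is a wave-level optimization in the HIP kernels):
```python
from collections import defaultdict

def wave_scatter_add(memory, addresses, values):
    # Lanes of a wave targeting the same address combine their values on
    # the CU first, so only one atomic per unique address reaches memory.
    combined = defaultdict(float)
    for addr, val in zip(addresses, values):   # one (addr, val) per lane
        combined[addr] += val
    for addr, val in combined.items():
        memory[addr] += val                    # one "atomic" per address

mem = [0.0] * 4
wave_scatter_add(mem, [1, 1, 2, 1], [1.0, 2.0, 3.0, 4.0])
print(mem)  # [0.0, 7.0, 3.0, 0.0]
```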

scatter_add performance on MI300X (median time in ms from `triton.testing.do_bench`; lower is better):
dtype|Baseline (before optimizations)|opportunistic fastatomics
-------|----------------------------------|----------------------------------
f32|1.389425039|0.430447996
fp16|2.195472956|0.779729486
bf16|2.194051027|0.784599513

Using the following reproducer
```
import torch
import triton

def main():
    dtype = torch.float32
    dim = 1305301
    a = torch.rand(100, device="cuda", dtype=dtype)
    index = torch.randint(0, 100, (dim,), device="cuda")
    src = torch.rand(dim, device="cuda", dtype=dtype)

    print("=" * 20)
    print(
        triton.testing.do_bench(
            lambda: a.scatter_add(0, index, src),
            return_mode="median",
        )
    )
    print("=" * 20)

if __name__ == "__main__":
    main()
```

co-authored by: @amd-hhashemi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146264
Approved by: https://github.com/jeffdaily, https://github.com/mxz297

Co-authored-by: Hashem Hashemi <hashem.hashemi@amd.com>
2025-04-22 21:55:40 +00:00
Catherine Lee
e05ac9b794 Use folder tagged docker images for binary builds (#151706)
Should be the last part of https://github.com/pytorch/pytorch/pull/150558, except for maybe the s390x stuff, where I'm still not sure what's going on

For binary builds, do what we do in CI: tag each image with a hash of the `.ci/docker` folder to ensure that a docker image built from that commit gets used. Previously it would use `imagename:arch-main`, which could be a version of the image based on an older commit

After this, changing a docker image and then tagging with ciflow/binaries on the same PR should use the new docker images

Release and main builds should still pull from docker io

Cons:
* if someone rebuilds the image from main or from a PR where the hash is the same (e.g. the folder is unchanged but the docker build is retriggered for some reason), the release would use that image instead of one built on the release branch
* we have to spin-wait for the docker build to finish
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151706
Approved by: https://github.com/atalman
2025-04-22 21:50:10 +00:00
sumantro93
017a6bd593 add min/max_seqlen to non_differentiable (#151750)
Fixes #148988

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151750
Approved by: https://github.com/soulitzer
2025-04-22 21:46:02 +00:00
PyTorch MergeBot
835413baed Revert "[Optimus][Observability] Improve tlparse logging (#151635)"
This reverts commit 06a3c3c8cd.

Reverted https://github.com/pytorch/pytorch/pull/151635 on behalf of https://github.com/clee2000 due to broke dynamo/test_structured_trace.py::StructuredTraceTest::test_ddp_graphs [GH job link](https://github.com/pytorch/pytorch/actions/runs/14600342064/job/40970324075) [HUD commit link](06a3c3c8cd), test did fail on PR but dr ci says it matches an existing failure, which it does, but also this PR breaks the test too ([comment](https://github.com/pytorch/pytorch/pull/151635#issuecomment-2822538113))
2025-04-22 21:39:23 +00:00
PyTorch MergeBot
bc6c0bc344 Revert "Do not generate long log messaged for suppressed data dependent errors. (#151023)"
This reverts commit dfdf731579.

Reverted https://github.com/pytorch/pytorch/pull/151023 on behalf of https://github.com/laithsakka due to breaking other PRs ([comment](https://github.com/pytorch/pytorch/pull/151023#issuecomment-2822483635))
2025-04-22 21:08:30 +00:00
PyTorch MergeBot
459c62ee1d Revert "Do not log exception when recording is disabled or already recording (#151038)"
This reverts commit 73d95893a2.

Reverted https://github.com/pytorch/pytorch/pull/151038 on behalf of https://github.com/laithsakka due to breaking other PRs ([comment](https://github.com/pytorch/pytorch/pull/151023#issuecomment-2822483635))
2025-04-22 21:08:30 +00:00
PyTorch MergeBot
aaf71a481b Revert "Log information about suppressed data dependent errors (#151041)"
This reverts commit ccd00359da.

Reverted https://github.com/pytorch/pytorch/pull/151041 on behalf of https://github.com/laithsakka due to breaking other PRs ([comment](https://github.com/pytorch/pytorch/pull/151023#issuecomment-2822483635))
2025-04-22 21:08:30 +00:00
Scott Wolchok
2f74cffab2 Remove reinterpret_casts with undefined behavior from stable/library.h (#151595)
There is a list of valid uses of `reinterpret_cast` (see https://en.cppreference.com/w/cpp/language/reinterpret_cast), and the use here was not on the list, hence undefined behavior. Implement what we meant using memcpy, which is well-defined.
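For intuition, the Python analogue of the memcpy pattern (illustrative only; the actual fix is C++ in stable/library.h):
```python
import struct

def float_bits(x: float) -> int:
    # Copy the bytes of a float32 into a uint32 instead of reinterpreting
    # in place -- the same well-defined round-trip memcpy gives you in C++.
    return struct.unpack("<I", struct.pack("<f", x))[0]

print(hex(float_bits(1.0)))  # 0x3f800000
```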

Differential Revision: [D73200791](https://our.internmc.facebook.com/intern/diff/D73200791/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151595
Approved by: https://github.com/janeyx99
2025-04-22 20:24:47 +00:00
Wanchao Liang
3380a46b44 Fix DTensorTestBase to barrier with device ids (#150896)
try to get rid of the below annoying warnings when running the unit tests
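A minimal sketch of the barrier-with-device-ids pattern the title describes (`torch.distributed.barrier` accepts `device_ids` for NCCL; assumes an initialized process group):
```python
import torch
import torch.distributed as dist

# Passing the device id tells NCCL which GPU this rank's barrier uses,
# which avoids the warning about an unspecified device.
dist.barrier(device_ids=[torch.cuda.current_device()])
```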

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150896
Approved by: https://github.com/fegin
2025-04-22 20:22:55 +00:00
Shunting Zhang
a48ccf02f9 [Inductor] move alignment tests to a separate file (#151841)
This is pure code movement. test_torchinductor.py is already 15K lines of code, so move the alignment-related tests I added recently to a separate file. I need to add more tests of this kind.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151841
Approved by: https://github.com/jansel, https://github.com/eellison
2025-04-22 20:18:58 +00:00
rzou
596296fb0b [standalone_compile] Dynamic shape handling (#151788)
standalone_compile needs to get dynamic shape information from
somewhere. We add a new `dynamic_shapes` argument with three options:

1. from the passed-in graph (dynamic="from_graph"). This is the default.
2. from the example inputs, thereby specializing on them. (dynamic="from_example_inputs")
3. from the current tracing context (dynamic="from_tracing_context")

Options 1 and 3 are not exactly the same. Option 2 can also be used for more advanced things (e.g. specializing on one input but not the other).
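A hypothetical usage sketch (import path and signature are assumptions, not confirmed here):
```python
# Assumed entry point and signature -- treat as pseudocode.
from torch._inductor import standalone_compile

compiled = standalone_compile(gm, example_inputs, dynamic_shapes="from_graph")
# Or specialize on the example inputs instead:
compiled = standalone_compile(
    gm, example_inputs, dynamic_shapes="from_example_inputs"
)
```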

Most of this PR is tests.

Test Plan:
- a lot of new tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151788
Approved by: https://github.com/oulgen
2025-04-22 20:17:24 +00:00
Brian Hirsh
7e4b89ac6c fix spammy library deinit errors when user passes an invalid TORCH_LOGS argument (#151678)
fixes https://github.com/pytorch/pytorch/issues/151055. Thanks @desertfire for the patch that fixed this.

I was a bit careful about the test - I wanted to make sure the test accurately ensures that we don't regress and that our error message is not spammy when users enter an invalid `TORCH_LOGS=...` argument. But I tried to avoid using expecttests, since people occasionally add new logging artifacts and I didn't want to add too much churn by forcing this to fail CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151678
Approved by: https://github.com/desertfire, https://github.com/zou3519
2025-04-22 20:13:52 +00:00
PyTorch MergeBot
0bb9b89fb7 Revert "[compile][compile time traces] Add more dynamo traces (#151357)"
This reverts commit 607443b16b.

Reverted https://github.com/pytorch/pytorch/pull/151357 on behalf of https://github.com/wdvr due to stack in a weird state - reverting for now ([comment](https://github.com/pytorch/pytorch/pull/151357#issuecomment-2822369232))
2025-04-22 20:12:44 +00:00
Thomas Bohnstingl
d0d4e992f1 [associative_scan] Fixes for assoc_scan testcases (#149988)
This PR fixes some issues with the testcases of `associative_scan`, in particular the problem where the compile_mode is inadvertently always set to `none`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149988
Approved by: https://github.com/ydwu4
2025-04-22 20:09:12 +00:00
henrylhtsang
8ca7953d51 [cutlass backend] delay construction of cutlass presets to when called (#151875)
In hindsight, always constructing the dict is a bit silly. We should only construct it when we need it.
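A minimal sketch of the deferral pattern (illustrative, not the actual cutlass-backend code):
```python
from functools import lru_cache

@lru_cache(maxsize=1)
def _cutlass_presets() -> dict[str, list[str]]:
    # Built once, on first call, instead of eagerly at import time.
    return {"default": ["config0", "config1"]}  # placeholder contents
```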

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151875
Approved by: https://github.com/yangw-dev
2025-04-22 20:03:10 +00:00
titaiwangms
6cd1741985 [ONNX] Update decomposition logic to loop over onnx registry (#151826)
Fixes #150367

This PR builds the decomposition table from the ONNX registry, which includes registered ops beyond just ATen and prim. This will help keep the custom ops that are specified in the custom_translation table from being decomposed during ONNX export.
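A hedged sketch of the idea, with illustrative names: keep a decomposition only for ops the ONNX registry cannot already translate.
```python
def build_decomp_table(all_decomps, registry_supported_ops):
    # Ops the registry handles directly (including custom ops registered
    # via the custom_translation table) are kept out of the decomp table,
    # so export will not decompose them.
    return {
        op: decomp
        for op, decomp in all_decomps.items()
        if op not in registry_supported_ops
    }
```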
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151826
Approved by: https://github.com/justinchuby
2025-04-22 19:40:52 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
69ee6a9280 [Sana][HybridCache] Fix bug in detect_attr_assignment (#151824)
Summary: tree_flatten_with_map internally calls the unflatten function with a user-supplied function, but this function was not returning anything, causing the leaves to be None. This is wrong when the constructor is sensitive to this behaviour.
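The bug class is easy to reproduce with a generic pytree map (illustrative; not the actual detect_attr_assignment code):
```python
import torch.utils._pytree as pytree

def bad_fn(leaf):
    leaf * 2            # missing return -> every leaf becomes None

def good_fn(leaf):
    return leaf * 2

print(pytree.tree_map(bad_fn, {"a": 1}))   # {'a': None}
print(pytree.tree_map(good_fn, {"a": 1}))  # {'a': 2}
```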

Test Plan: CI

Differential Revision: D73388529

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151824
Approved by: https://github.com/bdhirsh
2025-04-22 19:39:50 +00:00
Aart J.C. Bik
337caacd4c Use more efficient mask to index computation (#151372)
This change addresses the third time/mem "spike" observed in

https://github.com/pytorch/pytorch/issues/151351

The change seems to perform better (time/mem) for both very sparse and very dense cases. It runs faster and claims less memory, as observed on both CPU and GPU. It even avoids OOM for larger cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151372
Approved by: https://github.com/eqy
2025-04-22 19:31:12 +00:00
Li-Huai (Allan) Lin
fbd29527d8 [MPS] Move ops modifiers to testing utils so other tests can reuse (#151781)
Test collection check:
```
python -m pytest test/test_mps.py --collect-only
```
Before:
```
6390 tests collected in 8.34s
```

After:
```
6390 tests collected in 7.71s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151781
Approved by: https://github.com/malfet
2025-04-22 19:19:52 +00:00
Oguz Ulgen
982062dfc4 Cache the value of torch_key in subproc (#151057)
No need to recalculate torch_key in subprocesses; let's pass it from the main process.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151057
Approved by: https://github.com/jamesjwu, https://github.com/masnesral
2025-04-22 18:54:06 +00:00
zeshengzong
fa0f13b90b Fix doc requirements install error (#151787)
Fixes #151786

Change the versions in the docs requirements to be consistent with the versions in the [CI version file](https://github.com/pytorch/pytorch/blob/main/.ci/docker/requirements-docs.txt), which changed in #149331

### Test Result

![image](https://github.com/user-attachments/assets/f8646c03-116f-4f1c-b017-11b70995626b)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151787
Approved by: https://github.com/malfet
2025-04-22 18:33:44 +00:00
Shivam Raikundalia
4bf09562e4 [EZ/Profiler] Update Submodule (#151843)
Summary: Update to d82680bbd4

Test Plan: CI

Differential Revision: D73397323

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151843
Approved by: https://github.com/Skylion007, https://github.com/aaronenyeshi
2025-04-22 18:19:43 +00:00
zeshengzong
834a017fe3 Optimize register_full_backward_hook description when all input no grad (#151785)
Fixes #100528

## Test Result

### Before

![image](https://github.com/user-attachments/assets/5dd2e1d3-3bb1-49d0-84bf-8a7a6b18fa4b)

### After

![image](https://github.com/user-attachments/assets/2e16d17b-1586-40d8-b0ef-35559fc064f4)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151785
Approved by: https://github.com/soulitzer
2025-04-22 17:57:31 +00:00
Tugsbayasgalan Manlaibaatar
2c27597d6a Infra for handling builtin ops (min, max, math.pow) (#151348)
Reapply of https://github.com/pytorch/pytorch/pull/150003

Differential Revision: [D73050801](https://our.internmc.facebook.com/intern/diff/D73050801/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151348
Approved by: https://github.com/zhxchen17
ghstack dependencies: #151347
2025-04-22 17:20:09 +00:00
Shangdi Yu
264e8fb151 More fix for aot_export_module name collision during unlifting (#151684)
Summary: Also check the module's named buffers and parameters when resolving name collisions

Test Plan:
```
buck2 run mode/dev-nosan caffe2/test/inductor:test_aot_inductor -- -r aoti_constant_tensor_name_collision
```

Differential Revision: D73264885

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151684
Approved by: https://github.com/angelayi
2025-04-22 16:59:33 +00:00
Menglu Yu
06a3c3c8cd [Optimus][Observability] Improve tlparse logging (#151635)
Summary: We improve tlparse logging for Optimus graph transformations to enable easier debugging

Test Plan:
```
TORCH_TRACE=~/my_trace_log_dir CUDA_VISIBLE_DEVICES=5 buck2 run mode/opt //aps_models/ads/ecosystem/tooling/tools/efficient_module_suite/pyper_models:pyper_model_perf_benchmark -- --flow_id 720055919 --shrink_model --mfu_profile_module "impl.shared_arch.dense_sparse_interaction" --use_synthetic_data
```

Differential Revision: D73229681

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151635
Approved by: https://github.com/Yuzhen11
2025-04-22 16:56:08 +00:00
Thanh Ha
5fc1eb85fc Add OIDC permissions to bazel workflow (#151456)
Update workflow to use OIDC authentication to access AWS resources rather than assuming the runner's default role. This is part of the multicloud effort to prepare jobs to support being run in non-AWS clouds.

The JWT ID token requires `id-token: write` in order to create the token for the job. See: https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers#adding-permissions-settings

Ref: pytorch-fdn/multicloud-ci-infra#3

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151456
Approved by: https://github.com/malfet
2025-04-22 16:54:14 +00:00
Shangdi Yu
5d316ce0d0 Add device check for inputs (#151828)
Summary: Generate device checks for inputs in AOTI. Enable with AOTI_RUNTIME_CHECK_INPUTS=1

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_runtime_checks_device_type_failed
```

Differential Revision: D73382824

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151828
Approved by: https://github.com/angelayi
2025-04-22 16:36:27 +00:00
PyTorch MergeBot
3804aed32e Revert "[Inductor] Add Additional Configs for persistent+TMA version of Triton mm and addmm (#150587)"
This reverts commit 99aeee2c5f.

Reverted https://github.com/pytorch/pytorch/pull/150587 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking internally (see D73410693). To validate your fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts ([comment](https://github.com/pytorch/pytorch/pull/150587#issuecomment-2821828926))
2025-04-22 16:15:55 +00:00
PyTorch MergeBot
4504910843 Revert "[ez] Make relaxed constraint error message more user friendly (#151407)"
This reverts commit e0f05229e9.

Reverted https://github.com/pytorch/pytorch/pull/151407 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking internally (see D73198095). To validate your fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts. ([comment](https://github.com/pytorch/pytorch/pull/151407#issuecomment-2821819654))
2025-04-22 16:12:42 +00:00