Commit Graph

90994 Commits

Author SHA1 Message Date
Xuehai Pan
efcf87654e [CI] update flake8 and mypy lint dependencies (#158720)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158720
Approved by: https://github.com/Skylion007
2025-07-29 08:05:56 +00:00
Laith Sakka
2523e58781 unbacked handling for view_copy (#159244)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159244
Approved by: https://github.com/bobrenjc93
2025-07-29 07:10:46 +00:00
Jane Xu
222fa451a2 Move some of vec into headeronly in preparation for Half.h (#158976)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158976
Approved by: https://github.com/albanD, https://github.com/desertfire
2025-07-29 05:43:53 +00:00
Jason Ansel
6de24135e5 Fix flaky test_inductor_multiple_specializations (#159264)
Summary: This test was using do_bench, so it was flaky because performance is non-deterministic.

Test Plan:
buck test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:compile_subprocess -- --exact 'caffe2/test/inductor:compile_subprocess - test_inductor_multiple_specializations_cuda (caffe2.test.inductor.test_compile_subprocess.GPUTests)' --run-disabled

Rollback Plan:

Differential Revision: D79098692

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159264
Approved by: https://github.com/jingsh
2025-07-29 05:16:55 +00:00
henrylhtsang
27ae72036d [cutlass] Prep for cutlass upgrade by ignoring Wunused-but-set-variable (#159276)
Differential Revision: [D79106238](https://our.internmc.facebook.com/intern/diff/D79106238/)

This is in prep for cutlass upgrade.

More context: https://github.com/NVIDIA/cutlass/issues/2487

Tested in https://github.com/pytorch/pytorch/pull/159115
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159276
Approved by: https://github.com/adamomainz, https://github.com/njriasan, https://github.com/Skylion007
2025-07-29 04:40:24 +00:00
Sherlock Huang
e924df23a6 [NativeRT] Strengthen matcher check for StaticDispatch kernel (#159187)
Summary:
Strengthen the matcher for StaticDispatch kernels: all input and output tensors must be on CPU, and all Device-typed attributes must be CPU.

Previously, we only checked that output tensors are on CPU. This misses the case where we do a DeviceToHost aten._to_copy.

Prepare for turning on static dispatch kernel by default.
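
For illustration, a minimal Python sketch of the strengthened condition; the node accessors (`inputs`, `outputs`, `device_attrs`) are hypothetical stand-ins, not the actual NativeRT matcher API:

```python
# Hypothetical sketch: static dispatch is eligible only if every
# input/output tensor and every Device-typed attribute lives on CPU.
def can_static_dispatch(node):
    tensors = list(node.inputs) + list(node.outputs)  # hypothetical accessors
    if any(t.device.type != "cpu" for t in tensors):
        # Checking inputs too catches DeviceToHost aten._to_copy
        # (CUDA input, CPU output), which an output-only check misses.
        return False
    return all(d.type == "cpu" for d in node.device_attrs)
```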

Test Plan:
I should add some tests before landing.

Rollback Plan:

Differential Revision: D78747600

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159187
Approved by: https://github.com/dolpm
2025-07-29 04:03:49 +00:00
fduwjj
67e68e0785 [c10d] Cleanup split_group logic using the newly built splitGroup (#158488)
With https://github.com/pytorch/pytorch/pull/157716 merged, we want to further clean up the Python-side code for the `split_group` API. We do need to keep some of the old global bookkeeping for BC; the rest of the logic now lives in C++. Regarding the change brought in by https://github.com/pytorch/pytorch/pull/152175, we cleaned it up in https://github.com/pytorch/pytorch/pull/158790 (including internal changes) so that it can be safely removed.

Differential Revision: [D78777152](https://our.internmc.facebook.com/intern/diff/D78777152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158488
Approved by: https://github.com/d4l3k
2025-07-29 03:27:11 +00:00
Xuehai Pan
775788f93b [BE][PYFMT] migrate PYFMT for test/[i-z]*/ to ruff format (#144556)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144556
Approved by: https://github.com/ezyang
2025-07-29 03:26:09 +00:00
Mu-Chu Lee
19ce1beb05 [AOTInductor] Add test for enabling CUDACachingAllocator for AOTInductor's Weight (#159279)
Summary:
Add test for enabling CUDACachingAllocator for AOTInductor's Weight.
Implementation TBD

Test Plan:
N/A; this commit adds a test.

Rollback Plan:

Differential Revision: D79107507

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159279
Approved by: https://github.com/desertfire, https://github.com/jingsh
2025-07-29 02:52:10 +00:00
Guilherme Leobas
a91ddea61f Add CPython tests for collections module (#158950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158950
Approved by: https://github.com/zou3519
2025-07-29 02:24:27 +00:00
William Wen
ffccb90ff4 [dynamo, docs] add fullgraph=False docs (#159050)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159050
Approved by: https://github.com/svekars, https://github.com/anijain2305
ghstack dependencies: #157985, #158055, #158531
2025-07-29 01:53:47 +00:00
William Wen
f916f34739 [dynamo, docs] non-strict programming model docs (#158531)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158531
Approved by: https://github.com/AlannaBurke, https://github.com/mlazos, https://github.com/anijain2305
ghstack dependencies: #157985, #158055

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
2025-07-29 01:53:47 +00:00
William Wen
c32994ce4b [docs, dynamo] add fullgraph=True, common graph breaks docs (#158055)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158055
Approved by: https://github.com/AlannaBurke, https://github.com/anijain2305
ghstack dependencies: #157985

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
2025-07-29 01:53:41 +00:00
William Wen
433e43cbec [dynamo, docs] programming model dynamo core concepts (#157985)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157985
Approved by: https://github.com/svekars, https://github.com/anijain2305
2025-07-29 01:53:34 +00:00
Xia, Weiwen
e469414b59 [CPU] fix _weight_int8pack_mm with large output shape (#158341)
**Summary**
`_weight_int8pack_mm` on CPU may cause a segmentation fault if the output shape is large (i.e., M * N is large). This is because the kernel computes the output buffer address with
```c++
auto* C_ptr = C_data + mb_start * N + nb_start;
```
where both `mb_start` and `N` are `int`, so when they are large their product may overflow.
The solution is simple: declare these variables as `int64_t` so that the product won't overflow.
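
For concreteness, a quick arithmetic check (illustrative sizes, not taken from the failing test) showing how easily the 32-bit product overflows:

```python
# mb_start * N is computed in C++ `int` arithmetic; once the product
# exceeds INT_MAX it wraps, corrupting the output pointer offset.
INT_MAX = 2**31 - 1
mb_start, N = 50_000, 50_000
print(mb_start * N)            # 2500000000
print(mb_start * N > INT_MAX)  # True -> overflows a 32-bit int
```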

**Test plan**
```
pytest -sv test/test_linalg.py -k test__int8_mm_large_shape
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158341
Approved by: https://github.com/mingfeima, https://github.com/drisspg
2025-07-29 01:14:50 +00:00
rzou
657e5e9aa6 All custom operators go through Inductor's graph.call_function (#159174)
Fixes #158892

All custom operators should go through the graph.call_function path. The
other fallback path is for aten/prim operations that lack support for
certain features (like torch.float8_e8m0fn).
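
As a hedged illustration of the kind of custom operator this routes through `graph.call_function` (the operator itself is made up for the example):

```python
import torch

# A toy custom operator; with this change, Inductor lowers calls to it
# via graph.call_function instead of the aten/prim fallback path.
@torch.library.custom_op("demo::double", mutates_args=())
def double(x: torch.Tensor) -> torch.Tensor:
    return x * 2

@double.register_fake
def _(x):
    return torch.empty_like(x)

@torch.compile
def f(x):
    return double(x) + 1

out = f(torch.randn(3))
```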

Test Plan:
- new tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159174
Approved by: https://github.com/eellison
2025-07-29 00:31:57 +00:00
Nikita Shulga
f02b783aae [1/N] Remove MacOS-13 MPS testing (#159278)
Starts addressing https://github.com/pytorch/pytorch/issues/159275
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159278
Approved by: https://github.com/dcci
ghstack dependencies: #159277
2025-07-28 23:52:47 +00:00
Xu Han
8ad96a563c [inductor] normalize path of the code. (#159255)
Error stack: [screenshot in the PR description]

After fix: [screenshot in the PR description]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159255
Approved by: https://github.com/desertfire
2025-07-28 23:42:11 +00:00
PyTorch MergeBot
59e261bbd8 Revert "[CI] update flake8 and mypy lint dependencies (#158720)"
This reverts commit f5130bf339.

Reverted https://github.com/pytorch/pytorch/pull/158720 on behalf of https://github.com/yangw-dev due to this PR failing internally when building torchgen with `error: fail: Unknown PyPI project: pyyaml`; it seems this is caused by changing PyYAML into pyyaml, please fix it ([comment](https://github.com/pytorch/pytorch/pull/158720#issuecomment-3129995414))
2025-07-28 22:02:10 +00:00
Catherine Lee
08ea8fccaf [ez][docker] Remove some unused vars and scripts (#158680)
`CUDNN_VERSION` isn't used in any Dockerfiles; it's picked automatically based on the CUDA version in `install_cuda.sh`

`install_cudnn.sh` isn't used anywhere, cudnn installation happens in `install_cuda.sh`

I didn't find any mentions of `GRADLE_VERSION` or `TENSORRT_VERSION`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158680
Approved by: https://github.com/janeyx99, https://github.com/atalman, https://github.com/malfet
2025-07-28 21:44:47 +00:00
atalman
41754539be Add 3.14 triton wheel build (#159261)
Related to https://github.com/pytorch/pytorch/issues/156856

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159261
Approved by: https://github.com/malfet, https://github.com/albanD
2025-07-28 20:34:16 +00:00
Nikita Shulga
716d52779f [BE] Delete non-existing labels (#159277)
As no such runners have been online for the last 2+ months.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159277
Approved by: https://github.com/clee2000
2025-07-28 20:28:57 +00:00
Michael Lazos
3bf41f26c8 [cutlass] rename EVT args within kernels for code caching (#159243)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159243
Approved by: https://github.com/henrylhtsang
2025-07-28 19:01:40 +00:00
Eddie Yan
19aa8eb4f5 [TF32][Flex Attention] Turn off TF32 for reference computation in test_flex_decoding (#158979)
This seems to avoid threshold (fudge factor) twiddling games, as it causes the checks to go down the "very small ref error" path instead.
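
A minimal sketch of the kind of change this implies for the reference path (global toggles shown for brevity; the test likely scopes this more narrowly):

```python
import torch

# Compute the fp32 reference with TF32 off so its error stays tiny and
# the comparison takes the "very small ref error" path.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```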

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158979
Approved by: https://github.com/drisspg, https://github.com/BoyuanFeng, https://github.com/nWEIdia
2025-07-28 18:38:23 +00:00
Animesh Jain
8c0c5c58c7 [benchmarks] Set model name early to keep warmup and main model same (#159231)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159231
Approved by: https://github.com/williamwen42
ghstack dependencies: #159209
2025-07-28 18:18:16 +00:00
Xiaochang Wu
2d1e92307d Partitioner: Fix to align partition node order with original graph (#157892)
Fixes #157891

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157892
Approved by: https://github.com/ezyang
2025-07-28 17:36:29 +00:00
Lucca Bertoncini
399c89e15c fix torch/distributed contributing doc (#158934)
Both pointers were pointing to a page of empty GitHub issues. I'm updating them to point to all issues tagged with `pt_distributed_rampup`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158934
Approved by: https://github.com/d4l3k
2025-07-28 17:01:05 +00:00
PyTorch MergeBot
14d67eec05 Revert "[dynamo][fsdp] Consistent behavior of int attributes (#157262)"
This reverts commit 9b4d938f04.

Reverted https://github.com/pytorch/pytorch/pull/157262 on behalf of https://github.com/ZainRizvi due to This was reverted internally. Somehow this PR didn't get reverted alongside it. See D78772867. To validate your fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts ([comment](https://github.com/pytorch/pytorch/pull/157262#issuecomment-3128148475))
2025-07-28 16:58:27 +00:00
Benson Ma
9ad7dd54f9 [fbgemm_gpu] Upgrade KernelLauncher kernelLaunchCheck to print help string (#158896)
Summary: Upgrade KernelLauncher kernelLaunchCheck to print a help string, following D78440016.

Test Plan:
```
buck test 'fbcode//mode/opt' fbcode//deeplearning/fbgemm/fbgemm_gpu/test/utils:kernel_launcher
```

Rollback Plan:

Differential Revision: D78572009

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158896
Approved by: https://github.com/atalman
2025-07-28 16:11:13 +00:00
Deepak Seshadri
387db86ef1 Name Inductor's Subproc pool threads. (#158815)
Differential Revision: D78710371

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158815
Approved by: https://github.com/d4l3k
2025-07-28 16:08:08 +00:00
dolpm
e5a1d839c5 [nativert] ensure planner once flag is class-local, not static. (#159116)
Summary: as titled - otherwise only one global planner will be created even though we need it to be per-model when models are colocated.

Differential Revision: D78939141

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159116
Approved by: https://github.com/SherlockNoMad
2025-07-28 16:06:21 +00:00
zhxchen17
c06164a9c5 [nativert][ez] Remove unused dist collectives ops. (#159220)
Removing the dependency on c10d/ in ExecutionFrame.h. We don't need c10d::Work in the frame.

Differential Revision: [D79041618](https://our.internmc.facebook.com/intern/diff/D79041618/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159220
Approved by: https://github.com/SherlockNoMad, https://github.com/dolpm
2025-07-28 16:03:14 +00:00
Karim Abou Zeid
c7586d4ed3 typo (#156560)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156560
Approved by: https://github.com/albanD, https://github.com/Skylion007
2025-07-28 15:40:06 +00:00
thenumberouscode
8e07c9870d [dynamo] [guard] Add caching for inside torch.compile.disable function to avoid unnecessary recompilation. (#157566)
Calling a function decorated with torch.compile.disable from inside a compiled region always triggers recompilation, because the user function decorated with torch._dynamo.disable is used as an argument to the resume_in_xx function. In the current implementation, it is always a new object, so the ID_MATCH guard always fails and triggers recompilation.

Fixes https://github.com/pytorch/pytorch/issues/157399
@xmfan
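
A minimal repro sketch of the failure mode (names are illustrative):

```python
import torch

@torch._dynamo.disable
def helper(x):
    return x + 1

@torch.compile
def fn(x):
    # The call to `helper` causes a graph break; the disabled function is
    # passed into the generated resume_in_* function, and before this fix
    # the ID_MATCH guard on that fresh wrapper object failed on every call.
    return helper(x) * 2

fn(torch.randn(4))
fn(torch.randn(4))  # previously recompiled here; now reuses the cache
```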

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157566
Approved by: https://github.com/mlazos, https://github.com/anijain2305
2025-07-28 12:44:22 +00:00
PyTorch UpdateBot
a76147c9e0 [xla hash update] update the pinned xla hash (#158223)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned xla hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158223
Approved by: https://github.com/pytorchbot
2025-07-28 11:19:05 +00:00
pzzp
f3913ea641 [CUDA] fix nansum in non-JIT build (#158633)
This change fixes a crash in:
```
import torch
a = torch.tensor([[1, 2]], dtype=torch.complex32).to('cuda')
b = torch.nansum(a, dim=0)
print(b)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158633
Approved by: https://github.com/ngimel
2025-07-28 08:11:32 +00:00
Sherlock Huang
1abff80fae Reland D78841818 (#159216)
Summary: Relanding D78841818 with fixes

Test Plan:
Tested all previously failing tests:

buck build --config fbcode.use_link_groups=true --flagfile fbcode//mode/dev-nosan fbcode//sigmoid/core/executor/memory/test:layout_planner_tests

buck test 'fbcode//mode/opt' fbcode//sigmoid/inference/test:test_passes

Rollback Plan:

Reviewed By: hl475

Differential Revision: D79038615

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159216
Approved by: https://github.com/dolpm
2025-07-28 07:39:35 +00:00
zeshengzong
799303f655 Fix atleast_{1,2,3}d() with no arguments description (#156042)
Fixes #130667

## Test Result

### Before
[screenshots of the previous docs rendering]

### After

[screenshots of the updated docs rendering]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156042
Approved by: https://github.com/zou3519
2025-07-28 06:25:23 +00:00
PyTorch MergeBot
d26ab281d2 Revert "Setup TorchBench in Docker (#158613)"
This reverts commit d72ebefe3f.

Reverted https://github.com/pytorch/pytorch/pull/158613 on behalf of https://github.com/XuehaiPan due to the checkout_install_torchbench function being removed but still referenced in trunk ([comment](https://github.com/pytorch/pytorch/pull/158613#issuecomment-3125695250))
2025-07-28 06:19:00 +00:00
PyTorch MergeBot
1cffb217ef Revert "[Profiler] Fix lost C call events problem in Python 3.12.0-3.12.4 (#155446)"
This reverts commit e88f804a2e.

Reverted https://github.com/pytorch/pytorch/pull/155446 on behalf of https://github.com/XuehaiPan due to Breaks Windows wheels ([comment](https://github.com/pytorch/pytorch/pull/155446#issuecomment-3125566269))
2025-07-28 05:29:37 +00:00
PyTorch UpdateBot
c8342b7231 [vllm hash update] update the pinned vllm hash (#159235)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159235
Approved by: https://github.com/pytorchbot
2025-07-28 04:16:31 +00:00
Animesh Jain
f63673626d [dynamo][guards] Skip guards on constant func.__defaults__ elements (#159209)
`func.__defaults__` is a tuple, so we can skip guards on its immutable elements. Mutable elements are still guarded.
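
A small illustration of the distinction (the function is hypothetical):

```python
# f.__defaults__ == (2, {"k": 1}); the int 2 is immutable and needs no
# guard, while the dict default can be mutated between calls, so it is
# still guarded.
def f(x, scale=2, table={"k": 1}):
    return x * scale + table["k"]

print(f.__defaults__)  # (2, {'k': 1})
```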

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159209
Approved by: https://github.com/jansel
2025-07-27 22:46:17 +00:00
Sampath Victor
37638c303e Addressing some linter errors (#158670)
Summary: Addressing the linter errors reported in the changed files.

Test Plan:
```
buck test mode/opt deeplearning/fbgemm:QuantUtilsTest
```
https://www.internalfb.com/intern/testinfra/testrun/11821949118528688

```
buck test mode/opt caffe2/torch/fb/model_transform/splitting/tests:split_dispatcher_test
```
https://www.internalfb.com/intern/testinfra/testrun/7881299627525465

Rollback Plan:

Differential Revision: D78352311

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158670
Approved by: https://github.com/excelle08, https://github.com/cyyever, https://github.com/digantdesai
2025-07-27 21:55:50 +00:00
Max Podkorytov
ee2edf3d37 [ROCm][CK][Inductor] enable gfx950 for max autotune with CK (#159195)
+ update inductor config for new gfx arch
+ fixes in codegen for conv2d and ck-tile matmul
+ use appropriate fp8 dtypes
+ test cleanup

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159195
Approved by: https://github.com/chenyang78
2025-07-27 20:47:13 +00:00
Chinmay Shrivastava
51eb41a57e Enable dynamic shapes for foreach operations by default (#158985)
## Summary

This PR changes the default value of `combo_kernel_foreach_dynamic_shapes` from `False` to `True` in `torch/_inductor/config.py`.

## Context

The `combo_kernel_foreach_dynamic_shapes` configuration was introduced in PR #134477 (August 2024) to support dynamic shapes for foreach and combo kernels. It was initially disabled by default as a conservative approach to avoid disrupting production workflows.

## Why This Change?

After several months of the feature being available and stable, it's time to enable it by default. This improves the user experience for developers using `torch.compile(dynamic=True)` with foreach operations.

### Current behavior:
- Users must manually discover and enable `combo_kernel_foreach_dynamic_shapes`
- Without this flag, foreach operations may fail with dynamic shapes
- This creates friction and confusion

### With this change:
- Foreach operations work seamlessly with dynamic compilation
- No manual configuration needed
- Better "it just works" experience
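
As a hedged sketch of what now works without manual configuration (the commented override shows what was previously required):

```python
import torch
import torch._inductor.config as inductor_config

# Previously required opt-in; this PR makes it the default:
# inductor_config.combo_kernel_foreach_dynamic_shapes = True

@torch.compile(dynamic=True)
def scale_all(tensors):
    return torch._foreach_mul(tensors, 2.0)

out = scale_all([torch.randn(n) for n in (8, 16, 32)])
```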

## Testing

Extensive testing was performed with PyTorch 2.5.0+ and 2.7.1:
- Various tensor sizes (8, 16, 32, 64, 128)
- Multiple tensors in operations (tested up to 20)
- Nested foreach operations
- Mixed operations (foreach + standard operations)
- Both CPU and CUDA devices
- Symbolic shapes with dynamic compilation

## Impact Assessment

- **Performance**: No impact - this only affects compilation behavior
- **Backward Compatibility**: Fully maintained - users can still set to `False`
- **Risk**: Minimal - feature has been stable since August 2024

## References

- Original implementation: PR #134477 by @qchip
- This completes the feature rollout by making it available by default

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158985
Approved by: https://github.com/jansel, https://github.com/mlazos
2025-07-27 19:56:07 +00:00
Howard Huang
ede6186c86 [PP] Allow intermediate nodes in ZB to have multiple grads (#159084)
Fixes a ZB regression (https://github.com/pytorch/torchtitan/actions/runs/16478292562/job/46585646792)

Previously we only allowed an intermediate node to have one gradient. Recently a torchtitan ZB test started failing, and I tracked it back to FusedRMSNorm's grad_fn having two values `(grad, None)` (see https://github.com/pytorch/pytorch/pull/153666), which broke our ZB tests.

This PR allows `stage_backward_weight` intermediate nodes to have multiple grads (it sums them together, ignoring any grad value that is None). Here is an example where the backward would have two grad values (gI1, gI2):

```python
class Func(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x, 2
    @staticmethod
    def backward(ctx, gI1, gI2):
        assert gI2 is None
        return gI1
```
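
A hedged sketch of the accumulation rule described above (not the actual `stage_backward_weight` code):

```python
# Sum multiple incoming grads for an intermediate node, skipping None
# entries (e.g. the (grad, None) pair produced by FusedRMSNorm's grad_fn).
def combine_grads(grads):
    acc = None
    for g in grads:
        if g is None:
            continue
        acc = g if acc is None else acc + g
    return acc
```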

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159084
Approved by: https://github.com/tianyu-l
2025-07-27 19:16:51 +00:00
Nikita Shulga
6d071bd65d Remove numpy dependency from onnx (#159177)
One should not expect numpy to be there during onnx import
Forward fix for: https://github.com/pytorch/pytorch/pull/157734
Added a regression test to the `test_without_numpy` function.

Test plan: Run `python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch; import torch.onnx"` with/without this fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159177
Approved by: https://github.com/atalman, https://github.com/justinchuby, https://github.com/titaiwangms, https://github.com/cyyever, https://github.com/Skylion007, https://github.com/andrewboldi
2025-07-27 13:23:03 +00:00
cyy
d742a2896c Remove tensorexpr tests (#158928)
The tests are not maintained.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158928
Approved by: https://github.com/albanD, https://github.com/malfet
2025-07-27 07:13:27 +00:00
Xu Han
11d6559a58 [inductor] disable failed UTs of test_misc.py (#159210)
Disable failing UTs.

[screenshot of the failing tests]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159210
Approved by: https://github.com/jansel

Co-authored-by: Jason Ansel <jansel@jansel.net>
2025-07-27 05:41:44 +00:00
PyTorch UpdateBot
e7667e5702 [vllm hash update] update the pinned vllm hash (#159217)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159217
Approved by: https://github.com/pytorchbot
2025-07-27 04:16:35 +00:00