Huy Do
c15638d803
Enable swap on all Linux jobs ( #143316 )
...
A swapfile on the Linux runners has been prepared by https://github.com/pytorch/test-infra/pull/6058 , so this PR does two things:
* Start using the swapfile on all Linux build and test jobs
* Test the rollout: https://github.com/pytorch-labs/pytorch-gha-infra/pull/582
### Testing
Run `swapon` inside the container and the swapfile shows up correctly:
```
jenkins@259dfb0a314c:~/workspace$ swapon
NAME      TYPE SIZE USED PRIO
/swapfile file   3G 256K   -2
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143316
Approved by: https://github.com/ZainRizvi , https://github.com/atalman
2024-12-17 02:12:24 +00:00
Michael Lazos
cb4c614ed6
[foreach-map] Add tests for backward ( #143282 )
...
Adds tests for unary and binary foreach_map with backward.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143282
Approved by: https://github.com/eellison
2024-12-17 02:08:12 +00:00
PyTorch MergeBot
533d63f83b
Revert "FileTimerClient: add retry logic on connect ( #143318 )"
...
This reverts commit b3fb8f8a3a .
Reverted https://github.com/pytorch/pytorch/pull/143318 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lint jobs in trunk ([comment](https://github.com/pytorch/pytorch/pull/143318#issuecomment-2547342910 ))
2024-12-17 02:06:52 +00:00
cyy
201cb8834f
Enable more C++ warnings ( #143099 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143099
Approved by: https://github.com/albanD
2024-12-17 02:03:39 +00:00
Yifu Wang
af190479c8
[fused_all_gather_matmul] use _multimem_all_gather_matmul for small global Ms ( #143160 )
...
## Benchmark
M=2048, N=3584, K=8192
* baseline (nccl + cublas): 301us
* decomp-based async-tp: 354us
* comm-aware async-tp: 295us
* **multimem_all_gather matmul: 277us**
As M decreases further, the multimem_all_gather approach consistently outperforms the baseline and the other approaches (the other approaches are omitted from the chart as they start to be slower than the baseline):
[chart omitted: latency vs. M]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143160
Approved by: https://github.com/weifengpy
ghstack dependencies: #142283 , #142810 , #143159
2024-12-17 01:07:27 +00:00
Yifu Wang
286921b39e
[fused_all_gather_matmul] introduce an argument to specify whether the all-gather result needs to be returned ( #143159 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143159
Approved by: https://github.com/weifengpy
ghstack dependencies: #142283 , #142810
2024-12-17 01:07:27 +00:00
Yifu Wang
6fae60a34a
[SymmetricMemory] introduce multimem_all_gather ( #142810 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142810
Approved by: https://github.com/weifengpy
ghstack dependencies: #142283
2024-12-17 01:07:27 +00:00
PyTorch MergeBot
519d858c31
Revert "Kill capture_pre_autograd_graph API ( #143224 )"
...
This reverts commit 4c62275325 .
Reverted https://github.com/pytorch/pytorch/pull/143224 on behalf of https://github.com/huydhn due to Sorry for reverting your change but the XLA failure is legit ([comment](https://github.com/pytorch/pytorch/pull/143224#issuecomment-2547264675 ))
2024-12-17 00:47:24 +00:00
Will Constable
9d57a39541
[C10D] Update docs for wait() ( #143305 )
...
Clarify that the currently active stream, not the default stream, is the one that will be blocked by a call to wait(), and also point out that the CPU is not blocked by the call for CUDA/NCCL collectives.
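A minimal usage sketch of the documented semantics (assumes an already-initialized NCCL process group; shapes are illustrative):
```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group("nccl") was called and the rank's
# CUDA device is set.
t = torch.ones(1024, device="cuda")

work = dist.all_reduce(t, async_op=True)
# wait() blocks the *currently active* CUDA stream, not the default
# stream; for CUDA/NCCL collectives the CPU is not blocked here.
work.wait()

# Only a host-side sync such as .item() blocks the CPU.
print(t.sum().item())
```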
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143305
Approved by: https://github.com/LucasLLC , https://github.com/ngimel
2024-12-17 00:41:11 +00:00
Tristan Rice
b3fb8f8a3a
FileTimerClient: add retry logic on connect ( #143318 )
...
Fixes #143188
The FIFO server binds from a thread; in rare cases the client connects before the server thread starts. This adds a retry when opening the FIFO socket in non-blocking mode, waiting up to 1s for the server to start, which balances fast error messages with some wiggle room on the server side.
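A rough sketch of the retry pattern described above (hypothetical helper; not the exact PR diff):
```python
import errno
import os
import time

def open_fifo_with_retry(path: str, timeout: float = 1.0,
                         interval: float = 0.01) -> int:
    # Opening a FIFO with O_WRONLY | O_NONBLOCK fails with ENXIO until
    # the server side has opened it for reading, so retry briefly.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return os.open(path, os.O_WRONLY | os.O_NONBLOCK)
        except OSError as e:
            if e.errno != errno.ENXIO or time.monotonic() >= deadline:
                raise
            time.sleep(interval)
```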
Test plan:
```
pytest --minutes 10 test/distributed/elastic/timer/file_based_local_timer_test.py -k test_watchdog_call_count -x
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143318
Approved by: https://github.com/fegin
2024-12-17 00:36:10 +00:00
Andrew Gu
90fb7c36ab
[FSDP2] Clamp reduce_dtype in lazy init ( #143297 )
...
Fixes https://github.com/pytorch/pytorch/issues/143277 by moving the clamping of `reduce_dtype` to `None` into lazy init (the same place where `param_dtype` can be clamped to `None`).
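In spirit, the change amounts to something like the following (a sketch under stated assumptions; the real logic lives in FSDP2's mixed-precision setup and the names here are hypothetical):
```python
def clamp_policy_dtypes(orig_dtype, param_dtype, reduce_dtype):
    # Run during lazy init, once the original parameter dtype is known.
    if param_dtype == orig_dtype:
        param_dtype = None  # no compute-dtype cast needed
    # reduce_dtype only matters if it differs from the effective compute
    # dtype; clamping it here rather than at construction avoids the
    # problem reported in #143277.
    if reduce_dtype == (param_dtype or orig_dtype):
        reduce_dtype = None
    return param_dtype, reduce_dtype
```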
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143297
Approved by: https://github.com/weifengpy
2024-12-17 00:25:08 +00:00
atalman
dd2cd4279e
Create build_directory if it does not exist when generating ninja build file ( #143328 )
...
Fixes: https://github.com/pytorch/vision/issues/8816
I am observing this failure on Windows Python 3.13 vision builds:
```
Emitting ninja build file C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release\build.ninja...
error: [Errno 2] No such file or directory: 'C:\\actions-runner\\_work\\vision\\vision\\pytorch\\vision\\build\\temp.win-amd64-cpython-313\\Release\\build.ninja'
ERROR conda.cli.main_run:execute(49): `conda run packaging/windows/internal/vc_env_helper.bat python setup.py bdist_wheel` failed. (See above for error)
```
Adding the directory-creation code fixes it, confirmed by running `python setup.py bdist_wheel`:
```
building 'torchvision._C' extension
Emitting ninja build file C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release\build.ninja...
Creating build directory C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/26] cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc -Dtorchvision_EXPORTS -IC:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\torch\csrc\api\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\TH -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\THC -IC:\actions-runner\_work\_temp\conda_environment_12361066769\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Include "-IC:\Pr
```
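The fix boils down to creating the enclosing build directory before the ninja file is written. A minimal sketch (standard library only; `write_ninja_file` is a hypothetical stand-in for the real code path in `torch.utils.cpp_extension`):
```python
import os

def write_ninja_file(path: str, contents: str) -> None:
    # Create the enclosing build directory if it does not already exist;
    # otherwise open() fails with [Errno 2] as in the log above.
    os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)
    with open(path, "w") as f:
        f.write(contents)
```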
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143328
Approved by: https://github.com/kit1980 , https://github.com/albanD
2024-12-17 00:20:43 +00:00
Bin Bao
467970d683
[AOTI] Relax input alignment assertion ( #143236 )
...
Summary: https://github.com/pytorch/pytorch/pull/142136 added a runtime alignment assertion, but that assumption is probably too strict for more flexible uses of AOTI, e.g. Python deployment; see a recent error torchchat ran into for details: https://github.com/pytorch/torchchat/actions/runs/12322072267/job/34394851280 . This PR relaxes the runtime check and implements copy_misaligned_inputs in C++ instead.
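The idea, sketched at the Python level (the PR does this in C++ inside the AOTI runtime; names here are illustrative):
```python
import torch

ALIGNMENT = 16  # bytes; the alignment Inductor-compiled code expects

def copy_misaligned_inputs(inputs):
    # Instead of asserting on misalignment, clone any tensor whose
    # storage is not 16-byte aligned; a fresh allocation is aligned
    # in practice, so the compiled kernel sees valid inputs.
    return [
        t.clone() if t.data_ptr() % ALIGNMENT != 0 else t
        for t in inputs
    ]
```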
Differential Revision: [D67287922](https://our.internmc.facebook.com/intern/diff/D67287922 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143236
Approved by: https://github.com/malfet , https://github.com/chenyang78
2024-12-17 00:17:39 +00:00
bobrenjc93
c4ab3e6ceb
remove allow-untyped-defs for torch/__config__.py ( #143320 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143320
Approved by: https://github.com/aorenste
ghstack dependencies: #143319
2024-12-17 00:16:09 +00:00
bobrenjc93
0178e43949
remove allow-untyped-defs for torch/utils/_stats.py ( #143319 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143319
Approved by: https://github.com/aorenste
2024-12-17 00:16:09 +00:00
Shivam Raikundalia
ff373171d0
[Profiler] Add Optional Flag to turn off external correlations v2 ( #143314 )
...
Summary: The original diff got reverted because its base commit was on a broken version of PyTorch that was failing ROCm tests. There is no indication that this diff had any effect on ROCm. Had trouble rebasing the GitHub PR after the revert and accidentally closed it, so submitting again.
Test Plan: See original PR with same name
Differential Revision: D67293040
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143314
Approved by: https://github.com/leitian , https://github.com/aaronenyeshi
2024-12-16 23:49:13 +00:00
rzou
10df370a77
Add missing IValue overloads for SymInt lists ( #143167 )
...
We should be able to convert Int lists into SymInt lists.
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143167
Approved by: https://github.com/ezyang
ghstack dependencies: #143166
2024-12-16 23:18:55 +00:00
rzou
557da8014d
[gen_autograd_functions] rename some variables ( #143166 )
...
This is a follow-up from https://github.com/pytorch/pytorch/pull/141278 .
Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143166
Approved by: https://github.com/soulitzer
2024-12-16 23:18:55 +00:00
Shangdi Yu
4c62275325
Kill capture_pre_autograd_graph API ( #143224 )
...
Summary:
Delete the following API:
- capture_pre_autograd_graph()
- capture_pre_autograd_graph_using_training_ir()
- gm_using_training_ir()
There are no more call sites to `capture_pre_autograd_graph`, except:
1) two test cases in coreml; PR to remove: https://github.com/apple/coremltools/pull/2400
2) one test case in pytorch/xla; PR to remove: https://github.com/pytorch/xla/pull/8398
3) a few call sites guarded by a version check (< 2.5.0)
Test Plan: CI
Reviewed By: tugsbayasgalan
Differential Revision: D64056353
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143224
Approved by: https://github.com/tugsbayasgalan
2024-12-16 23:06:22 +00:00
PyTorch MergeBot
6356690b3d
Revert "[BE] Revert "Add conda to Manylinux Docker images ( #139903 )" ( #143300 )"
...
This reverts commit c86383f956 .
Reverted https://github.com/pytorch/pytorch/pull/143300 on behalf of https://github.com/atalman due to failing nova workflows with conda: command not found ([comment](https://github.com/pytorch/pytorch/pull/143300#issuecomment-2547030664 ))
2024-12-16 22:50:08 +00:00
eellison
135a2d4483
Update low prec codegen for div/mod ( #142350 )
...
Div/mod in fp16/bf16 requires a downcast to preserve its inputs' dtypes.
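An eager-mode illustration of the required cast (not the generated code itself, just the dtype behavior it must match):
```python
import torch

a = torch.tensor([1.5, 2.5], dtype=torch.bfloat16)
b = torch.tensor([0.7, 1.3], dtype=torch.bfloat16)

# Low-precision codegen computes in fp32 for accuracy, so it must
# downcast the result to keep the output dtype equal to the inputs'.
out = (a.float() / b.float()).to(torch.bfloat16)
assert out.dtype == torch.bfloat16
```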
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142350
Approved by: https://github.com/blaine-rister
2024-12-16 21:46:08 +00:00
Bradley Davis
15aee8e090
update aten bmm CK heuristic ( #143294 )
...
Summary: updates the heuristic to use new instances based on CK profiling of LLM shapes
Differential Revision: D67280269
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143294
Approved by: https://github.com/mxz297 , https://github.com/xw285cornell
2024-12-16 21:44:59 +00:00
atalman
c86383f956
[BE] Revert "Add conda to Manylinux Docker images ( #139903 )" ( #143300 )
...
This reverts commit 56a40d4ebb .
Having conda in the manylinux builder images is not required. It was added so that manylinux-builder images could be the only images for CD builds once conda-builder was deprecated. However, we decided to start using ``almalinux-builder`` instead.
We are already using almalinux-builder for linux_job_v2, which contains conda: https://github.com/pytorch/test-infra/blob/main/.github/workflows/linux_job_v2.yml#L114
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143300
Approved by: https://github.com/seemethere
2024-12-16 21:40:08 +00:00
Bert Maher
4e594f4d12
Triton bump for 3.2 cherry-picks (mmav3 segfault fix, gfx950 support) ( #143302 )
...
* https://github.com/triton-lang/triton/pull/5277
* https://github.com/triton-lang/triton/pull/5084
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143302
Approved by: https://github.com/atalman , https://github.com/pruthvistony
2024-12-16 21:22:29 +00:00
Aaron Orenstein
401b1498d2
[BE] typing for decorators - distributed/_tensor/ops/utils ( #142139 )
...
Test Plan: unit tests
Differential Revision: D62302679
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142139
Approved by: https://github.com/Skylion007 , https://github.com/kwen2501
2024-12-16 21:19:33 +00:00
Aaron Orenstein
159b7ad8aa
Improve async workers to handle forking for async compile ( #142072 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142072
Approved by: https://github.com/masnesral
2024-12-16 21:16:42 +00:00
xadupre
678f74988d
Fix a misspelling [ONNX] ( #143301 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143301
Approved by: https://github.com/titaiwangms
2024-12-16 20:19:41 +00:00
bobrenjc93
8ad842cda4
remove allow-untyped-defs for utils/data/datapipes/dataframe/structures.py ( #143273 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143273
Approved by: https://github.com/aorenste
ghstack dependencies: #143271
2024-12-16 20:07:36 +00:00
PyTorch MergeBot
54ed13cdce
Revert "Update low prec codegen for div/mod ( #142350 )"
...
This reverts commit ca973069ed .
Reverted https://github.com/pytorch/pytorch/pull/142350 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I think it breaks an internal test ([comment](https://github.com/pytorch/pytorch/pull/142350#issuecomment-2546615951 ))
2024-12-16 20:05:14 +00:00
Adnan Akhundov
e885225eda
Add persistent+TMA version of Triton mm and addmm ( #142101 )
...
This PR adds persistent+TMA versions (a Triton template plus the corresponding infra) for the `tuned_mm` and `tuned_addmm` lowerings. The persistent+TMA choices are added to the GEMM autotuning when all of the following hold (checked by the `use_triton_tma_template` helper):
1. The minimum hardware and Triton version requirements for TMA support are met.
2. The GEMM inputs are compatible with the Triton TMA API (i.e., 16-byte aligned and contiguous); see the sketch after the notes below.
3. `config.triton.enable_persistent_tma_matmul` is set to `True`.
Additional notes:
1. As added in this PR, the TMA uses are not compatible with prologue / epilogue fusion. To this end, the new Triton template currently supports TMA-based loads of A/B but no prologue fusion, and epilogue fusion but no TMA-based stores of C. TMA + fusion compatibility can be added as a follow-up.
2. The current Triton TMA API (`experimental_device_tensormap_create2d`) does not support strides. Due to this, we limit the applicability of the new Triton template to the cases where the inputs are contiguous.
3. The transposed layouts of A and / or B are supported by passing the constexpr flags to the kernel and adjusting the ordering of the block sizes accordingly in the kernel code (this should have no effect on the kernel perf, as decided at the Triton compilation time).
4. After the next Triton pin update, we can switch to the tensor descriptor API (landed recently in https://github.com/triton-lang/triton/pull/5290 ) in the new Triton template, which should allow lifting 2 and 3 above.
5. The configs for the new Triton template in `persistent_mm_kernel_configs` are preliminary. We should do more perf exploration and possibly augment the config in a follow-up.
6. This PR is rebased onto and unifies with two related PRs landed previously: https://github.com/pytorch/pytorch/pull/142045 (some infra unification with the persistent+TMA template for _scaled_mm) and https://github.com/pytorch/pytorch/pull/134532 (adds the possibility to disable prologue fusion for selected choices).
7. The current Triton TMA API only supports 1D and 2D descriptors (even after https://github.com/triton-lang/triton/pull/5290 , see [here](9829ce87cc/python/triton/language/core.py (L1957) )). For now, this blocks adding persistent+TMA template for `torch.bmm`.
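A sketch of the input-compatibility part of the `use_triton_tma_template` check referenced in point 2 above (simplified and hypothetical; the real helper also gates on hardware and Triton version):
```python
import torch

TMA_ALIGNMENT = 16  # bytes

def tma_compatible(*tensors: torch.Tensor) -> bool:
    # The current Triton TMA API does not support arbitrary strides,
    # so inputs must be contiguous as well as 16-byte aligned.
    return all(
        t.is_contiguous() and t.data_ptr() % TMA_ALIGNMENT == 0
        for t in tensors
    )
```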
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142101
Approved by: https://github.com/drisspg , https://github.com/eellison
2024-12-16 19:12:12 +00:00
Oguz Ulgen
17b71e5d6a
Add config alias ( #142088 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142088
Approved by: https://github.com/c00w
2024-12-16 18:51:17 +00:00
William Wen
1b6b86fad7
[dynamo] disable eval frame callback around most of _TorchDynamoContext wrapper function ( #143211 )
...
Internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1559636954674510/
If the `_fn` returned by `_TorchDynamoContext.__call__` makes an external function call, Dynamo is recursively invoked. This can cause issues if there are added calls that are not skipped by Dynamo, so we should disable the eval frame callback as much as possible.
Differential Revision: [D67211749](https://our.internmc.facebook.com/intern/diff/D67211749 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143211
Approved by: https://github.com/jansel
2024-12-16 18:38:58 +00:00
Animesh Jain
1bf983077f
[reland][dynamo][guards] Consider tensors as immutable for dict tag matches ( #141085 )
...
Reland - https://github.com/pytorch/pytorch/pull/139560
As mentioned in https://github.com/pytorch/pytorch/pull/130341 , using `static py::object` can lead to segfaults. I suspect this is the reason for the import system error seen internally (https://www.internalfb.com/sevmanager/view/469592 ). In this PR, I am removing the `static` part. This is fine and also the right thing to do, because it will catch cases where the user changes the flag in the same process when compiling two different functions.
Unfortunately, there is no easy way to trigger this segfault, so I can't write a test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141085
Approved by: https://github.com/jansel
Co-authored-by: William Wen <williamwen@meta.com>
2024-12-16 18:38:32 +00:00
Jeeja
338835d0d2
Add support for other backends in get_preferred_device ( #132118 )
...
Currently, get_preferred_device supports only cuda and cpu. Add support for other backends using the backend config.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132118
Approved by: https://github.com/kwen2501
2024-12-16 18:30:41 +00:00
leslie-fang-intel
ccf35af142
[Inductor] Fix the Index Put lowering with same input of self and values ( #139366 )
...
**Summary**
Fixes https://github.com/pytorch/pytorch/issues/138908 ; the root cause is described in https://github.com/pytorch/pytorch/issues/138908#issuecomment-2449192447
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_index_put
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_index_add
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139366
Approved by: https://github.com/jgong5 , https://github.com/eellison
2024-12-16 17:07:14 +00:00
PyTorch MergeBot
7ab3177776
Revert "[AMD] Turn on TF32 for aten::mm ( #139869 )"
...
This reverts commit e0bdae7884 .
Reverted https://github.com/pytorch/pytorch/pull/139869 on behalf of https://github.com/jeffdaily due to causing ROCm CI failures, need to investigate, revert for now ([comment](https://github.com/pytorch/pytorch/pull/139869#issuecomment-2546127069 ))
2024-12-16 16:46:48 +00:00
chuanqiw
a8cc19bb51
[CD] Fix XPU linux CD whl test failure ( #143268 )
...
Follows https://github.com/pytorch/pytorch/pull/142482 ; refer to the original fix PR https://github.com/pytorch/pytorch/pull/130742 and the new issue in https://github.com/pytorch/pytorch/actions/runs/12323126436/job/34403681230
Part of https://github.com/pytorch/pytorch/issues/114850
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143268
Approved by: https://github.com/atalman
2024-12-16 15:00:03 +00:00
PyTorch UpdateBot
e4d2e81086
Update slow tests ( #143278 )
...
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml ).
Update the list of slow tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143278
Approved by: https://github.com/pytorchbot
2024-12-16 12:40:40 +00:00
bobrenjc93
d745b2b516
remove allow-untyped-defs for distributed/rpc/_testing/__init__.py ( #143271 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143271
Approved by: https://github.com/aorenste
2024-12-16 02:35:37 +00:00
Yu, Guangye
9706ada369
[RELAND] Add device-agnostic runtime Device/Stream C++ API ( #138677 )
...
# Motivation
This PR intends to add C++ accelerator device-agnostic APIs.
# Additional Context
This PR is a reland. It was reverted because `torch.Event` didn't support the mps backend; that has been fixed in https://github.com/pytorch/pytorch/pull/142468 . The previous commit is f84e533a2c
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138677
Approved by: https://github.com/albanD , https://github.com/EikanWang
ghstack dependencies: #143171 , #133572
2024-12-16 02:18:41 +00:00
Yu, Guangye
45ac4ebf15
[RELAND] Add UTs for accelerator device-agnostic runtime APIs ( #133572 )
...
# Motivation
This PR intends to add UTs for accelerator device-agnostic APIs.
# Additional Context
This PR is a reland. It was reverted because `torch.Event` didn't support the mps backend; that has been fixed in https://github.com/pytorch/pytorch/pull/142468 . The previous commit is 952514f0c8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133572
Approved by: https://github.com/EikanWang , https://github.com/albanD
ghstack dependencies: #143171
2024-12-16 02:18:41 +00:00
Yu, Guangye
c1d4d9d3cf
[MPS] Support torch.accelerator.synchronize() on mps ( #143171 )
...
# Motivation
Support `torch.accelerator.synchronize()` on mps. The root cause is that MPS doesn't support lazy initialization, so we must check whether the current accelerator supports device lazy initialization rather than returning early.
# Additional Context
Add an mps UT to test the code change.
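A minimal usage sketch of what the new UT exercises (assumes a build where an accelerator backend, e.g. mps, is available):
```python
import torch

# After this change, synchronize() also works on mps: the code checks
# whether the accelerator supports lazy initialization instead of
# returning early (MPS has no lazy init).
if torch.accelerator.is_available():
    torch.accelerator.synchronize()
```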
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143171
Approved by: https://github.com/albanD
2024-12-16 02:18:32 +00:00
cyy
af8789c056
Hide torch_python symbols ( #142214 )
...
Make symbols in torch_python invisible by default on platforms other than Apple.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142214
Approved by: https://github.com/ezyang
2024-12-16 00:59:26 +00:00
drisspg
744a303dee
[FlexAttention] Optimizing learned bias perf to dq calc ( #142281 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142281
Approved by: https://github.com/Chillee
2024-12-15 21:44:32 +00:00
Xiaodong Wang
e0bdae7884
[AMD] Turn on TF32 for aten::mm ( #139869 )
...
Summary: hipblaslt supports TF32, so adding support for it.
Test Plan: CI
Differential Revision: D65435392
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139869
Approved by: https://github.com/leitian
2024-12-15 10:02:29 +00:00
PyTorch UpdateBot
5273d8fd2a
[audio hash update] update the pinned audio hash ( #143265 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned audio hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143265
Approved by: https://github.com/pytorchbot
2024-12-15 03:41:14 +00:00
PyTorch MergeBot
9ed045eae9
Revert "[Profiler] Add Optional Flag to turn off external correlations ( #142516 )"
...
This reverts commit b29fc52f82 .
Reverted https://github.com/pytorch/pytorch/pull/142516 on behalf of https://github.com/huydhn due to Sorry for reverting your change but the test is failing on ROCm ([comment](https://github.com/pytorch/pytorch/pull/142516#issuecomment-2543431758 ))
2024-12-15 03:34:37 +00:00
Simon Fan
dd2d360b7d
[ca] re-enable disabled tests ( #143247 )
...
FIXES https://github.com/pytorch/pytorch/issues/133197
The unspecified-floats PR landed while this test was disabled, and it added an analysis restart, which counts toward the backend call counter the test uses.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143247
Approved by: https://github.com/zou3519
2024-12-15 02:11:39 +00:00
cyy
4273e1a059
[5/N] Apply bugprone-unchecked-optional-access ( #143111 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143111
Approved by: https://github.com/Skylion007
2024-12-15 01:07:28 +00:00
Tom Ritchford
91bf2e16de
[distributed] Remove unused variable in test_composability/test_pp_composability.py ( #143191 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143191
Approved by: https://github.com/mori360
2024-12-14 12:23:44 +00:00