Commit Graph

60 Commits

Wang, Chuanqi
0d3a4f7155 [CD] Enable Inductor performance test for xpu (#166289)
Add Dynamo benchmark performance tests for XPU backend

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166289
Approved by: https://github.com/EikanWang, https://github.com/atalman
2025-10-31 10:52:07 +00:00
amdfaa
0187db88d4 [ROCm][CI] Create periodic-rocm-mi200.yml (#166544)
* We are separating out the rocm jobs of the periodic workflow
* We are introducing a new label `ciflow/periodic-rocm-mi200` to allow us to run distributed tests only on ROCm runners, without triggering many other jobs on the `periodic.yml` workflow (via `ciflow/periodic`)
* This new workflow will also be triggered via the `ciflow/periodic`, thus maintaining the old status quo.
* We are reverting to the `linux.rocm.gpu.4` label since it targets a lot more CI nodes at this point than the K8s/ARC-based `linux.rocm.gpu.mi250.4` label, as that is still having some network/scaling issues.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166544
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-10-30 02:08:07 +00:00
Jithun Nair
70592c6819 [ROCm][CI] Move gfx1100 workflows to own yaml file (#165699)
This should allow us to move gfx1100 workflow to a lower frequency and also allow it to be triggered on PRs via a dedicated label, for any PRs that target Navi fixes such as [this](https://github.com/pytorch/pytorch/pull/165630) or [this](https://github.com/pytorch/pytorch/pull/165625).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165699
Approved by: https://github.com/jeffdaily
2025-10-20 23:52:48 +00:00
Wei Wang
d7e275d4b4 [CI][CUDA] Add periodic b200 distributed job (#159323)
1. Run distributed job with B200 runner, periodically.
2. Discovered a generic distributed test issue: certain unit tests hard-code ranks, calling for a require_exact_world_size(world_size) API instead of require_world_size(world_size).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159323
Approved by: https://github.com/eqy

Co-authored-by: Aidyn-A <aidyn.b.aitzhan@gmail.com>
2025-10-16 21:54:04 +00:00
Jeff Daily
d7e3f493d9 [ROCm][CI] add mi355 to inductor perf test nightly (#165326)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165326
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-10-14 20:03:21 +00:00
Jithun Nair
ee6a1ecb0a [ROCm] Enable MI355 CI on PRs, and run full set of UTs on PRs (#160215)
Useful to have PR testing for PRs such as https://github.com/pytorch/pytorch/pull/151360

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160215
Approved by: https://github.com/malfet, https://github.com/atalman

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-10-09 18:03:12 +00:00
Wei Wang
96182faf96 [CI][Distributed][CUDA][Symm-Mem] Enable B200 Symm Mem Test (#162988)
Inspired by https://github.com/pytorch/pytorch/pull/162981 and motivated by https://github.com/pytorch/pytorch/pull/159323 taking a total of 20 hours to finish (and unlikely to make it in short time due to https://github.com/pytorch/pytorch/issues/162178 )

Creating this subtest to get *something distributed* on B200.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162988
Approved by: https://github.com/malfet
2025-09-27 05:12:05 +00:00
Angel Li
3b73841f43 update test_quantization tests to run weekly (#163077)
Fixes #162854

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163077
Approved by: https://github.com/huydhn
2025-09-24 11:31:11 +00:00
drisspg
5f0c7cb4aa Add B200 smoke test (#159494)
Running test_max_autotune locally on B200 is a horrible experience; for now, to get something landed, I am focusing on test_matmul_cuda.py and test_fp8

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159494
Approved by: https://github.com/nWEIdia, https://github.com/huydhn
ghstack dependencies: #163460, #163537, #163552
2025-09-23 15:45:05 +00:00
Yang Wang
75ea93484c [vllm test] add vllm.yml and additional package (#160698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160698
Approved by: https://github.com/huydhn
ghstack dependencies: #160116
2025-08-16 04:24:20 +00:00
zhangfei
db32b60662 [ci] Add riscv opt-int build (#143979)
Hi, @malfet
Based on the previous discussion:

[RISCV CI support · Issue #141550 · pytorch/pytorch](https://github.com/pytorch/pytorch/issues/141550)

I have cross-compiled PyTorch for the RISC-V architecture on x86_64 Ubuntu 24.04 and created a new PR for it. Could you please help review it?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143979
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-08-13 16:12:02 +00:00
iremyux
00da8e63eb CI for Windows Arm64 (#148753)
This pull request adds a new CI workflow for Windows Arm64, named win-arm64-build-test.yml.
It can be triggered on any pull request by including the ciflow/win-arm64 tag.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148753
Approved by: https://github.com/malfet
2025-07-23 16:12:20 +00:00
henrylhtsang
d984143a74 [ci][cutlass backend] Add ci for cutlass backend tests (#156626)
redo of https://github.com/pytorch/pytorch/pull/156136

Differential Revision: [D77327309](https://our.internmc.facebook.com/intern/diff/D77327309)

I want to try to land the full version first. If CI takes too long, we can revert to only testing a few names.
```
 -k 'test_max_autotune_cutlass_backend_regular_mm and not test_max_autotune_cutlass_backend_regular_mm_streamk'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156626
Approved by: https://github.com/huydhn, https://github.com/mlazos
2025-07-22 05:18:13 +00:00
fduwjj
8147c4a904 [symm_mem] Create a dedicated ci flow for symmetric memory and only use 4 GPUs (#157181)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157181
Approved by: https://github.com/kwen2501, https://github.com/huydhn
2025-06-28 08:33:50 +00:00
Eli Uriegas
127695eb5c ci: Add ciflow trigger for build-triton-wheel (#156893)
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156893
Approved by: https://github.com/malfet
2025-06-26 04:38:38 +00:00
charan-ponnada
bf06190e21 Integrated AMD AWS runners into Pytorch CI (#153704)
Integrated AMD AWS runners into PyTorch CI, including the linux.24xl.amd for performance tests, the linux.8xl.amd with AVX512 support for unit and periodic tests, and the linux.12xl.amd with AVX2 support for unit and periodic tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153704
Approved by: https://github.com/malfet, https://github.com/jithunnair-amd

Co-authored-by: kiriti-pendyala <kiriti.pendyala@amd.com>
2025-06-18 15:58:22 +00:00
mori360
da4aacabac Add h100_distributed label (#154562)
Add h100_distributed label, testing distributed 3D composability tests on an 8*H100 GPU node.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154562
Approved by: https://github.com/seemethere
2025-05-31 00:17:43 +00:00
Catherine Lee
b8fad785d5 Change trigger for autoformat, use --all-files (#153289)
Changed the trigger for autoformat to be pull_request, because the reusable action used gets the PR number from the pull_request event context, but only run it if ciflow/autoformat is attached to the PR.  Tested this on a different PR, and it seems to be working.

Changed tag name because ciflow prefixed labels have special handling

Also changed to run on all files so it mimics the normal CI lintrunner call, and because lintrunner, either by itself or using -m mergebase, can miss some things.  It's unclear whether it would miss anything for formatting, but it does when checking lint.  Formatting seems to take less time than a normal lint run.  Whether the concern about making suggestions on unedited files applies here is untested.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153289
Approved by: https://github.com/atalman, https://github.com/malfet
2025-05-15 20:38:33 +00:00
Nikita Shulga
309ecb2277 [CI] Add opt-in h100 tests (#153170)
So far only run:
 - inductor/test_fp8.py
 - test_matmul_cuda.py
 - inductor/test_max_autotune.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153170
Approved by: https://github.com/drisspg, https://github.com/eellison
2025-05-09 17:01:05 +00:00
PyTorch MergeBot
34196301d5 Revert "[CI] Add opt-in h100 tests (#153170)"
This reverts commit f87a0fe2ca.

Reverted https://github.com/pytorch/pytorch/pull/153170 on behalf of https://github.com/clee2000 due to workflow doesnt have right concurrency group? ([comment](https://github.com/pytorch/pytorch/pull/153170#issuecomment-2864951319))
2025-05-09 03:04:50 +00:00
Nikita Shulga
f87a0fe2ca [CI] Add opt-in h100 tests (#153170)
So far only run:
 - inductor/test_fp8.py
 - test_matmul_cuda.py
 - inductor/test_max_autotune.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153170
Approved by: https://github.com/drisspg
2025-05-09 01:03:12 +00:00
Nikita Shulga
3849fd13de 🐛 Add ciflow/pull🦋 (#152567)
To make it easier to work around GitHub reliability issues, where it sometimes fails to schedule `on: pull_request` workflows

See https://github.com/pytorch/pytorch/issues/151322

But alas, it does not fix the problem at hand...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152567
Approved by: https://github.com/clee2000, https://github.com/huydhn, https://github.com/ZainRizvi, https://github.com/Camyll, https://github.com/atalman
2025-05-01 02:00:51 +00:00
Jithun Nair
001695c397 [ROCm][CI] Enable distributed CI on MI300 (#150667)
* Enable distributed CI on MI300 runners, same schedule-based and release-branch triggers as `periodic.yml`; also uses label `ciflow/periodic-rocm-mi300` for triggering on PRs.
* Disabled failing distributed tests on MI300 via Github issues: [151077](https://github.com/pytorch/pytorch/issues/151077), [151078](https://github.com/pytorch/pytorch/issues/151078), [151081](https://github.com/pytorch/pytorch/issues/151081), [151082](https://github.com/pytorch/pytorch/issues/151082), [151083](https://github.com/pytorch/pytorch/issues/151083), [151084](https://github.com/pytorch/pytorch/issues/151084), [151085](https://github.com/pytorch/pytorch/issues/151085), [151086](https://github.com/pytorch/pytorch/issues/151086), [151087](https://github.com/pytorch/pytorch/issues/151087), [151088](https://github.com/pytorch/pytorch/issues/151088), [151089](https://github.com/pytorch/pytorch/issues/151089), [151090](https://github.com/pytorch/pytorch/issues/151090), [151153](https://github.com/pytorch/pytorch/issues/151153)
* Disable failing distributed tests via `skipIfRocm`: ea9315ff95

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150667
Approved by: https://github.com/jeffdaily
2025-04-14 16:19:04 +00:00
LifengWang
fa5f556f88 [CI] enable operator benchmark on CPU (#143733)
This enables the operator benchmark on CPU to track op-level performance. This PR is motivated by https://github.com/pytorch/pytorch/issues/120982, with feasibility investigated in https://github.com/pytorch/pytorch/pull/127216

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143733
Approved by: https://github.com/leslie-fang-intel, https://github.com/atalman, https://github.com/huydhn, https://github.com/malfet

Co-authored-by: diwei sun <diwei.sun@intel.com>
Co-authored-by: chuanqiw <chuanqi.wang@intel.com>
2025-03-21 16:46:03 +00:00
Jithun Nair
6c3492b491 [ROCm] Enable mi300-specific workflows to be triggered on PRs (#147904)
This change will be needed to be able to trigger the MI300-specific CI workflows on PRs by using a PR label.

* inductor-rocm-mi300.yml uses the existing `ciflow/inductor-rocm` label so that any PR manually labeled as such will trigger `inductor` config runs on both MI200 and MI300.
* rocm-mi300.yml uses a separate `ciflow/rocm-mi300` label, since we don't want to over-trigger `default` config runs on MI300 runners due to limited capacity, and [`ciflow/rocm` label is automatically applied](79438512a0/torchci/lib/bot/autoLabelBot.ts (L24)) on many PRs.
* inductor-perf-test-nightly-rocm.yml uses a separate `ciflow/inductor-perf-test-nightly-rocm` label, so that we can manually trigger a round of perf testing on MI300 runners to test the perf impact of a major inductor-related change.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147904
Approved by: https://github.com/huydhn
2025-03-05 06:00:37 +00:00
Nikita Shulga
ead970c8d0 Revert "Add cifllow/riscv64 label"
This reverts commit 5116b27792.
(I've pushed to the wrong branch by accident)
2025-02-20 11:55:52 +01:00
Nikita Shulga
5116b27792
Add cifllow/riscv64 label 2025-02-20 11:09:06 +01:00
Huy Do
9b6ef8abaf Update inductor jobs to use CUDA 12.4 (#142177)
CUDA 12.4 is the default now.  This frees up some resources.  This also fixes the newly added Python 3.13 job from #140733. That PR missed adding the new Docker image `pytorch-linux-focal-cuda12.4-cudnn9-py3.13-gcc9-inductor-benchmarks` to the docker build workflow.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142177
Approved by: https://github.com/atalman
2024-12-09 16:18:38 +00:00
Huy Do
27bf7d62e7 Enable retry on A100 perf nightly (#142074)
This is a quick mitigation while the investigation on https://github.com/pytorch/pytorch/issues/142069 is ongoing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142074
Approved by: https://github.com/jeanschmidt
2024-12-05 14:35:53 +00:00
Nikita Shulga
4acc988630 Add ciflow/inductor-cu126 label (#141377)
No op to unblock the testing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141377
Approved by: https://github.com/atalman, https://github.com/huydhn
2024-11-22 23:14:24 +00:00
atalman
99a03211cb Deprecate conda nightly builds (#141024)
Removing CD as per https://github.com/pytorch/pytorch/issues/138506

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141024
Approved by: https://github.com/malfet
2024-11-19 16:09:54 +00:00
Huy Do
77e25d57b0 Create ciflow/inductor-periodic (#138763)
This is related to https://github.com/pytorch/pytorch/issues/138476.  This would save about 1/8 of the total cost; not a big number, but still a saving, I guess.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138763
Approved by: https://github.com/desertfire
2024-10-30 19:59:44 +00:00
Sahan Paliskara
6d473e0dda [autolint] move to use a label (#138263)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138263
Approved by: https://github.com/huydhn
2024-10-18 00:12:52 +00:00
Nikita Shulga
b44f25e1ba [CI] Move s390 binary build to its own workflow (#137304)
It was added by https://github.com/pytorch/pytorch/pull/125399 and takes 3 hours to finish
Given the limited number of runners, it often causes queueing; see:
![runner queue screenshot](https://github.com/user-attachments/assets/5c67c1d6-af4c-4453-a089-aa1174513bfa)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137304
Approved by: https://github.com/kit1980, https://github.com/huydhn, https://github.com/atalman
2024-10-03 22:31:36 +00:00
Huy Do
24a223c49d Run inductor micro benchmark on x86 metal runner (#135042)
This enables inductor micro benchmark on CPU (x86):

* Running on AWS metal runner for more accurate benchmark
* I added a new `arch` column, which will be either x86_64 or arm64 for CPU, or the GPU name for GPU.  We can use this later to differentiate between setups, e.g. cuda (a100) vs cuda (a10g), or cpu (x86_64) vs cpu (arm64)

The next step would be to run this on cpu arm64 and cuda (a10g).

### Testing
Here is the CSV results from my test run https://github.com/pytorch/pytorch/actions/runs/10709344180

```
name,metric,target,actual,dtype,device,arch,is_model
mlp_layer_norm_gelu,flops_utilization,0.8,17.36,bfloat16,cpu,x86_64,False
gather_gemv,memory_bandwidth(GB/s),990,170.80,int8,cpu,x86_64,False
gather_gemv,memory_bandwidth(GB/s),1060,204.78,bfloat16,cpu,x86_64,False
Mixtral-8x7B-v0.1,token_per_sec,175,26.68,int8,cpu,x86_64,True
Mixtral-8x7B-v0.1,memory_bandwidth(GB/s),1130,171.91,int8,cpu,x86_64,True
Mixtral-8x7B-v0.1,compilation_time(s),162,47.36,int8,cpu,x86_64,True
gemv,memory_bandwidth(GB/s),870,236.36,int8,cpu,x86_64,False
gemv,memory_bandwidth(GB/s),990,305.71,bfloat16,cpu,x86_64,False
Llama-2-7b-chat-hf,token_per_sec,94,14.01,bfloat16,cpu,x86_64,True
Llama-2-7b-chat-hf,memory_bandwidth(GB/s),1253,185.18,bfloat16,cpu,x86_64,True
Llama-2-7b-chat-hf,compilation_time(s),162,74.99,bfloat16,cpu,x86_64,True
Llama-2-7b-chat-hf,token_per_sec,144,25.09,int8,cpu,x86_64,True
Llama-2-7b-chat-hf,memory_bandwidth(GB/s),957,165.83,int8,cpu,x86_64,True
Llama-2-7b-chat-hf,compilation_time(s),172,70.69,int8,cpu,x86_64,True
layer_norm,memory_bandwidth(GB/s),950,172.03,bfloat16,cpu,x86_64,False
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135042
Approved by: https://github.com/yanboliang
2024-09-05 21:31:36 +00:00
Jithun Nair
05064f2827 [CI] Move all ROCm jobs to periodic frequency (#131637)
`inductor` and `rocm` workflows are the major contributors to the CI load on ROCm CI at the moment, resulting in huge backlogs: https://github.com/pytorch/pytorch/pull/131489#issue-2425804464

* Move rocm.yml to cron frequency
* Move ROCm CI jobs from inductor.yml to inductor-rocm.yml
* Introduce `ciflow/inductor-rocm` as PR label to manually invoke inductor jobs for ROCm (no automatic invoking to limit CI load)
* After this PR, only `trunk` workflow jobs for ROCm will run on every commit and PR merge, but since they take 45min*3 time on average, I decided to leave them as-is since it will provide us some basic insulation against ROCm breakage.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131637
Approved by: https://github.com/clee2000, https://github.com/atalman, https://github.com/huydhn
2024-07-24 19:26:58 +00:00
Catherine Lee
213eba7d2e Configure mergebot via config (#128840)
* Companion to https://github.com/pytorch/test-infra/pull/5312
* See the above for details + possible risks
* Without the above PR, this should have no effects
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128840
Approved by: https://github.com/huydhn
2024-06-17 18:53:56 +00:00
Catherine Lee
11f2d8e823 Move inductor cuda 124 jobs to a separate workflow that is not triggered by ciflow/inductor (#128250)
https://github.com/pytorch/pytorch/pull/127825

The majority of the g5 runner usage comes from inductor (it's something like 2x everything else).
In the past week, inductor ran ~1300 times on PRs and 300 times on main.  Inductor-periodic ran 50 times on main, so the previous move from inductor -> inductor-periodic only resulted in 250 fewer runs.

I was under the impression that cu124 is currently experimental and that eventually we'll need to switch to it, so this will stay until we switch or inductor uses far fewer runners.

Are we expected to be able to handle two versions of cuda in CI?  Because currently we cannot, at least not comfortably

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128250
Approved by: https://github.com/huydhn
2024-06-07 23:01:52 +00:00
PyTorch MergeBot
ea5c17de90 Revert "Add torchao nightly testing workflow (#126885)"
This reverts commit d938170314.

Reverted https://github.com/pytorch/pytorch/pull/126885 on behalf of https://github.com/atalman due to Broke inductor periodic test ([comment](https://github.com/pytorch/pytorch/pull/126885#issuecomment-2140139486))
2024-05-30 16:23:06 +00:00
Xu Zhao
d938170314 Add torchao nightly testing workflow (#126885)
Add and test torchao nightly testing workflow.

This workflow will be triggered under the following conditions:
1. If the PR has ciflow/torchao label
2. Manual trigger

It will run the torchao benchmark on torchbench/timm/huggingface model workloads with 5 configs (noquant, autoquant, int8dynamic, int8weightonly, int4weightonly). The output will be updated to the PT2 Dashboard: https://hud.pytorch.org/benchmark/compilers
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126885
Approved by: https://github.com/huydhn
2024-05-29 18:22:29 +00:00
Sergii Dymchenko
fc594ed219 Remove lint from retryable_workflows (#126806)
Related to https://github.com/pytorch/test-infra/pull/4934

Lint workflow now uses Docker, so there should not be network-related errors for pip installing stuff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126806
Approved by: https://github.com/seemethere, https://github.com/ZainRizvi, https://github.com/huydhn
2024-05-21 19:47:23 +00:00
Catherine Lee
48f98bcdfc [TD] Enable test removal on most default configs + distributed CUDA for everyone (#125931)
yolo

Add the longest jobs in pull:
* default cpu configs
* non sm86 cuda
* distributed cuda for everyone

Still excluding
* slow, inductor, rocm, onnx, mac, dynamo
* distributed cpu
* windows cuda
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125931
Approved by: https://github.com/huydhn, https://github.com/ZainRizvi
2024-05-14 17:35:12 +00:00
Yanbo Liang
196a0b1722 Add Inductor micro benchmark workflow (#125450)

Co-authored-by: Huy Do <huydhn@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125450
Approved by: https://github.com/huydhn
2024-05-07 18:56:01 +00:00
Sunita Nadampalli
ab80a59677 CI: add opt-in aarch64 linux workflow (#121284)
Triggered by `ciflow/linux-aarch64` and runs only `test_modules`, `test_mkldnn`, `test_mkldnn_fusion` and `test_openmp` as test for now.
TODOS:
 - Enable sccache for fast CI
 - Extend to a more reasonable test coverage

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121284
Approved by: https://github.com/atalman, https://github.com/malfet
2024-04-30 15:10:56 +00:00
PyTorch MergeBot
e7631d6eae Revert "CI: add aarch64 linux workflow (#121284)"
This reverts commit 32cf04cb7f.

Reverted https://github.com/pytorch/pytorch/pull/121284 on behalf of https://github.com/malfet due to Test only changes has not been reverted ([comment](https://github.com/pytorch/pytorch/pull/121284#issuecomment-2083925890))
2024-04-30 00:24:11 +00:00
Sunita Nadampalli
32cf04cb7f CI: add aarch64 linux workflow (#121284)
aarch64 linux workflow is triggered for ciflow/aarch64 tags.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121284
Approved by: https://github.com/atalman, https://github.com/malfet
2024-04-29 18:25:40 +00:00
Catherine Lee
4a6dfbe480 Add label to label config to auto apply labels based on other labels (#125042)
* Implemented in https://github.com/pytorch/test-infra/pull/5127,
* Tested in malfet/deleteme: https://github.com/malfet/deleteme/issues/85
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125042
Approved by: https://github.com/huydhn
2024-04-26 19:58:56 +00:00
Catherine Lee
61be8843c9 [TD] Use label to configure td on distributed for rollout (#122976)
Gate TD on distributed behind label

TODO:
auto add label to certain people's prs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122976
Approved by: https://github.com/huydhn, https://github.com/ZainRizvi
2024-04-08 15:53:55 +00:00
Xu Zhao
597f479643 Add torchbench on-demand test workflow (#122624)
When Torchbench discovers a regression, the PR author would like to know if their fix can pass the test, e.g., https://github.com/pytorch/pytorch/issues/122575

We are adding an on-demand ciflow to test Torchbench models if that is required by the user.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122624
Approved by: https://github.com/huydhn
2024-04-02 01:11:01 +00:00
Catherine Lee
79fac48bb3 Use pytorch bot's labeler (#121762)
Change corresponds to https://github.com/pytorch/test-infra/pull/4995
Testing (very light) in https://github.com/malfet/deleteme/pull/81
Should help with https://github.com/pytorch/test-infra/issues/4950

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121762
Approved by: https://github.com/huydhn
2024-03-13 17:16:49 +00:00