Commit Graph

67 Commits

Author SHA1 Message Date
Zain Rizvi
5dcee01c2b Monitor baseline for TD prioritizations (#110031)
For tests that TD prioritizes, we should track what their ordering _would have been_ if none of the TD heuristics had applied to them.

This is useful for two reasons:
1. It lets us better understand how TD may have contributed to that test running sooner
2. It's possible that heuristics actually mark a test as less important than the default sorting would have claimed (the default sorts tests in a fixed order). This lets us track how often that happens
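For illustration, recording the baseline alongside the TD ordering could look roughly like this (a minimal sketch; the class and function names are assumptions, not the actual TD code):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RankedTest:
    name: str
    td_position: int        # position after the TD heuristics ran
    baseline_position: int  # position the default fixed ordering would have given


def with_baseline(default_order: List[str], td_order: List[str]) -> List[RankedTest]:
    """Pair each test's TD-assigned position with the position it would have
    had if no heuristic had touched it."""
    baseline = {name: i for i, name in enumerate(default_order)}
    return [
        RankedTest(name=name, td_position=i, baseline_position=baseline[name])
        for i, name in enumerate(td_order)
    ]
```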
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110031
Approved by: https://github.com/clee2000
2023-09-26 04:27:16 +00:00
Zain Rizvi
28169193b4 [TD] Improve heuristic metrics collection (#109305)
Fixes a bug with heuristic metrics collection where the metrics would sometimes inaccurately claim a heuristic to have ranked a test more highly than any other heuristic.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109305
Approved by: https://github.com/clee2000
2023-09-14 22:20:34 +00:00
Zain Rizvi
5727b07ac6 TD: logging bugfix (#108288)
Fix bug where logging metrics don't get emitted unless the 'keep-going' label is specified on the PR

Also adds some extra logging to make debugging easier
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108288
Approved by: https://github.com/Skylion007
2023-08-31 16:51:49 +00:00
Zain Rizvi
238cc84af9 [TD] Emit metrics to compare heuristic quality (#108192)
When a test fails, we will now emit fine grained details about how accurately heuristics predicted the relevance of that test.

## Context
Why only look at failing tests? Our only signal that a PR is likely relevant to a test is whether or not that test fails on it. Green tests don't tell us whether the success was due to the code being good or to the test being irrelevant. This isn't a perfect measure, since it can miscategorize unstable and flaky failures as having been "missed" by the heuristics, but it's a reasonable approximation.

## What's measured?
The metrics this PR collects are designed to answer the following questions

### How comprehensive are the heuristics?
- What's the false negative rate, i.e. the % of failures that ideally should have been prioritized but weren't? (Both at an aggregate level and at a per-heuristic level)

### How precise are the heuristics?
- What % of failed tests were prioritized by a given heuristic? What % was prioritized overall?
- How relevant was a failed test considered to be? (Both at an aggregate level and at a per-heuristic level)
- What % of time was a given heuristic prioritizing a failing test higher than any other heuristic?
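
A rough sketch of the kind of aggregation these questions imply (the function and metric names here are illustrative assumptions, not the actual metrics code):

```python
from typing import Dict, Set


def heuristic_quality(failed_tests: Set[str],
                      prioritized_by: Dict[str, Set[str]]) -> Dict[str, float]:
    """failed_tests: test files that failed on the PR.
    prioritized_by: for each heuristic, the set of test files it prioritized."""
    total = len(failed_tests) or 1  # avoid dividing by zero
    all_prioritized = set().union(*prioritized_by.values()) if prioritized_by else set()
    stats = {
        # False negatives: failures that no heuristic prioritized.
        "false_negative_rate": len(failed_tests - all_prioritized) / total,
        "pct_failures_prioritized_overall": len(failed_tests & all_prioritized) / total,
    }
    for name, tests in prioritized_by.items():
        stats[f"pct_failures_prioritized_by_{name}"] = len(failed_tests & tests) / total
    return stats
```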

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108192
Approved by: https://github.com/huydhn
ghstack dependencies: #108117
2023-08-30 18:28:18 +00:00
Zain Rizvi
620d267ef3 Refactor TestPrioritizations to support more priorities and reduce risk of accidental mutations (#108117)
Refactor TD code to make it easier to add additional categories later and also support the changes required to enable the metrics needed for TD

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108117
Approved by: https://github.com/huydhn
2023-08-30 04:14:28 +00:00
Zain Rizvi
36399d067a Port existing heuristics to TD framework (#107071)
This PR looks big, but it's mostly just refactorings with a bit of dead code deletion. Exceptions are:
- Some metric emissions were changed to comply with the new TD format
- Some logging changes
- We now run tests in three batches (highly_relevant, probably_relevant, unranked_relevance) instead of the previous two (prioritized and general)

Refactorings done:
- Moves all test reordering code to the new TD framework
- Refactors run_test.py to cleanly support multiple levels of test priorities
- Deletes some dead code that was originally written for logging
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107071
Approved by: https://github.com/clee2000, https://github.com/huydhn
2023-08-23 21:23:23 +00:00
Zain Rizvi
5ddb8ef827 Make emit_metrics importable without having boto3 installed (#107070)
Make it so that scripts can import and run the `emit_metrics` function even if they don't have boto3 installed, in which case it will still validate the inputs but skip the actual metric emission part.

It's purely a refactor without any real logic changes

Motivation: so that run_test.py and the target determination code can use this library easily without worrying about whether it was imported or whether its dependencies are installed.
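
The usual pattern for this kind of soft dependency looks roughly like the following (a sketch only; the real code lives in `tools/stats/upload_stats_lib.py` and may differ in detail):

```python
# Guarded import: callers can use emit_metrics whether or not boto3 is present.
try:
    import boto3  # type: ignore[import]

    HAS_BOTO3 = True
except ImportError:
    HAS_BOTO3 = False


def emit_metrics(metric_name: str, metrics: dict) -> None:
    # Always validate inputs, even when emission will be skipped.
    if not isinstance(metric_name, str) or not isinstance(metrics, dict):
        raise ValueError("metric_name must be a str and metrics must be a dict")
    if not HAS_BOTO3:
        return  # no boto3: validation only, skip the actual upload
    boto3.resource("dynamodb")  # the real upload would happen here
```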

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107070
Approved by: https://github.com/huydhn
2023-08-21 21:13:01 +00:00
Catherine Lee
f16be5e0d4 Reordering tests experiment (#106347)
Companion with https://github.com/pytorch/test-infra/pull/4424

Uses the file ratings generated by the test-infra PR to reorder tests. For each test file, sum the file ratings from the changed files in the PR, and put the tests in order of that sum.

A lot of tests are probably going to end up as "prioritized" since it takes anything with a rating > 0 right now.

Sharding is done twice, once on the prioritized tests, and once on the general/non prioritized tests.  Prioritized tests have an order, so they should be sharded according to that order, while general tests don't have an order and are sharded by test time, which should result in more balanced shards.
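
Sketched in code, the reordering step might look something like this (the rating format and names are assumptions based on the description above, not the actual implementation):

```python
from typing import Dict, List


def reorder_tests(tests: List[str], changed_files: List[str],
                  ratings: Dict[str, Dict[str, float]]) -> List[str]:
    """ratings maps a changed file to {test_file: rating}. Sum the ratings each
    test receives from all changed files and sort by that sum, highest first."""
    scores = {test: 0.0 for test in tests}
    for changed in changed_files:
        for test, rating in ratings.get(changed, {}).items():
            if test in scores:
                scores[test] += rating
    # Anything with a rating > 0 counts as prioritized; the rest stay in their
    # original (general) order and are sharded by test time separately.
    prioritized = sorted((t for t in tests if scores[t] > 0),
                         key=lambda t: scores[t], reverse=True)
    general = [t for t in tests if scores[t] == 0]
    return prioritized + general
```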

I'll change the metric name before I merge; I want to quarantine my testing stuff from actual results.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106347
Approved by: https://github.com/ZainRizvi
2023-08-16 18:23:09 +00:00
Zain Rizvi
87cd10bc7b Add basic TD framework (#106997)
Adds a new structure to house all heuristics we use for Target Determination and Test Reordering.  I'm keeping it somewhat minimal for now, to let it evolve more easily as we try new things.

It currently does nothing. The 2nd PR in the stack ports the existing heuristics to actually use this new framework.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106997
Approved by: https://github.com/clee2000, https://github.com/huydhn
2023-08-15 20:54:54 +00:00
PyTorch MergeBot
9858edd99f Revert "Reordering tests experiment (#106347)"
This reverts commit 7dfab082be.

Reverted https://github.com/pytorch/pytorch/pull/106347 on behalf of https://github.com/clee2000 due to probably broke sharding ([comment](https://github.com/pytorch/pytorch/pull/106347#issuecomment-1675542738))
2023-08-11 23:59:48 +00:00
Catherine Lee
7dfab082be Reordering tests experiment (#106347)
Companion with https://github.com/pytorch/test-infra/pull/4424

Uses the file ratings generated by the test-infra PR to reorder tests. For each test file, sum the file ratings from the changed files in the PR, and put the tests in order of that sum.

A lot of tests are probably going to end up as "prioritized" since it takes anything with a rating > 0 right now.

Sharding is done twice, once on the prioritized tests, and once on the general/non prioritized tests.  Prioritized tests have an order, so they should be sharded according to that order, while general tests don't have an order and are sharded by test time, which should result in more balanced shards.

I'll change the metric name before I merge; I want to quarantine my testing stuff from actual results.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106347
Approved by: https://github.com/ZainRizvi
2023-08-09 20:11:11 +00:00
Justin Chu
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow up on the pyupgrade series to convert more strings to use f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
Justin Chu
14d87bb5ff [BE] Enable ruff's UP rules and autoformat tools and scripts (#105428)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105428
Approved by: https://github.com/albanD, https://github.com/soulitzer, https://github.com/malfet
2023-07-19 01:24:44 +00:00
Zain Rizvi
c3d3165f16 Enable uploading metrics and upload Test Reordering metrics to dynamodb (#102691)
Added a feature to upload test statistics to DynamoDB and Rockset using a new function `emit_metric` in `tools/stats/upload_stats_lib.py`.

Added metrics to measure test reordering effectiveness in `tools/testing/test_selections.py`.
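
A hypothetical call site could look like the snippet below (the argument names and metric fields are illustrative; the actual signature is defined in `tools/stats/upload_stats_lib.py`):

```python
# Hypothetical usage sketch; the real emit_metric signature may differ.
from tools.stats.upload_stats_lib import emit_metric

emit_metric(
    "test_reordering_effectiveness",
    {
        "prioritized_test_files": 12,
        "failures_caught_by_prioritization": 3,
    },
)
```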
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102691
Approved by: https://github.com/malfet
2023-06-12 23:01:53 +00:00
PyTorch MergeBot
b52ee80cdc Revert "Add print statements to debug sharding error (#102713)"
This reverts commit c7873522c2.

Reverted https://github.com/pytorch/pytorch/pull/102713 on behalf of https://github.com/clee2000 due to issue should be resolved now ([comment](https://github.com/pytorch/pytorch/pull/102713#issuecomment-1583334560))
2023-06-08 21:02:17 +00:00
Catherine Lee
90fd90dd94 Fix rocm sharding (#102871)
ROCm queries for the number of processes it should use per machine, which might cause it to be different across shards, leading to inconsistencies when distributing tests among shards.

My solution is to separate the variable used for shard calculations from the actual number of procs that can be used, and to ensure that the variable used for shard calculations is consistent across all shards for a test config + job. I believe the only consequence is that ROCm sharding might become unbalanced.
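
Conceptually the separation looks like this (variable names and the fixed value are assumptions for illustration):

```python
import multiprocessing

# The value fed into shard calculations must be identical on every shard,
# so it cannot come from a per-machine GPU query.
NUM_PROCS_FOR_SHARD_CALC = 2                          # fixed, assumed value
NUM_PROCS_FOR_RUNNING = multiprocessing.cpu_count()   # may differ per machine


def estimated_shard_time(test_time: float, runs_in_parallel: bool) -> float:
    # Shard balancing uses the fixed proc count so all shards agree on the
    # estimates; the machine-dependent value is only used when actually running.
    return test_time / NUM_PROCS_FOR_SHARD_CALC if runs_in_parallel else test_time
```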

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102871
Approved by: https://github.com/huydhn, https://github.com/malfet
2023-06-06 17:29:53 +00:00
Catherine Lee
c7873522c2 Add print statements to debug sharding error (#102713)
Sharding on ROCm is broken; I can't replicate it on dummy PRs even though it seems to happen pretty often on main, so adding this to increase my sample size. Hopefully this is enough print statements...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102713
Approved by: https://github.com/huydhn
2023-06-01 22:38:28 +00:00
Zain Rizvi
c84f246c83 Improve time savings calculation math for test reordering (#102411)
Use a more accurate method that accounts for tests being run in parallel

Right now we still log results to the console, but later it'll get logged to Rockset for better tracking
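
The commit message doesn't spell out the formula, but the basic idea of accounting for parallelism can be sketched as follows (an assumption-laden illustration, not the actual calculation):

```python
from typing import List


def estimated_time_saved(moved_test_times: List[float], num_procs: int) -> float:
    """If the reordered tests would have run in parallel across num_procs
    workers anyway, the wall-clock time they account for is roughly their
    total runtime divided by the worker count, not the plain sum."""
    if num_procs <= 0:
        return 0.0
    return sum(moved_test_times) / num_procs
```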
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102411
Approved by: https://github.com/huydhn, https://github.com/malfet
2023-05-31 23:51:27 +00:00
Zain Rizvi
686b12c93d Reduce log output when no tests are prioritized (#101803)
### <samp>🤖 Generated by Copilot at 733b991</samp>

Improve test reordering output in `tools/testing/test_selections.py`. Add a check to only print reordering information when there are tests to prioritize.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101803
Approved by: https://github.com/malfet, https://github.com/kit1980
2023-05-18 20:21:41 +00:00
Zain Rizvi
9a17989b63 Prioritize modified tests when running on main (#101618)
If a PR modifies a test, prioritize running that test on the default branch so that we get the test signal faster

Fixes https://github.com/pytorch/pytorch/issues/101617
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101618
Approved by: https://github.com/huydhn
2023-05-17 00:49:45 +00:00
Zain Rizvi
b1474019a4 Test Reordering: Run previously failing tests first (#101123)
Makes the CI prioritize running any test files that had a failing test in a previous iteration of the given PR.

A follow up to https://github.com/pytorch/pytorch/pull/100522 which makes the `.pytest_cache` available to use here

A concrete example:
1. Person A pushes a new commit and creates a PR.
2. 2 hours later, test_im_now_broken.py fails
3. Person A attempts to fix the test, but the test is actually still broken
4. The CI, seeing that test_im_now_broken.py had failed on a previous run, will now prioritize running that test first. Instead of waiting another 2 hours to get a signal, Person A only needs to wait ~15 minutes (which is how long it takes for tests to start running)
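
A minimal sketch of that prioritization step (paths and names are assumptions; pytest's cache normally keeps last-failed node ids under `.pytest_cache/v/cache/lastfailed`):

```python
import json
from pathlib import Path
from typing import List


def previously_failing_first(tests: List[str],
                             cache_dir: str = ".pytest_cache") -> List[str]:
    """Move test files that contained a failing test on the last run to the front."""
    lastfailed_path = Path(cache_dir) / "v" / "cache" / "lastfailed"
    try:
        lastfailed = json.loads(lastfailed_path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return tests  # no cache yet: keep the existing order
    # Node ids look like "test_im_now_broken.py::TestFoo::test_bar".
    failed_files = {nodeid.split("::")[0] for nodeid in lastfailed}
    failed_first = [t for t in tests if t in failed_files]
    rest = [t for t in tests if t not in failed_files]
    return failed_first + rest
```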

# Testing
I modified a file to make the tests invoking it fail and triggered CI twice with this failure.

First run: https://github.com/pytorch/pytorch/actions/runs/4963943209/jobs/8883800811
Test step took 1h 9m to run

Second run: https://github.com/pytorch/pytorch/actions/runs/4965016776/jobs/8885657992
Test step failed within 2m 27s

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101123
Approved by: https://github.com/malfet, https://github.com/huydhn
2023-05-16 19:57:54 +00:00
Zain Rizvi
ceecccc09e Bugfix: Correctly detect test changes in PRs (#101304)
Fixes a bug where the logic for deciding what tests have been edited by a PR would include all files that had been edited since the merge base, including files that were in main!

Now it will only consider the files that are part of the PR itself
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101304
Approved by: https://github.com/seemethere, https://github.com/malfet
2023-05-13 00:59:41 +00:00
Zain Rizvi
95f191a248 Always run prioritized tests first, even if they're expected to run serially (#100748)
Today, we prioritize running test files that were edited in the user's PR, with the idea being to run them before we run any other test.

Except, if the modified test is supposed to run serially, then we still end up running it after all the parallelized tests have finished running.

This PR fixes that to _always_ run the prioritized tests before the regular tests, regardless of if the test is supposed to run serially or in parallel
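
The resulting order can be pictured roughly like this (a sketch, not run_test.py's actual structure):

```python
from typing import List, Set, Tuple


def build_run_order(prioritized: List[str], general: List[str],
                    serial_tests: Set[str]) -> List[Tuple[str, str]]:
    """Every prioritized test runs before any general test, whether it is
    serial or parallel; serial vs. parallel only decides *how* each test runs."""
    order: List[Tuple[str, str]] = []
    for group in (prioritized, general):
        for test in group:
            mode = "serial" if test in serial_tests else "parallel"
            order.append((test, mode))
    return order
```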
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100748
Approved by: https://github.com/huydhn
2023-05-08 20:23:46 +00:00
Catherine Lee
a1f318daba Fix get_reordered_tests in run_test.py (#100752)
I think get_reordered_tests has been broken since the master -> main switch

add typing for some functions

checked for `prioritized` in the logs

limited testing because I only care about one very small part of the log that's near the beginning

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100752
Approved by: https://github.com/huydhn
2023-05-05 22:46:56 +00:00
Pruthvi Madugundu
2f5e9b60f9 [ROCm] Limiting the NUM_PROCS to 8 while UT testing (#100133)
- A few AMD machines have >8 GPUs, so we limit NUM_PARALLEL_PROCS to 8, which means the number of test shards is also at most 8
- Parallelizing for >8 is limited by memory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100133
Approved by: https://github.com/jeffdaily, https://github.com/jithunnair-amd, https://github.com/malfet
2023-05-05 20:05:59 +00:00
Zain Rizvi
7546972565 [BE] Refactoring test execution and improving comments (#99467)
Sharing code between the code that handles test results in parallel vs serial mode.

Note that the original code was inconsistent between the two modes: it would execute `print_to_stderr(err_message)` for every test that ran in parallel, but for serial tests it would only invoke `print_to_stderr(err_message)` if `continue_on_error` was also specified. By sharing code, this PR makes that behavior consistent between the two modes.

Also adding some comments.

### <samp>🤖 Generated by Copilot at 029342c</samp>

> _Sing, O Muse, of the skillful coder who refined_
> _The PyTorch testing script, `run_test.py`, and shined_
> _A light on its obscure logic, with docstrings and comments_
> _And made it run more smoothly, with better error contents_
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99467
Approved by: https://github.com/huydhn, https://github.com/malfet
2023-04-19 19:29:07 +00:00
PyTorch MergeBot
55724a5ec9 Revert "[experiment] More procs in CI (#98098)"
This reverts commit 9fd3eba6ce.

Reverted https://github.com/pytorch/pytorch/pull/98098 on behalf of https://github.com/clee2000 due to I think there's a bug
2023-04-07 19:50:54 +00:00
Catherine Lee
9fd3eba6ce [experiment] More procs in CI (#98098)
Experiment with more procs, but only on master so PRs don't get affected

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98098
Approved by: https://github.com/huydhn
2023-04-07 17:21:32 +00:00
Catherine Lee
0d73cfb3e9 Retry at test file level (#97506)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97506
Approved by: https://github.com/huydhn
2023-03-31 18:36:53 +00:00
PyTorch MergeBot
675dfd2c1f Revert "Retry at test file level (#97506)"
This reverts commit 7d5d5beba2.

Reverted https://github.com/pytorch/pytorch/pull/97506 on behalf of https://github.com/clee2000 due to test_jit_cuda_fuser having a rough time
2023-03-31 06:22:14 +00:00
Catherine Lee
7d5d5beba2 Retry at test file level (#97506)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97506
Approved by: https://github.com/huydhn
2023-03-30 17:12:19 +00:00
Aaron Gokaslan
9171f7d4cd [BE] Modernize PyTorch even more for 3.8 with pyupgrade (#94520)
Applies some more pyupgrade fixits to PyTorch

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94520
Approved by: https://github.com/ezyang
2023-02-10 18:02:50 +00:00
Jane Xu
0ecb071fc4 [BE][CI] change references from .jenkins to .ci (#92624)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92624
Approved by: https://github.com/ZainRizvi, https://github.com/huydhn
2023-01-30 22:50:07 +00:00
Jeff Daily
f44946289b [CI][ROCm] fix device visibility, again (#91813)
The previous PR #91137 was incomplete.  Though it successfully queried for the number of available GPUs, it still resulted in test files sharing the same GPU.  This PR lifts the maxtasksperchild=1 restriction so that Pool workers will always use the same GPU.  This also adds a Note in run_test.py for future reference.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91813
Approved by: https://github.com/kit1980, https://github.com/huydhn, https://github.com/malfet
2023-01-06 22:19:07 +00:00
Jeff Daily
c18e8c68d8 [ROCm] fix parallel test runners and device visibility (#91137)
Fixes #90940. This PR revamps how tests are run in parallel, as well as device visibility both at the Docker-container level and within the run_test.py test runner.

First, running multiple test modules concurrently on the same GPU was causing instability for ROCm runners manifesting as timeouts.  ROCm runners have at least 1 GPU each, but often 2 or more.  This PR allows NUM_PROCS to be set equal to the number of devices available, but also takes care to set HIP_VISIBLE_DEVICES to avoid oversubscribing any GPU.

Second, we had introduced env vars `-e ROCR_VISIBLE_DEVICES` (#91031) to prepare for two GHA runners per CI node, to split up the GPU visibility at the docker level between the two runners.  This effort wasn't fully realized; to date, we haven't had more than one runner per CI host.  We abandon this effort in favor of all GPUs being visible to a single runner and managing GPU resources as stated above.
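
One way to picture the device-visibility handling (a sketch under the assumption that each group of test files gets pinned to one GPU via HIP_VISIBLE_DEVICES; the real run_test.py logic may differ):

```python
import os
import subprocess
from typing import List

NUM_GPUS = 2  # assumed; the real code queries how many devices the runner has


def split_across_gpus(test_files: List[str]) -> List[List[str]]:
    # Round-robin the test files over the available GPUs so NUM_PROCS can equal
    # the device count without two groups sharing a GPU.
    groups: List[List[str]] = [[] for _ in range(NUM_GPUS)]
    for i, test_file in enumerate(test_files):
        groups[i % NUM_GPUS].append(test_file)
    return groups


def run_group(gpu_index: int, test_files: List[str]) -> None:
    # Pinning HIP_VISIBLE_DEVICES means every test file in this group sees
    # exactly one GPU, avoiding oversubscription of any single device.
    env = dict(os.environ, HIP_VISIBLE_DEVICES=str(gpu_index))
    for test_file in test_files:
        subprocess.run(["python", test_file], env=env, check=False)
```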

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91137
Approved by: https://github.com/kit1980, https://github.com/huydhn, https://github.com/pruthvistony
2023-01-04 19:40:05 +00:00
Catherine Lee
d632d94cc7 Disable mem leak check (#88373)
tbh at this point it might be easier to make a new workflow and copy the relevant jobs...

Changes:
* Disable cuda mem leak check except for on scheduled workflows
* Make pull and trunk run on a schedule which will run the memory leak check
* Periodic will always run the memory leak check -> periodic does not have parallelization anymore
* Concurrency check changed to be slightly more generous
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88373
Approved by: https://github.com/ZainRizvi, https://github.com/huydhn
2022-11-04 20:47:42 +00:00
Tom Stein
fd60b818b9 [Python] refactor slices on sorted (#86995)
Sometimes you want to query the smallest element of a collection and reach for `sorted(elements)[0]` without a second thought. However, this is not optimal, since the entire list must be sorted first, which is `O(n log n)`. It is better to use the built-in `min(elements)` provided for this purpose, which is `O(n)`.
Furthermore, `sorted(elements)[::-1]` is not very efficient; it is better to use `sorted(elements, reverse=True)` and save the slice operation.

**TLDR: using `sorted(elements)[0]` is slow and can be replaced with `min(elements)`.**

I stumbled across these code snippets while playing around with CodeQL (see https://lgtm.com/query/4148064474379348546/).
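
Concretely, the fixits look like this:

```python
elements = [5, 3, 8, 1]

# Before: sorts the whole list, O(n log n), just to read one element,
# or materializes an extra reversed copy via slicing.
smallest = sorted(elements)[0]
descending = sorted(elements)[::-1]

# After: O(n) for the extreme value, and no throwaway slice for the reversal.
smallest = min(elements)
descending = sorted(elements, reverse=True)
```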
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86995
Approved by: https://github.com/jansel
2022-10-25 04:07:19 +00:00
Catherine Lee
7941b042a7 parallelize at file granularity (#85770)
part two of https://github.com/pytorch/pytorch/pull/84961

tests files in parallel at the test file granularity

* 2 procs at a time
* number of tests run changed by <200, possibly due to more tests being added on master between the base commit and head commit of the PR
* may cause flakiness, but I haven't seen it in my small sample size of this PR
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85770
Approved by: https://github.com/huydhn
2022-10-03 16:59:39 +00:00
Catherine Lee
49e10c1598 [ci] test_ops in parallel, ci tests log to file (#85528)
part one of splitting up https://github.com/pytorch/pytorch/pull/84961 into (probably 2) parts

contains
* logging to file
* testing test_ops in parallel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85528
Approved by: https://github.com/huydhn
2022-09-23 20:45:20 +00:00
PyTorch MergeBot
3dce26635f Revert "test in parallel at file granularity (#84961)"
This reverts commit 8107666c6a.

Reverted https://github.com/pytorch/pytorch/pull/84961 on behalf of https://github.com/clee2000 due to makes test_forward_ad_nn_functional_max_unpool2d_cuda_float32 flakily unexpectedly pass
2022-09-21 20:21:25 +00:00
Catherine Lee
8107666c6a test in parallel at file granularity (#84961)
run tests in parallel at the test file granularity

Runs 3 files in parallel using a multiprocessing pool; output goes to a file, which is then printed when the test finishes. Some tests cannot be run in parallel (usually due to lacking memory), so we run those afterwards. Sharding is changed to attempt to mask large files with other large files / run them on the same shard.

test_ops* gets a custom handler to run it because it is simply too big (2hrs on windows) and linalg_cholesky fails (I would really like a solution to this if possible, but until then we use the custom handler).

reduces cuda tests by a lot, reduces total windows test time by ~1hr
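
A bare-bones sketch of the mechanism (file names, proc count, and log handling are assumptions, not the actual run_test.py code):

```python
import subprocess
from multiprocessing import Pool
from typing import List

NUM_PROCS = 3  # files run concurrently, per the description above


def run_test_file(test_file: str) -> int:
    # Each file writes to its own log so parallel output doesn't interleave;
    # the log is printed once the file finishes.
    log_path = f"{test_file}.log"
    with open(log_path, "w") as log:
        result = subprocess.run(["python", test_file],
                                stdout=log, stderr=subprocess.STDOUT)
    print(open(log_path).read())
    return result.returncode


def run_all(parallel_files: List[str], serial_files: List[str]) -> List[int]:
    with Pool(NUM_PROCS) as pool:
        codes = pool.map(run_test_file, parallel_files)
    # Tests that can't share a machine (e.g. for memory reasons) run afterwards.
    return codes + [run_test_file(f) for f in serial_files]
```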

Ref. https://github.com/pytorch/pytorch/issues/82894
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84961
Approved by: https://github.com/huydhn
2022-09-21 16:58:11 +00:00
Sergii Dymchenko
591222f5d9 Fix use-dict-literal lint (#83718)
Fix use-dict-literal pylint suggestions by changing `dict()` to `{}`. This PR should do the change for every Python file except test/jit/test_list_dict.py, where I think the intent is to test the constructor.
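For example:

```python
# Before: flagged by pylint's use-dict-literal check.
empty = dict()
config = dict(mode="test", retries=3)

# After: the literal form is what the check suggests (and is slightly faster).
empty = {}
config = {"mode": "test", "retries": 3}
```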
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83718
Approved by: https://github.com/albanD
2022-08-24 00:26:46 +00:00
Huy Do
347b036350 Apply ufmt linter to all py files under tools (#81285)
With ufmt in place https://github.com/pytorch/pytorch/pull/81157, we can now use it to gradually format all files. I'm breaking this down into multiple smaller batches to avoid too many merge conflicts later on.

This batch (as copied from the current BLACK linter config):
* `tools/**/*.py`

Upcoming batches:
* `torchgen/**/*.py`
* `torch/package/**/*.py`
* `torch/onnx/**/*.py`
* `torch/_refs/**/*.py`
* `torch/_prims/**/*.py`
* `torch/_meta_registrations.py`
* `torch/_decomp/**/*.py`
* `test/onnx/**/*.py`

Once they are all formatted, BLACK linter will be removed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81285
Approved by: https://github.com/suo
2022-07-13 07:59:22 +00:00
Michael Suo
d321be61c0 [ci] remove dead code related to test selection (#81163)
Since we are using Rockset for all this now, remove the code that used
the S3 path.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81163
Approved by: https://github.com/janeyx99
2022-07-12 04:50:19 +00:00
Brian Hirsh
282de5539d add open device registration test with cpp extensions (#80477)
Adding a test for open device registration using cpp extensions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80477
Approved by: https://github.com/albanD, https://github.com/malfet
2022-07-12 01:46:16 +00:00
Jane Xu
addeb1ed5e [GHA] Add warning when S3 stats for sharding aren't found (#80176)
Adds visibility to when this happens so I don't need to keep looking at the logs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80176
Approved by: https://github.com/suo
2022-06-24 04:14:10 +00:00
Michael Suo
5029a91f7b [ci] delete JOB_BASE_NAME (#80046)
`JOB_BASE_NAME` was a holdover from Jenkins compatibility. Eventually,
it morphed to be always set to the build environment + `-test` or
`-build`, and we used it to detect whether we were in a build or test.

That's sort of pointless, so removing and fixing up the few remaining
use cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80046
Approved by: https://github.com/malfet, https://github.com/janeyx99
2022-06-23 21:06:48 +00:00
PyTorch MergeBot
9f29d09b3b Revert "[ci] delete JOB_BASE_NAME"
This reverts commit a4080240e9.

Reverted https://github.com/pytorch/pytorch/pull/80046 on behalf of https://github.com/malfet due to Broke bazel job
2022-06-23 01:49:42 +00:00
Michael Suo
a4080240e9 [ci] delete JOB_BASE_NAME
`JOB_BASE_NAME` was a holdover from Jenkins compatibility. Eventually,
it morphed to be always set to the build environment + `-test` or
`-build`, and we used it to detect whether we were in a build or test.

That's sort of pointless, so removing and fixing up the few remaining
use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80046

Approved by: https://github.com/malfet
2022-06-22 23:33:18 +00:00
Michael Suo
842da8a5de [ci] remove TD + test specification code from run_test.py
In the case of target determination, this is just removing comments that
refer to non-existent code.

In the case of the test specification code, this removes (what I believe
to be) an unused feature. If we're using this somehow, let me know and I
can revise the PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79372

Approved by: https://github.com/janeyx99
2022-06-13 16:09:53 +00:00