Commit Graph

17 Commits

Author | SHA1 | Message | Date
wushirong
31c7e5d629 Install TensorRT lib on oss docker and enable fx2trt unit test (#70203)
Summary:
CI

Lib installed and unit test run on https://github.com/pytorch/pytorch/actions/runs/1604076060

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70203

Reviewed By: malfet

Differential Revision: D33264641

Pulled By: wushirong

fbshipit-source-id: ba30010bbd06e70d31415d8c52086d1779371bcf
2021-12-22 08:50:48 -08:00
Michael Suo
19f898402d Revert D33241684: [pytorch][PR] Install TensorRT lib on oss docker and enable fx2trt unit test
Test Plan: revert-hammer

Differential Revision:
D33241684 (dab3d3132b)

Original commit changeset: cd498908b00f

Original Phabricator Diff: D33241684 (dab3d3132b)

fbshipit-source-id: d5b2e663b5b0c9e570bd799b9f6111cd2a0de4f7
2021-12-20 23:14:35 -08:00
wushirong
dab3d3132b Install TensorRT lib on oss docker and enable fx2trt unit test (#70203)
Summary:
CI

Lib installed and unit test run on https://github.com/pytorch/pytorch/actions/runs/1604076060

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70203

Reviewed By: janeyx99

Differential Revision: D33241684

Pulled By: wushirong

fbshipit-source-id: cd498908b00f3417bdeb5ede78f5576b3b71087c
2021-12-20 18:51:48 -08:00
Jane Xu
7d1c0992e1 GHA: add back runner type for distributed tests (#67336)
Summary:
Addresses https://github.com/pytorch/pytorch/pull/67264#issuecomment-953031927

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67336

Test Plan:
The 8x runner type is used for the distributed config:
![image](https://user-images.githubusercontent.com/31798555/139103861-38d7dc37-ca8b-4448-b3ec-facc24aee342.png)
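
Purely as an illustration of the idea (this is not the actual workflow-generation script; the class and runner labels below are assumptions), a sketch of a matrix generator that gives the distributed config its own, larger runner type:

```
from dataclasses import dataclass

@dataclass
class TestConfig:
    name: str
    runner: str  # GitHub Actions runner label

def build_test_matrix(default_runner: str, distributed_runner: str) -> list:
    # Every config runs on the default runner except "distributed",
    # which gets the larger instance type back.
    return [
        TestConfig("default", default_runner),
        TestConfig("distributed", distributed_runner),
    ]

if __name__ == "__main__":
    for cfg in build_test_matrix("linux.2xlarge", "linux.8xlarge.nvidia.gpu"):
        print(f"{cfg.name}: runs-on {cfg.runner}")
```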

Reviewed By: malfet

Differential Revision: D31961179

Pulled By: janeyx99

fbshipit-source-id: cd21e2bf2a7c6602c9a42a53759b720959e43b8d
2021-10-27 09:34:18 -07:00
Jane Xu
69da4b4381 GHA: make obvious when we are running smoke tests to user (#66011)
Summary:
This PR clarifies what runs on PRs by explicitly stating when the Windows CUDA smoke tests run, and changes the logic so that user-defined labels override other workflow logic.

1. Move smoke tests to their own config.

2. Make sure that when a user specifies a ciflow label that is not the default, the workflow runs as if it were on trunk (see the sketch below).
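
A toy sketch of that decision logic (the function, config, and label names are assumptions for illustration, not the real generation code):

```
def pick_win_cuda_test_configs(is_pull_request: bool, ciflow_labels: set) -> list:
    """Decide which Windows CUDA test configs a workflow run should use."""
    user_labels = {l for l in ciflow_labels if l != "ciflow/default"}
    if is_pull_request and not user_labels:
        # Plain PR: make it obvious that only the smoke tests run.
        return ["smoke_tests"]
    # A user-supplied ciflow label (or a trunk push) gets the full matrix.
    return ["default", "force_on_cpu"]

print(pick_win_cuda_test_configs(True, set()))            # ['smoke_tests']
print(pick_win_cuda_test_configs(True, {"ciflow/all"}))   # full matrix
```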

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66011

Test Plan:
The default on PRs would generate this matrix (the default config is replaced by smoke_tests):
![image](https://user-images.githubusercontent.com/31798555/135672182-64454ea3-ff43-4746-b8e4-09b0b28e9d33.png)
But when retriggered with a label, it looks like this (note that there's no smoke_tests config):
![image](https://user-images.githubusercontent.com/31798555/135672601-5aa9a268-bc76-40f1-80c6-62b3fac6601d.png)

Reviewed By: VitalyFedyunin, seemethere

Differential Revision: D31355130

Pulled By: janeyx99

fbshipit-source-id: fed58ade4235b58176e1d1a24101aea0bea83aa4
2021-10-04 07:53:17 -07:00
Jane Xu
9afdf017dc Add force_on_cpu test to win cuda10.2 on GHA (#65094)
Summary:
Part of migrating from Circle.

Once we get a successful force_on_cpu test, we can move it to trunk only.
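
For illustration of the general technique (not necessarily what the Windows test script actually does), a CUDA build can be forced to exercise its CPU-only paths by hiding the GPUs from the test process:

```
import os
import subprocess

def run_tests_force_on_cpu(test_cmd: list) -> int:
    # Hide every GPU so a CUDA-enabled build still takes the CPU code paths.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="")
    return subprocess.call(test_cmd, env=env)

if __name__ == "__main__":
    raise SystemExit(run_tests_force_on_cpu(["python", "test/run_test.py"]))
```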

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65094

Reviewed By: seemethere

Differential Revision: D31086289

Pulled By: janeyx99

fbshipit-source-id: e1d135cc844d51f0b243b40efb49edca277d9de8
2021-09-21 11:14:15 -07:00
Eli Uriegas
3c79e0b314 .github: Migrate pytorch_linux_bionic_py_3_6_clang9 to GHA (#64218)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64218

Relies on https://github.com/fairinternal/pytorch-gha-infra/pull/11

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

cc ezyang seemethere malfet walterddr lg20987 pytorch/pytorch-dev-infra bdhirsh

Test Plan: Imported from OSS

Reviewed By: malfet, H-Huang, janeyx99

Differential Revision: D30651516

Pulled By: seemethere

fbshipit-source-id: e5843dfe84f096f2872d88f2e53e9408ad2fe399
2021-09-02 14:51:00 -07:00
Eli Uriegas
09e53c0cfe .github: Adding configuration for backwards_compat (#64204)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64204

Adds backwards_compat to our existing test matrix for github actions

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

cc ezyang seemethere malfet walterddr lg20987 pytorch/pytorch-dev-infra

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D30646764

Pulled By: seemethere

fbshipit-source-id: f0da6027e29fab03aff058cb13466fae5dcf3678
2021-08-30 13:59:00 -07:00
Eli Uriegas
9035a1cb4d .github: Adding configuration for docs_test (#64201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64201

Adds docs_test to our existing test matrix for github actions

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

cc ezyang seemethere malfet walterddr lg20987 pytorch/pytorch-dev-infra

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D30646765

Pulled By: seemethere

fbshipit-source-id: 946adae01ff1f1f7ebe626e408e161b77b19a011
2021-08-30 13:57:20 -07:00
Rong Rong (AI Infra)
7ccc4b5cc8 [CI] move distributed test into its own CI job (#62896)
Summary:
Moving distributed to its own job.

- [x] ensure there is a distributed test job for every default test job matrix entry (on GHA)
- [x] ensure that CircleCI jobs work for distributed as well
- [x] waiting for distributed tests to get their own run_test.py launch options, see https://github.com/pytorch/pytorch/issues/63147 (a toy sketch follows this list)
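
A toy sketch of what a dedicated launch option could look like (the flag and the test list here are hypothetical; the real interface is tracked in the issue above):

```
import argparse

# Hypothetical selection; the real list lives in test/run_test.py.
DISTRIBUTED_TESTS = [
    "distributed/test_c10d_gloo",
    "distributed/test_c10d_nccl",
]

def main() -> None:
    parser = argparse.ArgumentParser(description="toy run_test.py-style launcher")
    parser.add_argument("--distributed-tests", action="store_true",
                        help="run only the distributed test suite")
    args = parser.parse_args()
    selected = DISTRIBUTED_TESTS if args.distributed_tests else ["test_torch"]
    print("would run:", selected)

if __name__ == "__main__":
    main()
```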

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62896

Reviewed By: seemethere

Differential Revision: D30230856

Pulled By: walterddr

fbshipit-source-id: 0cad620f6cd9e56c727c105458d76539a5ae976f
2021-08-26 08:02:20 -07:00
Jane Xu
2b83007ae2 Modify GHA CI to use PYTORCH_IGNORE_DISABLED_ISSUES based on PR body (#62851)
Summary:
Another step forward in fixing https://github.com/pytorch/pytorch/issues/62359

Disclaimer: this only works with GHA for now, as circleci would require changes in probot.

The test plan can be seen in a previous version of this description, where I modified it to include linked issues. I've removed them now since the actual PR doesn't fix any of them.

It works! In the [periodic 11.3 test1](https://github.com/pytorch/pytorch/pull/62851/checks?check_run_id=3263109970), we get this in the logs and we see that PYTORCH_IGNORE_DISABLED_ISSUES is properly set:
```
  test_jit_cuda_extension (__main__.TestCppExtensionJIT) ... Using /var/lib/jenkins/.cache/torch_extensions/py36_cu113 as PyTorch extensions root...
Creating extension directory /var/lib/jenkins/.cache/torch_extensions/py36_cu113/torch_test_cuda_extension...
Detected CUDA files, patching ldflags
Emitting ninja build file /var/lib/jenkins/.cache/torch_extensions/py36_cu113/torch_test_cuda_extension/build.ninja...
Building extension module torch_test_cuda_extension...
Using envvar MAX_JOBS (30) as the number of workers...
[1/3] c++ -MMD -MF cuda_extension.o.d -DTORCH_EXTENSION_NAME=torch_test_cuda_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.6/site-packages/torch/include -isystem /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.6/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.6/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -c /var/lib/jenkins/workspace/test/cpp_extensions/cuda_extension.cpp -o cuda_extension.o
[2/3] /usr/local/cuda/bin/nvcc  -DTORCH_EXTENSION_NAME=torch_test_cuda_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.6/site-packages/torch/include -isystem /opt/conda/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.6/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.6/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_52,code=compute_52 -gencode=arch=compute_52,code=sm_52 --compiler-options '-fPIC' -O2 -std=c++14 -c /var/lib/jenkins/workspace/test/cpp_extensions/cuda_extension.cu -o cuda_extension.cuda.o
nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
[3/3] c++ cuda_extension.o cuda_extension.cuda.o -shared -L/opt/conda/lib/python3.6/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda_cu -ltorch_cuda_cpp -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o torch_test_cuda_extension.so
Loading extension module torch_test_cuda_extension...
ok (26.161s)
```

whereas on the latest master periodic 11.1 windows [test](https://github.com/pytorch/pytorch/runs/3263762478?check_suite_focus=true), we see
```
test_jit_cuda_extension (__main__.TestCppExtensionJIT) ... skip (0.000s)
```
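
As a hedged sketch of the general idea only (the regex, keywords, and output format are assumptions, not the actual GHA/probot implementation): linked issues are pulled out of the PR body and exported so the test job can un-skip them:

```
import re

CLOSING_KEYWORDS = r"(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)"
LINKED_ISSUE_RE = re.compile(CLOSING_KEYWORDS + r"\s+#(\d+)", re.IGNORECASE)

def ignored_issues_from_pr_body(body: str) -> str:
    """Return a comma-separated list of issue numbers linked in the PR body."""
    return ",".join(LINKED_ISSUE_RE.findall(body))

if __name__ == "__main__":
    body = "This PR fixes #62359 and closes #12345."
    # This value would be exported as PYTORCH_IGNORE_DISABLED_ISSUES for the test job.
    print("PYTORCH_IGNORE_DISABLED_ISSUES=" + ignored_issues_from_pr_body(body))
```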

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62851

Reviewed By: walterddr, tktrungna

Differential Revision: D30192029

Pulled By: janeyx99

fbshipit-source-id: fd2ecc59d2b2bb5c31522a630dd805070d59f584
2021-08-09 09:48:56 -07:00
Jane Xu
e352585f67 Clean up running smoke tests logic for Windows GHA (#62344)
Summary:
Followup to https://github.com/pytorch/pytorch/issues/62288

Front-loads the logic and also forces the smoke tests to run on only one shard.
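
A minimal sketch of that restriction (the environment variable names are assumptions, not the real scripts):

```
import os

def should_run_smoke_tests() -> bool:
    # Smoke tests are requested for the run, but only shard 1 actually runs them.
    requested = os.environ.get("RUN_SMOKE_TESTS", "0") == "1"
    shard = int(os.environ.get("SHARD_NUMBER", "1"))
    return requested and shard == 1

if __name__ == "__main__":
    print("run smoke tests on this shard:", should_run_smoke_tests())
```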

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62344

Test Plan: Note that for the Windows CUDA 10 run on this PR, we get only 1 shard with the smoke tests running: https://github.com/pytorch/pytorch/pull/62344/checks?check_run_id=3194294041

Reviewed By: seemethere, heitorschueroff

Differential Revision: D29991573

Pulled By: janeyx99

fbshipit-source-id: 263d7de72c7a82a7205932914c32d39892294cad
2021-07-30 05:00:56 -07:00
Eli Uriegas
2fd37a830e Revert D29642893: .github: Add force_on_cpu tests for windows
Test Plan: revert-hammer

Differential Revision:
D29642893 (a52de0dfec)

Original commit changeset: 2dd2b295c71d

fbshipit-source-id: c01c421689f6d01cdfb3fe60a8c6428253249c5f
2021-07-12 14:01:44 -07:00
Eli Uriegas
a52de0dfec .github: Add force_on_cpu tests for windows (#61472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61472

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: walterddr

Differential Revision: D29642893

Pulled By: seemethere

fbshipit-source-id: 2dd2b295c71d79593ad7f71d6160de4042c08b80
2021-07-12 11:16:17 -07:00
Sam Estep
e9a40de1af Add other Linux GPU auxiliary test jobs (#61055)
Summary:
- [x] add the jobs to the matrix
  - [x] `jit_legacy`
  - [x] `nogpu_NO_AVX`
  - [x] `nogpu_NO_AVX2`
  - [x] `slow`
- [x] use the test config properly to enable the different test conditions
- [x] validate that it works
- [x] disable on pull requests before merging
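
A hedged sketch of how the test config name can drive the different test conditions (the exact environment variables are an assumption, not a copy of the real test scripts):

```
import os

def env_for_test_config(test_config: str) -> dict:
    """Build the environment for one auxiliary test config (illustrative only)."""
    env = dict(os.environ)
    if test_config == "jit_legacy":
        env["TEST_SELECTION"] = "test_jit_legacy test_jit_fuser_legacy"
    elif test_config == "nogpu_NO_AVX":
        env["CUDA_VISIBLE_DEVICES"] = ""        # CUDA build, but no GPU visible
        env["ATEN_CPU_CAPABILITY"] = "default"  # disable AVX kernels
    elif test_config == "nogpu_NO_AVX2":
        env["CUDA_VISIBLE_DEVICES"] = ""
        env["ATEN_CPU_CAPABILITY"] = "avx"      # allow AVX but not AVX2
    elif test_config == "slow":
        env["PYTORCH_TEST_WITH_SLOW"] = "1"     # opt into slow-marked tests
    return env

if __name__ == "__main__":
    print(env_for_test_config("slow").get("PYTORCH_TEST_WITH_SLOW"))
```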

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61055

Test Plan: CI. Example run: https://github.com/pytorch/pytorch/actions/runs/1013240987

Reviewed By: walterddr

Differential Revision: D29594080

Pulled By: samestep

fbshipit-source-id: 02c531ebc42feae81ecaea0785915f95e0f53ed7
2021-07-09 09:29:15 -07:00
Sam Estep
0b8a7daa2a Enable multigpu_test in GHA (#60221)
Summary:
- [x] add to test matrix
- [x] enable on PRs for testing
- [x] modify the scripts so it actually runs the multigpu tests
- [x] put `num_shards` after `shard` number
- [x] use a separate test-reports artifact
- [x] run on `linux.16xlarge.nvidia.gpu`
- [x] validate that it works
- [x] disable on PRs before merging
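
For illustration only (the test selection and flags are assumptions, not the real scripts), a multigpu job might guard on the number of visible devices before launching the multi-GPU test files:

```
import subprocess

try:
    import torch
    gpu_count = torch.cuda.device_count()
except ImportError:
    gpu_count = 0

MULTIGPU_TESTS = ["distributed/test_distributed_fork"]  # hypothetical selection

if gpu_count >= 2:
    for test in MULTIGPU_TESTS:
        subprocess.check_call(["python", "test/run_test.py", "--include", test])
else:
    print(f"skipping multigpu tests: only {gpu_count} GPU(s) visible")
```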

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60221

Test Plan: CI. Example run: https://github.com/pytorch/pytorch/actions/runs/984347177

Reviewed By: malfet

Differential Revision: D29430567

Pulled By: samestep

fbshipit-source-id: 09f8e208e524579b603611479ca00515c8a1b5aa
2021-06-30 08:52:38 -07:00
Jane Xu
462448f07a Enable GHA sharding on linux (#60124)
Summary:
This is branched off of https://github.com/pytorch/pytorch/issues/59970 to shard only on Linux so far (we're running into issues with Windows gflags).

This would enable sharding of tests on a few Linux jobs on GHA, allowing TTS (time to signal) to be essentially halved.
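
A minimal sketch of the sharding idea, splitting the test list across N shards so each job runs about 1/N of the files (a real split might weight by recorded test times; this round-robin version is only an illustration):

```
def shard_tests(tests: list, shard: int, num_shards: int) -> list:
    """Return the slice of `tests` that 1-based shard `shard` should run."""
    assert 1 <= shard <= num_shards
    return [t for i, t in enumerate(tests) if i % num_shards == shard - 1]

if __name__ == "__main__":
    tests = ["test_torch", "test_nn", "test_autograd", "test_ops"]
    for s in (1, 2):
        print(f"shard {s}/2 runs:", shard_tests(tests, s, 2))
```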

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60124

Reviewed By: zou3519

Differential Revision: D29204211

Pulled By: janeyx99

fbshipit-source-id: 1cc31d1eccd564d96e2aef14c0acae96a3f0fcd0
2021-06-17 13:00:23 -07:00