Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73748
This adds CPU-only slow test jobs, which previously would never run.
Includes fixes and skips for slow tests that fail; these need to be skipped now because they previously never ran, so their failures only surface with this change.
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D34628803
Pulled By: davidberard98
fbshipit-source-id: c090ab7bf7bda9e24ec5cdefa6fd35c6310dbac0
(cherry picked from commit 06f7a94a57cc7023e9c5442be8298d20cd011144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73676
For some reason https://github.com/pytorch/pytorch/pull/72637 got messed up during a rebase, so please refer to that PR for review history.
This PR creates a new workflow called `deploy-linux-xenial-cuda11.3-py3.7-gcc7` for torch::deploy tests.
For testing, go to https://www.torch-ci.com/pytorch/pytorch/pull/73676 and check that build and test jobs run with `deploy-linux-xenial-cuda11.3-py3.7-gcc7`.
Test Plan: Imported from OSS
Reviewed By: soulitzer
Differential Revision: D34586702
Pulled By: PaliC
fbshipit-source-id: 5627cf4ff411a4a04030f8b7726f84af979da213
(cherry picked from commit df6dddebb9fe078a6053a31033b5a40cc742fcf3)
Summary:
RFC: https://github.com/pytorch/rfcs/pull/40
This PR (re)introduces python codegen for unboxing wrappers. Given an entry of `native_functions.yaml` the codegen should be able to generate the corresponding C++ code to convert ivalues from the stack to their proper types. To trigger the codegen, run
```
tools/jit/gen_unboxing.py -d cg/torch/share/ATen
```
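To make the shape of the generated code concrete, here is a minimal, hypothetical sketch of such a generator in Python. The real `tools/jit/gen_unboxing.py` parses `native_functions.yaml` entries and handles far more argument types; the names and emitted C++ below are illustrative only:

```python
# Hypothetical sketch only -- the real tools/jit/gen_unboxing.py parses
# native_functions.yaml and handles many more argument types.

# Maps a schema type to the IValue conversion method it needs.
CONVERSIONS = {
    "Tensor": "toTensor",
    "int": "toInt",
    "bool": "toBool",
}

def gen_unboxing_wrapper(name, args):
    """Emit C++ that pops IValues off the stack, converts them to their
    proper types, and calls the underlying kernel."""
    n = len(args)
    lines = [f"void unboxed_{name}(Stack& stack) {{"]
    for i, (ctype, arg) in enumerate(args):
        method = CONVERSIONS[ctype]
        lines.append(
            f"  auto {arg} = (std::move(peek(stack, {i}, {n}))).{method}();"
        )
    call_args = ", ".join(arg for _, arg in args)
    lines.append(f"  drop(stack, {n});")
    lines.append(f"  pack(stack, at::{name}({call_args}));")
    lines.append("}")
    return "\n".join(lines)

print(gen_unboxing_wrapper("add", [("Tensor", "self"), ("Tensor", "other")]))
```

The point is only that each wrapper is mechanical: pop each IValue, convert it by its schema type, call the kernel, push the result back.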
Changes have been merged and tested on CI. In https://github.com/pytorch/pytorch/issues/71782 I added an e2e test for static dispatch + codegen unboxing. The test exports a mobile model of mobilenetv2, then loads and runs it in a new binary for the lite interpreter: `test/mobile/custom_build/lite_predictor.cpp`.
## Lite predictor build specifics
1. Codegen: `gen.py` generates `RegisterCPU.cpp` and `RegisterSchema.cpp`. With this PR, once `static_dispatch` mode is enabled, `gen.py` no longer generates `TORCH_LIBRARY` API calls in those cpp files, thus avoiding interaction with the dispatcher. Once `USE_LIGHTWEIGHT_DISPATCH` is turned on, `cmake/Codegen.cmake` calls `gen_unboxing.py`, which generates `UnboxingFunctions.h`, `UnboxingFunctions_[0-4].cpp` and `RegisterCodegenUnboxedKernels_[0-4].cpp`.
2. Build: `USE_LIGHTWEIGHT_DISPATCH` adds the generated sources into `all_cpu_cpp` in `aten/src/ATen/CMakeLists.txt`. All other files remain unchanged. In reality the `Operators_[0-4].cpp` files are not necessary, but we can rely on the linker to strip them out.
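The static-dispatch behavior in step 1 can be sketched as follows; the function and names here are hypothetical, not the real `gen.py` logic:

```python
# Hypothetical sketch, not the real gen.py logic: with static dispatch
# enabled, no TORCH_LIBRARY registration code is emitted, so the
# generated RegisterCPU.cpp never talks to the dispatcher.
def emit_registration(op_name, static_dispatch):
    if static_dispatch:
        return ""  # kernel is called directly; nothing to register
    wrapper = "wrapper_" + op_name.replace(".", "_")
    return (
        "TORCH_LIBRARY_IMPL(aten, CPU, m) {\n"
        f'  m.impl("{op_name}", TORCH_FN({wrapper}));\n'
        "}\n"
    )

print(emit_registration("add.Tensor", static_dispatch=False))
```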
## Current CI job test coverage update
Created a new CI job `linux-xenial-py3-clang5-mobile-lightweight-dispatch-build` that enables the following build options:
* `USE_LIGHTWEIGHT_DISPATCH=1`
* `BUILD_LITE_INTERPRETER=1`
* `STATIC_DISPATCH_BACKEND=CPU`
This job triggers `test/mobile/lightweight_dispatch/build.sh` and builds `libtorch`. The script then runs the C++ tests written in `test_lightweight_dispatch.cpp` and `test_codegen_unboxing.cpp`. Recent commits added tests to cover as many C++ argument types as possible: in `build.sh` we install the PyTorch Python API so that we can export test models in `tests_setup.py`, then run the C++ test binary to execute these models on the lightweight-dispatch-enabled runtime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69881
Reviewed By: iseeyuan
Differential Revision: D33692299
Pulled By: larryliu0820
fbshipit-source-id: 211e59f2364100703359b4a3d2ab48ca5155a023
(cherry picked from commit 58e1c9a25e3d1b5b656282cf3ac2f548d98d530b)
Rather than hardcoding the value to 240 minutes, use the `timeout_after` argument
to specify different limits depending on the config.
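A minimal sketch of the idea, with illustrative class and field names rather than the actual workflow-generation schema:

```python
# Illustrative sketch: per-config timeouts via a timeout_after field
# instead of a hardcoded 240 minutes. Names are hypothetical, not the
# actual generate_ci_workflows.py schema.
from dataclasses import dataclass

@dataclass
class CIWorkflow:
    build_environment: str
    timeout_after: int = 240  # minutes; the old hardcoded default

WORKFLOWS = [
    CIWorkflow("linux-xenial-py3.7-gcc5.4"),
    # Slower configs can opt into a longer limit.
    CIWorkflow("periodic-linux-bionic-cuda11.5-py3.7-gcc7", timeout_after=300),
]

for w in WORKFLOWS:
    print(f"{w.build_environment}: timeout after {w.timeout_after} min")
```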
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73508
We're deprecating support for CUDA 11.1, so this moves all of our CUDA 11.1
workflows to CUDA 11.3.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73449
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Today, we have two pieces that conspire to determine what workflows we run:
- `generate_ci_workflows.py`, which takes a declarative description of what we want the workflow to do and uses jinja to generate a workflow yaml file
- `generate-test-matrix`, which runs at CI time to dynamically generate test jobs.
This is bad:
- Having one layer of code generation is unfortunate, having two is confusing.
- You cannot tell from a workflow yaml file what test jobs will be run.
- We have to do this careful dance of plumbing the args to `generate-test-matrix` through setting env vars and other such ugliness.
- In cases where the build job fails and prevents `generate-test-matrix` from running, a ghost `test` job that doesn't actually exist adds noise to the HUD and our stats.
- A bunch of useless `generate-test-matrix` jobs (8 on PRs) add noise to our signal.
As far as I can tell, this complexity is unnecessary--we have all the information we need to generate the test matrix statically. There does not appear to be any advantage in retaining `generate-test-matrix`, so I am removing it to simplify the CI.
The *only* place where we were actually doing something dynamic is in our windows gpu workflow, where we would check at runtime whether the workflow was triggered from a PR or master and behave accordingly. This is more simply done by just having two separate workflows with different trigger conditions, which avoids the madness of needing to parse labels and forking the behavior dynamically, which has been a source of confusion in the past.
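As a sketch of what generating the matrix statically means (illustrative names, not the actual `generate_ci_workflows.py` code):

```python
# Illustrative sketch: the test matrix is computed when the workflow
# YAML is generated, not by a generate-test-matrix job at CI time.
# Names are hypothetical.
import json

def static_test_matrix(num_shards, runner):
    """Build the sharded test matrix statically."""
    return {
        "include": [
            {"config": "default", "shard": s, "num_shards": num_shards,
             "runner": runner}
            for s in range(1, num_shards + 1)
        ]
    }

# PR-only and master-only variants become two separate workflow files
# with different trigger conditions instead of a runtime branch.
matrix = static_test_matrix(2, "windows.8xlarge.nvidia.gpu")
print(json.dumps(matrix, indent=2))
```

Because the matrix is baked into the generated YAML, the test jobs are visible in the workflow file itself and no runtime job is needed to produce them.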
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73001
Summary:
Remove fx2trt test from oss CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72595
Test Plan: CI
Reviewed By: houseroad
Differential Revision: D34112595
Pulled By: wushirong
fbshipit-source-id: 02376ef0f25381eff31b72dcbf964c1966af9793
(cherry picked from commit e3d698a942)
These were left out of the initial migration for some reason, so this just
transfers those tests over.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71644
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Summary:
This PR implements the workflow changes described in https://fb.quip.com/oi8wAvajpR4g. Combined with the bot logic in d928549336 (can be moved to probot but is easier to test there), it fully implements the proposal.
The CIFlow comment is slightly outdated now but is still technically correct (all the commands will continue to work as before, just through a different mechanism).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70321
Reviewed By: atalman, janeyx99
Differential Revision: D33690370
Pulled By: suo
fbshipit-source-id: 8d81ffeb249cdae53c5526798a4a504560d0204f
(cherry picked from commit 5ed8d0dfae)
Summary:
Also adds a mechanism for all workflows to do this
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
cc jeffdaily sunway513 jithunnair-amd ROCmSupport KyleCZH
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71567
Reviewed By: malfet
Differential Revision: D33687713
Pulled By: seemethere
fbshipit-source-id: a3c7ef41ed04f9caa82c180961d2f4b7c24582dd
(cherry picked from commit eef2eafffd)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71431
Adds a PR trigger based on paths to the binary build workflows to make
it easier to test / verify changes to the binary build workflows without
adding a bunch of skipped checks to the majority of our workflows
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: atalman
Differential Revision: D33641276
Pulled By: seemethere
fbshipit-source-id: 0ed65cbcebf06dfe998f81d67df817250dd1a716
(cherry picked from commit 598b55fd18)
Summary:
Running it many times a day was probably not intentional.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71255
Reviewed By: suo, atalman
Differential Revision: D33559155
Pulled By: janeyx99
fbshipit-source-id: c8703cea6f3188c9bcb0867b895261808d3164ee
Summary:
Our Docker builds have not been running under the previous cron schedule; this changes the schedule so that they should run as expected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71232
Reviewed By: ejguan
Differential Revision: D33552231
Pulled By: janeyx99
fbshipit-source-id: 1a3e1607b03d37614eedf04093d73f1b96698840
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68388
Updates the GPU architectures and adds an on_pull_request trigger for the
binary build workflows so that we can iterate on this later.
TODO:
* Create follow-up PR to enable nightly linux GHA builds / disable CircleCI nightly linux builds
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: janeyx99
Differential Revision: D33462294
Pulled By: seemethere
fbshipit-source-id: 5fa30517550d36f504b491cf6c1e5c9da56d8191
Summary:
The CMake build defaults to `USE_PER_OPERATOR_HEADERS = 1` which
generates extra headers in the `ATen/ops` folder that don't exist
otherwise. In particular, fb-internal builds using buck don't support
these headers and so all includes must be guarded with
`#ifdef AT_PER_OPERATOR_HEADERS`.
This adds a CI run which builds with `USE_PER_OPERATOR_HEADERS = 0` so
open source contributions don't have to wait for their PR to be
imported to find out it doesn't work in fb-internal. This flag
shouldn't affect runtime behavior, though, so I don't run any tests.
cc seemethere malfet pytorch/pytorch-dev-infra
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69907
Reviewed By: malfet, atalman
Differential Revision: D33411864
Pulled By: seemethere
fbshipit-source-id: 18b34d7a83dc81cf8a6c396ba8369e1789f936e9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70453
Removes the current xla config; downstream `pytorch/xla` is broken under
clang compilation, so this temporarily removes the config until the xla
team can fix the upstream CI.
Context: https://github.com/pytorch/xla/pull/3255/files#r775980035
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: zengk95
Differential Revision: D33338463
Pulled By: seemethere
fbshipit-source-id: 1ef332c685d5e2cc7e2eb038e93bd656847fd099