Update on "[PT-D] Prototype for megatron-lm style MLP layers "
We want to build a prototype of Megatron-LM so that we can apply PT-D ops to models such as transformers and other Meta flagship models.

The basic idea of Megatron-LM is as follows:
1. Col-wise sharding of the linear weight. Perform the linear op for the first layer.
2. Perform an optional math op, such as ReLU or GeLU, on the output of step 1. We use GeLU in our example unit test.
3. Row-wise sharding of the linear weight. Perform the linear op for the second layer on the output of step 2.

This saves the communication that would otherwise be needed to concatenate the col-wise sharding results and to distribute the input to different ranks for the row-wise sharding (a plain-tensor sketch of this decomposition is shown below).

The changes are as follows:
1. Return a ShardedTensor for the col-wise sharding in the sharded_linear op.
2. Return a PartialTensor for the row-wise sharding in the sharded_linear op.
3. Add helper functions to merge/aggregate local results into a fully synced local result if needed.
4. Add a helper function to create a sharded tensor based on the local result.
5. Add a unit test that exercises the Megatron-LM idea described above and compares it with local ops, including the grad, so that we can ensure the correctness of the implementation.
6. Refactor the unit test of sharded linear to reflect the changes in the code.

Differential Revision: [D32978221](https://our.internmc.facebook.com/intern/diff/D32978221/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D32978221/)!

[ghstack-poisoned]
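The following is a minimal, single-process sketch of the decomposition described above, using plain PyTorch tensor ops. It only illustrates the math that the sharded_linear op distributes across ranks; names such as `world_size`, `w1`, and `w2` are illustrative and not part of this PR's API.

```python
# Sketch of Megatron-LM style MLP sharding with plain tensor ops (no distribution).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
world_size = 2              # pretend number of ranks (illustrative)
x = torch.randn(4, 8)       # input batch
w1 = torch.randn(16, 8)     # first linear weight (out_features=16, in_features=8)
w2 = torch.randn(8, 16)     # second linear weight (out_features=8, in_features=16)

# Reference: the unsharded two-layer MLP with GeLU in between.
ref = F.linear(F.gelu(F.linear(x, w1)), w2)

# 1. Col-wise shard w1 along its output dimension; each "rank" computes a
#    column slice of the first layer's activations (held by a ShardedTensor
#    in the distributed implementation).
w1_shards = torch.chunk(w1, world_size, dim=0)
h_shards = [F.linear(x, w1_s) for w1_s in w1_shards]

# 2. GeLU is elementwise, so it can be applied to each shard locally with no
#    communication.
h_shards = [F.gelu(h) for h in h_shards]

# 3. Row-wise shard w2 along its input dimension; each "rank" produces a
#    partial sum of the second layer's output (held by a PartialTensor in the
#    distributed implementation).
w2_shards = torch.chunk(w2, world_size, dim=1)
partials = [F.linear(h, w2_s) for h, w2_s in zip(h_shards, w2_shards)]

# A single reduction (an all-reduce across ranks; a plain sum here) recovers
# the same result as the unsharded computation.
out = sum(partials)
assert torch.allclose(out, ref, atol=1e-5)
```

In the distributed version, the reduction at the end corresponds to the aggregation performed when the row-wise partial results are merged by the helper functions added in this PR.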
This commit is contained in: commit df93f9e37d
@@ -31,23 +31,6 @@ def get_processor_arch_name(gpu_version):
    )

CONFIG_TREE_DATA = OrderedDict(
    macos=([None], OrderedDict(
        wheel=dimensions.STANDARD_PYTHON_VERSIONS,
        conda=dimensions.STANDARD_PYTHON_VERSIONS,
        libtorch=[
            "3.7",
        ],
    )),
    macos_arm64=([None], OrderedDict(
        wheel=[
            "3.8",
            "3.9",
        ],
        conda=[
            "3.8",
            "3.9",
        ],
    )),
    windows=(
        # Stop building Win+CU102, see https://github.com/pytorch/pytorch/issues/65648
        [v for v in dimensions.GPU_VERSIONS if v not in dimensions.ROCM_VERSION_LABELS and v != "cuda102"],
.circleci/config.yml (generated): 393 lines changed
@@ -1678,136 +1678,6 @@ jobs:
|
|||
workflows:
|
||||
binary_builds:
|
||||
jobs:
|
||||
- binary_mac_build:
|
||||
name: binary_macos_wheel_3_7_cpu_nightly_build
|
||||
build_environment: "wheel 3.7 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_wheel_3_8_cpu_nightly_build
|
||||
build_environment: "wheel 3.8 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_wheel_3_9_cpu_nightly_build
|
||||
build_environment: "wheel 3.9 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_wheel_3_10_cpu_nightly_build
|
||||
build_environment: "wheel 3.10 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_conda_3_7_cpu_nightly_build
|
||||
build_environment: "conda 3.7 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_conda_3_8_cpu_nightly_build
|
||||
build_environment: "conda 3.8 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_conda_3_9_cpu_nightly_build
|
||||
build_environment: "conda 3.9 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_conda_3_10_cpu_nightly_build
|
||||
build_environment: "conda 3.10 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_mac_build:
|
||||
name: binary_macos_libtorch_3_7_cpu_nightly_build
|
||||
build_environment: "libtorch 3.7 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_macos_arm64_build:
|
||||
name: binary_macos_arm64_wheel_3_8_cpu_nightly_build
|
||||
build_environment: "wheel 3.8 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_macos_arm64_build:
|
||||
name: binary_macos_arm64_wheel_3_9_cpu_nightly_build
|
||||
build_environment: "wheel 3.9 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_macos_arm64_build:
|
||||
name: binary_macos_arm64_conda_3_8_cpu_nightly_build
|
||||
build_environment: "conda 3.8 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_macos_arm64_build:
|
||||
name: binary_macos_arm64_conda_3_9_cpu_nightly_build
|
||||
build_environment: "conda 3.9 cpu"
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- /.*/
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
- binary_windows_build:
|
||||
name: binary_windows_conda_3_7_cpu_nightly_build
|
||||
build_environment: "conda 3.7 cpu"
|
||||
|
|
@@ -2172,188 +2042,6 @@ workflows:
|
|||
requires:
|
||||
- binary_windows_conda_3_10_cu115_nightly_build
|
||||
executor: windows-with-nvidia-gpu
|
||||
- binary_upload:
|
||||
name: binary_macos_wheel_3_7_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_wheel_3_7_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: wheel
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_wheel_3_8_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_wheel_3_8_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: wheel
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_wheel_3_9_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_wheel_3_9_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: wheel
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_wheel_3_10_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_wheel_3_10_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: wheel
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_conda_3_7_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_conda_3_7_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: conda
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_conda_3_8_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_conda_3_8_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: conda
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_conda_3_9_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_conda_3_9_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: conda
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_conda_3_10_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_conda_3_10_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: conda
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_libtorch_3_7_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_libtorch_3_7_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: libtorch
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_arm64_wheel_3_8_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_arm64_wheel_3_8_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: wheel
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_arm64_wheel_3_9_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_arm64_wheel_3_9_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: wheel
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_arm64_conda_3_8_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_arm64_conda_3_8_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: conda
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_macos_arm64_conda_3_9_cpu_nightly_upload
|
||||
context: org-member
|
||||
requires:
|
||||
- binary_macos_arm64_conda_3_9_cpu_nightly_build
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- nightly
|
||||
tags:
|
||||
only:
|
||||
- /v[0-9]+(\.[0-9]+)*-rc[0-9]+/
|
||||
package_type: conda
|
||||
upload_subfolder: cpu
|
||||
- binary_upload:
|
||||
name: binary_windows_conda_3_7_cpu_nightly_upload
|
||||
context: org-member
|
||||
|
|
@@ -2810,87 +2498,6 @@ workflows:
|
|||
only:
|
||||
- postnightly
|
||||
name: update_s3_htmls
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_wheel_3_7_cpu_nightly
|
||||
build_environment: "wheel 3.7 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_wheel_3_8_cpu_nightly
|
||||
build_environment: "wheel 3.8 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_wheel_3_9_cpu_nightly
|
||||
build_environment: "wheel 3.9 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_wheel_3_10_cpu_nightly
|
||||
build_environment: "wheel 3.10 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_conda_3_7_cpu_nightly
|
||||
build_environment: "conda 3.7 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_conda_3_8_cpu_nightly
|
||||
build_environment: "conda 3.8 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_conda_3_9_cpu_nightly
|
||||
build_environment: "conda 3.9 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_conda_3_10_cpu_nightly
|
||||
build_environment: "conda 3.10 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_mac_test:
|
||||
name: smoke_macos_libtorch_3_7_cpu_nightly
|
||||
build_environment: "libtorch 3.7 cpu"
|
||||
requires:
|
||||
- update_s3_htmls
|
||||
filters:
|
||||
branches:
|
||||
only:
|
||||
- postnightly
|
||||
- smoke_windows_test:
|
||||
name: smoke_windows_conda_3_7_cpu_nightly
|
||||
build_environment: "conda 3.7 cpu"
|
||||
|
|
|
|||
|
|
@@ -1,28 +1,15 @@
#!/bin/bash
set -eux -o pipefail

source "/Users/distiller/project/env"
source "${BINARY_ENV_FILE:-/Users/distiller/project/env}"
mkdir -p "$PYTORCH_FINAL_PACKAGE_DIR"

# For some reason `unbuffer` breaks if we change the PATH here, so we
# write a script with the PATH change in it and unbuffer the whole
# thing
build_script="$workdir/build_script.sh"
touch "$build_script"
chmod +x "$build_script"

# Build
cat >"$build_script" <<EOL
export PATH="$workdir/miniconda/bin:$PATH"
if [[ "$CIRCLE_BRANCH" == "nightly" ]]; then
  export USE_PYTORCH_METAL_EXPORT=1
  export USE_COREML_DELEGATE=1
fi
export USE_PYTORCH_METAL_EXPORT=1
export USE_COREML_DELEGATE=1
if [[ "$PACKAGE_TYPE" == conda ]]; then
  "$workdir/builder/conda/build_pytorch.sh"
  "${BUILDER_ROOT}/conda/build_pytorch.sh"
else
  export TORCH_PACKAGE_NAME="$(echo $TORCH_PACKAGE_NAME | tr '-' '_')"
  "$workdir/builder/wheel/build_wheel.sh"
  "${BUILDER_ROOT}/wheel/build_wheel.sh"
fi
EOL
unbuffer "$build_script" | ts
.github/generated-ciflow-ruleset.json (generated, vendored): 20 lines changed
@@ -60,21 +60,33 @@
      "linux-binary-libtorch-cxx11-abi",
      "linux-binary-libtorch-pre-cxx11",
      "linux-binary-manywheel",
      "macos-arm64-binary-conda",
      "macos-arm64-binary-wheel",
      "macos-binary-conda",
      "macos-binary-libtorch-cxx11-abi",
      "macos-binary-libtorch-pre-cxx11",
      "macos-binary-wheel",
      "windows-binary-libtorch-cxx11-abi",
      "windows-binary-libtorch-pre-cxx11",
      "windows-binary-wheel"
    ],
    "ciflow/binaries_conda": [
      "linux-binary-conda"
      "linux-binary-conda",
      "macos-arm64-binary-conda",
      "macos-binary-conda"
    ],
    "ciflow/binaries_libtorch": [
      "linux-binary-libtorch-cxx11-abi",
      "linux-binary-libtorch-pre-cxx11",
      "macos-binary-libtorch-cxx11-abi",
      "macos-binary-libtorch-pre-cxx11",
      "windows-binary-libtorch-cxx11-abi",
      "windows-binary-libtorch-pre-cxx11"
    ],
    "ciflow/binaries_wheel": [
      "linux-binary-manywheel",
      "macos-arm64-binary-wheel",
      "macos-binary-wheel",
      "windows-binary-wheel"
    ],
    "ciflow/cpu": [

@@ -128,6 +140,12 @@
      "linux-xenial-py3.7-gcc5.4",
      "linux-xenial-py3.7-gcc7",
      "linux-xenial-py3.7-gcc7-no-ops",
      "macos-arm64-binary-conda",
      "macos-arm64-binary-wheel",
      "macos-binary-conda",
      "macos-binary-libtorch-cxx11-abi",
      "macos-binary-libtorch-pre-cxx11",
      "macos-binary-wheel",
      "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single",
      "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit",
      "win-vs2019-cpu-py3",
.github/merge_rules.json (vendored): 2 lines changed
@@ -2,7 +2,7 @@
  {
    "name": "ONNX exporter",
    "patterns": ["torch/onnx/**", "torch/csrc/jit/passes/onnx/**", "torch/csrc/jit/passes/onnx.*", "test/onnx/**", "docs/source/onnx.rst"],
    "approved_by": ["garymm"],
    "approved_by": ["BowenBao", "garymm"],
    "mandatory_app_id": 12274
  },
  {
.github/scripts/generate_binary_build_matrix.py (vendored): 21 lines changed
@@ -79,12 +79,15 @@ def list_without(in_list: List[str], without: List[str]) -> List[str]:
def generate_conda_matrix(os: str) -> List[Dict[str, str]]:
    ret: List[Dict[str, str]] = []
    arches = ["cpu"]
    python_versions = FULL_PYTHON_VERSIONS
    if os == "linux":
        arches += CUDA_ARCHES
    elif os == "windows":
        # We don't build CUDA 10.2 for window see https://github.com/pytorch/pytorch/issues/65648
        arches += list_without(CUDA_ARCHES, ["10.2"])
    for python_version in FULL_PYTHON_VERSIONS:
    elif os == "macos-arm64":
        python_versions = list_without(python_versions, ["3.7"])
    for python_version in python_versions:
        # We don't currently build conda packages for rocm
        for arch_version in arches:
            gpu_arch_type = arch_type(arch_version)

@@ -153,6 +156,7 @@ def generate_libtorch_matrix(os: str, abi_version: str) -> List[Dict[str, str]]:
def generate_wheels_matrix(os: str) -> List[Dict[str, str]]:
    arches = ["cpu"]
    package_type = "wheel"
    python_versions = FULL_PYTHON_VERSIONS
    if os == "linux":
        arches += CUDA_ARCHES + ROCM_ARCHES
        # NOTE: We only build manywheel packages for linux

@@ -160,8 +164,10 @@ def generate_wheels_matrix(os: str) -> List[Dict[str, str]]:
    elif os == "windows":
        # We don't build CUDA 10.2 for window see https://github.com/pytorch/pytorch/issues/65648
        arches += list_without(CUDA_ARCHES, ["10.2"])
    elif os == "macos-arm64":
        python_versions = list_without(python_versions, ["3.7"])
    ret: List[Dict[str, str]] = []
    for python_version in FULL_PYTHON_VERSIONS:
    for python_version in python_versions:
        for arch_version in arches:
            gpu_arch_type = arch_type(arch_version)
            gpu_arch_version = "" if arch_version == "cpu" else arch_version

@@ -181,14 +187,3 @@ def generate_wheels_matrix(os: str) -> List[Dict[str, str]]:
            }
        )
    return ret


def generate_binary_build_matrix(os: str) -> List[Dict[str, str]]:
    return {
        "linux": [
            *generate_conda_matrix(os),
            *generate_libtorch_matrix(os, abi_version=PRE_CXX11_ABI),
            *generate_libtorch_matrix(os, abi_version=CXX11_ABI),
            *generate_wheels_matrix(os),
        ]
    }[os]
.github/scripts/generate_ci_workflows.py (vendored): 72 lines changed
@@ -295,6 +295,9 @@ class BinaryBuildWorkflow:
|
|||
abi_version: str = ''
|
||||
ciflow_config: CIFlowConfig = field(default_factory=CIFlowConfig)
|
||||
is_scheduled: str = ''
|
||||
# Mainly for macos
|
||||
cross_compile_arm64: bool = False
|
||||
xcode_version: str = ''
|
||||
|
||||
def __post_init__(self) -> None:
|
||||
if self.abi_version:
|
||||
|
|
@@ -302,7 +305,6 @@ class BinaryBuildWorkflow:
|
|||
else:
|
||||
self.build_environment = f"{self.os}-binary-{self.package_type}"
|
||||
|
||||
|
||||
def generate_workflow_file(self, workflow_template: jinja2.Template) -> None:
|
||||
output_file_path = GITHUB_DIR / f"workflows/generated-{self.build_environment}.yml"
|
||||
with open(output_file_path, "w") as output_file:
|
||||
|
|
@@ -859,6 +861,8 @@ DOCKER_WORKFLOWS = [
|
|||
class OperatingSystem:
|
||||
LINUX = "linux"
|
||||
WINDOWS = "windows"
|
||||
MACOS = "macos"
|
||||
MACOS_ARM64 = "macos-arm64"
|
||||
|
||||
LINUX_BINARY_BUILD_WORFKLOWS = [
|
||||
BinaryBuildWorkflow(
|
||||
|
|
@@ -952,6 +956,71 @@ WINDOWS_BINARY_BUILD_WORKFLOWS = [
|
|||
),
|
||||
]
|
||||
|
||||
MACOS_BINARY_BUILD_WORKFLOWS = [
|
||||
BinaryBuildWorkflow(
|
||||
os=OperatingSystem.MACOS,
|
||||
package_type="wheel",
|
||||
build_configs=generate_binary_build_matrix.generate_wheels_matrix(OperatingSystem.MACOS),
|
||||
ciflow_config=CIFlowConfig(
|
||||
labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_BINARIES, LABEL_CIFLOW_BINARIES_WHEEL},
|
||||
isolated_workflow=True,
|
||||
),
|
||||
),
|
||||
BinaryBuildWorkflow(
|
||||
os=OperatingSystem.MACOS,
|
||||
package_type="conda",
|
||||
build_configs=generate_binary_build_matrix.generate_conda_matrix(OperatingSystem.MACOS),
|
||||
ciflow_config=CIFlowConfig(
|
||||
labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_BINARIES, LABEL_CIFLOW_BINARIES_CONDA},
|
||||
isolated_workflow=True,
|
||||
),
|
||||
),
|
||||
BinaryBuildWorkflow(
|
||||
os=OperatingSystem.MACOS,
|
||||
package_type="libtorch",
|
||||
abi_version=generate_binary_build_matrix.CXX11_ABI,
|
||||
build_configs=generate_binary_build_matrix.generate_libtorch_matrix(
|
||||
OperatingSystem.MACOS, generate_binary_build_matrix.CXX11_ABI
|
||||
),
|
||||
ciflow_config=CIFlowConfig(
|
||||
labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_BINARIES, LABEL_CIFLOW_BINARIES_LIBTORCH},
|
||||
isolated_workflow=True,
|
||||
),
|
||||
),
|
||||
BinaryBuildWorkflow(
|
||||
os=OperatingSystem.MACOS,
|
||||
package_type="libtorch",
|
||||
abi_version=generate_binary_build_matrix.PRE_CXX11_ABI,
|
||||
build_configs=generate_binary_build_matrix.generate_libtorch_matrix(
|
||||
OperatingSystem.MACOS, generate_binary_build_matrix.PRE_CXX11_ABI
|
||||
),
|
||||
ciflow_config=CIFlowConfig(
|
||||
labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_BINARIES, LABEL_CIFLOW_BINARIES_LIBTORCH},
|
||||
isolated_workflow=True,
|
||||
),
|
||||
),
|
||||
BinaryBuildWorkflow(
|
||||
os=OperatingSystem.MACOS_ARM64,
|
||||
package_type="wheel",
|
||||
build_configs=generate_binary_build_matrix.generate_wheels_matrix(OperatingSystem.MACOS),
|
||||
cross_compile_arm64=True,
|
||||
ciflow_config=CIFlowConfig(
|
||||
labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_BINARIES, LABEL_CIFLOW_BINARIES_WHEEL},
|
||||
isolated_workflow=True,
|
||||
),
|
||||
),
|
||||
BinaryBuildWorkflow(
|
||||
os=OperatingSystem.MACOS_ARM64,
|
||||
package_type="conda",
|
||||
cross_compile_arm64=True,
|
||||
build_configs=generate_binary_build_matrix.generate_conda_matrix(OperatingSystem.MACOS_ARM64),
|
||||
ciflow_config=CIFlowConfig(
|
||||
labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_BINARIES, LABEL_CIFLOW_BINARIES_CONDA},
|
||||
isolated_workflow=True,
|
||||
),
|
||||
),
|
||||
]
|
||||
|
||||
def main() -> None:
|
||||
jinja_env = jinja2.Environment(
|
||||
variable_start_string="!{{",
|
||||
|
|
@@ -969,6 +1038,7 @@ def main() -> None:
|
|||
(jinja_env.get_template("android_ci_workflow.yml.j2"), ANDROID_SHORT_WORKFLOWS),
|
||||
(jinja_env.get_template("linux_binary_build_workflow.yml.j2"), LINUX_BINARY_BUILD_WORFKLOWS),
|
||||
(jinja_env.get_template("windows_binary_build_workflow.yml.j2"), WINDOWS_BINARY_BUILD_WORKFLOWS),
|
||||
(jinja_env.get_template("macos_binary_build_workflow.yml.j2"), MACOS_BINARY_BUILD_WORKFLOWS),
|
||||
]
|
||||
# Delete the existing generated files first, this should align with .gitattributes file description.
|
||||
existing_workflows = GITHUB_DIR.glob("workflows/generated-*")
|
||||
|
.github/templates/common.yml.j2 (vendored): 4 lines changed
@@ -353,13 +353,15 @@ concurrency:
            ./build_docker.sh
{%- endmacro -%}

{%- macro setup_miniconda(python_version) -%}
{%- macro setup_miniconda(python_version, activate_environment=True) -%}
      - name: Setup miniconda
        uses: conda-incubator/setup-miniconda@v2
        with:
          auto-update-conda: true
          python-version: !{{ python_version }}
{%- if activate_environment %}
          activate-environment: build
{%- endif %}
{%- endmacro -%}

{%- macro set_xcode_version(xcode_version) -%}
.github/templates/macos_binary_build_workflow.yml.j2 (vendored, new file): 181 lines
@@ -0,0 +1,181 @@
|
|||
{% import 'common.yml.j2' as common %}
|
||||
|
||||
{%- block name -%}
|
||||
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
|
||||
# Generation script: .github/scripts/generate_ci_workflows.py
|
||||
name: !{{ build_environment }}
|
||||
{%- endblock %}
|
||||
|
||||
{%- macro binary_env(config) -%}
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: !{{ config["package_type"] }}
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
{%- if config["package_type"] == "libtorch" %}
|
||||
LIBTORCH_VARIANT: !{{ config["libtorch_variant"] }}
|
||||
DESIRED_DEVTOOLSET: !{{ config["devtoolset"] }}
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
{%- else %}
|
||||
DESIRED_PYTHON: "!{{ config["python_version"] }}"
|
||||
{%- endif %}
|
||||
{%- endmacro %}
|
||||
|
||||
{%- macro set_runner_specific_vars() -%}
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
{%- endmacro %}
|
||||
|
||||
on:
|
||||
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
|
||||
push:
|
||||
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
|
||||
branches:
|
||||
- nightly
|
||||
tags:
|
||||
# NOTE: Binary build pipelines should only get triggered on release candidate builds
|
||||
# Release candidate tags look like: v1.11.0-rc1
|
||||
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
|
||||
{%- for label in ciflow_config.labels | sort %}
|
||||
{%- if label != "ciflow/default" %}
|
||||
- '!{{ label }}/*'
|
||||
{%- endif %}
|
||||
{%- endfor %}
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# Needed for conda builds
|
||||
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
|
||||
ANACONDA_USER: pytorch
|
||||
AWS_DEFAULT_REGION: us-east-1
|
||||
BUILD_ENVIRONMENT: !{{ build_environment }}
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
IN_CI: 1
|
||||
IS_GHA: 1
|
||||
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
|
||||
PR_NUMBER: ${{ github.event.pull_request.number }}
|
||||
SKIP_ALL_TESTS: 1
|
||||
{%- if cross_compile_arm64 %}
|
||||
CROSS_COMPILE_ARM64: 1
|
||||
{% endif %}
|
||||
!{{ common.concurrency(build_environment) }}
|
||||
|
||||
jobs:
|
||||
{%- for config in build_configs %}
|
||||
!{{ config["build_name"] }}-build:
|
||||
runs-on: macos-10.15
|
||||
{%- if config["package_type"] == "libtorch" %}
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
{%- else %}
|
||||
timeout-minutes: !{{ common.timeout_minutes }}
|
||||
{%- endif %}
|
||||
!{{ binary_env(config) }}
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
!{{ set_runner_specific_vars() }}
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: !{{ config["build_name"] }}
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
!{{ config["build_name"] }}-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: !{{ config["build_name"] }}-build
|
||||
!{{ binary_env(config) }}
|
||||
steps:
|
||||
!{{ common.setup_ec2_linux() }}
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: !{{ config["build_name"] }}
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
!{{ common.teardown_ec2_linux() }}
|
||||
{%- endfor %}
|
||||
.github/workflows/generated-macos-arm64-binary-conda.yml (generated, vendored, new file): 575 lines
@@ -0,0 +1,575 @@
|
|||
# @generated DO NOT EDIT MANUALLY
|
||||
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
|
||||
# Generation script: .github/scripts/generate_ci_workflows.py
|
||||
name: macos-arm64-binary-conda
|
||||
|
||||
on:
|
||||
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
|
||||
push:
|
||||
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
|
||||
branches:
|
||||
- nightly
|
||||
tags:
|
||||
# NOTE: Binary build pipelines should only get triggered on release candidate builds
|
||||
# Release candidate tags look like: v1.11.0-rc1
|
||||
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
|
||||
- 'ciflow/binaries/*'
|
||||
- 'ciflow/binaries_conda/*'
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# Needed for conda builds
|
||||
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
|
||||
ANACONDA_USER: pytorch
|
||||
AWS_DEFAULT_REGION: us-east-1
|
||||
BUILD_ENVIRONMENT: macos-arm64-binary-conda
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
IN_CI: 1
|
||||
IS_GHA: 1
|
||||
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
|
||||
PR_NUMBER: ${{ github.event.pull_request.number }}
|
||||
SKIP_ALL_TESTS: 1
|
||||
CROSS_COMPILE_ARM64: 1
|
||||
|
||||
concurrency:
|
||||
group: macos-arm64-binary-conda-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
conda-py3_8-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_8-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_8-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_8-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: conda-py3_8-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
conda-py3_9-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_9-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_9-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_9-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: conda-py3_9-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
conda-py3_10-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_10-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_10-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_10-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: conda-py3_10-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
754
.github/workflows/generated-macos-arm64-binary-wheel.yml
generated
vendored
Normal file
754
.github/workflows/generated-macos-arm64-binary-wheel.yml
generated
vendored
Normal file
|
|
@ -0,0 +1,754 @@
|
|||
# @generated DO NOT EDIT MANUALLY
|
||||
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
|
||||
# Generation script: .github/scripts/generate_ci_workflows.py
|
||||
name: macos-arm64-binary-wheel
|
||||
|
||||
on:
|
||||
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
|
||||
push:
|
||||
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
|
||||
branches:
|
||||
- nightly
|
||||
tags:
|
||||
# NOTE: Binary build pipelines should only get triggered on release candidate builds
|
||||
# Release candidate tags look like: v1.11.0-rc1
|
||||
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
|
||||
- 'ciflow/binaries/*'
|
||||
- 'ciflow/binaries_wheel/*'
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# Needed for conda builds
|
||||
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
|
||||
ANACONDA_USER: pytorch
|
||||
AWS_DEFAULT_REGION: us-east-1
|
||||
BUILD_ENVIRONMENT: macos-arm64-binary-wheel
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
IN_CI: 1
|
||||
IS_GHA: 1
|
||||
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
|
||||
PR_NUMBER: ${{ github.event.pull_request.number }}
|
||||
SKIP_ALL_TESTS: 1
|
||||
CROSS_COMPILE_ARM64: 1
|
||||
|
||||
concurrency:
|
||||
group: macos-arm64-binary-wheel-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
wheel-py3_7-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_7-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_7-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_7-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_7-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
wheel-py3_8-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_8-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_8-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_8-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_8-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
wheel-py3_9-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_9-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_9-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_9-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_9-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
wheel-py3_10-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_10-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_10-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_10-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_10-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
752
.github/workflows/generated-macos-binary-conda.yml
generated
vendored
Normal file
752
.github/workflows/generated-macos-binary-conda.yml
generated
vendored
Normal file
|
|
@ -0,0 +1,752 @@
|
|||
# @generated DO NOT EDIT MANUALLY
|
||||
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
|
||||
# Generation script: .github/scripts/generate_ci_workflows.py
|
||||
name: macos-binary-conda
|
||||
|
||||
on:
|
||||
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
|
||||
push:
|
||||
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
|
||||
branches:
|
||||
- nightly
|
||||
tags:
|
||||
# NOTE: Binary build pipelines should only get triggered on release candidate builds
|
||||
# Release candidate tags look like: v1.11.0-rc1
|
||||
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
|
||||
- 'ciflow/binaries/*'
|
||||
- 'ciflow/binaries_conda/*'
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# Needed for conda builds
|
||||
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
|
||||
ANACONDA_USER: pytorch
|
||||
AWS_DEFAULT_REGION: us-east-1
|
||||
BUILD_ENVIRONMENT: macos-binary-conda
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
IN_CI: 1
|
||||
IS_GHA: 1
|
||||
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
|
||||
PR_NUMBER: ${{ github.event.pull_request.number }}
|
||||
SKIP_ALL_TESTS: 1
|
||||
concurrency:
|
||||
group: macos-binary-conda-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
conda-py3_7-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_7-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_7-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_7-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: conda-py3_7-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
conda-py3_8-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_8-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_8-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_8-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: conda-py3_8-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
conda-py3_9-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_9-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_9-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_9-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
}
echo "ami-id: $(get_ec2_metadata ami-id)"
echo "instance-id: $(get_ec2_metadata instance-id)"
echo "instance-type: $(get_ec2_metadata instance-type)"
- name: Log in to ECR
env:
AWS_RETRY_MODE: standard
AWS_MAX_ATTEMPTS: 5
run: |
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
retry () {
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
}
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
- name: Chown workspace
run: |
retry () {
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
}
retry docker pull "${ALPINE_IMAGE}"
# Ensure the working directory gets chowned back to the current user
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
- name: Clean workspace
run: |
rm -rf "${GITHUB_WORKSPACE}"
mkdir "${GITHUB_WORKSPACE}"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Preserve github env variables for use in docker
run: |
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
- name: Clone pytorch/pytorch
uses: actions/checkout@v2
- uses: actions/download-artifact@v2
name: Download Build Artifacts
with:
name: conda-py3_9-cpu
path: "${{ runner.temp }}/artifacts/"
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
run: |
# reference ends with an RC suffix
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
- name: Upload binaries
env:
PKG_DIR: "${{ runner.temp }}/artifacts"
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
run: |
docker run --rm -i \
-e ANACONDA_API_TOKEN \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e DRY_RUN \
-e PACKAGE_TYPE \
-e PKG_DIR=/artifacts \
-e UPLOAD_CHANNEL \
-e UPLOAD_SUBFOLDER \
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
-v "${GITHUB_WORKSPACE}:/v" \
-w /v \
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
bash -c '.circleci/scripts/binary_upload.sh'
- name: Hold runner for 2 hours or until ssh sessions have drained
# Always hold for active ssh sessions
if: always()
run: .github/scripts/wait_for_ssh_to_drain.sh
- name: Chown workspace
if: always()
run: |
# Ensure the working directory gets chowned back to the current user
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
- name: Kill containers, clean up images
if: always()
run: |
# ignore expansion of "docker ps -q" since it could be empty
# shellcheck disable=SC2046
docker stop $(docker ps -q) || true
# Prune all of the docker images
docker system prune -af
conda-py3_10-cpu-build:
runs-on: macos-10.15
timeout-minutes: 240
env:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
BUILDER_ROOT: ${{ github.workspace }}/builder
PACKAGE_TYPE: conda
SKIP_ALL_TESTS: 1
DESIRED_CUDA: cpu
DESIRED_PYTHON: "3.10"
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
steps:
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: conda-py3_10-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
conda-py3_10-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: conda-py3_10-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: conda
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: conda-py3_10-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
788
.github/workflows/generated-macos-binary-libtorch-cxx11-abi.yml
generated
vendored
Normal file
@ -0,0 +1,788 @@
# @generated DO NOT EDIT MANUALLY
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
# Generation script: .github/scripts/generate_ci_workflows.py
name: macos-binary-libtorch-cxx11-abi

on:
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
branches:
- nightly
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
- 'ciflow/binaries/*'
- 'ciflow/binaries_libtorch/*'
workflow_dispatch:

env:
# Needed for conda builds
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
ANACONDA_USER: pytorch
AWS_DEFAULT_REGION: us-east-1
BUILD_ENVIRONMENT: macos-binary-libtorch-cxx11-abi
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
IN_CI: 1
IS_GHA: 1
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
PR_NUMBER: ${{ github.event.pull_request.number }}
SKIP_ALL_TESTS: 1
concurrency:
group: macos-binary-libtorch-cxx11-abi-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true

jobs:
libtorch-cpu-shared-with-deps-cxx11-abi-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-with-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-shared-with-deps-cxx11-abi
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-shared-with-deps-cxx11-abi-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-shared-with-deps-cxx11-abi-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-with-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-shared-with-deps-cxx11-abi
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
libtorch-cpu-shared-without-deps-cxx11-abi-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-without-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-shared-without-deps-cxx11-abi
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-shared-without-deps-cxx11-abi-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-shared-without-deps-cxx11-abi-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-without-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-shared-without-deps-cxx11-abi
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
libtorch-cpu-static-with-deps-cxx11-abi-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-with-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-static-with-deps-cxx11-abi
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-static-with-deps-cxx11-abi-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-static-with-deps-cxx11-abi-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-with-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-static-with-deps-cxx11-abi
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
libtorch-cpu-static-without-deps-cxx11-abi-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-without-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-static-without-deps-cxx11-abi
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-static-without-deps-cxx11-abi-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-static-without-deps-cxx11-abi-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-without-deps
|
||||
DESIRED_DEVTOOLSET: cxx11-abi
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-static-without-deps-cxx11-abi
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
788
.github/workflows/generated-macos-binary-libtorch-pre-cxx11.yml
generated
vendored
Normal file
@ -0,0 +1,788 @@
# @generated DO NOT EDIT MANUALLY
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
# Generation script: .github/scripts/generate_ci_workflows.py
name: macos-binary-libtorch-pre-cxx11

on:
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
branches:
- nightly
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
- 'ciflow/binaries/*'
- 'ciflow/binaries_libtorch/*'
workflow_dispatch:

env:
# Needed for conda builds
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
ANACONDA_USER: pytorch
AWS_DEFAULT_REGION: us-east-1
BUILD_ENVIRONMENT: macos-binary-libtorch-pre-cxx11
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
IN_CI: 1
IS_GHA: 1
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
PR_NUMBER: ${{ github.event.pull_request.number }}
SKIP_ALL_TESTS: 1
concurrency:
group: macos-binary-libtorch-pre-cxx11-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true

jobs:
libtorch-cpu-shared-with-deps-pre-cxx11-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-with-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-shared-with-deps-pre-cxx11
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-shared-with-deps-pre-cxx11-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-shared-with-deps-pre-cxx11-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-with-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-shared-with-deps-pre-cxx11
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
libtorch-cpu-shared-without-deps-pre-cxx11-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-without-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-shared-without-deps-pre-cxx11
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-shared-without-deps-pre-cxx11-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-shared-without-deps-pre-cxx11-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: shared-without-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-shared-without-deps-pre-cxx11
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
libtorch-cpu-static-with-deps-pre-cxx11-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-with-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-static-with-deps-pre-cxx11
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-static-with-deps-pre-cxx11-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-static-with-deps-pre-cxx11-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-with-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-static-with-deps-pre-cxx11
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
libtorch-cpu-static-without-deps-pre-cxx11-build:
|
||||
runs-on: macos-10.15
|
||||
# libtorch builds take a long time on github hosted runners
|
||||
timeout-minutes: 720
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-without-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: libtorch-cpu-static-without-deps-pre-cxx11
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
libtorch-cpu-static-without-deps-pre-cxx11-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: libtorch-cpu-static-without-deps-pre-cxx11-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: libtorch
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
LIBTORCH_VARIANT: static-without-deps
|
||||
DESIRED_DEVTOOLSET: pre-cxx11
|
||||
# This is a dummy value for libtorch to work correctly with our batch scripts
|
||||
# without this value pip does not get installed for some reason
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: libtorch-cpu-static-without-deps-pre-cxx11
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
752
.github/workflows/generated-macos-binary-wheel.yml
generated
vendored
Normal file
@ -0,0 +1,752 @@
# @generated DO NOT EDIT MANUALLY
|
||||
# Template is at: .github/templates/macos_binary_build_workflow.yml.j2
|
||||
# Generation script: .github/scripts/generate_ci_workflows.py
|
||||
name: macos-binary-wheel
|
||||
|
||||
on:
|
||||
# TODO: Migrate to new ciflow trigger, reference https://github.com/pytorch/pytorch/pull/70321
|
||||
push:
|
||||
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
|
||||
branches:
|
||||
- nightly
|
||||
tags:
|
||||
# NOTE: Binary build pipelines should only get triggered on release candidate builds
|
||||
# Release candidate tags look like: v1.11.0-rc1
|
||||
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
|
||||
- 'ciflow/binaries/*'
|
||||
- 'ciflow/binaries_wheel/*'
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# Needed for conda builds
|
||||
ALPINE_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine"
|
||||
ANACONDA_USER: pytorch
|
||||
AWS_DEFAULT_REGION: us-east-1
|
||||
BUILD_ENVIRONMENT: macos-binary-wheel
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
IN_CI: 1
|
||||
IS_GHA: 1
|
||||
PR_LABELS: ${{ toJson(github.event.pull_request.labels.*.name) }}
|
||||
PR_NUMBER: ${{ github.event.pull_request.number }}
|
||||
SKIP_ALL_TESTS: 1
|
||||
concurrency:
|
||||
group: macos-binary-wheel-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
wheel-py3_7-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.7"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_7-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_7-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_7-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.7"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_7-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
wheel-py3_8-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_8-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_8-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_8-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.8"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_8-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
wheel-py3_9-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_9-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_9-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_9-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.9"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_9-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
wheel-py3_10-cpu-build:
|
||||
runs-on: macos-10.15
|
||||
timeout-minutes: 240
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
# For sccache access (only on non-forked PRs)
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
|
||||
steps:
|
||||
# NOTE: These environment variables are put here so that they can be applied on every job equally
|
||||
# They are also here because setting them at a workflow level doesn't give us access to the
|
||||
# runner.temp variable, which we need.
|
||||
- name: Populate binary env
|
||||
shell: bash
|
||||
run: |
|
||||
# shellcheck disable=SC2129
|
||||
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
|
||||
# shellcheck disable=SC2129
|
||||
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
|
||||
- name: Install conda and dependencies
|
||||
run: |
|
||||
# Install conda, setup-miniconda messes with the path that messes with the ruby stuff we do later on
|
||||
curl --retry 3 -o "${RUNNER_TEMP}/conda.sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x "${RUNNER_TEMP}/conda.sh"
|
||||
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
|
||||
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: ${{ env.PYTORCH_ROOT }}
|
||||
submodules: recursive
|
||||
- name: Clone pytorch/builder
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
repository: pytorch/builder
|
||||
path: ${{ env.BUILDER_ROOT }}
|
||||
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
|
||||
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
|
||||
run: |
|
||||
sudo curl --retry 3 https://s3.amazonaws.com/ossci-macos/sccache_v2.15 --output /usr/local/bin/sccache
|
||||
sudo chmod +x /usr/local/bin/sccache
|
||||
echo "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> "${GITHUB_ENV}"
|
||||
- name: Populate binary env
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
|
||||
- name: Build PyTorch binary
|
||||
run: |
|
||||
# shellcheck disable=SC1091
|
||||
source "${RUNNER_TEMP}/anaconda/bin/activate"
|
||||
"${PYTORCH_ROOT}/.circleci/scripts/binary_macos_build.sh"
|
||||
- uses: actions/upload-artifact@v2
|
||||
if: always()
|
||||
with:
|
||||
name: wheel-py3_10-cpu
|
||||
retention-days: 14
|
||||
if-no-files-found: error
|
||||
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
|
||||
wheel-py3_10-cpu-upload: # Uploading
|
||||
runs-on: linux.2xlarge # self hosted runner to download ec2 artifacts
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
needs: wheel-py3_10-cpu-build
|
||||
env:
|
||||
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
|
||||
BUILDER_ROOT: ${{ github.workspace }}/builder
|
||||
PACKAGE_TYPE: wheel
|
||||
SKIP_ALL_TESTS: 1
|
||||
DESIRED_CUDA: cpu
|
||||
DESIRED_PYTHON: "3.10"
|
||||
steps:
|
||||
- name: Display EC2 information
|
||||
shell: bash
|
||||
run: |
|
||||
set -euo pipefail
|
||||
function get_ec2_metadata() {
|
||||
# Pulled from instance metadata endpoint for EC2
|
||||
# see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
|
||||
category=$1
|
||||
curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
|
||||
}
|
||||
echo "ami-id: $(get_ec2_metadata ami-id)"
|
||||
echo "instance-id: $(get_ec2_metadata instance-id)"
|
||||
echo "instance-type: $(get_ec2_metadata instance-type)"
|
||||
- name: Log in to ECR
|
||||
env:
|
||||
AWS_RETRY_MODE: standard
|
||||
AWS_MAX_ATTEMPTS: 5
|
||||
run: |
|
||||
AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\")
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \
|
||||
--password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
|
||||
- name: Chown workspace
|
||||
run: |
|
||||
retry () {
|
||||
"$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
|
||||
}
|
||||
retry docker pull "${ALPINE_IMAGE}"
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --pull=never --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Clean workspace
|
||||
run: |
|
||||
rm -rf "${GITHUB_WORKSPACE}"
|
||||
mkdir "${GITHUB_WORKSPACE}"
|
||||
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
|
||||
uses: seemethere/add-github-ssh-key@v1
|
||||
with:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Preserve github env variables for use in docker
|
||||
run: |
|
||||
env | grep '^GITHUB' > "/tmp/github_env_${GITHUB_RUN_ID}"
|
||||
- name: Clone pytorch/pytorch
|
||||
uses: actions/checkout@v2
|
||||
- uses: actions/download-artifact@v2
|
||||
name: Download Build Artifacts
|
||||
with:
|
||||
name: wheel-py3_10-cpu
|
||||
path: "${{ runner.temp }}/artifacts/"
|
||||
- name: Set DRY_RUN (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
|
||||
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
|
||||
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/')}}
|
||||
run: |
|
||||
# reference ends with an RC suffix
|
||||
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
|
||||
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
|
||||
fi
|
||||
- name: Upload binaries
|
||||
env:
|
||||
PKG_DIR: "${{ runner.temp }}/artifacts"
|
||||
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
|
||||
# When running these on pull_request events these should be blank
|
||||
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_SECRET_KEY }}
|
||||
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
run: |
|
||||
docker run --rm -i \
|
||||
-e ANACONDA_API_TOKEN \
|
||||
-e AWS_ACCESS_KEY_ID \
|
||||
-e AWS_SECRET_ACCESS_KEY \
|
||||
-e DRY_RUN \
|
||||
-e PACKAGE_TYPE \
|
||||
-e PKG_DIR=/artifacts \
|
||||
-e UPLOAD_CHANNEL \
|
||||
-e UPLOAD_SUBFOLDER \
|
||||
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
|
||||
-v "${GITHUB_WORKSPACE}:/v" \
|
||||
-w /v \
|
||||
308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/miniconda3:4.10.3 \
|
||||
bash -c '.circleci/scripts/binary_upload.sh'
|
||||
- name: Hold runner for 2 hours or until ssh sessions have drained
|
||||
# Always hold for active ssh sessions
|
||||
if: always()
|
||||
run: .github/scripts/wait_for_ssh_to_drain.sh
|
||||
- name: Chown workspace
|
||||
if: always()
|
||||
run: |
|
||||
# Ensure the working directory gets chowned back to the current user
|
||||
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
|
||||
- name: Kill containers, clean up images
|
||||
if: always()
|
||||
run: |
|
||||
# ignore expansion of "docker ps -q" since it could be empty
|
||||
# shellcheck disable=SC2046
|
||||
docker stop $(docker ps -q) || true
|
||||
# Prune all of the docker images
|
||||
docker system prune -af
|
||||
@ -30,6 +30,8 @@ struct TORCH_API FuncTorchTLSBase {
  // This is a hook to get into functorch -- functorch will determine
  // if it should raise an error message
  virtual int64_t checkSupportsAutogradFunction() const = 0;
  virtual void checkSupportsInplaceRequiresGrad() const = 0;
  virtual void checkSupportsRetainGrad() const = 0;
};

// returns deepcopy of the functorch tls
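A hedged sketch of how a caller could exercise the new hook follows; `getFuncTorchTLS()` is a placeholder name for whatever accessor returns the installed TLS object and is not an API confirmed by this diff:

// Hypothetical call site guarding an autograd.Function entry point.
// getFuncTorchTLS() is an assumed accessor; only the hook itself comes from this change.
const std::shared_ptr<FuncTorchTLSBase>& tls = getFuncTorchTLS();
if (tls != nullptr) {
  // The hook throws if the active functorch transform does not support autograd.Function.
  tls->checkSupportsAutogradFunction();
}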
@ -45,6 +45,44 @@ struct TORCH_API SparseCsrTensorImpl : public TensorImpl {
  const Tensor& values() const { return values_; }
  int nnz() { return values_.size(0); }

  /**
   * Return a TensorImpl that is a shallow-copy of this TensorImpl.
   *
   * For usage of `version_counter` and `allow_tensor_metadata_change`,
   * see NOTE [ TensorImpl Shallow-Copying ].
   */
  c10::intrusive_ptr<TensorImpl> shallow_copy_and_detach(
      const c10::VariableVersion& version_counter,
      bool allow_tensor_metadata_change) const override {
    auto impl = c10::make_intrusive<SparseCsrTensorImpl>(key_set(), dtype());
    copy_tensor_metadata(
        /*src_impl=*/this,
        /*dest_impl=*/impl.get(),
        /*version_counter=*/version_counter,
        /*allow_tensor_metadata_change=*/allow_tensor_metadata_change);
    impl->refresh_numel();
    return impl;
  }

  /**
   * Return a TensorImpl that is a shallow-copy of this TensorImpl.
   *
   * For usage of `version_counter` and `allow_tensor_metadata_change`,
   * see NOTE [ TensorImpl Shallow-Copying ].
   */
  c10::intrusive_ptr<TensorImpl> shallow_copy_and_detach(
      c10::VariableVersion&& version_counter,
      bool allow_tensor_metadata_change) const override {
    auto impl = c10::make_intrusive<SparseCsrTensorImpl>(key_set(), dtype());
    copy_tensor_metadata(
        /*src_impl=*/this,
        /*dest_impl=*/impl.get(),
        /*version_counter=*/std::move(version_counter),
        /*allow_tensor_metadata_change=*/allow_tensor_metadata_change);
    impl->refresh_numel();
    return impl;
  }

 private:
  explicit SparseCsrTensorImpl(
      at::DispatchKeySet key_set,
@ -52,5 +90,24 @@ struct TORCH_API SparseCsrTensorImpl : public TensorImpl {
      at::Tensor crow_indices,
      at::Tensor col_indices,
      at::Tensor values);

  /**
   * Copy the tensor metadata fields (e.g. sizes / strides / storage pointer / storage_offset)
   * from one TensorImpl to another TensorImpl.
   *
   * For usage of `version_counter` and `allow_tensor_metadata_change`, see NOTE [ TensorImpl Shallow-Copying ].
   */
  static void copy_tensor_metadata(
      const SparseCsrTensorImpl* src_sparse_impl,
      SparseCsrTensorImpl* dest_sparse_impl,
      const c10::VariableVersion& version_counter,
      bool allow_tensor_metadata_change) {
    TensorImpl::copy_tensor_metadata(src_sparse_impl, dest_sparse_impl, version_counter, allow_tensor_metadata_change);

    // Sparse-specific fields
    dest_sparse_impl->crow_indices_ = src_sparse_impl->crow_indices();
    dest_sparse_impl->col_indices_ = src_sparse_impl->col_indices();
    dest_sparse_impl->values_ = src_sparse_impl->values();
  }
};
} // namespace at
@@ -427,27 +427,6 @@ Tensor& eye_out_cpu(int64_t n, int64_t m, Tensor& result) {

namespace {

// The ZeroTensor allocator ignores whatever allocation is requested and always
// gives you nullptr
struct ZeroTensorAllocator final : public at::Allocator {
  ZeroTensorAllocator(at::Device device) : device_(device) {};
  ~ZeroTensorAllocator() override = default;
  static void deleter(void* const pointer) {
    TORCH_INTERNAL_ASSERT(!pointer);
  }
  DataPtr allocate(const size_t nbytes) const override {
    return {nullptr, nullptr, &deleter, device_};
  }
  DeleterFnPtr raw_deleter() const override {
    return deleter;
  }
  at::Device device_;
};

at::Allocator* GetZeroTensorAllocator(ZeroTensorAllocator& zt) {
  return &zt;
}

// Performs dtype inference for full
TensorOptions infer_full_options(
    const Scalar& fill_value,

@@ -1074,11 +1053,11 @@ Tensor _efficientzerotensor(IntArrayRef size,
    c10::optional<Device> device,
    c10::optional<bool> pin_memory) {
  auto device_ = device_or_default(device);
  auto allocator = ZeroTensorAllocator(device_);
  auto allocator = at::native::ZeroTensorAllocator(device_);
  auto dtype_ = dtype_or_default(dtype);
  constexpr auto zero_ks = at::DispatchKeySet(at::DispatchKey::ZeroTensor);
  return at::detail::empty_generic(
      size, &allocator, zero_ks, dtype_, c10::nullopt);
  auto zero_ks = at::DispatchKeySet(c10::DispatchKey::CPU) | at::DispatchKeySet(c10::DispatchKey::ZeroTensor);
  auto out = at::detail::empty_generic(size, &allocator, zero_ks, dtype_, c10::nullopt);
  return out;
}

Tensor& zeros_out(IntArrayRef size, Tensor& result) {

@@ -87,6 +87,23 @@ inline void check_supported_max_int_with_precision(int64_t n, const Tensor& tens
  }
}

// The ZeroTensor allocator ignores whatever allocation is requested and always
// gives you nullptr
struct ZeroTensorAllocator final : public at::Allocator {
  ZeroTensorAllocator(at::Device device) : device_(device) {};
  ~ZeroTensorAllocator() override = default;
  static void deleter(void* const pointer) {
    TORCH_INTERNAL_ASSERT(!pointer);
  }
  DataPtr allocate(const size_t nbytes) const override {
    return {nullptr, nullptr, &deleter, device_};
  }
  DeleterFnPtr raw_deleter() const override {
    return deleter;
  }
  at::Device device_;
};

using binary_fn = void (*)(TensorIterator&);

DECLARE_DISPATCH(binary_fn, complex_stub);

@@ -417,14 +417,14 @@ ctc_loss_backward_collect_nonblank_gpu_kernel(scalar_t* __restrict__ gradient_da
    const scalar_t* __restrict__ grad_out_data, int64_t grad_out_batch_stride,
    const scalar_t* __restrict__ log_alpha_data, const scalar_t* __restrict__ log_beta_data,
    const scalar_t*log_probs_data, const int64_t* __restrict__ input_lengths,
    const target_t* __restrict__ targets_data, const int64_t* __restrict__ target_lengths, int64_t max_target_length,
    const target_t* __restrict__ targets_data, const int64_t* __restrict__ target_lengths,
    const scalar_t* __restrict__ neg_log_likelihood_data,
    int64_t gr_input_stride, int64_t gr_batch_stride, int64_t gr_char_stride,
    int64_t lp_input_stride, int64_t lp_batch_stride, int64_t lp_char_stride,
    int64_t la_batch_stride, int64_t la_input_stride, int64_t la_target_stride,
    int64_t lb_batch_stride, int64_t lb_input_stride, int64_t lb_target_stride,
    const int64_t* __restrict__ tg_batch_offsets, int64_t tg_target_stride,
    int64_t batch_size, int64_t num_labels, int64_t BLANK, bool zero_infinity) {
    int64_t batch_size, bool zero_infinity) {
  int64_t b = threadIdx.y + blockIdx.y * blockDim.y;
  int64_t s = threadIdx.x + blockIdx.x * blockDim.x; // note, this directly indexes into targets, not targets prime!

@@ -676,14 +676,14 @@ Tensor ctc_loss_backward_gpu_template(const Tensor& grad_out, const Tensor& log_
      grad_out.data_ptr<scalar_t>(), grad_out.stride(0),
      log_alpha.data_ptr<scalar_t>(), log_beta.data_ptr<scalar_t>(),
      log_probs.data_ptr<scalar_t>(), input_lengths_t.data_ptr<int64_t>(),
      targets.data_ptr<target_t>(), target_lengths_t.data_ptr<int64_t>(), max_target_length,
      targets.data_ptr<target_t>(), target_lengths_t.data_ptr<int64_t>(),
      neg_log_likelihood.data_ptr<scalar_t>(),
      grad.stride(0), grad.stride(1), grad.stride(2),
      log_probs.stride(0), log_probs.stride(1), log_probs.stride(2),
      log_alpha.stride(0), log_alpha.stride(1), log_alpha.stride(2),
      log_beta.stride(0), log_beta.stride(1), log_beta.stride(2),
      tg_batch_offsets.data_ptr<int64_t>(), tg_target_stride,
      batch_size, num_labels, BLANK, zero_infinity);
      batch_size, zero_infinity);
  C10_CUDA_KERNEL_LAUNCH_CHECK();
} else { // small problem, use naive algorithm
  // Still no block/grid configuration guru...

@@ -40,6 +40,23 @@ Tensor empty_cuda(IntArrayRef size, c10::optional<ScalarType> dtype_opt, c10::op
  return at::detail::empty_cuda(size, dtype_opt, layout_opt, device_opt, pin_memory_opt, memory_format_opt);
}

Tensor _efficientzerotensor_cuda(IntArrayRef size,
    c10::optional<ScalarType> dtype,
    c10::optional<Layout> layout,
    c10::optional<Device> device,
    c10::optional<bool> pin_memory) {
  auto device_ = device_or_default(device);
  if (!device_.has_index()) {
    device_.set_index(at::cuda::current_device());
  }
  auto allocator = at::native::ZeroTensorAllocator(device_);
  auto dtype_ = dtype_or_default(dtype);
  auto zero_ks = at::DispatchKeySet(c10::DispatchKey::CUDA) | at::DispatchKeySet(c10::DispatchKey::ZeroTensor);
  auto out = at::detail::empty_generic(size, &allocator, zero_ks, dtype_, c10::nullopt);
  return out;
}


Tensor empty_strided_cuda(IntArrayRef size, IntArrayRef stride, c10::optional<ScalarType> dtype_opt, c10::optional<Layout> layout_opt, c10::optional<Device> device_opt, c10::optional<bool> pin_memory_opt) {
  return at::detail::empty_strided_cuda(size, stride, dtype_opt, layout_opt, device_opt, pin_memory_opt);
}

@@ -4809,7 +4809,8 @@

- func: _efficientzerotensor(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  dispatch:
    CompositeExplicitAutograd: _efficientzerotensor
    CPU: _efficientzerotensor
    CUDA: _efficientzerotensor_cuda

- func: zeros(int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor

@@ -328,9 +328,9 @@ TEST(StaticRuntime, CanEnableStaticRuntime) {
  )JIT";

  EXPECT_TRUE(testCanEnableStaticRuntime(reshape_inplace_script));
  EXPECT_FALSE(testCanEnableStaticRuntime(for_script));
  EXPECT_FALSE(testCanEnableStaticRuntime(while_script));
  EXPECT_FALSE(testCanEnableStaticRuntime(if_script));
  EXPECT_TRUE(testCanEnableStaticRuntime(for_script));
  EXPECT_TRUE(testCanEnableStaticRuntime(while_script));
  EXPECT_TRUE(testCanEnableStaticRuntime(if_script));
  EXPECT_FALSE(testCanEnableStaticRuntime(is_script));
  EXPECT_FALSE(testCanEnableStaticRuntime(is_not_script));
}

@@ -1429,3 +1429,62 @@ TEST(StaticModule, NotEnoughArgs) {
  )JIT";
  testStaticModuleThrows(kwargs_src, {}, {});
}

TEST(CreateOwnedRefsForSpecialValues, TopLevel) {
  const auto src = R"IR(
    graph():
        %c: int = prim::Constant[value=42]()
        return (%c)
  )IR";

  auto graph = getGraphFromIR(src);
  CreateOwnedRefsForSpecialValues(*graph);
  EXPECT_TRUE(hasNodeWithKind(graph, "static_runtime::create_owned_ref"));
}

TEST(CreateOwnedRefsForSpecialValues, ValueFromOuterScope) {
  const auto src = R"IR(
    graph(%cond: bool, %1: int):
        %c: int = aten::add(%1, %1)
        %x: int = prim::If(%c)
          block0():
            -> (%c)
          block1():
            -> (%c)
        return (%x)
  )IR";

  auto graph = getGraphFromIR(src);
  CreateOwnedRefsForSpecialValues(*graph);
  EXPECT_TRUE(hasNodeWithKind(graph, "static_runtime::create_owned_ref"));
}

TEST(ForceNonEmptyOutputs, TwoSubBlocks) {
  const auto src = R"IR(
    graph(%cond: bool):
        %lst : int[] = prim::ListConstruct()
        %1 : int = prim::Constant[value=1]()
        %2 : int = prim::Constant[value=2]()
        prim::If(%cond)
          block0():
            aten::append(%lst, %1)
            -> ()
          block1():
            aten::append(%lst, %2)
            -> ()
        return (%lst)
  )IR";

  auto graph = getGraphFromIR(src);
  ForceNonEmptyOutputs(*graph);

  for (auto* node : graph->nodes()) {
    if (node->blocks().empty()) {
      continue;
    }
    EXPECT_EQ(node->outputs().size(), 1);
    for (auto* sub_block : node->blocks()) {
      EXPECT_EQ(sub_block->outputs().size(), 1);
    }
  }
}

@@ -2432,3 +2432,149 @@ TEST(StaticRuntime, Int) {
  std::vector<IValue> args{at::tensor({3.14})};
  testStaticRuntime(src, args);
}

TEST(StaticRuntime, ReturnConstant) {
  const auto src = R"JIT(
    def forward(self):
        return 1
  )JIT";

  testStaticRuntime(src, {});
}

TEST(StaticRuntime, SimpleIf) {
  const auto src = R"JIT(
    def forward(self, cond: bool, x):
        if cond:
            return torch.mul(x, 42).clone()
        else:
            return x.clone()
  )JIT";

  std::vector<IValue> args_false{false, at::randn({1})};
  std::vector<IValue> args_true{true, at::randn({1})};
  std::vector<IValue> args_big_tensor{true, at::randn({3, 3, 3})};

  testStaticRuntime(src, args_false);
  testStaticRuntime(src, args_true);
  testStaticRuntime(src, args_true, args_big_tensor);
}

TEST(StaticRuntime, NestedIf) {
  const auto src = R"JIT(
    def forward(self, cond1: bool, cond2: bool, x):
        y = x * 42
        if cond1:
            y = y * y
            if cond2:
                y += x
        else:
            if cond2:
                return x.clone()

        return y.clone()
  )JIT";

  for (auto cond1 : {true, false}) {
    for (auto cond2 : {true, false}) {
      std::vector<IValue> args1{cond1, cond2, at::randn({1})};
      std::vector<IValue> args2{cond1, cond2, at::randn({3, 3, 3})};
      testStaticRuntime(src, args1, args2);
    }
  }
}

TEST(StaticRuntime, DeeplyNestedIf) {
  const auto src = R"JIT(
    def forward(self, cond1: bool, cond2: bool, cond3: bool, x):
        y = x * 42
        if cond1:
            y = y * y
            if cond2:
                y += x

            if cond2 and cond3:
                y += 1

            if cond2:
                if cond3:
                    y += 2
                else:
                    y = y * y
                    y += 4
        else:
            if cond2:
                return x.clone()
            if cond3 or cond2:
                y += 42

        return y.clone()
  )JIT";

  for (auto cond1 : {true, false}) {
    for (auto cond2 : {true, false}) {
      for (auto cond3 : {true, false}) {
        std::vector<IValue> args1{cond1, cond2, cond3, at::randn({1})};
        std::vector<IValue> args2{cond1, cond2, cond3, at::randn({3, 3, 3})};
        testStaticRuntime(src, args1, args2);
      }
    }
  }
}

TEST(StaticRuntime, BasicForLoop) {
  const auto src = R"JIT(
    def forward(self, x, loop_max: int):
        y = x.clone()
        for i in range(loop_max):
            y += 1
        return y
  )JIT";

  std::vector<IValue> args1{at::randn({1}), 10};
  std::vector<IValue> args2{at::randn({3, 3, 3}), 10};

  testStaticRuntime(src, args1, args2);
}

TEST(StaticRuntime, BasicWhileLoop) {
  const auto src = R"JIT(
    def forward(self, x, loop_max: int):
        y = x.clone()
        loop_count = 0
        while loop_count < loop_max:
            y += 1
            loop_count += 1
        return y
  )JIT";

  std::vector<IValue> args1{at::randn({1}), 10};
  std::vector<IValue> args2{at::randn({3, 3, 3}), 10};

  testStaticRuntime(src, args1, args2);
}

TEST(StaticRuntime, NestedLoops) {
  const auto src = R"JIT(
    def forward(self, x, loop_max: int):
        y = x.clone()
        even: List[int] = []
        odd: List[int] = []

        for i in range(loop_max):
            if i % 2:
                odd.append(i)
            else:
                even.append(i)

            for j in range(i):
                y += 1

        return y, even, odd
  )JIT";

  std::vector<IValue> args1{at::randn({1}), 10};
  std::vector<IValue> args2{at::randn({3, 3, 3}), 10};

  testStaticRuntime(src, args1, args2);
}

@@ -6,6 +6,7 @@
#include <gtest/gtest.h>
#include <torch/csrc/jit/ir/irparser.h>
#include <torch/csrc/jit/runtime/graph_executor.h>
#include <torch/csrc/jit/runtime/graph_iterator.h>
#include <torch/csrc/jit/runtime/static/impl.h>
#include <torch/csrc/jit/runtime/static/memory_planner.h>
#include <torch/csrc/jit/runtime/static/passes.h>

@@ -220,10 +221,25 @@ Node* getNodeWithKind(const StaticModule& smodule, const std::string& kind) {
  return smodule.findNodeWithKindForTesting(kind);
}

Node* getNodeWithKind(std::shared_ptr<Graph>& graph, const std::string& kind) {
  const auto symbol = c10::Symbol::fromQualString(kind);
  DepthFirstGraphNodeIterator it(graph);
  for (auto* node = it.next(); node != nullptr; node = it.next()) {
    if (node->kind() == symbol) {
      return node;
    }
  }
  return nullptr;
}

bool hasNodeWithKind(const StaticModule& smodule, const std::string& kind) {
  return getNodeWithKind(smodule, kind) != nullptr;
}

bool hasNodeWithKind(std::shared_ptr<Graph>& graph, const std::string& kind) {
  return getNodeWithKind(graph, kind) != nullptr;
}

std::shared_ptr<Graph> getGraphFromScript(const std::string& jit_script) {
  script::Module module("module");
  module.define(jit_script);

@@ -41,8 +41,10 @@ bool hasProcessedNodeWithName(
at::Tensor getTensor(const at::IValue& ival);

Node* getNodeWithKind(const StaticModule& smodule, const std::string& kind);
Node* getNodeWithKind(std::shared_ptr<Graph>& graph, const std::string& kind);

bool hasNodeWithKind(const StaticModule& smodule, const std::string& kind);
bool hasNodeWithKind(std::shared_ptr<Graph>& graph, const std::string& kind);

void compareResultsWithJIT(
    StaticRuntime& runtime,

@@ -280,14 +280,17 @@ class SerializationMixin(object):
                self.assertEqual(i, j)

    def test_serialization_sparse(self):
        def _test_serialization(conversion):
            x = torch.zeros(3, 3)
            x[1][1] = 1
            x = x.to_sparse()
            x = conversion(x)
            with tempfile.NamedTemporaryFile() as f:
                torch.save({"tensor": x}, f)
                f.seek(0)
                y = torch.load(f)
                self.assertEqual(x, y["tensor"])
        _test_serialization(lambda x: x.to_sparse())
        _test_serialization(lambda x: x.to_sparse_csr())

    def test_serialization_sparse_invalid(self):
        x = torch.zeros(3, 3)

@@ -318,6 +321,36 @@ class SerializationMixin(object):
                    "size is inconsistent with indices"):
                y = torch.load(f)

    def test_serialization_sparse_csr_invalid(self):
        x = torch.zeros(3, 3)
        x[1][1] = 1
        x = x.to_sparse_csr()

        class TensorSerializationSpoofer(object):
            def __init__(self, tensor):
                self.tensor = tensor

            def __reduce_ex__(self, proto):
                invalid_crow_indices = self.tensor.crow_indices().clone()
                invalid_crow_indices[0] = 3
                return (
                    torch._utils._rebuild_sparse_tensor,
                    (
                        self.tensor.layout,
                        (
                            invalid_crow_indices,
                            self.tensor.col_indices(),
                            self.tensor.values(),
                            self.tensor.size())))

        with tempfile.NamedTemporaryFile() as f:
            torch.save({"spoofed": TensorSerializationSpoofer(x)}, f)
            f.seek(0)
            with self.assertRaisesRegex(
                    RuntimeError,
                    "rebuilding sparse tensor for layout torch.sparse_csr"):
                y = torch.load(f)

    def test_serialize_device(self):
        device_str = ['cpu', 'cpu:0', 'cuda', 'cuda:0']
        device_obj = [torch.device(d) for d in device_str]

@@ -1367,6 +1367,21 @@ class TestSparseCSR(TestCase):
                    run_test(shape, max(shape), index_dtype, dim0, dim1)
                    run_test(shape, shape[0] * shape[1], index_dtype, dim0, dim1)

    # TODO: This is a stopgap for a rigorous extension of our autograd tests
    # to test the functionality of detach
    @skipMeta
    @dtypes(*get_all_dtypes())
    def test_exercise_detach(self, device, dtype):
        shape = (3, 3)
        nnz = 4
        for index_dtype in [torch.int32, torch.int64]:
            inp = self.genSparseCSRTensor(shape, nnz, dtype=dtype, device=device, index_dtype=index_dtype)
            detached_inp = inp.detach()
            self.assertEqual(inp.values(), detached_inp.values())
            self.assertEqual(inp.crow_indices(), detached_inp.crow_indices())
            self.assertEqual(inp.col_indices(), detached_inp.col_indices())


# e.g., TestSparseCSRCPU and TestSparseCSRCUDA
instantiate_device_type_tests(TestSparseCSR, globals())

@@ -5477,6 +5477,10 @@ class TestDevicePrecision(TestCase):
        actual = x[..., :1].clamp(lb, ub)
        self.assertEqual(expect, actual)

    def test_cuda_device_idx(self, device):
        x = torch.zeros(3, device=device)
        y = torch._efficientzerotensor(3, device=device)
        self.assertEqual(x.device, y.device)

# we implemented custom deallocation for subclasses, so it behooves
# us to make sure all of these bits work. We'll use __del__ to

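Note: the CPU/CUDA dispatch split for `_efficientzerotensor` and the device-index fix in `_efficientzerotensor_cuda` are exactly what `test_cuda_device_idx` exercises. A minimal, hedged Python sketch of the same check follows; `torch._efficientzerotensor` is an internal, underscore-prefixed factory, so this is illustrative rather than a stable API.

import torch

# Mirrors test_cuda_device_idx above: the efficient zero tensor should land
# on the same fully-indexed device as an ordinary torch.zeros call.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.zeros(3, device=device)
y = torch._efficientzerotensor(3, device=device)
assert x.device == y.device
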
@@ -37,6 +37,7 @@
#include "torch/csrc/autograd/python_return_types.h"

#include <ATen/core/Tensor.h>
#include <ATen/FuncTorchTLS.h>
#include "c10/util/Optional.h"
#include "c10/core/Stream.h"

@@ -790,6 +791,12 @@ static PyObject * THPVariable_requires_grad_(PyObject* self, PyObject* args, PyO
    return handle_torch_function(r, self, args, kwargs, THPVariableClass, "torch.Tensor");
  }

  // temporary hack to improve functorch UX.
  const auto& functorch_tls = at::functorch::functorchTLSAccessor();
  if (functorch_tls) {
    functorch_tls->checkSupportsInplaceRequiresGrad();
  }

  auto requires_grad = r.toBool(0);
  // should we throw if requires_grad is true? var.requires_grad = True throws here
  // but it's nice to let this be a no-op.

@@ -38,6 +38,8 @@ CONSTANT_LIST = CodeTemplate("""std::vector<c10::IValue>({
    ${constant_list}
}), // constants list""")

CONSTANTS_LIST_EMPTY = """std::vector<c10::IValue>(), // constants list"""

ONE_TYPE = CodeTemplate("""c10::parseType("${type_str}"),""")

TYPE_LIST = CodeTemplate("""std::vector<c10::TypePtr>({

@@ -181,6 +183,8 @@ def construct_constants(constants_list_from_yaml: List[Any]) -> str:
                constant=convert_constant
            )
        )
    if len(constants_list_part) == 0:
        return CONSTANTS_LIST_EMPTY
    return CONSTANT_LIST.substitute(constant_list="".join(constants_list_part).lstrip("\n"))

def construct_operators(operator_list_from_yaml: List[Any]) -> str:

@@ -196,12 +196,15 @@ class FileManager:
            else:
                shard[key] = []

        def merge_env(into: Dict[str, List[str]], from_: Dict[str, List[str]]) -> None:
            for k, v in from_.items():
                assert k in sharded_keys, f"undeclared sharded key {k}"
                into[k] += v

        if self.dry_run:
            # Dry runs don't write any templates, so incomplete environments are fine
            items = ()

        for item in items:
            key = key_fn(item)
            sid = string_stable_hash(key) % num_shards

@@ -106,6 +106,7 @@ def DisableTorchFunction(): ...
# Defined in torch/csrc/utils/tensor_layouts.cpp
strided : layout = ...
sparse_coo : layout = ...
sparse_csr : layout = ...
_mkldnn : layout = ...

# Defined in torch/csrc/MemoryFormat.cpp

@@ -254,6 +254,17 @@ class Tensor(torch._C._TensorBase):
                raise NotImplementedError(
                    'sparse tensor __reduce_ex__ for layout `%s`' % (self.layout))
            return (torch._utils._rebuild_sparse_tensor, args_sparse)
        elif self.is_sparse_csr:
            if self.layout == torch.sparse_csr:
                args_sparse_csr = (self.layout,
                                   (self.crow_indices(),
                                    self.col_indices(),
                                    self.values(),
                                    self.size()))
            else:
                raise NotImplementedError(
                    'sparse csr tensor __reduce_ex__ for layout `%s`' % (self.layout))
            return (torch._utils._rebuild_sparse_csr_tensor, args_sparse_csr)
        else:
            # TODO: Once we decide to break serialization FC, no longer
            # need to wrap with TypedStorage

@@ -160,8 +160,18 @@ _sparse_tensors_to_validate: List["torch.Tensor"] = []
def _validate_loaded_sparse_tensors():
    try:
        for t in _sparse_tensors_to_validate:
            if t.is_sparse:
                torch._validate_sparse_coo_tensor_args(t._indices(), t._values(),
                                                       t.size())
            elif t.is_sparse_csr:
                # TODO: Validation currently involves an expensive traversal
                # on CPU, which may include a device transfer.
                torch._validate_sparse_csr_tensor_args(t.crow_indices(), t.col_indices(),
                                                       t.values(), t.size())
            else:
                raise NotImplementedError(
                    '_validate_loaded_sparse_tensors for layout `%s`' % (t.layout))

    finally:
        _sparse_tensors_to_validate.clear()

@@ -174,6 +184,15 @@ def _rebuild_sparse_tensor(layout, data):

    raise NotImplementedError("rebuilding sparse tensor for layout %s" % (layout))

def _rebuild_sparse_csr_tensor(layout, data):
    if layout == torch.sparse_csr:
        crow_indices, col_indices, values, size = data
        result = torch._sparse_csr_tensor_unsafe(crow_indices, col_indices, values, size)
        _sparse_tensors_to_validate.append(result)
        return result

    raise NotImplementedError("rebuilding sparse tensor for layout %s" % (layout))


def _rebuild_device_tensor_from_numpy(data, dtype, device, requires_grad):
    tensor = torch.from_numpy(data).to(dtype=dtype, device=device)

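For orientation, `_rebuild_sparse_csr_tensor` is the counterpart of the new `Tensor.__reduce_ex__` branch for `torch.sparse_csr`: saving pickles a `(crow_indices, col_indices, values, size)` payload, and loading rebuilds the tensor and queues it for the deferred validation above. A small round-trip sketch, mirroring the updated `test_serialization_sparse` test:

import tempfile
import torch

# Build a tiny CSR tensor and round-trip it through torch.save / torch.load.
x = torch.zeros(3, 3)
x[1][1] = 1
x_csr = x.to_sparse_csr()

with tempfile.NamedTemporaryFile() as f:
    torch.save({"tensor": x_csr}, f)
    f.seek(0)
    y = torch.load(f)

# Compare via the dense form; the load path goes through
# torch._utils._rebuild_sparse_csr_tensor and the validation hook above.
assert torch.equal(x_csr.to_dense(), y["tensor"].to_dense())
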
@@ -14,6 +14,7 @@

#include <ATen/ATen.h>
#include <ATen/MemoryOverlap.h>
#include <ATen/FuncTorchTLS.h>
#include <c10/util/Exception.h>

#include <list>

@@ -436,6 +437,13 @@ int64_t VariableHooks::_version(const at::TensorBase & self) const {

void VariableHooks::retain_grad(const at::TensorBase& self) const {
  TORCH_CHECK(self.requires_grad(), "can't retain_grad on Tensor that has requires_grad=False");

  // temporary hack to improve functorch UX.
  const auto& functorch_tls = at::functorch::functorchTLSAccessor();
  if (functorch_tls) {
    functorch_tls->checkSupportsRetainGrad();
  }

  if (self.is_leaf()) { // no-op for leaves
    return;
  }

@@ -67,11 +67,6 @@ bool canEnableStaticRuntime(const std::shared_ptr<torch::jit::Graph>& graph) {
  bool can_support = true;
  bool has_blocks = false;
  for (auto* node : graph->block()->nodes()) {
    if (node->blocks().size() > 0) {
      has_blocks = true;
      VLOG(1) << "Found nested sub-blocks in graph at node: "
              << PrintNode(node);
    }
    const auto kind = node->kind();
    if (kind == prim::Constant) {
      continue;

@@ -83,11 +78,6 @@ bool canEnableStaticRuntime(const std::shared_ptr<torch::jit::Graph>& graph) {
      LOG(WARNING) << "Found unsupported op: " << kind.toQualString();
    }
  }
  if (has_blocks) {
    LOG(WARNING)
        << "Found nested sub-block in graph. Static Runtime doesn't support nested sub-blocks.";
    can_support = false;
  }
  return can_support;
}

@@ -203,6 +193,19 @@ void PrepareGraphForStaticModule(
    std::vector<IValue> sample_inputs) {
  TORCH_CHECK(canEnableStaticRuntime(graph));
  OptimizeGraph(graph, opts, std::move(sample_inputs));

  // Static runtime moves its outputs out of the runtime
  // by default. In some rare cases, this is not actually safe to
  // do - for example, if the value is a constant, static runtime
  // needs to hold onto a copy. Rather than adding special logic
  // to handle this rare case, we use this pass to detect it and
  // create an owned reference that can be safely moved out of the
  // runtime.
  CreateOwnedRefsForSpecialValues(*graph);

  // We assume that each sub-block has at least one output. If we
  // detect any that have 0, force the sub-block to return None.
  ForceNonEmptyOutputs(*graph);
}

std::pair<std::shared_ptr<Graph>, c10::optional<Module>> PrepareForStaticModule(

@@ -327,18 +330,12 @@ ManagedTensorRanges::ManagedTensorRanges(
  const FastSet<const Value*> graph_inputs(
      block.inputs().begin(), block.inputs().end());

  auto isUntrackedValue = [&alias_db, &graph_inputs](const Value* value) {
    return !alias_db.isMutableType(value) ||
        graph_inputs.find(value) != graph_inputs.end();
  };

  const auto num_nodes = nodes.size();
  for (const auto i : c10::irange(num_nodes)) {
    auto* node = nodes[i];
    for (auto* input : node->inputs()) {
      auto* lifetime = getLifetime(input);
      if (!lifetime) {
        DCHECK(isUntrackedValue(input));
        continue;
      }
      DCHECK(lifetime->end <= i);

@@ -354,7 +351,6 @@ ManagedTensorRanges::ManagedTensorRanges(
  for (auto* graph_output : block.outputs()) {
    auto* lifetime = getLifetime(graph_output);
    if (!lifetime) {
      DCHECK(isUntrackedValue(graph_output));
      continue;
    }
    lifetime->end = num_nodes;

@@ -1826,11 +1822,12 @@ static bool checkNoMemoryOverlap(const at::Tensor& a, const at::Tensor& b) {
}

bool ProcessedNode::verify_no_memory_overlap(bool force_check) const {
  const static std::array<c10::Symbol, 4> special_case_ops = {
  const static std::array<c10::Symbol, 5> special_case_ops = {
      fromQualString("prim::TypeCheck"),
      fromQualString("static_runtime::select_tensor"),
      fromQualString("static_runtime::VarTupleUnpack"),
      fromQualString("static_runtime::dict_unpack")};
      fromQualString("static_runtime::dict_unpack"),
      fromQualString("static_runtime::create_owned_ref")};
  if (!force_check &&
      std::find(
          begin(special_case_ops), end(special_case_ops), node()->kind()) !=

@@ -692,5 +692,100 @@ REGISTER_NATIVE_OPERATOR_FUNCTOR(
      };
    });

// See [Create owned refs for special values]
REGISTER_NATIVE_OPERATOR_FUNCTOR(
    static_runtime::create_owned_ref,
    static_runtime_create_owned_ref,
    [](Node*) -> SROperator {
      return
          [](ProcessedNode* p_node) { p_node->Output(0) = p_node->Input(0); };
    });

REGISTER_NATIVE_OPERATOR_FUNCTOR(prim::If, prim_If, [](Node*) -> SROperator {
  return [](ProcessedNode* p_node) {
    auto condition = p_node->Input(0).toBool();
    auto* block_runners = p_node->block_runners();
    DCHECK(block_runners);
    DCHECK_EQ(block_runners->size(), 2);
    auto& runner = (*block_runners)[!condition];

    auto output = runner({});
    if (!output.isTuple()) {
      p_node->Output(0) = std::move(output);
      return;
    }
    auto& elems = output.toTupleRef().elements();
    DCHECK_EQ(elems.size(), p_node->num_outputs());
    for (const auto i : c10::irange(elems.size())) {
      p_node->Output(i) = elems[i];
    }
  };
});

namespace {

std::vector<IValue> collectLoopSubBlockInputs(const ProcessedNode& p_node) {
  const auto num_inputs = p_node.num_inputs();
  DCHECK_GE(num_inputs, 2);
  // The first two inputs to the loop node are the max trip count
  // and initial condition. We don't collect them here, since those
  // are not inputs for the sub-block.
  const auto num_args = num_inputs - 2;

  std::vector<IValue> result;
  result.reserve(num_args + 1);
  // First argument to the loop sub-block is always the loop counter, initially
  // zero.
  result.emplace_back(0);

  for (const auto i : c10::irange(num_args)) {
    result.push_back(p_node.Input(2 + i));
  }

  return result;
}

} // namespace

REGISTER_NATIVE_OPERATOR_FUNCTOR(
    prim::Loop,
    prim_Loop,
    [](Node*) -> SROperator {
      return [](ProcessedNode* p_node) {
        const auto max_trip_count = p_node->Input(0).toInt();
        auto condition = p_node->Input(1).toBool();

        auto* block_runners = p_node->block_runners();
        DCHECK(block_runners);
        DCHECK_EQ(block_runners->size(), 1);
        auto& runner = (*block_runners)[0];

        auto args = collectLoopSubBlockInputs(*p_node);
        int64_t loop_count = 0;

        while (condition && loop_count < max_trip_count) {
          auto output = runner(args);

          if (output.isTuple()) {
            auto& elems = output.toTupleRef().elements();
            DCHECK(elems.size() == args.size());
            for (const auto i : c10::irange(1, args.size())) {
              args[i] = elems[i];
            }
            condition = elems[0].toBool();
          } else {
            condition = output.toBool();
          }
          args[0] = ++loop_count;
        }

        const auto num_outputs = p_node->num_outputs();
        DCHECK_EQ(args.size(), num_outputs + 1);
        for (const auto i : c10::irange(num_outputs)) {
          p_node->Output(i) = std::move(args[i + 1]);
        }
      };
    });

} // namespace jit
} // namespace torch

@@ -387,6 +387,7 @@ TORCH_LIBRARY_FRAGMENT(static_runtime, m) {
  m.def(torch::schema(
      "static_runtime::select_tensor(Tensor(a) a, Tensor(b) b, bool use_b) -> Tensor(a|b)",
      c10::AliasAnalysisKind::FROM_SCHEMA));
  m.def(torch::schema("static_runtime::create_owned_ref(...) -> ..."));
}

void FuseSignLog1P(std::shared_ptr<torch::jit::Graph>& graph) {

@@ -926,5 +927,94 @@ void UseVariadicGroupedAccessor(const std::shared_ptr<Graph>& graph) {
      fromQualString("static_runtime::variadic_grouped_accessor_op_v2"));
}

namespace {

void CreateOwnedRefsForSpecialValuesHelper(Graph& graph, Block* block) {
  for (auto* node : block->nodes()) {
    for (auto* sub_block : node->blocks()) {
      CreateOwnedRefsForSpecialValuesHelper(graph, sub_block);
    }
  }

  auto outputs = block->outputs();
  for (const auto i : c10::irange(outputs.size())) {
    auto* output = outputs[i];

    if (output->type()->kind() == c10::TypeKind::NoneType) {
      // No need to create owned refs of NoneType since moving
      // from None will have no effect
      continue;
    }

    if (toIValue(output).has_value() ||
        // If the output's owning block is not this one, it's from an outer
        // scope
        output->node()->owningBlock() != block) {
      auto* create_owned_ref_node =
          graph.create(fromQualString("static_runtime::create_owned_ref"));
      create_owned_ref_node->addInput(output);
      create_owned_ref_node->output()->copyMetadata(output);

      block->appendNode(create_owned_ref_node);
      block->replaceOutput(i, create_owned_ref_node->output());
    }
  }
}

void ForceNonEmptyOutputsHelper(Value* none_value, Block* block) {
  for (auto* node : block->nodes()) {
    bool needs_output = false;
    for (auto* sub_block : node->blocks()) {
      if (sub_block->outputs().empty()) {
        sub_block->registerOutput(none_value);
        needs_output = true;
      }

      ForceNonEmptyOutputsHelper(none_value, sub_block);
    }

    if (needs_output) {
      // Loop sub-blocks should always return at least one output (the new loop
      // condition)
      DCHECK(node->kind() == prim::If);
      auto* output = node->addOutput();
      output->setType(c10::NoneType::get());
    }
  }
}

Node* findOrCreateNoneConstant(Graph& graph) {
  // Only search the top-level block
  for (auto* node : graph.nodes()) {
    if (node->kind() != prim::Constant) {
      continue;
    }
    const auto ival_opt = toIValue(node->output());
    DCHECK(ival_opt.has_value());
    if (ival_opt->isNone()) {
      return node;
    }
  }

  auto* none_node = graph.create(prim::Constant);
  none_node->output()->setType(c10::NoneType::get());
  graph.prependNode(none_node);
  return none_node;
}

} // namespace

void CreateOwnedRefsForSpecialValues(Graph& graph) {
  CreateOwnedRefsForSpecialValuesHelper(graph, graph.block());
}

void ForceNonEmptyOutputs(Graph& graph) {
  auto* none_node = findOrCreateNoneConstant(graph);
  ForceNonEmptyOutputsHelper(none_node->output(), graph.block());
  if (!none_node->hasUses()) {
    none_node->destroy();
  }
}

} // namespace jit
} // namespace torch

@@ -41,6 +41,27 @@ inline c10::Symbol fromQualString(const std::string& qual_string) {
  return c10::Symbol::fromQualString(qual_string);
}

// [Create owned refs for special values]
// StaticRuntimeBlockRunner moves its outputs to the return value at the end of
// run_impl. However, there's a corner case where this can cause problems. If
// we return a constant, then the only reference in the constants_ array can
// be destroyed by this move.
// We could add special logic to handle this in run_impl. But since this is a
// relatively rare corner case, it's simpler to just add an op that does nothing
// but create an owned reference to its input. This owned reference can be
// safely moved out of StaticRuntimeBlockRunner. Note that for scalars,
// this actually does a copy.
// Note that we have to do the same thing if we are returning a value from an
// outer scope in a sub-block.
void CreateOwnedRefsForSpecialValues(Graph& graph);

// [Force non-empty outputs]
// It is technically possible for sub-blocks to not return anything. This is
// problematic for StaticRuntimeBlockRunner because it assumes that at least one
// output is being returned. Rather than slowing down SR with special logic for
// this corner case, we simply force blocks that return nothing to return None.
void ForceNonEmptyOutputs(Graph& graph);

TORCH_API void UseVariadicGroupedAccessor(const std::shared_ptr<Graph>& graph);

} // namespace jit

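To make the two notes above concrete from the TorchScript side, these are the kinds of scripted graphs the passes target. This is a hedged sketch only; the passes themselves run during Static Runtime's C++ graph preparation, and the exact IR emitted by scripting may differ from what is shown in the comments.

from typing import List
import torch

@torch.jit.script
def returns_constant() -> int:
    # The graph's output is a prim::Constant; without an owned reference,
    # moving it out of the runtime would strip the constants table's only copy.
    return 1

@torch.jit.script
def branches_only_mutate(cond: bool) -> List[int]:
    # Both branches of the prim::If only mutate `lst`, so the sub-blocks may
    # have no outputs of their own -- the case ForceNonEmptyOutputs patches up.
    lst: List[int] = []
    if cond:
        lst.append(1)
    else:
        lst.append(2)
    return lst

print(returns_constant.graph)
print(branches_only_mutate.graph)
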
@@ -1,7 +1,7 @@
import dataclasses as dc
import logging
import typing as t
from typing import Type, Set, Optional
from typing import Type, Set, Optional, Callable, List

import tensorrt as trt
import torch

@@ -53,7 +53,7 @@ def lower_to_trt(
    enable_fuse=True,
    verbose_log=False,
    timing_cache_prefix="",
    save_timing_cache=True,
    save_timing_cache=False,
    cuda_graph_batch_size=-1,
) -> nn.Module:
    """

@@ -215,6 +215,7 @@ class LowerTrtInterpreter:
            strict_type_constraints=self.lower_setting.strict_type_constraints,
            algorithm_selector=algo_selector,
            timing_cache=cache_data,
            profiling_verbosity=trt.ProfilingVerbosity.DETAILED,
        )

        # Update timing cache file if needed

@@ -306,12 +307,14 @@ class Lowerer(LowerFunc):
    remove_duplicate_output_args: RemoveDuplicateOutputArgsFunc
    trt_interpreter: LowerTrtInterpreter
    fp16: bool
    trt_module_observer: Optional[Callable[[str, nn.Module, List[torch.Tensor]], None]] = None

    @classmethod
    def create(
        cls,
        lower_setting: LowerSetting,
        trt_module_observer: Optional[Callable[[str, nn.Module, List[torch.Tensor]], None]] = None
    ) -> "Lowerer":
        """Instantiate a `Lowerer` instance."""

@@ -326,6 +329,7 @@ class Lowerer(LowerFunc):
            remove_duplicate_output_args=remove_duplicate_output_args,
            trt_interpreter=LowerTrtInterpreter.create(lower_setting),
            fp16=lower_setting.fp16_mode,
            trt_module_observer=trt_module_observer,
        )

    def __call__(

@@ -350,6 +354,9 @@ class Lowerer(LowerFunc):
        split_module, splits = self.split(const_split_mod, input) # type: ignore[arg-type]
        split_module.eval() # type: ignore[attr-defined]
        for _split in splits: # type: ignore[attr-defined]
            if self.trt_module_observer:
                self.trt_module_observer(_split.name, _split.module, _split.input) # type: ignore[arg-type]

            if _split.device == "acc":
                # Ensure parent module is updated with the traced sub-net before running
                # remove_duplicate_output_args.

@@ -181,6 +181,21 @@ class OpSupports:
            return True
        return create_op_support(_decline_if_input_dtype)

    @classmethod
    def decline_if_node_in_names(cls, disallow_set: t.Set[str]) -> OperatorSupportBase:
        """
        If a node has a name that is in the disallow set, reported it as non-supported.
        """
        def _decline_if_node_in_names(
            submodules: t.Mapping[str, torch.nn.Module],
            node: torch.fx.Node,
        ) -> bool:
            if node.name in disallow_set:
                return False
            else:
                return True
        return create_op_support(_decline_if_node_in_names)


def _get_arg_dtype(arg: torch.fx.Node) -> t.Any:
    assert isinstance(arg, torch.fx.Node)

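A hedged usage sketch for the new `decline_if_node_in_names` hook: the disallow set is keyed by FX node names, which can be inspected on a traced module. The model and node names below are made up for illustration and are not part of the change itself.

import torch
import torch.fx

class TinyModel(torch.nn.Module):
    def forward(self, x):
        y = x + 1     # traced as a call_function node, e.g. named "add"
        return y * 2  # e.g. named "mul"

traced = torch.fx.symbolic_trace(TinyModel())
for node in traced.graph.nodes:
    print(node.op, node.name)

# OpSupports.decline_if_node_in_names({"mul"}) would then report the "mul"
# node as unsupported when the splitter queries operator support.
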
@@ -13,6 +13,7 @@ class _StorageBase(object):
    _cdata: Any
    is_cuda: bool = False
    is_sparse: bool = False
    is_sparse_csr: bool = False
    device: torch.device

    def __init__(self, *args, **kwargs): ... # noqa: E704

@@ -9068,9 +9068,6 @@ op_db: List[OpInfo] = [
           assert_autodiffed=True,
           rhs_make_tensor_kwargs=dict(exclude_zero=True),
           skips=(
               # 69913: RuntimeError: CUDA error: an illegal memory access was encountered
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_fwgrad_bwgrad',
                            device_type='cuda', dtypes=[torch.double, torch.cdouble]),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_forward_mode_AD',
                            device_type='cuda', dtypes=[torch.double, torch.cdouble]),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_inplace_forward_mode_AD',

@@ -9087,9 +9084,6 @@ op_db: List[OpInfo] = [
           assert_autodiffed=True,
           rhs_make_tensor_kwargs=dict(exclude_zero=True),
           skips=(
               # 69913: RuntimeError: CUDA error: an illegal memory access was encountered
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_fwgrad_bwgrad',
                            device_type='cuda', dtypes=[torch.double, torch.cdouble]),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_forward_mode_AD',
                            device_type='cuda', dtypes=[torch.double, torch.cdouble]),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_inplace_forward_mode_AD',

@@ -9106,9 +9100,6 @@ op_db: List[OpInfo] = [
           assert_autodiffed=True,
           rhs_make_tensor_kwargs=dict(exclude_zero=True),
           skips=(
               # 69913: RuntimeError: CUDA error: an illegal memory access was encountered
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_fwgrad_bwgrad',
                            device_type='cuda', dtypes=[torch.double, torch.cdouble]),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_forward_mode_AD',
                            device_type='cuda', dtypes=[torch.double, torch.cdouble]),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_inplace_forward_mode_AD',

@@ -9689,11 +9680,6 @@ op_db: List[OpInfo] = [
               # RuntimeError:
               # Arguments for call are not valid.
               DecorateInfo(unittest.skip("Skipped!"), 'TestJit', 'test_variant_consistency_jit', dtypes=(torch.float32, torch.complex64)), # noqa: B950
               # 69925: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
               DecorateInfo(unittest.expectedFailure, 'TestGradients', 'test_fn_fwgrad_bwgrad', device_type='cuda'),
               # (ROCm) Memory exception on virtual address 0x7f6f3deb7000, node id 4: Page not present
               DecorateInfo(unittest.skip("Skipped! ROCm memory exception"), 'TestGradients', 'test_fn_fwgrad_bwgrad',
                            device_type='cuda', dtypes=[torch.float64, torch.complex128], active_if=TEST_WITH_ROCM),
           ),
           supports_inplace_autograd=False,
           sample_inputs_func=sample_inputs_gradient),

@@ -9751,10 +9737,12 @@ op_db: List[OpInfo] = [
               # These tests started breaking after touching the SVD.
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_grad', device_type='cpu',
                            dtypes=(torch.complex128,), active_if=IS_WINDOWS),
               # Will be removed once https://github.com/pytorch/pytorch/issues/62328 is fixed
               # For complex dtypes: Will be removed once https://github.com/pytorch/pytorch/issues/62328 is fixed
               # Probable fix (open PR): https://github.com/pytorch/pytorch/pull/62570
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_grad', device_type='cuda',
                            dtypes=(torch.complex128,)),
               # Illegal Memory Access failure: https://github.com/pytorch/pytorch/issues/72203
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_grad', device_type='cuda'),
               # Illegal Memory Access failure: https://github.com/pytorch/pytorch/issues/72204
               DecorateInfo(unittest.skip("Skipped!"), 'TestMathBits', 'test_neg_view', device_type='cuda'),
               DecorateInfo(unittest.skip("Skipped!"), 'TestCommon', 'test_dtypes'),
               DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', 'test_fn_gradgrad'),
           )),

@@ -14035,15 +14023,7 @@ op_db: List[OpInfo] = [
           supports_forward_ad=True,
           supports_fwgrad_bwgrad=True,
           supports_out=False,
           sample_inputs_func=sample_cumulative_trapezoid,
           skips=(
               # Two failures:
               # 1. (CUDA) RuntimeError: Expected all tensors to be on the same device, but found at
               # least two devices, cuda:0 and cpu!
               # 2. (ROCm) Memory exception on virtual address 0x7f6a2216f000, node id 4: Page not present
               DecorateInfo(unittest.skip("Skipped! ROCm memory exception"), 'TestGradients',
                            'test_fn_fwgrad_bwgrad', device_type='cuda'),
           )),
           sample_inputs_func=sample_cumulative_trapezoid,),
    OpInfo('unsqueeze',
           dtypes=all_types_and_complex_and(torch.bool, torch.float16, torch.bfloat16),
           supports_out=False,