Summary:
Fixes https://github.com/pytorch/pytorch/issues/68261
This PR changes the number of test shards from 2 to 3 for all ASAN tests, aiming to improve the run time of the ASAN jobs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69843
Reviewed By: janeyx99
Differential Revision: D33160771
Pulled By: xidachen
fbshipit-source-id: dba1d318cc49b923e18704839471d8753cc00eca
Summary:
This is a partial revert of bb522c9d7a, removing the failing CUDA 11.5 Windows workflows.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69365
Reviewed By: suo
Differential Revision: D32831418
Pulled By: atalman
fbshipit-source-id: 184346d22623f88594312a4ce2e4d29cc67e8338
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69172
Migrates the docs push jobs to GitHub Actions by implementing a simple
WITH_PUSH switch to do the actual push.
Adds 2 new workflows for GHA:
* linux-docs (on trunk)
* linux-docs-push (on schedule)
linux-docs-push is the only workflow that actually gets access to
credentials so it should be relatively safe.
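The WITH_PUSH switch described above can be sketched as a simple environment-variable gate; this is a hypothetical illustration (only the WITH_PUSH name comes from the commit, the function and its behavior are assumptions):

```python
import os

def maybe_push_docs(push_fn):
    # Hypothetical sketch of the WITH_PUSH switch: the actual push only
    # happens when the environment variable is set to "true". Everything
    # here except the WITH_PUSH name is illustrative.
    if os.environ.get("WITH_PUSH", "false") == "true":
        push_fn()
        return True
    return False
```

With this shape, only the scheduled linux-docs-push workflow needs to set WITH_PUSH (and receive credentials); the trunk linux-docs workflow exercises the same code path without pushing.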
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D32767239
Pulled By: seemethere
fbshipit-source-id: 5b100f986cf4023c323f4f96f0fe7942fec49ad2
Summary:
Do not run distributed tests as a separate shard; keep them inside one of the two existing shards (to limit concurrency problems)
Fixes https://github.com/pytorch/pytorch/issues/68260
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68784
Reviewed By: seemethere, janeyx99
Differential Revision: D32653440
Pulled By: malfet
fbshipit-source-id: ebe5bbc30bdf67e930f2c766c920932700f3a4e4
Summary:
This fixes a custom class registration issue when `typeid` is not guaranteed to be unique across multiple libraries, which is the case for the libc++ runtime on macOS 11, in particular on M1
From [libcxx/include/typeinfo](78d6a7767e/include/typeinfo (L139)):
```
// -------------------------------------------------------------------------- //
// NonUniqueARMRTTIBit
// -------------------------------------------------------------------------- //
// This implementation of type_info does not assume always a unique copy of
// the RTTI for a given type inside a program. It packs the pointer to the
// type name into a uintptr_t and reserves the high bit of that pointer (which
// is assumed to be free for use under the ABI in use) to represent whether
// that specific copy of the RTTI can be assumed unique inside the program.
// To implement equality-comparison of type_infos, we check whether BOTH
// type_infos are guaranteed unique, and if so, we simply compare the addresses
// of their type names instead of doing a deep string comparison, which is
// faster. If at least one of the type_infos can't guarantee uniqueness, we
// have no choice but to fall back to a deep string comparison.
```
But the `std::type_index` hash is always computed assuming the RTTI copy is unique.
Adding a slow path fixes this problem in those scenarios.
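The inconsistency can be mimicked in a short Python analogy (this is not the actual C++ fix, just an illustration of why an address-based hash breaks when equality falls back to a deep name comparison):

```python
class FakeTypeInfo:
    # Python analogy for non-unique RTTI: equality falls back to a deep
    # name comparison, but a naive hash uses the object's address (id),
    # mimicking a std::type_index hash that assumes a unique RTTI copy.
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name == other.name  # slow path: deep comparison

    def naive_hash(self):
        return id(self)  # wrong: assumes one unique copy per type

    def fixed_hash(self):
        return hash(self.name)  # slow path: hash the name instead

a = FakeTypeInfo("MyCustomClass")  # copy of the RTTI from library A
b = FakeTypeInfo("MyCustomClass")  # copy of the RTTI from library B
```

Two copies of the "same" type compare equal, yet their naive hashes differ, so hash-based registries (like the custom class registry) miss existing entries; hashing the name restores consistency at the cost of a slower hash.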
Fixes https://github.com/pytorch/pytorch/issues/68039
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68717
Reviewed By: seemethere
Differential Revision: D32605187
Pulled By: malfet
fbshipit-source-id: 8d50e56885b8c97dad3bc34a69c47ef879456dd1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68180
Since we've open sourced the tracing-based selective build, we can deprecate the
op-dependency-graph-based selective build and the static analyzer tool that
produces the dependency graph.
ghstack-source-id: 143108377
Test Plan: CIs
Reviewed By: seemethere
Differential Revision: D32358467
fbshipit-source-id: c61523706b85a49361416da2230ec1b035b8b99c
Summary:
In scope of https://github.com/pytorch/pytorch/issues/67301. Main changes:
* generated-pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit deleted from circle
* pytorch_android_gradle_custom_build_single removed since it is no longer used
* generated-pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit added to GHA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67695
Reviewed By: malfet, seemethere, ejguan
Differential Revision: D32115620
Pulled By: b0noI
fbshipit-source-id: 113d48303c090303ae13512819bac2f069a2913f
Summary:
In scope of https://github.com/pytorch/pytorch/issues/67301. Main changes:
* pytorch_android_gradle_custom_build_single removed from the circle (however template is still there since it is used by another similar workflow: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit, which will be migrated next)
* new GHA workflow added: pytorch_android_gradle_custom_build_single
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67577
Reviewed By: malfet, mruberry
Differential Revision: D32087709
Pulled By: b0noI
fbshipit-source-id: f9581558ddc1453b63264bf19fe5a4c245b7c007
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67455
Migrates docker builds that don't have dependent jobs within the pytorch
repository to our new GHA docker build job
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet, janeyx99
Differential Revision: D31997671
Pulled By: seemethere
fbshipit-source-id: 9d6f58fa8ea8731cf12457fe64dc65e70f3d9f25
Summary:
linux-xenial-cuda10.2 and linux-bionic-cuda10.2 are very similar; there is no need to run both configs.
Moved all auxiliary builds from xenial to bionic
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67344
Reviewed By: seemethere, janeyx99
Differential Revision: D31964850
Pulled By: malfet
fbshipit-source-id: d07ce266c843c7fd69b281e678c4247b0bf6da20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67264
Downgrades Linux GPU instances from 8xlarge to 4xlarge.
We were seeing capacity issues when scaling 8xlarge instances; downgrading
to 4xlarge (which only has a single GPU) to see if that helps resolve some
of the capacity issues we were seeing
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: janeyx99
Differential Revision: D31933488
Pulled By: seemethere
fbshipit-source-id: b41922ebb675e663cb035cd3795bc9bae94dcac7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67215
We were regularly seeing gaps in our docker image builds because specific
workflows were not run when docker builds occurred on PRs. This should
remove that ambiguity and ensure that all docker images are re-built when a
rebuild is deemed necessary
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D31910422
Pulled By: seemethere
fbshipit-source-id: f346e64f1857e35a995c49bf30521a3acd8af0b1
Summary:
Caffe2 has been deprecated for a while, but is still included in every PyTorch build.
We should stop building it by default, although CI should still validate that caffe2 code is buildable.
Build even fewer dependencies when compiling mobile builds without Caffe2
Introduce `TEST_CAFFE2` in torch.common.utils
Skip `TestQuantizedEmbeddingOps` and `TestJit.test_old_models_bc` if the code is compiled without Caffe2
Should be landed after https://github.com/pytorch/builder/pull/864
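The `TEST_CAFFE2` gate described above can be sketched as an import-time availability check plus a skip decorator; a minimal sketch, assuming the flag is derived from whether the `caffe2` package can be imported (the flag and test names come from the commit, the detection logic is an assumption):

```python
import importlib.util
import unittest

# Hypothetical sketch: detect at import time whether the caffe2 Python
# package is available, then skip tests that require it.
TEST_CAFFE2 = importlib.util.find_spec("caffe2") is not None

class TestQuantizedEmbeddingOps(unittest.TestCase):
    @unittest.skipIf(not TEST_CAFFE2, "requires a build with Caffe2")
    def test_embedding_bag(self):
        pass  # real test body would exercise the quantized ops
```

On a build without Caffe2 the test is reported as skipped rather than failing with an import error.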
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66658
Reviewed By: driazati, seemethere, janeyx99
Differential Revision: D31669156
Pulled By: malfet
fbshipit-source-id: 1cc45e2d402daf913a4685eb9f841cc3863e458d
Summary:
`linux-xenial-py3-clang5-mobile-build`, `linux-xenial-py3-clang5-mobile-custom-build-dynamic` and `linux-xenial-py3-clang5-mobile-code-analysis` are just flavors of the regular Linux build job with no tests.
`linux-xenial-py3-clang5-mobile-code-analysis` is a master-only job.
The `code-analysis` job is dispatched to `.jenkins/pytorch/build-mobile-code-analysis.sh` in
583217fe37/.jenkins/pytorch/build.sh (L23-L25)
and all `mobile-build` jobs are dispatched to `.jenkins/pytorch/build-mobile.sh` in
583217fe37/.jenkins/pytorch/build.sh (L19-L21)
Rename the `is_libtorch` `CIWorkflow` property to `build_generates_artifacts` and change the default from False to True.
Neither libtorch nor mobile build jobs generate build artifacts.
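The rename can be sketched as a dataclass field with a flipped default; a minimal sketch, assuming the workflow config is a dataclass (the two property names and defaults come from the commit, the other fields and values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class CIWorkflow:
    # Hypothetical sketch of the workflow-config dataclass: the property
    # was renamed from `is_libtorch` to `build_generates_artifacts`, and
    # the default flipped from False to True.
    build_environment: str
    build_generates_artifacts: bool = True  # was: is_libtorch = False

# libtorch and mobile builds opt out of artifact generation explicitly
mobile = CIWorkflow("linux-xenial-py3-clang5-mobile-build",
                    build_generates_artifacts=False)
regular = CIWorkflow("linux-xenial-py3-clang5-asan")
```

Flipping the polarity means the common case (regular builds that upload artifacts) needs no explicit flag, and only the exceptional jobs set it.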
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66673
Reviewed By: janeyx99
Differential Revision: D31674434
Pulled By: malfet
fbshipit-source-id: 24d05d55366202cd4d9c25ecab429cb8f670ded0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66260
Every workflow has ciflow enabled so this is not needed anymore
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: dagitses, janeyx99
Differential Revision: D31493340
Pulled By: seemethere
fbshipit-source-id: 8718fe5d22f4be6e0900962576782a9f23162a39
Summary:
Noticed that `periodic-pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-slow-gradcheck` job has a `ciflow/default`, but does not have a `ciflow/scheduled` label
Added asserts to enforce that jobs with a non-trivial `is_scheduled` property do not have the `ciflow/default` label and do have the `ciflow/scheduled` label.
Rename `periodic-pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-slow-gradcheck` to `periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck`
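The added asserts can be sketched as a small validation function; a hypothetical sketch (the label names and the `is_scheduled` property come from the commit, the function shape and data layout are assumptions):

```python
def validate_labels(is_scheduled, labels):
    # Hypothetical sketch of the added asserts: a workflow with a
    # non-trivial is_scheduled (e.g. a cron expression) must carry
    # ciflow/scheduled and must not carry ciflow/default.
    if is_scheduled:
        assert "ciflow/default" not in labels, \
            "scheduled jobs must not run by default"
        assert "ciflow/scheduled" in labels, \
            "scheduled jobs need the ciflow/scheduled label"
    return True
```

With this check in the workflow generator, a mislabeled periodic job like the one above fails at generation time instead of silently running on every PR.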
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66300
Reviewed By: seemethere
Differential Revision: D31493323
Pulled By: malfet
fbshipit-source-id: 194c1d7a4e659847d94a547b87a0d7d08e66406d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65730
This should close out the door on migrating all scheduled workflows we have for CircleCI
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
cc ezyang seemethere malfet pytorch/pytorch-dev-infra
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D31225188
Pulled By: seemethere
fbshipit-source-id: 4c49e88ec017edc30e07325dbc613ff54dd164d8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65731
It originally had a purpose, but after ciflow was introduced every PR had
`on_pull_request` set, so it's not really as useful as it once was.
Also removes the equally confusing `only_build_on_pull_request` variable.
This change should produce no functional changes in our generated workflows
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
cc ezyang seemethere malfet pytorch/pytorch-dev-infra
Test Plan: Imported from OSS
Reviewed By: janeyx99
Differential Revision: D31225398
Pulled By: seemethere
fbshipit-source-id: 7bd8e8175794ab7d09b0632321bf52538435e858
Summary:
CIFlow workflows should always run on the push event.
On a pull request, a workflow should run if its label conditions are met, or
if no `ciflow/` labels are associated with the PR, in which case the
workflow is enabled by default.
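The run rule above can be sketched as a small predicate; a hypothetical sketch (the push/pull-request behavior and the `ciflow/` prefix come from the commit, the function and variable names are illustrative):

```python
def should_run(event, workflow_labels, pr_labels):
    # Hypothetical sketch of the CIFlow run rule: always run on push;
    # on a pull request, run when one of the workflow's labels matches
    # a ciflow/ label on the PR, or when the PR carries no ciflow/
    # labels at all (the default case).
    if event == "push":
        return True
    ciflow_labels = {l for l in pr_labels if l.startswith("ciflow/")}
    if not ciflow_labels:
        return True  # no ciflow labels on the PR -> default behavior
    return bool(workflow_labels & ciflow_labels)
```

This keeps push builds unconditional while letting PR authors narrow the set of workflows by adding `ciflow/` labels.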
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65733
Reviewed By: zhouzhuojie
Differential Revision: D31251278
Pulled By: malfet
fbshipit-source-id: 31ce745cb224df7c6fec1682ec4180513e3dadf3
Summary:
Part of migrating from Circle.
Once we get a successful force_on_cpu test, we can move it to trunk only.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65094
Reviewed By: seemethere
Differential Revision: D31086289
Pulled By: janeyx99
fbshipit-source-id: e1d135cc844d51f0b243b40efb49edca277d9de8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65099
Utilizes ciflow to enable only specific workflows for
pytorch/pytorch-canary to reduce noise on that specific repository
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: jbschlosser
Differential Revision: D30973691
Pulled By: seemethere
fbshipit-source-id: 371765535b42a00bd72c2551c4faebf733d759f0
Summary:
As we default to Linux CUDA 11.3 on PRs, we should do the same with Windows (instead of having 10.2 be the default). This means that 10.2 will now be master only, and 11.3 Windows smoke tests will run on every PR.
This also copies over the "run smoke tests only" config--removing that will be in a separate PR once there's more certain decision making.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65090
Reviewed By: seemethere
Differential Revision: D30968382
Pulled By: janeyx99
fbshipit-source-id: c73f9a2cc800b678909365c4d80627d29fc09f94
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64958
This is a re-do of #64846 which was missing a path prefix for windows test reports
Test Plan: Imported from OSS
Reviewed By: seemethere
Differential Revision: D30915253
Pulled By: driazati
fbshipit-source-id: d14d0a64d2f8aabc335db9c4d0d2b63512887c66
Summary:
Previously we weren't uploading Windows test report XML files to S3, only to GitHub Actions. This was different from Linux, where we use both (though maybe we can kill the GHA upload in a follow-up PR since I don't think it's very useful anymore). This factors it all out into a macro so both platforms do the same thing. It also fixes the naming of uploaded files to include info about the job name (the full config, so files can be matched to a job visually or by the included job id).
See https://hud.pytorch.org/pr/64846 for results
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64846
Reviewed By: seemethere
Differential Revision: D30878101
Pulled By: driazati
fbshipit-source-id: 0730f17fa3f46a32c131f52669084c3103b0e616