Fixes https://github.com/pytorch/pytorch/issues/77509
This PR supersedes https://github.com/pytorch/pytorch/pull/77510.
It allows both `bazel query //...` and `bazel build --config=gpu //...` to work.
Concretely, the changes are:
1. Added a "GenerateAten" mnemonic -- this is a convenience so that anybody who uses [Remote Execution](https://bazel.build/docs/remote-execution) can add a
```
build:rbe --strategy=GenerateAten=sandboxed,local
```
line to `~/.bazelrc` and build this action locally (it doesn't have hermetic dependencies at the moment); see the rule sketch after this list.
2. Replaced a few `http_archive` repos with the corresponding existing submodules to avoid code drift.
3. Updated `pybind11_bazel` and added `python_version="3"` to `python_configure`. This prevents hard-to-debug errors caused by attempting to build with Python 2 on systems where it is the default Python (Ubuntu 18.04, for example). See the WORKSPACE sketch after this list.
4. Added `unused_` repos; their purpose is to hide unwanted submodules of submodules, which often have Bazel targets in them.
5. Updated CI to build `//...` -- a big step toward preventing regressions not only in targets of the top-level BUILD.bazel file, but in other folders too.
6. Switched the default Bazel build to use GPU support.
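For item 1, a minimal Starlark sketch of how a custom rule attaches such a mnemonic is below; the rule name, attributes, and generator wiring are hypothetical illustrations, not PyTorch's actual ATen code-generation rule.
```
def _generate_aten_impl(ctx):
    # Running the generator with mnemonic = "GenerateAten" is what lets
    # `--strategy=GenerateAten=sandboxed,local` target exactly this action.
    ctx.actions.run(
        outputs = ctx.outputs.outs,
        inputs = ctx.files.srcs,
        executable = ctx.executable.generator,
        mnemonic = "GenerateAten",
    )
    return [DefaultInfo(files = depset(ctx.outputs.outs))]

generate_aten = rule(
    implementation = _generate_aten_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "outs": attr.output_list(),
        "generator": attr.label(executable = True, cfg = "exec"),
    },
)
```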
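For items 3 and 4, a hedged WORKSPACE fragment is sketched below; the repository names and submodule paths are illustrative assumptions, not the literal entries from the diff.
```
# WORKSPACE fragment (illustrative; names and paths are assumptions)
load("@pybind11_bazel//:python_configure.bzl", "python_configure")

# Pin the toolchain to Python 3 so machines where `python` is Python 2
# (e.g. Ubuntu 18.04) don't produce hard-to-debug build failures.
python_configure(
    name = "local_config_python",
    python_version = "3",
)

# Carve a nested submodule out into its own (unused) repository with an
# empty BUILD file, so `bazel build //...` doesn't pick up its targets.
new_local_repository(
    name = "unused_example_nested_submodule",  # hypothetical name
    path = "third_party/example/third_party/nested",  # hypothetical path
    build_file_content = "# intentionally empty",
)
```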
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78870
Approved by: https://github.com/ezyang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71412
This is only in CMake and internal builds right now. Add to Bazel for
parity.
ghstack-source-id: 150235094
Test Plan: Built and ran locally. Rely on CI to verify.
Reviewed By: malfet
Differential Revision: D33635743
fbshipit-source-id: b9e5abbef5feabd52c53a9c2b95713b87ce81681
(cherry picked from commit 11700dbc80200093fdd74b1be066b4e740cee516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70201
Included functions:
- `save_mobile_module` -> saves a mobile::Module to flatbuffer
- `load_mobile_module_from_file` -> loads a flatbuffer into mobile::Module
- `parse_mobile_module` -> parses from bytes or a deserialized flatbuffer module object

Compared to previous attempts, this diff only adds flatbuffer to the CMake target and leaves the fbcode/xplat ones unchanged.
Test Plan: unittest
Reviewed By: malfet, gmagogsfm
Differential Revision: D33239362
fbshipit-source-id: b9ca36b83d6af2d78cc50b9eb9e2a6fa7fce0763
Summary:
Fixes https://github.com/pytorch/pytorch/issues/35316
On master, the Bazel CUDA build is disabled due to the lack of a proper `cu_library` rule. This PR:
- Adds `rules_cuda` to the WORKSPACE and forwards `cu_library` to `rules_cuda` (sketched below).
- Uses simple local `cuda` and `cudnn` repositories (adapted from TRTorch) for CUDA 11.3.
- Fixes the currently broken CUDA build.
- Enables the CUDA build in CI, not just for the `:torch` target but for all the test binaries, to catch undefined symbols.
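A minimal sketch of what such a forwarding shim can look like is below; the load label and the plain pass-through of keyword arguments are assumptions about how `rules_cuda` is consumed, not the exact code from this PR.
```
# tools/rules/cu.bzl (illustrative sketch; the load label is an assumption)
load("@rules_cuda//cuda:defs.bzl", "cuda_library")

def cu_library(name, **kwargs):
    # Keep existing cu_library call sites and forward them to rules_cuda.
    cuda_library(name = name, **kwargs)
```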
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66241
Reviewed By: ejguan
Differential Revision: D31544091
Pulled By: malfet
fbshipit-source-id: fd3c34d0e8f80fee06f015694a4c13a8e9e12206
Summary:
## Context
We take the first step toward GPU Bazel support by adding the Bazel external workspaces `local_config_cuda` and `cuda`: the first one has some hardcoded values and lists of files, and the second one provides a nicer, high-level wrapper that maps onto the Bazel targets PyTorch already expects, i.e. those guarded with the `if_cuda` macro.
The prefix `local_config_` signifies that we are breaking the Bazel hermeticity philosophy by explicitly relying on the CUDA installation present on the machine.
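For context, this is roughly the shape of an `if_cuda` select macro; the config_setting label and the file it lives in are assumptions for illustration, not the actual contents of the new workspaces.
```
# @cuda//:defs.bzl -- illustrative sketch (labels and setting name are assumptions)
def if_cuda(if_true, if_false = []):
    """Selects if_true when the build is configured with CUDA support."""
    return select({
        "@local_config_cuda//:cuda_enabled": if_true,
        "//conditions:default": if_false,
    })
```
A target can then write, e.g., `deps = base_deps + if_cuda([":cuda_only_deps"])` and stay buildable both with and without `--define=cuda=true`.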
## Testing
Notice an important scenario that is unlocked by this change: compilation of C++ code that depends on the CUDA libraries (i.e. cuda.h and so on).
Before:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
ERROR: /home/sergei.vorobev/src/pytorch4/tools/config/BUILD:12:1: no such package 'tools/toolchain': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- /home/sergei.vorobev/src/pytorch4/tools/toolchain and referenced by '//tools/config:cuda_enabled_and_capable'
ERROR: While resolving configuration keys for //:c10: Analysis failed
ERROR: Analysis of target '//:c10' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.259s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 2 targets configured)
```
After:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
INFO: Analyzed target //:c10 (6 packages loaded, 246 targets configured).
INFO: Found 1 target...
Target //:c10 up-to-date:
bazel-bin/libc10.lo
bazel-bin/libc10.so
INFO: Elapsed time: 0.617s, Critical Path: 0.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
```
The `//:c10` target is a good one for testing this, because it has cases where the [glob is different](075024b9a3/BUILD.bazel (L76-L81)) depending on whether or not we compile for CUDA.
## What is out of scope of this PR
This PR is the first in a series providing comprehensive GPU Bazel build support. Notably, we do not tackle the [cu_library](11a40ad915/tools/rules/cu.bzl (L2)) implementation here; that would be a separate large chunk of work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63604
Reviewed By: soulitzer
Differential Revision: D30442083
Pulled By: malfet
fbshipit-source-id: b2a8e4f7e5a25a69b960a82d9e36ba568eb64595
Summary:
fmt is a formatting library for C++. It has several properties that make it nice
for inclusion in PyTorch:
- Widely used
- Basically copies how Python does it
- Support for all the compilers and platforms we care about
- Standards track (C++20)
- Small code size
- Header only
This PR includes it as a submodule and sets up the build.
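As an illustration of what wiring the submodule into the Bazel build can look like, here is a hedged sketch; the target name, paths, and the choice of fmt's header-only mode are assumptions, not the PR's actual build rules.
```
# BUILD.bazel fragment (illustrative; paths and header-only choice are assumptions)
cc_library(
    name = "fmt",
    hdrs = glob(["third_party/fmt/include/fmt/*.h"]),
    includes = ["third_party/fmt/include"],
    defines = ["FMT_HEADER_ONLY=1"],  # fmt's header-only mode, matching the list above
    visibility = ["//visibility:public"],
)
```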
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37356
Differential Revision: D21262619
Pulled By: suo
fbshipit-source-id: 1d9a1a5ed08a634213748e7b02fc718ef8dac4c9