# XLA developer guide
This guide shows you how to get started developing the XLA project.
Before you begin, complete the following prerequisites:
- Go to the Contributing page and review the contribution process.
- If you haven't already done so, sign the Contributor License Agreement.
- Install or configure the required dependencies, Bazel and Docker, as described in the setup steps below.
Then follow the steps below to get the source code, set up an environment, build the repository, and create a pull request.
## Get the code

- Create a fork of the XLA repository.

- Clone your fork of the repo, replacing `{USER}` with your GitHub username:

  ```sh
  git clone https://github.com/{USER}/xla.git
  ```

- Change into the `xla` directory:

  ```sh
  cd xla
  ```

- Configure the remote upstream repo:

  ```sh
  git remote add upstream https://github.com/openxla/xla.git
  ```
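With the `upstream` remote configured, you can pull new commits from the main repository into your checkout. A minimal sketch, assuming your work is based on the `main` branch:

```sh
# Fetch the latest commits from openxla/xla and rebase your branch on them.
git fetch upstream
git rebase upstream/main
```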
## Set up an environment
- Install Bazel.

  To build XLA, you must have Bazel installed. The recommended way to install Bazel is using Bazelisk, which automatically downloads the correct Bazel version for XLA. If Bazelisk is unavailable, you can install Bazel manually (see the sketch after this list).

- Create and run the ml-build Docker container.

  To set up a Docker container for building XLA with support for both CPU and GPU, run the following command:

  ```sh
  docker run -itd --rm \
      --name xla \
      -w /xla \
      -v $PWD:/xla \
      us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build:latest \
      bash
  ```

  If building with GPU/CUDA support, add `--gpus all` to grant the container access to all available GPUs. This enables automatic detection of CUDA compute capabilities.
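If you choose Bazelisk, one common approach on Linux is to download the release binary and put it on your `PATH` under the name `bazel`. A minimal sketch, assuming an x86-64 Linux host (the asset name differs per platform):

```sh
# Download the latest Bazelisk release binary (Linux x86-64 asset shown).
wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
chmod +x bazelisk-linux-amd64
# Install it as `bazel` so the build commands below pick it up transparently.
sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel
# Bazelisk downloads and runs the Bazel version the repository pins.
bazel --version
```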
## Build
Configure for CPU:

```sh
docker exec xla ./configure.py --backend=CPU
```

Configure for GPU:

```sh
docker exec xla ./configure.py --backend=CUDA
```

CUDA compute capabilities will be detected automatically by running `nvidia-smi`. If GPUs are not available during the build, you must specify the compute capabilities manually. For example:

```sh
# Automatically detects compute capabilities (requires GPUs)
./configure.py --backend=CUDA

# Manually specify compute capabilities (for builds without GPUs)
./configure.py --backend=CUDA --cuda_compute_capabilities="9.0"
```

Build:

```sh
docker exec xla bazel build \
    --spawn_strategy=sandboxed \
    --test_output=all \
    //xla/...
```
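Building all of `//xla/...` is a large job; while iterating on a change, you can build or test a narrower target pattern instead. A minimal sketch (the target and package paths below are illustrative; substitute whatever you are working on):

```sh
# Build a single tool target instead of the whole tree.
docker exec xla bazel build //xla/tools:run_hlo_module

# Run only the tests under one package subtree, printing logs on failure.
docker exec xla bazel test --test_output=errors //xla/tests/...
```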
Note: You can build XLA on a machine without GPUs. In that case:

- Do not use the `--gpus all` flag when starting the Docker container.
- During `./configure.py`, manually specify the CUDA compute capabilities using the `--cuda_compute_capabilities` flag.
Note: Thanks to hermetic CUDA rules, you don't need to build XLA inside a Docker container. You can build XLA for GPU directly on your machine, even if it doesn't have a GPU or the NVIDIA driver installed:
```sh
# Automatically detects compute capabilities (requires GPUs)
./configure.py --backend=CUDA

# Manually specify compute capabilities (for builds without GPUs)
./configure.py --backend=CUDA --cuda_compute_capabilities="9.0"

bazel build \
    --spawn_strategy=sandboxed \
    --test_output=all \
    //xla/...
```
Your first build will take quite a while because it has to build the entire stack, including XLA, MLIR, and StableHLO.
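To speed up subsequent builds, you can point Bazel at a persistent local disk cache; this is a general Bazel feature rather than anything XLA-specific, and the cache path below is an arbitrary example:

```sh
# Reuse previously built artifacts across clean checkouts and containers.
bazel build --disk_cache=$HOME/.cache/xla-bazel //xla/...
```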
To learn more about building XLA, see Build from source.
## Create a pull request
When you're ready to send changes for review, create a pull request.
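If you work from a feature branch on your fork, the flow is the standard GitHub one. A minimal sketch (the branch name is an example):

```sh
# Commit your work on a branch and push it to your fork.
git checkout -b my-feature
git commit -am "Describe your change"
git push -u origin my-feature
# Then open a pull request against openxla/xla on GitHub.
```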
To learn about the XLA code review philosophy, see Review Process.