[DOC] Update Docker image in build_from_source and dev_guide docs.

The previously referenced image, `tensorflow/build:latest-python3.9`, is no longer used by XLA.

XLA's CI and testing workflows now use the [ml-build](https://us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build) image, which is based on Ubuntu 22.04 and includes Clang 18.

This update aligns the `build_from_source` and `developer_guide` documentation with current XLA practices by replacing the outdated image with `ml-build`.

PiperOrigin-RevId: 770894266
Alex Pivovarov 2025-06-12 19:48:36 -07:00 committed by TensorFlower Gardener
parent 141469a311
commit 65781570c5
2 changed files with 118 additions and 40 deletions

docs/build_from_source.md

@@ -19,50 +19,83 @@ example). Refer to the *Sample session* section for details.
### CPU support
We recommend using a suitable Docker image for building and testing XLA, such as
[ml-build](https://us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build),
which is also used in XLA's CI workflows on GitHub. The ml-build image comes
with Clang 18 pre-installed.
```sh
docker run -itd --rm \
--name xla \
-w /xla \
-v $PWD:/xla \
us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build:latest \
bash
```
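Once the container is up, you can optionally confirm the toolchain it ships with:
```sh
# The ml-build image should report Clang 18
docker exec xla clang --version
```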
Using a Docker container, you can build XLA with CPU support by running the
following commands:
```sh
docker exec xla ./configure.py --backend=CPU
docker exec xla bazel build \
--spawn_strategy=sandboxed \
--test_output=all \
//xla/...
```
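Tests can be run through `docker exec` in the same way. For example, a sketch
that mirrors the build command above (running everything under `//xla/...` can
take a long time):
```sh
docker exec xla bazel test \
  --spawn_strategy=sandboxed \
  --test_output=all \
  //xla/...
```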
If you want to build XLA targets with CPU support **without using Docker**,
you'll need to install Clang. XLA is currently built with Clang 18 in CI,
but earlier versions should also work.
```sh
apt install clang
```
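Note that the default `clang` package on Ubuntu 22.04 is older than Clang 18.
If you want to match the CI toolchain exactly, one option (an assumption, not
an official XLA requirement) is LLVM's apt installer script:
```sh
# Installs clang-18 and related tools from apt.llvm.org
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 18
```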
To configure and build the targets, run the following commands:
```sh
./configure.py --backend=CPU
bazel build --test_output=all --spawn_strategy=sandboxed //xla/...
bazel build \
--spawn_strategy=sandboxed \
--test_output=all \
//xla/...
```
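Building everything under `//xla/...` is a large job. While iterating, you can
restrict the build to a subtree; the target pattern below is just an
illustration:
```sh
# Build only the HLO libraries and their dependencies
bazel build //xla/hlo/...
```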
### GPU support
We recommend using the same Docker container mentioned above to build XLA with
GPU support.
To start a Docker container with access to all GPUs, run the following command:
```sh
docker run -itd --rm \
--gpus all \
--name xla_gpu \
-w /xla \
-v $PWD:/xla \
us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build:latest \
bash
```
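Before configuring the build, you can check that the container actually sees
the GPUs (this assumes the NVIDIA driver and container toolkit are installed on
the host):
```sh
# Should list the host GPUs from inside the container
docker exec xla_gpu nvidia-smi
```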
To build XLA with GPU support, run the following commands:
```sh
docker exec xla_gpu ./configure.py --backend=CUDA
docker exec xla_gpu bazel build \
--spawn_strategy=sandboxed \
--test_output=all \
//xla/...
```
**Note:** You can build XLA on a machine without GPUs. In that case:
- Do **not** use the `--gpus all` flag when starting the Docker container.
- Specify the CUDA compute capabilities manually. For example:
```sh
docker exec xla_gpu ./configure.py --backend=CUDA \
--cuda_compute_capabilities="9.0"
```
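If you target several GPU generations, the flag takes a comma-separated list
(assuming the same syntax as the single-value example above):
```sh
docker exec xla_gpu ./configure.py --backend=CUDA \
  --cuda_compute_capabilities="8.0,9.0"
```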
For more details regarding
@@ -71,10 +104,13 @@ For more details regarding
You can build XLA targets with GPU support without Docker as well. Configure and
build targets using the following commands:
```sh
./configure.py --backend=CUDA
bazel build \
--spawn_strategy=sandboxed \
--test_output=all \
//xla/...
```
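As in the Docker case, if the machine has no GPUs you can pass the compute
capabilities to `configure.py` explicitly:
```sh
./configure.py --backend=CUDA --cuda_compute_capabilities="9.0"
```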
For more details regarding
@@ -130,7 +166,7 @@ You should now be in the `/jax` directory within the container
```
Optionally, you can override `HERMETIC` envs, e.g.:
```bash
--repo_env=HERMETIC_CUDA_COMPUTE_CAPABILITIES="sm_90"
```
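Other hermetic settings follow the same `--repo_env` pattern. For example,
pinning the CUDA and cuDNN versions (variable names as documented for XLA's
hermetic CUDA rules; the version numbers are placeholders):
```bash
--repo_env=HERMETIC_CUDA_VERSION="12.3.2"
--repo_env=HERMETIC_CUDNN_VERSION="9.1.1"
```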

docs/developer_guide.md

@@ -40,40 +40,82 @@ the repository, and create a pull request.
is unavailable, you can [install Bazel](https://bazel.build/install)
manually.
2. Create and run the
[ml-build](https://us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build)
Docker container.
To set up a Docker container for building XLA with support for both CPU and
GPU, run the following command:
```sh
docker run -itd --rm \
--name xla \
-w /xla \
-v $PWD:/xla \
us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build:latest \
bash
```
If building with GPU/CUDA support, add `--gpus all` to grant the container
access to all available GPUs. This enables automatic detection of CUDA
compute capabilities.
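For example, the same command with GPU access enabled:
```sh
docker run -itd --rm \
  --gpus all \
  --name xla \
  -w /xla \
  -v $PWD:/xla \
  us-docker.pkg.dev/ml-oss-artifacts-published/ml-public-container/ml-build:latest \
  bash
```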
## Build
Configure for CPU:
```sh
docker exec xla ./configure.py --backend=CPU
```
Configure for GPU:
```sh
docker exec xla ./configure.py --backend=CUDA
```
CUDA compute capabilities will be detected automatically by running
`nvidia-smi`. If GPUs are not available during the build, you must specify
the compute capabilities manually. For example:
```sh
# Automatically detects compute capabilities (requires GPUs)
./configure.py --backend=CUDA
# Manually specify compute capabilities (for builds without GPUs)
./configure.py --backend=CUDA --cuda_compute_capabilities="9.0"
```
Build:
```sh
docker exec xla bazel build \
--spawn_strategy=sandboxed \
--test_output=all \
//xla/...
```
**Note:** You can build XLA on a machine without GPUs. In that case:
- Do **not** use the `--gpus all` flag when starting the Docker container.
- During `./configure.py`, manually specify the CUDA compute capabilities
using the `--cuda_compute_capabilities` flag.
**Note:** Thanks to hermetic CUDA rules, you don't need to build XLA inside a
Docker container. You can build XLA for GPU directly on your machine, even if
it doesn't have a GPU or the NVIDIA driver installed.
```sh
# Automatically detects compute capabilities (requires GPUs)
./configure.py --backend=CUDA
# Manually specify compute capabilities (for builds without GPUs)
./configure.py --backend=CUDA --cuda_compute_capabilities="9.0"
bazel build \
--spawn_strategy=sandboxed \
--test_output=all \
//xla/...
```
Your first build will take quite a while because it has to build the entire