
XLA

XLA (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler for GPUs, CPUs, and ML accelerators.

OpenXLA Ecosystem

The XLA compiler takes models from popular ML frameworks such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms including GPUs, CPUs, and ML accelerators.
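For example, JAX uses XLA as its compiler by default: wrapping a function in jax.jit lowers it to XLA, which compiles it for the available backend. The snippet below is a minimal sketch (the toy layer and shapes are illustrative only, not part of XLA's API):

```python
import jax
import jax.numpy as jnp

@jax.jit                      # JAX lowers this function to XLA and JIT-compiles it
def predict(w, b, x):
    # A toy dense layer; XLA can fuse the matmul, bias add, and tanh.
    return jnp.tanh(x @ w + b)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (128, 64))
b = jnp.zeros((64,))
x = jax.random.normal(key, (32, 128))

y = predict(w, b, x)          # the first call triggers XLA compilation
print(y.shape)                # (32, 64)

# Recent JAX versions let you inspect the StableHLO that is handed to XLA:
print(jax.jit(predict).lower(w, b, x).as_text()[:300])
```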

openxla.org is the project's website.

Get started

If you want to use XLA to compile your ML project, refer to the XLA documentation for your ML framework (JAX, TensorFlow, or PyTorch).
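For instance, a TensorFlow function can opt into XLA compilation with the jit_compile flag on tf.function. The following is a minimal sketch (the toy layer is illustrative only):

```python
import tensorflow as tf

@tf.function(jit_compile=True)   # opt this function into XLA compilation (TF 2.x)
def dense_layer(w, b, x):
    # XLA compiles these ops into a single fused kernel.
    return tf.math.tanh(tf.linalg.matmul(x, w) + b)

w = tf.random.normal((128, 64))
b = tf.zeros((64,))
x = tf.random.normal((32, 128))

y = dense_layer(w, b, x)         # compiled with XLA on the first trace
print(y.shape)                   # (32, 64)
```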

If you're not contributing code to the XLA compiler, you don't need to clone and build this repo. Everything here is intended for XLA contributors who want to develop the compiler and XLA integrators who want to debug or add support for ML frontends and hardware backends.

Contribute

If you'd like to contribute to XLA, review How to Contribute and then see the developer guide.

Contacts

  • For questions, contact the maintainers: maintainers at openxla.org

Resources

Code of Conduct

While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.