XLA

XLA (Accelerated Linear Algebra) is an open-source machine learning (ML) compiler for GPUs, CPUs, and ML accelerators.

OpenXLA Ecosystem

The XLA compiler takes models from popular ML frameworks such as PyTorch, TensorFlow, and JAX, and optimizes them for high-performance execution across different hardware platforms including GPUs, CPUs, and ML accelerators.
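As a quick illustration of how a framework hands a model to XLA, here is a minimal sketch using JAX (one of the frameworks named above). This assumes JAX is installed; the function and variable names are purely illustrative. `jax.jit` stages the Python function out and compiles it with XLA, so the matrix multiply, bias add, and activation can be fused into optimized kernels for the target device:

```python
# Minimal, hypothetical example of framework-driven XLA compilation via JAX.
import jax
import jax.numpy as jnp

@jax.jit  # traces the function and compiles it with XLA on first call
def predict(w, b, x):
    # A tiny dense layer: XLA can fuse the matmul, add, and tanh.
    return jnp.tanh(jnp.dot(x, w) + b)

x = jnp.ones((4, 8))
w = jnp.zeros((8, 2))
b = jnp.zeros(2)

out = predict(w, b, x)  # first call triggers XLA compilation
print(out.shape)
```

Subsequent calls with the same input shapes reuse the compiled executable rather than recompiling.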

openxla.org is the project's website.

Get started

If you want to use XLA to compile your ML project, refer to the documentation for your ML framework (JAX, PyTorch, or TensorFlow).

If you're not contributing code to the XLA compiler, you don't need to clone and build this repo. Everything here is intended for XLA contributors who want to develop the compiler and XLA integrators who want to debug or add support for ML frontends and hardware backends.

Contribute

If you'd like to contribute to XLA, review How to Contribute and then see the developer guide.

Contacts

  • For questions, contact the maintainers: maintainers@openxla.org

Resources

Code of Conduct

While under TensorFlow governance, all community spaces for SIG OpenXLA are subject to the TensorFlow Code of Conduct.