# Motivation

This PR extends `cuda_lazy_init` to `device_lazy_init`, a device-agnostic API that can support any backend, and renames `maybe_initialize_cuda` to `maybe_initialize_device`, so lazy initialization keeps working for CUDA while remaining extensible to other backends.

# Design

We maintain a separate flag for each backend to track its lazy-initialization state independently (a minimal sketch of this idea follows the PR links below).

# Additional Context

No additional unit tests are needed. This is a reland; the original PR is [refactor lazy init to device-agnostic](https://github.com/pytorch/pytorch/pull/118846). This is a common PR and does not trigger the xpu ciflow.

Differential Revision: [D53478332](https://our.internmc.facebook.com/intern/diff/D53478332)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119248

Approved by: https://github.com/EikanWang, https://github.com/gujinghui, https://github.com/jgong5, https://github.com/atalman
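The per-backend flag design described above can be pictured with the following minimal C++ sketch. It is not PyTorch's actual implementation: `DeviceType`, `DeviceLazyInit`, `register_init`, and the `maybe_initialize_device` shown here are illustrative stand-ins that only assume each backend registers an initializer callback and that initialization should run at most once per backend.

```cpp
// Illustrative sketch only; names and types are hypothetical, not PyTorch's API.
#include <array>
#include <cstddef>
#include <functional>
#include <mutex>

enum class DeviceType : std::size_t { CUDA = 0, XPU = 1, NumDeviceTypes = 2 };

class DeviceLazyInit {
 public:
  // Each backend registers its own initializer (driver/context setup, etc.).
  void register_init(DeviceType type, std::function<void()> init) {
    std::lock_guard<std::mutex> guard(mutex_);
    initializers_[static_cast<std::size_t>(type)] = std::move(init);
  }

  // Run the backend's initializer exactly once; later calls are no-ops.
  void maybe_initialize_device(DeviceType type) {
    const auto idx = static_cast<std::size_t>(type);
    std::call_once(flags_[idx], [&] {
      if (initializers_[idx]) {
        initializers_[idx]();
      }
    });
  }

 private:
  static constexpr std::size_t kNumBackends =
      static_cast<std::size_t>(DeviceType::NumDeviceTypes);
  std::mutex mutex_;
  // One once-flag per backend keeps the lazy-init state separate.
  std::array<std::once_flag, kNumBackends> flags_;
  std::array<std::function<void()>, kNumBackends> initializers_;
};
```

In this sketch, `std::call_once` on a per-backend flag gives each backend the same "initialize on first use, no-op afterwards" behavior that a single CUDA-only flag previously provided for CUDA alone.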
| Name |
|---|
| api |
| decompositions |
| dest |
| executorch |
| fuse_attention_patterns |
| operator_versions |
| selective_build |
| shape_functions |
| static_runtime |
| __init__.py |
| BUCK.oss |
| BUILD.bazel |
| build.bzl |
| code_template.py |
| context.py |
| gen_backend_stubs.py |
| gen_executorch.py |
| gen_functionalization_type.py |
| gen_lazy_tensor.py |
| gen_vmap_plumbing.py |
| gen.py |
| local.py |
| model.py |
| native_function_generation.py |
| utils.py |
| yaml_utils.py |