Yu, Guangye
b0810168a3
Generalize poison fork logic for each device backend (#144664)
...
# Motivation
Generalize the poison_fork code to make it reusable across different device backends.
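For context, a minimal Python sketch of the poison-fork behavior being generalized (an illustration only, not code or tests from this PR; it assumes a Linux host with a working CUDA build): once a backend has been initialized in the parent process, `fork()` poisons the child so that any later use of that backend raises instead of touching a broken driver context.

```python
import os

import torch

# Hedged sketch of the poison-fork behavior, assuming a Linux host with CUDA.
if torch.cuda.is_available():
    torch.cuda.init()                      # eagerly initialize CUDA in the parent
    pid = os.fork()
    if pid == 0:                           # child process: CUDA is now "poisoned"
        try:
            torch.empty(1, device="cuda")  # any CUDA use in the forked child should raise
        except RuntimeError as err:
            print("child:", err)
        os._exit(0)
    else:
        os.waitpid(pid, 0)
```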
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144664
Approved by: https://github.com/EikanWang, https://github.com/albanD
2025-04-13 09:54:30 +00:00
PyTorch MergeBot
a0ab243c3a
Revert "Generalize poison fork logic for each device backend ( #144664 )"
...
This reverts commit 83bd0b63b5.
Reverted https://github.com/pytorch/pytorch/pull/144664 on behalf of https://github.com/atalman due to failing internal tests ([comment](https://github.com/pytorch/pytorch/pull/144664#issuecomment-2795157082))
2025-04-10 21:02:14 +00:00
Yu, Guangye
83bd0b63b5
Generalize poison fork logic for each device backend (#144664)
...
# Motivation
Generalize the poison_fork code to make it reusable across different device backends.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144664
Approved by: https://github.com/EikanWang, https://github.com/albanD
2025-04-10 02:34:53 +00:00
PyTorch MergeBot
bf1132c196
Revert "Generalize poison fork logic for each device backend ( #144664 )"
...
This reverts commit d86c14156d.
Reverted https://github.com/pytorch/pytorch/pull/144664 on behalf of https://github.com/atalman due to failing periodic test: python test/test_cpp_extensions_mtia_backend.py TestCppExtensionMTIABackend.test_device_context ([comment](https://github.com/pytorch/pytorch/pull/144664#issuecomment-2784506104))
2025-04-07 20:09:53 +00:00
Yu, Guangye
d86c14156d
Generalize poison fork logic for each device backend (#144664)
...
# Motivation
Generalize the poison_fork code to make it reusable across different device backends.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144664
Approved by: https://github.com/EikanWang, https://github.com/albanD
2025-04-07 02:06:21 +00:00
cyyever
456c87c8a2
[8/N] Fix extra warnings brought by clang-tidy-17 (#139151)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139151
Approved by: https://github.com/ezyang
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2024-10-30 14:20:08 +00:00
rzou
889e3eeed3
Avoid cuda init to FakeTensorMode (#124413)
...
Also partially fixes #122109
This PR:
- We add a C++ flag (`only_lift_cpu_tensors`) to toggle the `torch.tensor(1, device='cuda')` ctor strategy. When false (the default), it keeps the current PyTorch behavior of unconditionally constructing a concrete CUDA tensor and then calling `lift_fresh` on it. When true, we instead construct a concrete CPU tensor, call `lift_fresh`, and then call `Tensor.to(device)` (under any ambient modes), as sketched below.
- `FakeTensorMode` flips this flag depending on whether CUDA is available. We don't unconditionally set the flag to true because doing so would likely be BC-breaking.
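A hedged Python sketch of the intended effect (an illustration, not a test from this PR; `only_lift_cpu_tensors` itself is a C++ internal and is not exposed here): with the CPU-first strategy, constructing a scalar "CUDA" tensor under `FakeTensorMode` no longer initializes CUDA.

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# Hedged sketch of the intended behavior, not code from this PR:
# the scalar ctor builds a concrete CPU tensor, calls lift_fresh, and then
# Tensor.to(device), so the real CUDA runtime never has to initialize.
with FakeTensorMode():
    t = torch.tensor(1, device="cuda")

print(t.device)                      # a fake tensor carrying the requested "cuda" device
print(torch.cuda.is_initialized())   # expected: False, no real CUDA tensor was ever built
```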
Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124413
Approved by: https://github.com/eellison
2024-04-19 02:39:35 +00:00
sifengyang
46903d978b
Fix maybe_initialize_device for custom device (#121379)
...
1. Fix `maybe_initialize_device` for custom device.
@wanchaol @albanD
@albanD I am very sorry that I have resubmitted this PR from a new e-mail.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121379
Approved by: https://github.com/albanD
2024-04-09 16:58:52 +00:00
Edward Z. Yang
268b0cc714
Do not run CUDA lazy init if it is triggered with fake mode on (#122636)
...
Partially fixes https://github.com/pytorch/pytorch/issues/122109
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122636
Approved by: https://github.com/zou3519
2024-03-26 05:43:59 +00:00
Yu, Guangye
5c46600f84
[RELAND] refactor lazy init to device-agnostic (#119248)
...
# Motivation
This PR extends `cuda_lazy_init` to `device_lazy_init`, a device-agnostic API that can support any backend, and changes `maybe_initialize_cuda` to `maybe_initialize_device` to keep lazy initialization working for CUDA while remaining extensible to other devices.
# Design
We maintain a flag for each backend to manage the lazy initialization state separately.
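A small Python-level illustration of the lazy initialization that the per-backend flag tracks (a sketch only, not code from this PR; it assumes a CUDA build): nothing touches a device runtime until that backend is first used.

```python
import torch

# Hedged sketch of per-backend lazy initialization (not code from this PR):
# importing torch does not initialize any device runtime; the first real use
# of a backend flips that backend's "initialized" flag.
print(torch.cuda.is_initialized())      # False: CUDA has not been touched yet

if torch.cuda.is_available():
    x = torch.empty(4, device="cuda")   # first CUDA use triggers the device's lazy initialization
    print(torch.cuda.is_initialized())  # True: the CUDA flag is now set
```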
# Additional Context
No additional UTs are needed.
This is a reland PR; the original PR is [refactor lazy init to device-agnostic](https://github.com/pytorch/pytorch/pull/118846).
This is a common PR and does not trigger the xpu ciflow.
Differential Revision: [D53478332](https://our.internmc.facebook.com/intern/diff/D53478332)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119248
Approved by: https://github.com/EikanWang, https://github.com/gujinghui, https://github.com/jgong5, https://github.com/atalman
2024-02-07 15:58:51 +00:00
PyTorch MergeBot
ab613a4019
Revert "refactor lazy init to device-agnostic ( #118846 )"
...
This reverts commit 520771d7b3.
Reverted https://github.com/pytorch/pytorch/pull/118846 on behalf of https://github.com/atalman due to failing tests: https://github.com/pytorch/torchdistx/blob/main/src/python/torchdistx/_C/fake.cc#L11 ([comment](https://github.com/pytorch/pytorch/pull/118846#issuecomment-1927651305))
2024-02-05 18:06:30 +00:00
Yu, Guangye
520771d7b3
refactor lazy init to device-agnostic (#118846)
...
# Motivation
This PR extends `cuda_lazy_init` to `device_lazy_init`, a device-agnostic API that can support any backend, and changes `maybe_initialize_cuda` to `maybe_initialize_device` to keep lazy initialization working for CUDA while remaining extensible to other devices.
# Design
We maintain a flag for each backend to manage the lazy initialization state separately.
# Additional Context
No additional UTs are needed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118846
Approved by: https://github.com/malfet
2024-02-02 12:10:39 +00:00