pytorch/torch
wizzniu c07dc64017 Update pin memory related APIs to not pass 'device' argument (#131858)
Based on https://github.com/pytorch/pytorch/pull/126376, this PR updates all PyTorch callers (e.g., `Tensor.is_pinned()`, `Tensor.pin_memory()`) to stop passing the `device` argument.
For `storage`/`untyped_storage` `.is_pinned()`/`.pin_memory()`, the `device` argument is kept, but passing it is discouraged; if not given, the default `device` is still 'cuda' for backward compatibility.
Additionally, with device-agnostic pin_memory, the `pin_memory_device` argument of `torch.utils.data.DataLoader` is now discouraged. For backward compatibility, explicitly passing this argument still takes effect; if not given, the default `device` is the current accelerator.

Fixes #124908
Relates to https://github.com/pytorch/pytorch/pull/126376

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131858
Approved by: https://github.com/albanD

Co-authored-by: albanD <desmaison.alban@gmail.com>
2025-01-15 17:23:35 +00:00
_awaits
_C Revert "[CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441)" 2025-01-14 00:48:28 +00:00
_C_flatbuffer
_custom_op
_decomp Revert "Migrate from Tuple -> tuple in torch/_decomp (#144260)" 2025-01-10 01:47:29 +00:00
_dispatch Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
_dynamo Graph freezing preparation for non-Inductor backends (#139902) 2025-01-15 11:25:04 +00:00
_export [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
_functorch Add non_c_binding torch functions to allowlist for AOTAutogradCache, confirm no special handlers for them (#144802) 2025-01-15 05:41:36 +00:00
_higher_order_ops [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
_inductor Restore support for other types of async_compile pools (spawn, fork) (#144491) 2025-01-15 06:04:49 +00:00
_lazy remove allow-untyped-defs from torch/_lazy/config.py (#143603) 2024-12-20 05:34:19 +00:00
_library [reland][export] don't decompose custom triton op when exporting (#144284) 2025-01-11 01:34:35 +00:00
_logging Implement increment and add_to_set for CompileEventLogger (#143427) 2025-01-14 02:42:49 +00:00
_numpy [BE] fix ruff rule E226: add missing whitespace around operator in f-strings (#144415) 2025-01-08 21:55:00 +00:00
_prims Remove extra copy torch/_prims (#144407) 2025-01-08 20:14:48 +00:00
_prims_common Pass allow_rhs_unbacked to the stride test in metadata test too (#143040) 2024-12-19 09:37:50 +00:00
_refs Fix torch._refs.tensor error with empty list (#143461) 2025-01-08 01:29:00 +00:00
_strobelight Propagate callable parameter types using ParamSpec (#142306) (#143797) 2024-12-29 23:03:14 +00:00
_subclasses Fix FakeTensor device creation for MPS (#144796) 2025-01-15 05:01:25 +00:00
_vendor
accelerator torch/accelerator: fix device type comparison (#143541) 2024-12-23 10:54:53 +00:00
amp
ao [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
autograd [5/N] Apply Ruff fixes and pyupgrade to Python 3.9 (#144205) 2025-01-15 04:00:47 +00:00
backends Revert "[CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441)" 2025-01-14 00:48:28 +00:00
compiler Add AOTAutogradCache support for cache hot loading APIs (#144499) 2025-01-13 07:07:18 +00:00
contrib [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
cpu
csrc Expose several APIs to public (torch python APIs) (#144525) 2025-01-15 14:34:45 +00:00
cuda Support with statement on torch.Stream (#140138) 2025-01-10 02:05:19 +00:00
distributed Update pin memory related APIs to not pass 'device' argument (#131858) 2025-01-15 17:23:35 +00:00
distributions ReshapeTransform: added missing argument in docstring (#144401) 2025-01-13 17:59:59 +00:00
export [reland][export] don't decompose custom triton op when exporting (#144284) 2025-01-11 01:34:35 +00:00
fft [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
func [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
futures [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
fx [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
jit remove allow-untyped-defs from torch/jit/_pickle.py (#144625) 2025-01-12 00:06:25 +00:00
legacy
lib
linalg [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
masked Update torch.masked.mean to upcast dtype for bool tensors (#139999) 2025-01-08 10:35:19 +00:00
monitor [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
mps Stop ignoring mypy errors in torch/testing/_internal/common_utils.py (#144483) 2025-01-14 22:32:51 +00:00
mtia Revert "[MTIA] (3/n) Implement PyTorch APIs to query/reset device peak memory usage (#143347)" 2024-12-21 04:04:16 +00:00
multiprocessing [BE][CI] bump ruff to 0.8.4 (#143753) 2024-12-24 12:24:10 +00:00
nested [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
nn [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
onnx [onnx] Fix bug for exporting torch.cdist into onnx and support 'compute_mode' (#144213) 2025-01-09 20:07:20 +00:00
optim Revert "Removed unused _RequiredParameter (#144771)" 2025-01-15 15:51:33 +00:00
package Revert "Use absolute path path.resolve() -> path.absolute() (#129409)" 2025-01-04 14:17:20 +00:00
profiler [Profiler] Fix device setting error of other backends in torch.profiler (#144237) 2025-01-10 10:41:11 +00:00
quantization
signal [BE] typing for decorators (#144161) 2025-01-04 16:40:09 +00:00
sparse
special [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) 2024-12-23 14:04:00 +00:00
testing Stop ignoring mypy errors in torch/testing/_internal/common_utils.py (#144483) 2025-01-14 22:32:51 +00:00
utils Update pin memory related APIs to not pass 'device' argument (#131858) 2025-01-15 17:23:35 +00:00
xpu Refine torch.xpu.get_device_properties API error message (#144379) 2025-01-10 06:27:51 +00:00
__config__.py remove allow-untyped-defs for torch/__config__.py (#143320) 2024-12-17 00:16:09 +00:00
__future__.py
__init__.py Revert "Fix torch.normal ignores default_device (#144070)" 2025-01-14 17:41:58 +00:00
_appdirs.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
_classes.py
_compile.py [BE] typing for decorators (#144161) 2025-01-04 16:40:09 +00:00
_custom_ops.py
_deploy.py
_environment.py
_guards.py [ca] add compiled autograd to CompileId (#141907) 2024-12-21 00:41:24 +00:00
_jit_internal.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
_linalg_utils.py
_lobpcg.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
_lowrank.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
_meta_registrations.py [Quant][Inductor][X86] Separate unary post op fusion and lowering for qconv (#144312) 2025-01-15 00:50:54 +00:00
_namedtensor_internals.py
_ops.py Propagate callable parameter types using ParamSpec (#142306) (#144047) 2025-01-06 16:16:18 +00:00
_python_dispatcher.py
_size_docs.py remove allow-untyped-defs from torch/_size_docs.py (#143942) 2024-12-29 01:00:46 +00:00
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py Update pin memory related APIs to not pass 'device' argument (#131858) 2025-01-15 17:23:35 +00:00
_tensor_str.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
_tensor.py __cuda_array_interface__: Use "<V2" for bfloat16. (#143042) 2024-12-14 06:27:52 +00:00
_thread_safe_fork.py
_torch_docs.py Support with statement on torch.Stream (#140138) 2025-01-10 02:05:19 +00:00
_utils_internal.py [reland] Kill capture_pre_autograd_graph API (#143426) 2024-12-18 12:07:09 +00:00
_utils.py Reraise worker errors as runtime errors in more cases when the original exception can't be constructed (#140911) 2024-12-14 03:11:36 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
abi-check.cpp
CMakeLists.txt Revert "export AOTI_TORCH_EXPORT on Windows. (#140030)" 2025-01-06 18:15:52 +00:00
custom_class_detail.h Enable readability-redundant-declaration (#143982) 2024-12-31 00:20:10 +00:00
custom_class.h
extension.h
functional.py
hub.py
library.h Enable more readability-redundant checks (#143963) 2024-12-30 14:49:33 +00:00
library.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
overrides.py [dim_order] raised runtime error when tensor has ambiguous dim order (#141632) 2024-12-08 23:16:57 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Add config.save.use_pinned_memory_for_d2h to serialization config (#143342) 2024-12-20 21:01:18 +00:00
storage.py Update pin memory related APIs to not pass 'device' argument (#131858) 2025-01-15 17:23:35 +00:00
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.