pytorch/torch
Jun Luo 221daeb1a7 Fix deepcopy for tensor with MTIA device key. (#107427)
Summary: A tensor with the MTIA device type doesn't have storage, so we need to treat it the same as other tensors that don't have storage.

Test Plan: CI tests.

Differential Revision: D48456004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107427
Approved by: https://github.com/cx-yin, https://github.com/ezyang
2023-08-23 20:47:36 +00:00
_awaits
_C [profiler] _RecordFunctionFast - faster python bindings for record_function (#107195) 2023-08-22 18:48:30 +00:00
_C_flatbuffer
_custom_op Extend impl_backward to be usable with torch.library operators (#106817) 2023-08-14 14:33:46 +00:00
_decomp Use expect_true to make split with unbacked sizes work. (#106788) 2023-08-15 20:31:30 +00:00
_dispatch Fix some fake mode confusion between inner/outer fake mode in export (#106515) 2023-08-04 15:42:23 +00:00
_dynamo [dynamo] Treat monkey patched .forward as dynamic (#107104) 2023-08-23 19:03:02 +00:00
_export Make ExportedProgram valid tracing callable (#107657) 2023-08-23 08:01:57 +00:00
_functorch pt2: make aot_eager backend handle basic float8 operations (#107783) 2023-08-23 18:10:53 +00:00
_higher_order_ops [quant][pt2e][fix] Remove the requirement of using no_grad for reference model that contains quantized conv2d (#106924) 2023-08-10 19:16:10 +00:00
_inductor Back out "[inductor] make thread order consistent with loop order (#106827)" (#107796) 2023-08-23 18:02:54 +00:00
_lazy
_logging Add frame/recompile counter to all log messages in tracing context (#107530) 2023-08-21 13:02:12 +00:00
_numpy torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768) 2023-08-23 18:35:47 +00:00
_prims [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_prims_common Remove dynamo+nvfuser (#105789) 2023-08-08 22:29:32 +00:00
_refs [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_subclasses Fix torch.compile FunctionalTensor inputs for higherOrderOps (#107604) 2023-08-23 02:42:18 +00:00
amp Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
ao [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
autograd [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
backends [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
compiler
contrib
cpu Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
csrc Remove unnecessary import in python_variable.cpp (#107794) 2023-08-23 19:43:39 +00:00
cuda Correctly format original traceback for delayed CUDA error (#107297) 2023-08-17 03:13:31 +00:00
distributed [2/N][DeviceMesh] Overriding __getitem__ for DeviceMesh to support Mesh Slicing (#107730) 2023-08-23 20:35:30 +00:00
distributions [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
export Expose torch.export.constrain_as_{size,value} APIs (#107735) 2023-08-23 20:13:40 +00:00
fft
func [pt2] support vmap (#101707) 2023-08-09 03:39:33 +00:00
futures
fx pt2: make aot_eager backend handle basic float8 operations (#107783) 2023-08-23 18:10:53 +00:00
jit [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
legacy
lib Revert "Remove some unnecessary <iostream> includes from headers (#106914)" 2023-08-22 17:16:48 +00:00
linalg [CUDA][Linalg] Patch crash of linalg.eigh when input matrix is ill-conditioned, in some cuSOLVER versions (#107082) 2023-08-16 21:15:15 +00:00
masked [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
monitor
mps [MPS] Introduce torch.mps.Event() APIs (#102121) 2023-08-08 03:45:45 +00:00
multiprocessing Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
nested
nn fix pad_sequence docstring (#107669) 2023-08-23 18:01:39 +00:00
onnx [ONNX] More debug logging from fx to onnx (#107654) 2023-08-23 18:05:15 +00:00
optim [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
package [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
profiler Adding allocated and reserved memory values to memory timeline view. (#107056) 2023-08-21 17:20:13 +00:00
quantization Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
signal
sparse Revert "[core][pruning][feature] cuSPARSELt kernels and ops (#102133)" 2023-08-09 16:03:14 +00:00
special
testing Revert the removal of a SampleInput for gather (#107776) 2023-08-23 19:01:03 +00:00
utils Revert "reseed all Generators in Dataloader's _worker_loop() -- via GC (#107131)" 2023-08-23 17:08:07 +00:00
__config__.py
__future__.py
__init__.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py Extend impl_backward to be usable with torch.library operators (#106817) 2023-08-14 14:33:46 +00:00
_deploy.py
_guards.py [dynamo] Store originating source in the Guard object (#107634) 2023-08-22 02:16:31 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_lowrank.py
_meta_registrations.py [CPU] Enable fused_attention pattern matcher (#107128) 2023-08-20 08:53:24 +00:00
_namedtensor_internals.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_ops.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor_docs.py Modify signature for tensor.tile in doc (#106295) 2023-08-01 19:51:52 +00:00
_tensor_str.py
_tensor.py Fix deepcopy for tensor with MTIA device key. (#107427) 2023-08-23 20:47:36 +00:00
_torch_docs.py Fix torch.bucketize docs for "right" (#104474) 2023-08-17 03:08:07 +00:00
_utils_internal.py [Dynamo] Improve PT2 fbcode logging observability (#106932) 2023-08-11 20:46:04 +00:00
_utils.py Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743) 2023-08-08 15:27:34 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt Revert "[Reland] Upgrade NVTX to NVTX3 (#97582)" 2023-08-15 20:55:12 +00:00
custom_class_detail.h
custom_class.h Revert "Remove some unnecessary <iostream> includes from headers (#106914)" 2023-08-22 17:16:48 +00:00
extension.h Reduce included headers to speed up the cpp_wrapper build. (#107585) 2023-08-22 11:58:47 +00:00
functional.py fix torch.norm for custom device (#106198) 2023-08-02 06:25:52 +00:00
hub.py
library.h
library.py Enable registering fallthroughs to (op, dk) from torch.library (#106086) 2023-07-28 19:37:59 +00:00
overrides.py Expose torch.export.constrain_as_{size,value} APIs (#107735) 2023-08-23 20:13:40 +00:00
py.typed
quasirandom.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
random.py
README.txt
return_types.py
script.h
serialization.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
storage.py
torch_version.py
types.py [BE]: Apply PYI autofixes to various types (#107521) 2023-08-20 02:42:21 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
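
To make the distinction concrete, here is a minimal sketch of the two
styles. It assumes the legacy TH C API as it existed when this note was
written; THFloatTensor_resize1d follows the TH naming scheme (the generic
THTensor_(resize1d) expanded per scalar type) and is illustrative, not a
pointer to any header still in the tree.

    #include <TH/THTensor.h>   // public C API header: the sanctioned route

    // Sanctioned: manipulate the tensor only through public functions,
    // which keep the THTensor struct opaque to the caller.
    void grow(THFloatTensor *t) {
      THFloatTensor_resize1d(t, 10);
    }

    // Abstraction violation (what the .hpp headers make possible):
    //
    //   #include <TH/THTensor.hpp>   // internal implementation detail
    //   t->size[0] = 10;             // reaches into struct internals
    //
    // Code in this style has to be rewritten whenever the guts of
    // THTensor change, which is why the few such sites in torch/csrc
    // carry a pointer back to this note.

Keeping the struct opaque means an internal refactor only requires
recompiling callers, not rewriting them.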