pytorch/torch
Wanchao Liang 4cc474dec4 [dtensor] support torch.save/load with DTensor (#103106)
This PR enables DTensor to be picklable and adds tests verifying that
torch.save/load round-trip DTensor correctly
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103106
Approved by: https://github.com/kumpera
2023-06-09 04:11:15 +00:00
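
A minimal sketch of what this commit enables, assuming a torch.distributed process group is already initialized (e.g. launched via torchrun): a DTensor built with the torch.distributed._tensor APIs (their home at the time of this commit) round-trips through torch.save/torch.load. The mesh size, the Shard(0) placement, and the file name are illustrative only.

    # Hedged sketch: round-trip a DTensor through torch.save/torch.load.
    # Assumes torch.distributed is already initialized and this runs on every rank.
    import torch
    import torch.distributed as dist
    from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

    mesh = DeviceMesh("cpu", list(range(dist.get_world_size())))
    dtensor = distribute_tensor(torch.randn(8, 8), mesh, placements=[Shard(0)])

    path = f"dtensor_rank{dist.get_rank()}.pt"   # illustrative file name
    torch.save(dtensor, path)                    # DTensor is now picklable
    loaded = torch.load(path)

    assert type(loaded) is type(dtensor)
    assert torch.equal(loaded.to_local(), dtensor.to_local())
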
_awaits
_C trigger tracing for MTIA events (#102288) 2023-06-05 15:10:31 +00:00
_C_flatbuffer
_custom_op Add API to construct the functional variant of an op (#102293) 2023-06-02 13:36:50 +00:00
_decomp Back out "Remove check from _prims_common, replace with torch._check* (#102219)", Back out "Forward fix for D46427687" (#103128) 2023-06-07 01:41:41 +00:00
_dispatch Switch most Python RAII guard usages to context manager (#102642) 2023-06-01 16:28:37 +00:00
_dynamo Update error message with torch logging instructions (#102892) 2023-06-09 00:07:08 +00:00
_export Add early validation logic to dynamic_dim (#102982) 2023-06-08 20:23:49 +00:00
_functorch Add a little more error checking to minifier (#103057) 2023-06-07 14:40:12 +00:00
_higher_order_ops Make HigherOrderOperator stop appearing like torch.ops.* in FX (#103108) 2023-06-08 01:55:27 +00:00
_inductor ImportLib py3.10 bug in AOTInductor (#103277) 2023-06-09 02:12:34 +00:00
_lazy
_logging perf hint logging in inductor (#102250) 2023-05-27 03:43:30 +00:00
_prims Back out "Remove check from _prims_common, replace with torch._check* (#102219)", Back out "Forward fix for D46427687" (#103128) 2023-06-07 01:41:41 +00:00
_prims_common Back out "Remove check from _prims_common, replace with torch._check* (#102219)", Back out "Forward fix for D46427687" (#103128) 2023-06-07 01:41:41 +00:00
_refs [Inductor] Fix x.view(dtype) decomp and make inductor support it (#102920) 2023-06-07 17:10:54 +00:00
_subclasses [ROCm] force HIP context initialization for inductor UTs (#103149) 2023-06-07 21:42:33 +00:00
amp change error_message for XPU Autocast data type check (#102073) 2023-05-24 08:36:43 +00:00
ao [quant][pt2] Fix convert in Conv + BN + ReLU QAT fusion (#102993) 2023-06-08 22:10:29 +00:00
autograd trigger tracing for MTIA events (#102288) 2023-06-05 15:10:31 +00:00
backends [Typing] Export torch.backends as subpackage (#102099) 2023-05-24 07:03:17 +00:00
contrib
cpu
csrc [codemod] Use C++17 [[fallthrough]] in caffe2/torch/csrc/utils/python_arg_parser.cpp (#103039) 2023-06-08 17:41:48 +00:00
cuda nn.Linear with BSR inputs: spare the user from explicit Triton kernel registrations (#98403) 2023-05-31 13:09:45 +00:00
distributed [dtensor] support torch.save/load with DTensor (#103106) 2023-06-09 04:11:15 +00:00
distributions Enable mypy allow redefinition (#102046) 2023-05-24 07:05:30 +00:00
fft
func
futures
fx [export] Initial deserialization v2 (#102716) 2023-06-07 16:02:35 +00:00
jit Create public interface for torch.jit (#101678) 2023-06-05 13:14:32 +00:00
legacy
lib
linalg Fix Math Typesetting for torch.linalg.matrix_exp (#101363) 2023-05-15 00:31:12 +00:00
masked
monitor
mps [MPS] Add support for Custom Kernels (#100661) 2023-05-15 17:02:33 +00:00
multiprocessing [BE]: enable PLE error codes in ruff and fix bugs (#101079) 2023-05-11 23:57:25 +00:00
nested
nn numpy1.25 deprecation: np.product -> np.prod (#103263) 2023-06-09 02:18:53 +00:00
onnx [ONNX] Add FX exporter MaxPool tests (#102773) 2023-06-06 23:31:49 +00:00
optim add foreach support for custom device (#102047) 2023-06-07 13:59:20 +00:00
package Integrating new API usage metadata logger (#101762) 2023-05-26 00:24:26 +00:00
profiler [Profiler] Include more uncategorized events in memory profile (#101200) 2023-06-08 16:22:49 +00:00
quantization
signal
sparse sampled_addmm: BSR support (#101163) 2023-05-25 12:33:50 +00:00
special
testing numpy1.25 deprecation: np.product -> np.prod (#103263) 2023-06-09 02:18:53 +00:00
utils [MPS] Prerequisite for MPS C++ extension (#102483) 2023-06-07 17:28:31 +00:00
__config__.py
__future__.py
__init__.py Fix regressions caused by https://github.com/pytorch/pytorch/pull/103128 2023-06-07 09:39:02 -07:00
_appdirs.py
_classes.py
_deploy.py
_guards.py Pretty dataclass dynamo explain (#102869) 2023-06-07 22:38:57 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Dropout support for memory efficient attention (#102038) 2023-06-08 21:50:12 +00:00
_namedtensor_internals.py
_ops.py Make HigherOrderOperator stop appearing like torch.ops.* in FX (#103108) 2023-06-08 01:55:27 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor_docs.py
_tensor_str.py Add torch._utils.render_call, improve printoptions (#102623) 2023-05-31 22:08:04 +00:00
_tensor.py This extra message would have helped with Wav2Vec2 debugging. (#103002) 2023-06-06 04:28:16 +00:00
_torch_docs.py Docs: update default device description (#101283) 2023-05-16 17:07:31 +00:00
_utils_internal.py Add top level function to check if running with deploy (#101420) 2023-05-16 16:05:49 +00:00
_utils.py Preserve coalesce state in sparse COO tensor serialization (#102647) 2023-06-03 01:37:52 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt
custom_class_detail.h
custom_class.h
extension.h
functional.py [BE] Do not expose torch.functional.opt_einsum (#102004) 2023-05-23 01:52:40 +00:00
hub.py Add --offload-to-disk support to minifier (#100546) 2023-05-05 05:25:03 +00:00
library.h [PyTorch] Delete c10::guts::if_constexpr (#101991) 2023-05-23 23:19:35 +00:00
library.py [torch.library] Change Library.__del__ into weakref.finalize (#101829) 2023-05-22 19:51:08 +00:00
overrides.py Switch most Python RAII guard usages to context manager (#102642) 2023-06-01 16:28:37 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Integrating new API usage metadata logger (#101762) 2023-05-26 00:24:26 +00:00
storage.py add storage dtype for custom device (#102481) 2023-06-01 12:46:19 +00:00
torch_version.py
types.py
version.py.tpl [bazel] add build for functorch (#101475) 2023-05-18 20:29:08 +00:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  Although these headers are installed alongside the public ones,
they are *internal implementation detail* headers, whose contents should
largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.