pytorch/torch
_awaits
_C DDP + C10D sparse all_reduce changes (#103916) (#104256) 2023-06-28 00:37:52 +00:00
_C_flatbuffer
_custom_op Add API to construct the functional variant of an op (#102293) 2023-06-02 13:36:50 +00:00
_decomp [decomp] Add test tracking core ATen operators (#104262) 2023-07-04 16:41:44 +00:00
_dispatch Reland of https://github.com/pytorch/pytorch/pull/101818 (#103888) 2023-06-21 21:00:56 +00:00
_dynamo Enable fused foreach Adam compilation (#104121) 2023-07-05 23:40:03 +00:00
_export [RFC]: Add test for graph partition after assertion ops functionalization. (#104287) 2023-06-28 22:13:27 +00:00
_functorch Revert "Re-enable low memory dropout (#103330)" 2023-07-05 19:00:40 +00:00
_higher_order_ops [HigherOrderOp] Remove _deprecated_global_ns from some ops (#104105) 2023-06-28 00:03:29 +00:00
_inductor Revert "Re-enable low memory dropout (#103330)" 2023-07-05 19:00:40 +00:00
_lazy
_logging [logging] add custom format option to logging artifacts (#104443) 2023-06-30 19:54:14 +00:00
_prims [HigherOrderOp] Remove _deprecated_global_ns from some ops (#104105) 2023-06-28 00:03:29 +00:00
_prims_common [pt2] add metas for max_unpool2d and max_unpool3d (#103821) 2023-07-01 01:33:35 +00:00
_refs [decomp] Add test tracking core ATen operators (#104262) 2023-07-04 16:41:44 +00:00
_subclasses [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
amp pre_dispatch tracing: support autocast and no_grad/enable_grad ctx managers, add a pre_dispatch_eager dynamo backend (#103024) 2023-06-29 14:17:42 +00:00
ao [Quant][PT2E] Enable conv2d unary and binary recipe for x86 inductor quantizer (#98826) 2023-07-04 00:01:10 +00:00
autograd Deprecate "Type" and support more devices for save_on_cpu (#103245) 2023-06-09 05:05:01 +00:00
backends [BE] Deprecate has_XYZ attributes (#103279) 2023-06-10 05:17:17 +00:00
compiler torch.compiler public namespace (#102182) 2023-06-13 19:52:17 +00:00
contrib
cpu Quantization oneDNN backend only support VNNI CPU (#103653) 2023-06-19 09:50:07 +00:00
csrc [BE] switch fprintf to fmt::print (#104640) 2023-07-05 21:11:39 +00:00
cuda [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
distributed Revert "[6/n][FSDP] Update _sharded_pre_load_state_dict_hook to use DTensor when use_dtensor=True in ShardedStateDictConfig (#104087)" 2023-07-01 07:50:31 +00:00
distributions Fix Dirichlet.log_prob() when x=0 and alpha=1 (#103605) 2023-06-15 16:16:50 +00:00
fft
func [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
futures
fx Preserve original co_filename when FX symbolic_trace (#103885) 2023-07-05 22:00:05 +00:00
jit Revert "[dynamo] Lazy disable_dynamo API out-of-dynamo (#104317)" 2023-07-05 06:21:48 +00:00
legacy
lib Use size_t in THManagedMapAllocator (#103331) 2023-06-13 04:50:30 +00:00
linalg [Doc] linalg.ldl_factor: render the Shape of tensor A (#99777) 2023-06-28 09:28:45 +00:00
masked Fix autograd issue with identity conversions (#92022) 2023-06-21 21:23:03 +00:00
monitor
mps [doc] Improve mps package description (#104184) 2023-06-27 15:50:36 +00:00
multiprocessing
nested
nn Adding precision issue note docs for functional.interpolate (#104622) 2023-07-05 16:20:57 +00:00
onnx [ONNX] Add autograd_inlining flag to torch.onnx.export (#104067) 2023-07-05 15:27:36 +00:00
optim Enable fused foreach Adam compilation (#104121) 2023-07-05 23:40:03 +00:00
package Integrating new API usage metadata logger (#101762) 2023-05-26 00:24:26 +00:00
profiler Fix broken torch._inductor.config import (#104477) 2023-07-01 02:23:44 +00:00
quantization
signal
sparse [core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135) 2023-06-27 19:21:06 +00:00
special
testing Re-land: Turn translation validation on for tests and accuracy runs by default. (#104467) 2023-07-05 19:01:50 +00:00
utils add fused support for xpu devices (#104517) 2023-07-05 21:07:00 +00:00
__config__.py
__future__.py
__init__.py Revert "[dynamo] Lazy disable_dynamo API out-of-dynamo (#104317)" 2023-07-05 06:21:48 +00:00
_appdirs.py
_classes.py
_deploy.py
_guards.py Lift user defined attributes into inputs for certain cases (user defined types and tensors) (#103386) 2023-06-20 23:45:19 +00:00
_jit_internal.py default should be used as default value in boolean_dispatch (#103463) 2023-06-14 03:16:31 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py [pt2] add metas for multilabel_margin_loss ops (#104388) 2023-07-05 13:42:22 +00:00
_namedtensor_internals.py
_ops.py Raise AttributeError in _OpsNamespace if __self__ attribute is requested (#104096) 2023-06-27 01:42:06 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor_docs.py Added is_xla (#103100) 2023-06-22 23:31:04 +00:00
_tensor_str.py Add torch._utils.render_call, improve printoptions (#102623) 2023-05-31 22:08:04 +00:00
_tensor.py This extra message would have helped with Wav2Vec2 debugging. (#103002) 2023-06-06 04:28:16 +00:00
_torch_docs.py doc: fix fake_quantize_per_tensor_affine docs (#104453) 2023-06-30 22:59:00 +00:00
_utils_internal.py
_utils.py fix hpu storage serialization (#101680) 2023-06-21 21:19:49 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt enable more ASAN tests (#101483) 2023-06-15 05:21:15 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py
hub.py
library.h [PyTorch] Delete c10::guts::if_constexpr (#101991) 2023-05-23 23:19:35 +00:00
library.py
overrides.py sparse_mask: backward support for sparse lhs (take 2) (#104341) 2023-07-03 14:12:44 +00:00
py.typed
quasirandom.py
random.py Correct warning message info in fork_rng (#104525) 2023-07-04 19:08:16 +00:00
README.txt
return_types.py
script.h
serialization.py Add docstring to torch.serialization.register_package (#104046) 2023-06-26 23:28:32 +00:00
storage.py fix hpu storage serialization (#101680) 2023-06-21 21:19:49 +00:00
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.