pytorch/torch — directory listing (last commit 2023-03-22 14:19:59 +00:00)
Columns: name, last commit message, commit date
_awaits
_C Allow new_full's fill_value argument type to be complex (#91345) 2023-03-21 12:34:00 +00:00
_C_flatbuffer
_decomp Improve size mismatch error messaging referencing mat/vec sizes (#96863) 2023-03-17 21:07:48 +00:00
_dispatch
_dynamo [dynamo] handle dim in size kwargs (#96992) (#97098) 2023-03-22 14:19:59 +00:00
_export [aot autograd] merge all outputs of funtionalization analysis into single metadata (#95991) 2023-03-08 16:22:54 +00:00
_functorch Changed logging in aotautograd a little (#97289) 2023-03-22 09:33:30 +00:00
_inductor inductor(cpu): support mkldnn packed linear to improve bfloat16 performance (#96954) 2023-03-22 12:25:59 +00:00
_lazy
_logging Improve TORCH_LOGS settings error msg (#97264) 2023-03-22 13:26:53 +00:00
_prims Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
_prims_common Add support for nonzero, some improvements to reduce guards (#95387) 2023-02-24 00:27:45 +00:00
_refs Allow new_full's fill_value argument type to be complex (#91345) 2023-03-21 12:34:00 +00:00
_subclasses Don't run fallback if symbolic sizes in fake tensor (#97148) 2023-03-21 02:23:44 +00:00
amp Error only if autocast actually enabled (#96097) 2023-03-21 03:13:13 +00:00
ao Init quantization backend config for inductor (#96476) 2023-03-22 07:56:56 +00:00
autograd Deprecate gradcheck check_sparse_nnz argument as duplicate of masked argument (#97187) 2023-03-22 14:11:03 +00:00
backends DOC: Various typo fixes (#97095) 2023-03-20 20:46:04 +00:00
contrib
cpu
csrc Test and fix guard fail message in CompileProfiler (#97055) 2023-03-22 02:17:57 +00:00
cuda Change 1D Tensor of 1 element to 0D Tensor (#96994) 2023-03-21 18:24:19 +00:00
distributed [PTD][Checkpoint] Add checkpointing support for DTensor submesh (#96802) 2023-03-21 08:17:17 +00:00
distributions Fix gumbel cdf (#91698) 2023-03-07 23:04:47 +00:00
fft
func [functorch] linearize (#94173) 2023-02-09 15:45:08 +00:00
futures
fx DCE inference graphs too (#97275) 2023-03-22 01:02:21 +00:00
jit Add location information for assertions in torch.jit.annotations.try_ann_to_type (#96423) 2023-03-11 21:49:13 +00:00
legacy
lib Simplify cmake code (#91546) 2023-02-08 01:05:19 +00:00
linalg
masked std/var: support floating point correction value (#94073) 2023-02-23 05:50:45 +00:00
monitor
mps [MPS] Enable Memory Leak Detection for test_mps.py (#94646) 2023-02-13 17:56:24 +00:00
multiprocessing FIX make sure we import the correct object from multiprocessing (#81862) 2023-03-21 14:48:17 +00:00
nested
nn Update parallel_apply.py for assertion error when len(modules) != len(inputs) (#94671) 2023-03-21 17:46:23 +00:00
onnx [ONNX] 'Transform' as base class for passes (#95935) 2023-03-21 03:31:22 +00:00
optim Change 1D Tensor of 1 element to 0D Tensor (#96994) 2023-03-21 18:24:19 +00:00
package Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
profiler [BE] Remove unnecessary dict comprehensions (#97116) 2023-03-20 00:56:57 +00:00
quantization AO migration: replace torch internal callsites (#94170) 2023-02-07 02:32:23 +00:00
signal
sparse bsr_dense_mm Triton kernel: fix out kwarg (#96648) 2023-03-14 18:01:22 +00:00
special
testing Deprecate gradcheck check_sparse_nnz argument as duplicate of masked argument (#97187) 2023-03-22 14:11:03 +00:00
utils [draft for discussion] add per-dispatch key modes (#97052) 2023-03-21 23:45:45 +00:00
__config__.py
__future__.py
__init__.py component-level configurable logging for dynamo, inductor, aot (#94858) 2023-03-18 04:17:31 +00:00
_appdirs.py
_classes.py [BE] [2/3] Rewrite super() calls in functorch and torch (#94588) 2023-02-10 21:16:33 +00:00
_deploy.py
_guards.py Extend aot autograd dedup guards to params, stop using positions (#96774) 2023-03-21 05:59:33 +00:00
_jit_internal.py Fix usages of contextmanager without finally (#96170) 2023-03-08 20:59:27 +00:00
_linalg_utils.py
_lobpcg.py Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
_lowrank.py
_meta_registrations.py [SDPA] Remove the chunk_grad from mem-eff attention (#96880) 2023-03-17 21:28:25 +00:00
_namedtensor_internals.py
_ops.py [draft for discussion] add per-dispatch key modes (#97052) 2023-03-21 23:45:45 +00:00
_python_dispatcher.py
_sources.py [BE] [2/3] Rewrite super() calls in functorch and torch (#94588) 2023-02-10 21:16:33 +00:00
_storage_docs.py
_tensor_docs.py Add masked_grad kw argument to to_dense (#96095) 2023-03-16 21:38:11 +00:00
_tensor_str.py [BE] Remove dependency on six and future (#94709) 2023-02-14 09:14:14 +00:00
_tensor.py Add HPU to compatible shallow copy list and remove lazy HPU changes (#94673) 2023-02-14 17:15:25 +00:00
_torch_docs.py docs: Match open bracket with close bracket in unsqueeze (#95215) 2023-02-24 03:56:59 +00:00
_utils_internal.py
_utils.py Bump black version to 23.1.0 (#96578) 2023-03-15 06:27:59 +00:00
_VF.py [BE] [2/3] Rewrite super() calls in functorch and torch (#94588) 2023-02-10 21:16:33 +00:00
_vmap_internals.py
_weights_only_unpickler.py Add float to list of allowed ops (#94910) 2023-02-15 23:13:21 +00:00
abi-check.cpp
CMakeLists.txt fix some tiny code issues (#95757) 2023-03-01 23:27:32 +00:00
custom_class_detail.h
custom_class.h More fixes and improved clang-tidy checkers (#93213) 2023-02-01 14:44:17 +00:00
extension.h
functional.py Require DOCTEST_SHOW environ to run plt.show (#96522) 2023-03-10 21:47:20 +00:00
hub.py [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308) 2023-02-07 21:10:56 +00:00
library.h Fix dispatching issue of the new device type. (#97273) 2023-03-21 23:23:06 +00:00
library.py
overrides.py Refactor NT offsets metadata to be a Tensor (#96909) 2023-03-21 18:51:35 +00:00
py.typed
quasirandom.py [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308) 2023-02-07 21:10:56 +00:00
random.py [MPS] Add Python Module Bindings for the MPS backend (#94417) 2023-02-12 21:22:30 +00:00
README.txt
return_types.py
script.h
serialization.py Fix usages of contextmanager without finally (#96170) 2023-03-08 20:59:27 +00:00
storage.py Make share_memory_ call thread safe within itself. (#96664) 2023-03-14 19:27:01 +00:00
torch_version.py
types.py Allow new_full's fill_value argument type to be complex (#91345) 2023-03-21 12:34:00 +00:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers do double duty: they are installed alongside the
public headers, yet their contents are *internal implementation details*
that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.