pytorch/torch
(columns: path, latest commit message, commit date)
_awaits
_C Revert "Serve multistream graph captures from correct pool (#114647)" 2023-12-20 17:11:42 +00:00
_C_flatbuffer
_custom_op Allow functionalization to work with optional mutable (#114803) 2023-11-30 23:48:03 +00:00
_decomp [inductor] Updated upsample_bilinear2d decomposition (#104182) 2023-12-14 14:50:06 +00:00
_dispatch
_dynamo Ensure wrapping subclasses with as_subclass is supported (#116091) 2023-12-20 14:37:08 +00:00
_export [export][reland] non-strict export with dynamic shapes (#116048) 2023-12-19 23:57:22 +00:00
_functorch Support nn_module_stack in torch.export(strict=False) (#115454) 2023-12-20 01:43:39 +00:00
_higher_order_ops Support nn_module_stack in torch.export(strict=False) (#115454) 2023-12-20 01:43:39 +00:00
_inductor Revert "Serve multistream graph captures from correct pool (#114647)" 2023-12-20 17:11:42 +00:00
_lazy
_library Refactor can_auto_functionalize (#115134) 2023-12-05 22:43:06 +00:00
_logging Add basic autograd TORCH_LOGS support (#115438) 2023-12-20 15:23:44 +00:00
_numpy [BE]: Enable a PLC0131, PLC0132, PLC0205. Fix PLC0132 bug. (#115015) 2023-12-02 20:35:10 +00:00
_prims Add support for torch.Generator type in TorchScript (#110413) 2023-11-21 23:07:21 +00:00
_prims_common [inductor] Allow sympy expressions to participate in type promotion (#115676) 2023-12-13 22:22:37 +00:00
_refs Add decomposition for torch.block_diag (#115096) 2023-12-11 20:04:22 +00:00
_subclasses Add support for multi device foreach ops (#116064) 2023-12-20 04:19:40 +00:00
_vendor vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
amp Add Half support for CPU autocast on eager mode (#112484) 2023-11-21 20:08:28 +00:00
ao [quant][fx] Lower operator.matmul in convert_fx (#113954) 2023-12-12 00:34:58 +00:00
autograd Add basic autograd TORCH_LOGS support (#115438) 2023-12-20 15:23:44 +00:00
backends [MPS] Add MacOS 14 runtime check (#115512) 2023-12-11 21:11:42 +00:00
compiler Fix torch.compiler.cudagraph_mark_step_begin example (#112807) 2023-11-07 04:15:31 +00:00
contrib Fixed docstring errors in _fuser.py, _state.py, __init__.py, _freeze.py, _async.py, _recursive.py, _tensorboard_vis.py, _trace.py, _await.py, _check.py, _serialization.py, _script.py, annotations.py, _monkeytype_config.py (#113371) 2023-11-12 03:19:02 +00:00
cpu [Dist] Enable FSDP on CPU (#112145) 2023-11-07 01:37:02 +00:00
csrc [Easy][BE]: Enable clang-tidy check for duplicate includes (#116193) 2023-12-20 17:56:21 +00:00
cuda [BE] Set torch.cuda.has_half to True (#115884) 2023-12-15 02:30:55 +00:00
distributed Fix ColwiseParallel typo (#116151) 2023-12-20 06:40:32 +00:00
distributions Fix hang in VonMises rejection sampling for small values of concentration (#114498) 2023-12-04 23:07:06 +00:00
export Support nn_module_stack in torch.export(strict=False) (#115454) 2023-12-20 01:43:39 +00:00
fft
func
futures
fx Support nn_module_stack in torch.export(strict=False) (#115454) 2023-12-20 01:43:39 +00:00
jit [BE][Easy]: Apply RUF019: remove duplicate checks for dict access (#114478) 2023-11-29 00:14:02 +00:00
legacy
lib
linalg
masked make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
monitor
mps
multiprocessing Robustify torch.multiprocessing.spawn error reporting to be less deadlock prone (#114688) 2023-12-09 03:36:43 +00:00
nested Fix jagged composite impl of flatten() (#115192) 2023-12-19 19:15:21 +00:00
nn [Doc] Add padding size constraint in nn.ReflectionPad2d (#115995) 2023-12-18 21:29:14 +00:00
onnx Store user model to simplify ONNXProgram.{adapt_torch_*,__call__} APIs (#115281) 2023-12-09 07:46:12 +00:00
optim Revert "Adamw refactor (#115983)" 2023-12-19 15:26:44 +00:00
package Add file name and size to the serialization metadata logging (#113077) 2023-11-09 11:14:24 +00:00
profiler [Profiler][Easy] Make timestamps in memory timelines be in microseconds (us) (#112772) 2023-11-03 00:41:41 +00:00
quantization
signal
sparse [sparse][semi-structured] enable fp32 support, separate sparse and dense constraints (#115550) 2023-12-15 02:28:17 +00:00
special
testing add Half support for layer_norm on CPU (#99590) 2023-12-20 01:11:15 +00:00
utils Activation checkpoint and checkpoint_sequential errors if use_reentrant not passed explicitly (#115868) 2023-12-20 15:23:44 +00:00
__config__.py
__future__.py
__init__.py Some tiny modification about torch.set/get_default_device (#116014) 2023-12-19 05:08:06 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926) (#114526) 2023-11-26 23:40:32 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Fix backward for SDPA NT jagged layout (#115576) 2023-12-12 18:35:40 +00:00
_namedtensor_internals.py
_ops.py Support Predispatch functionalization (#113728) 2023-12-19 20:28:35 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py [doc] two diff meanings of rv generated by torch.tensor.geometric_ and torch.distributions.geometric.Geometric (#113183) 2023-11-15 03:49:04 +00:00
_tensor_str.py Do not error when printing view created in no-grad modified in-place in no-grad (#113716) 2023-11-16 18:57:56 +00:00
_tensor.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
_torch_docs.py Updated docs for deprecated torch.set_default_tensor_type (#115041) 2023-12-07 16:17:36 +00:00
_utils_internal.py [inductor][Observability] Add log for Optimus to enable easier debug (#110452) 2023-12-01 18:25:56 +00:00
_utils.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
abi-check.cpp
CMakeLists.txt Revert "[Reland2] Update NVTX to NVTX3 (#109843)" 2023-12-05 16:10:20 +00:00
custom_class_detail.h
custom_class.h [Reland] [1/N] Fixes clang-tidy warnings in header files (#114668) 2023-11-29 07:11:51 +00:00
extension.h
functional.py make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
hub.py
library.h [fbgemm_gpu] add pt2_compliant tag to some ops (#113201) 2023-11-10 00:32:30 +00:00
library.py Optimize inspect.stack() call in caffe2/torch/library.py (#114700) 2023-11-29 20:54:02 +00:00
overrides.py Some tiny modification about torch.set/get_default_device (#116014) 2023-12-19 05:08:06 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py [pytree] register pytree node type in both C++ pytree and Python pytree (#112111) 2023-11-28 11:41:38 +00:00
script.h
serialization.py [BE] Do not warn when safely loading legacy dicts (#113614) 2023-11-14 22:09:10 +00:00
storage.py Fix pydocstyle errors listed in issue 112589 (#113227) 2023-11-13 22:05:45 +00:00
torch_version.py vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
types.py improve annotation device parameters where a device ordinal is allowed (#113647) 2023-11-17 14:41:22 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they get installed alongside
the public headers, but they are really *internal implementation detail*
headers, whose contents should largely not be used by external clients.
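
Concretely, the split looks roughly like this (a hypothetical sketch in
C++; the names and fields are illustrative, not the actual TH
declarations, which differ in detail):

    // THTensor.h -- installed C-style header: the struct stays opaque and
    // is only manipulated through functions (illustrative names).
    typedef struct THTensor THTensor;
    int64_t THTensor_nDimension(const THTensor* tensor);

    // THTensor.hpp -- internal C++ header: the full definition, using C++
    // members that a plain C client could not consume (illustrative fields).
    #include <vector>
    struct THTensor {
      std::vector<int64_t> sizes_;
      std::vector<int64_t> strides_;
      void* data_;
    };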

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
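
As an illustration of the difference (again a sketch that reuses the
hypothetical names above, not the real TH API):

    // Preferred: depend only on the public header; the struct stays opaque.
    //   #include <TH/THTensor.h>
    int64_t dims_via_api(const THTensor* t) {
      return THTensor_nDimension(t);  // public accessor, layout-independent
    }

    // Abstraction violation (the pattern this note marks in torch/csrc):
    // include the internal header and reach into the struct's fields.
    //   #include <TH/THTensor.hpp>
    int64_t dims_via_internals(const THTensor* t) {
      return static_cast<int64_t>(t->sizes_.size());  // tied to internal layout
    }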