pytorch/torch
_awaits
_C Intel GPU Runtime Upstreaming for Device Allocator (#118091) 2024-02-16 06:46:00 +00:00
_C_flatbuffer
_custom_op [inductor][custom ops] Add tag to custom ops to preserve stride orders in inductor (#117298) 2024-01-21 18:47:01 +00:00
_decomp Add pixel_shuffle to core aten decomps (#119899) 2024-02-14 21:01:11 +00:00
_dispatch
_dynamo [Dynamo] Do not create TorchInGraphFunctionVariable for tags (#120005) 2024-02-16 03:37:32 +00:00
_export [pytree] Require serialized_type_name (#119718) 2024-02-15 20:32:44 +00:00
_functorch beef up non-overlapping checks for detecting false aliasing of graph inputs (#119826) 2024-02-14 01:46:30 +00:00
_higher_order_ops Fix a bug in kernel analysis with ttir defined args (#119934) 2024-02-15 02:49:11 +00:00
_inductor [aot_inductor] move CudaWrapperCodeGen into a separate file (#119870) 2024-02-16 08:10:51 +00:00
_lazy
_library Fix FallbackKernel behavior on mutable ops (#118649) 2024-02-09 19:01:54 +00:00
_logging Change default TORCH_LOGS format to match Meta/glog standard (#119869) 2024-02-14 18:56:35 +00:00
_numpy Fix dynamo failure w/ astype (#117952) 2024-02-03 08:10:15 +00:00
_prims Prevent DCE'ing unbacked SymInt for view outputs (#119552) 2024-02-13 16:32:21 +00:00
_prims_common Forward fix for same_shape oblivious guard (#119383) 2024-02-08 02:11:46 +00:00
_refs [ROCm] Enable float16/complex32 fft tests on ROCm (#117296) 2024-02-13 22:35:32 +00:00
_subclasses Account for inference mode in FakeTensor cache (#119963) 2024-02-16 02:53:33 +00:00
_vendor
amp add GradScaler on CPU (#109993) 2024-01-29 23:42:35 +00:00
ao make internal lintrunner mypy clean (#119840) 2024-02-14 00:25:42 +00:00
autograd Fix typo in private attr of inference_mode (#119167) 2024-02-13 14:59:59 +00:00
backends [CUDNN][SDPA] Experimental cuDNN Flash Attention v2 Inference (#115663) 2024-02-14 22:02:06 +00:00
compiler Add a wrapper to transform a NumPy function into a PyTorch function (#114610) 2024-01-02 18:35:29 +00:00
contrib
cpu add GradScaler on CPU (#109993) 2024-01-29 23:42:35 +00:00
csrc Intel GPU Runtime Upstreaming for Device Allocator (#118091) 2024-02-16 06:46:00 +00:00
cuda [torch][cuda][perf] Avoid unnecessary dicts. (#118011) 2024-02-11 19:29:24 +00:00
distributed [FSDP] compile compute and CI with @test_compiled_fsdp (#119933) 2024-02-16 01:48:51 +00:00
distributions Bugfix to MixtureSameFamily's _pad_mixture_dimension (#118947) 2024-02-06 16:24:22 +00:00
export Revert "[export] Disable exported_program.__call__ (#119466)" 2024-02-15 21:42:32 +00:00
fft
func
futures
fx Make pattern matcher more robust (#119876) 2024-02-15 00:48:16 +00:00
jit Enable possibly-undefined error code (#118533) 2024-01-30 21:07:01 +00:00
legacy
lib Remove unneeded linking of torch_shm_manager in CMake (#119540) 2024-02-11 06:33:35 +00:00
linalg
masked Enable possibly-undefined error code (#118533) 2024-01-30 21:07:01 +00:00
monitor
mps
multiprocessing
nested Fix NJT stride access in SDPA dispatcher logic (#119846) 2024-02-14 22:37:52 +00:00
nn Remove Redundant Bullet Point (#120007) 2024-02-15 19:47:35 +00:00
onnx make internal lintrunner mypy clean (#119840) 2024-02-14 00:25:42 +00:00
optim Clarify the patience in ReduceLROnPlateau (#119872) 2024-02-15 19:43:06 +00:00
package [BE]: Add better handling of pathlib.Path with os calls (#116564) 2023-12-31 01:46:03 +00:00
profiler Fix the missing device in _memory_profiler (#119751) 2024-02-15 19:11:15 +00:00
quantization Enable possibly-undefined error code (#118533) 2024-01-30 21:07:01 +00:00
signal Clarifying windows cosine behaviour in the documentation (#119444) 2024-02-09 05:57:44 +00:00
sparse [sparse] semi-structured sparse refactor (#117302) 2024-02-14 01:10:40 +00:00
special
testing [Dynamo] Do not create TorchInGraphFunctionVariable for tags (#120005) 2024-02-16 03:37:32 +00:00
utils fixed flop counter formula for conv transposed backwards pass (#119874) 2024-02-16 02:43:49 +00:00
xpu Intel GPU Runtime Upstreaming for Device Allocator (#118091) 2024-02-16 06:46:00 +00:00
__config__.py
__future__.py Integrate swap_tensors into nn.Module.load_state_dict (#117913) 2024-02-09 22:32:29 +00:00
__init__.py fix torch.set_float32_matmul_precision doc (#119620) 2024-02-11 06:41:37 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py [Lint] replace [assigment] with [method-assign] for methods (#119706) 2024-02-13 02:06:04 +00:00
_guards.py [dynamo][guards] Avoid unnecessary stack copies. (#119115) 2024-02-10 21:56:00 +00:00
_jit_internal.py [jit][perf] Reduce lookupInModule overhead. (#119145) 2024-02-05 18:01:00 +00:00
_linalg_utils.py
_lobpcg.py [Lint] replace [assigment] with [method-assign] for methods (#119706) 2024-02-13 02:06:04 +00:00
_lowrank.py
_meta_registrations.py Add meta registration for _foreach_norm (2nd try) (#119927) 2024-02-16 00:23:23 +00:00
_namedtensor_internals.py
_ops.py Enable local_partial_types (#118467) 2024-01-28 13:38:22 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py Pyi doc inclusion + fix (#117267) 2024-01-15 13:06:53 +00:00
_tensor_str.py Enable possibly-undefined error code (#118533) 2024-01-30 21:07:01 +00:00
_tensor.py Integrate swap_tensors into nn.Module.load_state_dict (#117913) 2024-02-09 22:32:29 +00:00
_torch_docs.py fix torch.cumsum docs (#117944) 2024-02-13 15:29:06 +00:00
_utils_internal.py [inductor] Use torch.cuda.clock_rate instead of triton.testing.nvsmi (#118662) 2024-02-14 03:23:49 +00:00
_utils.py Revert "Add FakeTensor support to torch._utils._rebuild_tensor (#108186)" 2024-02-09 04:19:20 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py additional support for float8_e4m3fnuz and _e5m2fnuz (#115214) 2024-01-22 18:33:41 +00:00
abi-check.cpp
CMakeLists.txt [3/4] Intel GPU Runtime Upstreaming for Device (#116850) 2024-02-01 12:31:26 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py Fix typo in istft docstring (#119776) 2024-02-15 21:20:00 +00:00
hub.py Enable possibly-undefined error code (#118533) 2024-01-30 21:07:01 +00:00
library.h Add way to actually delete a torch.library.Library object (#118318) 2024-01-26 22:30:51 +00:00
library.py Enable local_partial_types (#118467) 2024-01-28 13:38:22 +00:00
overrides.py Integrate swap_tensors into nn.Module.load_state_dict (#117913) 2024-02-09 22:32:29 +00:00
py.typed
quasirandom.py
random.py [random] Replace for loop with list comprehension. (#119143) 2024-02-11 19:29:19 +00:00
README.txt
return_types.py [pytree] reuse flatten_fn in flatten_with_keys_fn to ensure consistency (#117656) 2024-01-17 20:38:49 +00:00
script.h
serialization.py Revert "Add FakeTensor support to torch._utils._rebuild_tensor (#108186)" 2024-02-09 04:19:20 +00:00
storage.py Add Python binding resizable to class {Untyped,Typed}Storage (#119286) 2024-02-07 19:15:55 +00:00
torch_version.py Replace follow_imports = silent with normal (#118414) 2024-01-27 02:44:11 +00:00
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
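To make the layering concrete, here is a minimal, self-contained C++ analogue
of the pattern the note describes.  It is not the real TH API: the MyTensor
struct and its accessor functions are invented for illustration.  The struct
stands in for what a THTensor.hpp-style internal header exposes, and the free
functions stand in for the public API that callers are supposed to use instead.

    // Self-contained analogue of the TH layering described above (not the
    // real TH API).  The struct plays the role of the internal .hpp contents;
    // the accessor functions play the role of the public .h interface.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // What would live in the internal .hpp: the concrete struct.  Its layout
    // is an implementation detail and may change.
    struct MyTensor {
        std::vector<int64_t> sizes;
        float* data = nullptr;
    };

    // What would live in the public .h: functions that hide the struct.
    int64_t MyTensor_nDimension(const MyTensor* t) {
        return static_cast<int64_t>(t->sizes.size());
    }
    int64_t MyTensor_size(const MyTensor* t, int64_t dim) {
        return t->sizes.at(static_cast<size_t>(dim));
    }

    int main() {
        MyTensor t;
        t.sizes = {2, 3};

        // Preferred: go through the public accessors, so the struct layout
        // can change without breaking callers.
        std::cout << MyTensor_nDimension(&t) << " dims, first size "
                  << MyTensor_size(&t, 0) << "\n";

        // The abstraction violation the note warns about: reading the
        // struct's fields directly, which only works because the internal
        // header was installed alongside the public one.
        std::cout << t.sizes.size() << " dims via direct field access\n";
        return 0;
    }

The sites in torch/csrc that do the second thing are exactly the ones marked
with a pointer to this note; they are the ones that will need updating when
the struct internals change.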