pytorch/torch
Yanbo Liang ab5385fc50 [Dynamo][6.3/N] Further cleanup torch.py (#114669)
A follow-up PR to clean up what I found during the refactor of torch.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114669
Approved by: https://github.com/jansel
2023-12-01 04:08:29 +00:00
_awaits
_C Bug fixes to DDP _update_process_group API. (#114194) 2023-11-27 23:52:40 +00:00
_C_flatbuffer
_custom_op Allow functionalization to work with optional mutable (#114803) 2023-11-30 23:48:03 +00:00
_decomp Revert "Add decomp for replication_pad2d and use for CUDA deterministic (#111590)" 2023-11-30 02:28:14 +00:00
_dispatch
_dynamo [Dynamo][6.3/N] Further cleanup torch.py (#114669) 2023-12-01 04:08:29 +00:00
_export [pytree] test aligned API signature for C++ and Python pytree (#112485) 2023-11-30 17:50:06 +00:00
_functorch [inductor] add a config to specify the shape attribute for the generated svg graphs (#114811) 2023-11-30 06:10:37 +00:00
_higher_order_ops [reland][HigherOrderOp] remove _deprecated_global_ns (#113813) 2023-11-20 23:16:18 +00:00
_inductor [inductor][easy] print out exception message upon failing to write to a file (#114836) 2023-12-01 02:40:43 +00:00
_lazy
_library
_logging Sort the output of TORCH_LOGS=help (#114657) 2023-11-30 20:13:51 +00:00
_numpy BUG: fix np.ndarray.resize under dynamo (#113931) 2023-11-17 18:12:17 +00:00
_prims Add support for torch.Generator type in TorchScript (#110413) 2023-11-21 23:07:21 +00:00
_prims_common Run sympy expressions with Python values / FX tracing (#113978) 2023-11-20 21:25:11 +00:00
_refs Fix compiling add with torch.int32 and scalars (#113965) 2023-11-22 07:32:19 +00:00
_subclasses Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926) (#114526) 2023-11-26 23:40:32 +00:00
_vendor vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
amp Add Half support for CPU autocast on eager mode (#112484) 2023-11-21 20:08:28 +00:00
ao [quant][pt2e] Add generate_numeric_debug_handle pass (#114315) 2023-12-01 03:38:17 +00:00
autograd Fixed an issue where a user-specified default device clashed with the… (#114560) 2023-11-29 17:45:49 +00:00
backends Resolve docstring errors in throughput_benchmark.py, weak.py, _traceback.py, file_baton.py, _contextlib.py, _device.py, cpp_backtrace.py, bundled_inputs.py, run_cpu.py, hooks.py, mobile_optimizer.py, _freeze.py, __init__.py, mkldnn.py, dlpack.py (#113311) 2023-11-15 17:40:04 +00:00
compiler Fix torch.compiler.cudagraph_mark_step_begin example (#112807) 2023-11-07 04:15:31 +00:00
contrib Fixed docstring errors in _fuser.py, _state.py, __init__.py, _freeze.py, _async.py, _recursive.py, _tensorboard_vis.py, _trace.py, _await.py, _check.py, _serialization.py, _script.py, annotations.py, _monkeytype_config.py (#113371) 2023-11-12 03:19:02 +00:00
cpu [Dist] Enable FSDP on CPU (#112145) 2023-11-07 01:37:02 +00:00
csrc Move class definition of DebugInfoWriter to TraceUtil as well (#114901) 2023-12-01 03:28:16 +00:00
cuda Add bsr_dense_addmm triton kernel (#114595) 2023-11-29 05:29:25 +00:00
distributed [DeviceMesh] Add get_local_rank() API to DeviceMesh (#114709) 2023-12-01 03:28:55 +00:00
distributions [doc] two diff meanings of rv generated by torch.tensor.geometric_ and torch.distributions.geometric.Geometric (#113183) 2023-11-15 03:49:04 +00:00
export [pytree] test aligned API signature for C++ and Python pytree (#112485) 2023-11-30 17:50:06 +00:00
fft
func
futures
fx Fix error with int+SymBool (#114828) 2023-11-30 18:30:36 +00:00
jit [BE][Easy]: Apply RUF019: remove duplicate checks for dict access (#114478) 2023-11-29 00:14:02 +00:00
legacy
lib
linalg
masked make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
monitor
mps
multiprocessing Add sparse tensors support to dataloader. (#112842) 2023-11-19 16:05:27 +00:00
nested Fix to wrap jagged dims for split() / split_with_sizes() (#113591) 2023-11-14 19:36:08 +00:00
nn [DeviceMesh] Rename get_dim_groups to get_group (#114708) 2023-11-30 23:40:14 +00:00
onnx [ONNX] Fix op level debug on complex dtype support (#114885) 2023-12-01 02:17:27 +00:00
optim [BE][SparseAdam] cleaner way to verify no sparse params (#114425) 2023-11-29 19:47:03 +00:00
package Add file name and size to the serialization metadata logging (#113077) 2023-11-09 11:14:24 +00:00
profiler [Profiler][Easy] Make timestamps in memory timelines be in microseconds (us) (#112772) 2023-11-03 00:41:41 +00:00
quantization
signal
sparse Call triton bsr_dense_mm/bsr_dense_addmm kernels on mm/addmm float32 inputs when appropriate (#114757) 2023-11-30 13:38:07 +00:00
special
testing [quant][pt2e] Add generate_numeric_debug_handle pass (#114315) 2023-12-01 03:38:17 +00:00
utils [pytree] support collections.defaultdict type for Python pytree (#113255) 2023-11-30 20:46:25 +00:00
__config__.py
__future__.py
__init__.py Revert "Add decomp for replication_pad2d and use for CUDA deterministic (#111590)" 2023-11-30 02:28:14 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926) (#114526) 2023-11-26 23:40:32 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py [sparse][semi-structured][inductor] meta registrations for _cslt_sparse_mm + additional stride checking in test. (#114685) 2023-11-29 00:31:52 +00:00
_namedtensor_internals.py
_ops.py [reland][HigherOrderOp] remove _deprecated_global_ns (#113813) 2023-11-20 23:16:18 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py [doc] two diff meanings of rv generated by torch.tensor.geometric_ and torch.distributions.geometric.Geometric (#113183) 2023-11-15 03:49:04 +00:00
_tensor_str.py Do not error when printing view created in no-grad modified in-place in no-grad (#113716) 2023-11-16 18:57:56 +00:00
_tensor.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
_torch_docs.py [doc] caution torch.multinomial usage (#112892) 2023-11-15 18:20:48 +00:00
_utils_internal.py Update impl_abstract_pystub to be less boilerplatey (#113182) 2023-11-08 00:39:00 +00:00
_utils.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
abi-check.cpp
CMakeLists.txt Revert "[BE] [cuDNN] Always build assuming cuDNN >= 8.0 (#95722)" 2023-11-10 17:26:36 +00:00
custom_class_detail.h
custom_class.h [Reland] [1/N] Fixes clang-tidy warnings in header files (#114668) 2023-11-29 07:11:51 +00:00
extension.h
functional.py make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
hub.py
library.h [fbgemm_gpu] add pt2_compliant tag to some ops (#113201) 2023-11-10 00:32:30 +00:00
library.py Optimize inspect.stack() call in caffe2/torch/library.py (#114700) 2023-11-29 20:54:02 +00:00
overrides.py Add support for torch.Generator type in TorchScript (#110413) 2023-11-21 23:07:21 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py [pytree] register pytree node type in both C++ pytree and Python pytree (#112111) 2023-11-28 11:41:38 +00:00
script.h
serialization.py [BE] Do not warn when safely loading legacy dicts (#113614) 2023-11-14 22:09:10 +00:00
storage.py Fix pydocstyle errors listed in issue 112589 (#113227) 2023-11-13 22:05:45 +00:00
torch_version.py vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
types.py improve annotation device parameters where a device ordinal is allowed (#113647) 2023-11-17 14:41:22 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
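To make the distinction concrete, here is a minimal, purely illustrative sketch of
what "use the public functions" means for an external client.  It assumes the
historical TH C API names (THFloatTensor_newWithSize2d, THFloatTensor_size,
THFloatTensor_free); the exact headers and signatures are assumptions, and the TH
layer has since largely been replaced by ATen, so read this as a sketch of the
policy rather than current, supported usage.

    /* Illustrative sketch only (not taken from the PyTorch sources).
     * Assumes the historical TH C API; header paths and signatures are
     * approximations. */
    #include <stdint.h>
    #include <TH/THTensor.h>      /* public C header: fine for external clients     */
    /* #include <TH/THTensor.hpp>    internal C++ header: implementation detail,
                                     only torch/csrc (at the marked sites) may peek */

    void example(void) {
      /* Allocate through the public API rather than constructing the struct. */
      THFloatTensor *t = THFloatTensor_newWithSize2d(3, 4);

      /* Query metadata through accessors, not by reaching into the fields
       * that THTensor.hpp defines. */
      int64_t rows = THFloatTensor_size(t, 0);
      (void)rows;

      /* Release through the public API as well. */
      THFloatTensor_free(t);
    }

The marked sites in torch/csrc are exactly the places that bypass this accessor
discipline and touch the .hpp internals directly, which is why they must be
revisited whenever the guts of THTensor change.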