pytorch/torch
cyy d9fb7166d6 [BE] use DeviceIndex instead of int64_t for related device interfaces (#103068)
This PR unifies the device interfaces in aten/*.cpp and torch/csrc/*.cpp to use **c10::DeviceIndex**.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103068
Approved by: https://github.com/malfet
2023-08-25 20:16:14 +00:00
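As a hedged illustration (the concrete functions touched are listed in the PR; the signature and names below are hypothetical), the change narrows device arguments from a 64-bit integer to c10::DeviceIndex, a small integer alias defined in c10/core/Device.h:

```cpp
// Hypothetical sketch of the interface change in #103068; the real edits are
// spread across aten/*.cpp and torch/csrc/*.cpp.
#include <cstdint>

namespace c10 {
// In c10/core/Device.h, DeviceIndex is a narrow integer alias; redefined here
// only so this sketch compiles on its own.
using DeviceIndex = std::int8_t;
} // namespace c10

// Before: void set_current_device(int64_t device);
// After: the device parameter uses c10::DeviceIndex.
void set_current_device(c10::DeviceIndex device) {
  (void)device; // a real implementation would forward to the backend
}

int main() {
  std::int64_t idx = 0;
  // Call sites that carried a wider integer now narrow explicitly.
  set_current_device(static_cast<c10::DeviceIndex>(idx));
  return 0;
}
```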
_awaits
_C Expose cudaStreamCaptureMode in CUDA Graphs, use local setting in inductor (#107407) 2023-08-25 01:44:26 +00:00
_C_flatbuffer
_custom_op Extend impl_backward to be usable with torch.library operators (#106817) 2023-08-14 14:33:46 +00:00
_decomp Avoid decomposing _unsafe_index in Inductor (#107882) 2023-08-25 04:51:53 +00:00
_dispatch Fix some fake mode confusion between inner/outer fake mode in export (#106515) 2023-08-04 15:42:23 +00:00
_dynamo Fix how DDPOptimizer clones dynamo callback (#107834) 2023-08-25 17:46:36 +00:00
_export Hide transform method by renaming it (#107940) 2023-08-25 16:31:44 +00:00
_functorch [inductor][ac] preserve recompute tags through pattern matching (#107742) 2023-08-25 03:48:26 +00:00
_higher_order_ops Support cond and out_dtype for predispatch (#107941) 2023-08-25 17:37:16 +00:00
_inductor [Quant][Inductor] Enable quantization conv_unary(relu) pattern fusion inside inductor (#105455) 2023-08-25 18:07:29 +00:00
_lazy
_logging Add frame/recompile counter to all log messages in tracing context (#107530) 2023-08-21 13:02:12 +00:00
_numpy torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768) 2023-08-23 18:35:47 +00:00
_prims [activation checkpointing] Add default autocast keys to functional rng wrappers (#107934) 2023-08-25 18:22:02 +00:00
_prims_common Remove dynamo+nvfuser (#105789) 2023-08-08 22:29:32 +00:00
_refs Added normal op decomposition for specializations of the normal op (#106792) 2023-08-25 16:18:28 +00:00
_subclasses Fix Inplace tensor update on transpose (#104689) 2023-08-24 16:58:50 +00:00
amp
ao [quant][pt2e] Make sure XNNPACKQuantizer works with the pre_dispatch=True (#107872) 2023-08-25 05:04:01 +00:00
autograd [profiler] move _enable_dynamo_cache_lookup_profiler (#107720) 2023-08-23 23:41:35 +00:00
backends [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
compiler
contrib
cpu
csrc [BE] use DeviceIndex instead of int64_t for related device interfaces (#103068) 2023-08-25 20:16:14 +00:00
cuda Expose cudaStreamCaptureMode in CUDA Graphs, use local setting in inductor (#107407) 2023-08-25 01:44:26 +00:00
distributed Enable custom device support in fsdp checkpoint (#107289) 2023-08-25 11:50:03 +00:00
distributions [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
export Hide transform method by renaming it (#107940) 2023-08-25 16:31:44 +00:00
fft
func [pt2] support vmap (#101707) 2023-08-09 03:39:33 +00:00
futures
fx Fix aot sequence_nr to reset bwd flag (#107210) 2023-08-24 16:58:12 +00:00
jit [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
legacy
lib Remove some unnecessary <iostream> includes from headers (#106914) 2023-08-25 18:24:05 +00:00
linalg [CUDA][Linalg] Patch crash of linalg.eigh when input matrix is ill-conditioned, in some cusolver version (#107082) 2023-08-16 21:15:15 +00:00
masked
monitor
mps [MPS] Introduce torch.mps.Event() APIs (#102121) 2023-08-08 03:45:45 +00:00
multiprocessing
nested
nn Fix the document of torch.nn.functional.conv2d (#107851) 2023-08-24 18:02:03 +00:00
onnx [ONNX] Enable 'ExportOutput.save' for models larger than 2GB (#107904) 2023-08-25 03:08:38 +00:00
optim Fixes #107737 SGD doc blank line (#107738) 2023-08-25 19:48:30 +00:00
package
profiler [profiler] move _enable_dynamo_cache_lookup_profiler (#107720) 2023-08-23 23:41:35 +00:00
quantization
signal
sparse [core][sparse][pruning] cuSPARSELt Kernels and ops. (#107398) 2023-08-25 07:04:15 +00:00
special
testing Revert "Remove remaining global set_default_dtype calls from tests (#107246)" 2023-08-25 19:34:55 +00:00
utils Expose cudaStreamCaptureMode in CUDA Graphs, use local setting in inductor (#107407) 2023-08-25 01:44:26 +00:00
__config__.py
__future__.py
__init__.py Initial Python 3.12 build fixes (#106083) 2023-08-25 13:23:48 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py Extend impl_backward to be usable with torch.library operators (#106817) 2023-08-14 14:33:46 +00:00
_deploy.py
_guards.py [dynamo] Store originating source in the Guard object (#107634) 2023-08-22 02:16:31 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_lowrank.py
_meta_registrations.py [Quant][Inductor] Enable quantized conv weight prepack inside inductor constant folding (#104581) 2023-08-25 17:37:41 +00:00
_namedtensor_internals.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_ops.py Support cond and out_dtype for predispatch (#107941) 2023-08-25 17:37:16 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor_docs.py
_tensor_str.py
_tensor.py [PyTorch][Tensor] Introduce tensor.dim_order (#106835) 2023-08-25 00:06:03 +00:00
_torch_docs.py Fix the example of torch.slice_scatter (#107849) 2023-08-25 04:19:49 +00:00
_utils_internal.py [Dynamo] Improve PT2 fbcode logging observability (#106932) 2023-08-11 20:46:04 +00:00
_utils.py Enable custom device support in fsdp checkpoint (#107289) 2023-08-25 11:50:03 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt Revert "[Reland] Upgrade NVTX to NVTX3 (#97582)" 2023-08-15 20:55:12 +00:00
custom_class_detail.h
custom_class.h Remove some unnecessary <iostream> includes from headers (#106914) 2023-08-25 18:24:05 +00:00
extension.h reduce header file to boost cpp_wrapper build. (#107585) 2023-08-22 11:58:47 +00:00
functional.py
hub.py Default permissions for torch.hub downloads (#82869) 2023-08-24 15:48:24 +00:00
library.h
library.py
overrides.py Expose torch.export.{save,load} APIs (#107888) 2023-08-25 06:06:36 +00:00
py.typed
quasirandom.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
random.py
README.txt
return_types.py
script.h
serialization.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
storage.py
torch_version.py
types.py [BE]: Apply PYI autofixes to various types (#107521) 2023-08-20 02:42:21 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
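
For illustration only (hypothetical names, not the actual TH API), the split the note describes looks roughly like this: the public header exposes an opaque type plus accessor functions, while the .hpp header exposes the struct definition that the marked sites in torch/csrc reach into directly.

```cpp
// Hypothetical sketch of the public-header / internal-header split; none of
// these names are the real TH API.
#include <cstdint>
#include <vector>

// What a public header in the spirit of THTensor.h exposes: an opaque type
// plus accessors, so callers never depend on the struct layout.
struct Tensor;
std::int64_t Tensor_size(const Tensor* t, int dim);

// What an internal header in the spirit of THTensor.hpp exposes: the full
// definition. Depending on this from torch/csrc is the abstraction violation.
struct Tensor {
  std::vector<std::int64_t> sizes_;
};

std::int64_t Tensor_size(const Tensor* t, int dim) {
  return t->sizes_.at(dim); // internals stay behind the accessor
}

int main() {
  Tensor t{{2, 3}};
  std::int64_t through_api  = Tensor_size(&t, 1); // preferred: public accessor
  std::int64_t through_guts = t.sizes_.at(1);     // violation: struct internals
  return static_cast<int>(through_api - through_guts); // 0
}
```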