pytorch/torch
Wanchao Liang c76c84bde4 [dynamo] make ProcessGroupVariable a DistributedVariable (#105593)
This PR moves the ProcessGroupVariable from UDO to DistributedVariable
so that the distributed VTs are consolidated together.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105593
Approved by: https://github.com/voznesenskym
2023-07-26 06:42:50 +00:00
_awaits
_C [profiler] add option for kineto synchronization events in the trace (#105187) 2023-07-26 03:45:04 +00:00
_C_flatbuffer
_custom_op [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_decomp [pt2] add decomps for multilabel_margin_loss_forward ops (#105302) 2023-07-23 02:16:29 +00:00
_dispatch Reland of https://github.com/pytorch/pytorch/pull/101818 (#103888) 2023-06-21 21:00:56 +00:00
_dynamo [dynamo] make ProcessGroupVariable a DistributedVariable (#105593) 2023-07-26 06:42:50 +00:00
_export Revert "Slightly improve AOTAutograd logging with ViewAndMutationMeta (#105702)" 2023-07-25 20:17:27 +00:00
_functorch Revert "Slightly improve AOTAutograd logging with ViewAndMutationMeta (#105702)" 2023-07-25 20:17:27 +00:00
_higher_order_ops Add torch.ops.out_dtype (#103333) 2023-07-18 16:25:45 +00:00
_inductor inductor: support conv+binary folding for freezing path (#105048) 2023-07-26 01:50:30 +00:00
_lazy
_logging [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_prims [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_prims_common [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_refs Fix aten.logspace decomposition (#105201) 2023-07-22 04:10:20 +00:00
_subclasses Unconditionally record when FakeTensorMode is allocated and report it on inconsistency (#105927) 2023-07-26 03:38:42 +00:00
amp Documentation for torch.autocast (#95760) 2023-07-22 03:56:34 +00:00
ao [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
autograd Compiled autograd (#103822) 2023-07-24 21:12:05 +00:00
backends [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
compiler
contrib [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
cpu [core] Bring cpu device module closer to cuda's. (#103172) 2023-07-12 19:43:22 +00:00
csrc [profiler] add option for kineto synchronization events in the trace (#105187) 2023-07-26 03:45:04 +00:00
cuda [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
distributed [TP] Enable partial tensor add without redistribute (#105939) 2023-07-26 03:12:39 +00:00
distributions [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
fft
func [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
futures
fx Don't alter original node's meta in Interpreter (#105880) 2023-07-26 03:44:58 +00:00
jit [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
legacy
lib
linalg [DocString] Fix incorrect api Examples (#105911) 2023-07-25 13:03:06 +00:00
masked [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
monitor
mps [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
multiprocessing [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
nested
nn [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
onnx [ONNX] Fix the warnings of aten overload fallback to default in onnx dispatcher (#105972) 2023-07-26 05:42:33 +00:00
optim Implement NAdamW optimizer (#103881) 2023-07-24 19:29:26 +00:00
package [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
profiler [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
quantization
signal [BE] Enable ruff's UP rules and autoformat optim/ (#105426) 2023-07-18 21:07:43 +00:00
sparse [BE] Enable ruff's UP rules and autoformat optim/ (#105426) 2023-07-18 21:07:43 +00:00
special
testing Added ModuleInfo test for meta device ctx init (#105871) 2023-07-26 01:57:54 +00:00
utils [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
__config__.py
__future__.py
__init__.py Tweak dynamic=False behavior (#105715) 2023-07-24 16:56:41 +00:00
_appdirs.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_classes.py
_compile.py [dynamo] Reland #104317 - Lazy disable_dynamo API out-of-dynamo (#104664) 2023-07-06 00:48:02 +00:00
_deploy.py
_guards.py Unconditionally record when FakeTensorMode is allocated and report it on inconsistency (#105927) 2023-07-26 03:38:42 +00:00
_jit_internal.py
_linalg_utils.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_lobpcg.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_lowrank.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_meta_registrations.py Improve FakeTensor to work with mixed meta-cpu embedding bag arguments (#105924) 2023-07-26 01:19:08 +00:00
_namedtensor_internals.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_ops.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_python_dispatcher.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_sources.py
_storage_docs.py
_tensor_docs.py Add deterministic path for Tensor.resize_ (#104300) 2023-07-07 00:22:13 +00:00
_tensor_str.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_tensor.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_torch_docs.py [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
_utils_internal.py
_utils.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt
custom_class_detail.h
custom_class.h
extension.h
functional.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
hub.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
library.h
library.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
overrides.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
py.typed
quasirandom.py
random.py Correct warning message info in fork_rng (#104525) 2023-07-04 19:08:16 +00:00
README.txt
return_types.py
script.h
serialization.py [easy] Minor torch.load docs fix (#105876) 2023-07-25 03:58:30 +00:00
storage.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
torch_version.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
types.py [Reland] Update mypy to 1.4.1 (#105227) 2023-07-15 20:30:20 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.