pytorch/torch
Michael Lazos 7b14a14e27 [Inductor] Optimize finding users of buffers for mutation (#105882)
Rather than visiting all nodes in the current environment to determine the users of a buffer, register each buffer's users as nodes are executed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105882
Approved by: https://github.com/jansel
2023-07-29 02:04:03 +00:00
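
For context, here is a minimal sketch (plain Python, not the actual Inductor scheduler code) of the idea described in the commit above: instead of re-scanning every node in the environment to answer "which nodes use this buffer?", record each node's reads as the node is processed, so mutation handling can look users up directly. The `Node`, `users_by_scan`, and `UserRegistry` names are hypothetical illustrations, not Inductor APIs.

```python
from collections import defaultdict


class Node:
    """Hypothetical stand-in for a scheduler node: it has a name and reads some buffers."""

    def __init__(self, name, reads):
        self.name = name
        self.reads = set(reads)


def users_by_scan(nodes, buf):
    """Old approach: walk every node in the environment to find the users of `buf` (O(#nodes) per query)."""
    return [n for n in nodes if buf in n.reads]


class UserRegistry:
    """New approach: record each node's reads right after the node executes, then answer queries by lookup."""

    def __init__(self):
        self._users = defaultdict(list)

    def record(self, node):
        # Called once per node, immediately after it is processed.
        for buf in node.reads:
            self._users[buf].append(node)

    def users_of(self, buf):
        # Dictionary lookup instead of a full scan over all nodes.
        return self._users[buf]


nodes = [Node("n0", ["buf0"]), Node("n1", ["buf0", "buf1"]), Node("n2", ["buf1"])]
registry = UserRegistry()
for n in nodes:
    registry.record(n)

# Both approaches agree on the users of buf0; the registry avoids the repeated scans.
assert [n.name for n in users_by_scan(nodes, "buf0")] == ["n0", "n1"]
assert [n.name for n in registry.users_of("buf0")] == ["n0", "n1"]
```
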
_awaits
_C Add torch.utils to the docs page, remove dead code and fix docstrings (#105142) 2023-07-26 14:24:58 +00:00
_C_flatbuffer
_custom_op Update custom op API (#105947) 2023-07-28 13:30:58 +00:00
_decomp inductor: enable weight prepack for LSTM (#103071) 2023-07-28 13:54:32 +00:00
_dispatch
_dynamo [Compiled Autograd] Handle aten.sym_size/aten.sym_stride (#105814) 2023-07-28 21:42:51 +00:00
_export [BE]: Enable ruff rules PIE807 and PIE810 (#106218) 2023-07-28 22:35:56 +00:00
_functorch inductor: fix CSE issue when have symbolic shape input at the freezing path (#105651) 2023-07-26 08:07:31 +00:00
_higher_order_ops Add torch.ops.out_dtype (#103333) 2023-07-18 16:25:45 +00:00
_inductor [Inductor] Optimize finding users of buffers for mutation (#105882) 2023-07-29 02:04:03 +00:00
_lazy
_logging [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_prims [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_prims_common [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_refs Fix aten.logspace decomposition (#105201) 2023-07-22 04:10:20 +00:00
_subclasses [fake_tensor] Don't run fallback for fbgemm ops (#106210) 2023-07-28 22:31:54 +00:00
amp Documentation for torch.autocast (#95760) 2023-07-22 03:56:34 +00:00
ao Revert "[quant][pt2e] store scale/zero_point as tensor attributes to support serialization (#105894)" 2023-07-28 01:16:02 +00:00
autograd Fix typo ; Update grad_mode.py (#106045) 2023-07-27 00:24:50 +00:00
backends [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
compiler
contrib [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
cpu [core] Bring cpu device module closer to cuda's. (#103172) 2023-07-12 19:43:22 +00:00
csrc fix typo in serialization.md (#106191) 2023-07-29 00:01:59 +00:00
cuda [memory snapshots] removed chained history (#106079) 2023-07-28 06:45:48 +00:00
distributed [Optim in backward] API to retrieve in-backward optimizers (#105991) 2023-07-29 01:36:25 +00:00
distributions [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
fft
func
futures
fx If we can't statically prove 32-bit indexing OK, only add guard if hint exists (#106004) 2023-07-26 16:36:29 +00:00
jit [jit] move get_annotations out of infer_concrete_type_builder (#105197) 2023-07-26 13:39:39 +00:00
legacy
lib
linalg [DocString] Fix incorrect api Examples (#105911) 2023-07-25 13:03:06 +00:00
masked [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
monitor
mps [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
multiprocessing [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
nested
nn [DDP] Support optim in backward after DDP init (#105995) 2023-07-29 01:36:25 +00:00
onnx [ONNX] Support complex in FX exporter (#100554) 2023-07-28 07:03:07 +00:00
optim Change phrasing on optim state hook docs (#106209) 2023-07-28 18:59:21 +00:00
package [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
profiler [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
quantization
signal [BE] Enable ruff's UP rules and autoformat optim/ (#105426) 2023-07-18 21:07:43 +00:00
sparse [BE] Enable ruff's UP rules and autoformat optim/ (#105426) 2023-07-18 21:07:43 +00:00
special
testing [DDP] Support optim in backward after DDP init (#105995) 2023-07-29 01:36:25 +00:00
utils [memory snapshot] track context for segments (#106113) 2023-07-28 06:45:48 +00:00
__config__.py
__future__.py
__init__.py Tweak dynamic=False behavior (#105715) 2023-07-24 16:56:41 +00:00
_appdirs.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_classes.py
_compile.py [dynamo] Reland #104317 - Lazy disable_dynamo API out-of-dynamo (#104664) 2023-07-06 00:48:02 +00:00
_custom_ops.py Update custom op API (#105947) 2023-07-28 13:30:58 +00:00
_deploy.py
_guards.py Unconditionally record when FakeTensorMode is allocated and report it on inconsistency (#105927) 2023-07-26 03:38:42 +00:00
_jit_internal.py
_linalg_utils.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_lobpcg.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_lowrank.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_meta_registrations.py [pt2] add meta for argsort.stable, use sort samples in OpInfo (#106025) 2023-07-27 03:49:17 +00:00
_namedtensor_internals.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_ops.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_python_dispatcher.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_sources.py
_storage_docs.py
_tensor_docs.py [Doc] Add Tensor.Shape (#104750) 2023-07-26 16:30:15 +00:00
_tensor_str.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_tensor.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_torch_docs.py doc: fix fake quantize per channel doc (#105955) 2023-07-26 19:17:41 +00:00
_utils_internal.py
_utils.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt
custom_class_detail.h
custom_class.h
extension.h
functional.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
hub.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
library.h
library.py Enable registering fallthroughs to (op, dk) from torch.library (#106086) 2023-07-28 19:37:59 +00:00
overrides.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
py.typed
quasirandom.py
random.py Correct warning message info in fork_rng (#104525) 2023-07-04 19:08:16 +00:00
README.txt
return_types.py
script.h
serialization.py [easy] Minor torch.load docs fix (#105876) 2023-07-25 03:58:30 +00:00
storage.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
torch_version.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
types.py [Reland] Update mypy to 1.4.1 (#105227) 2023-07-15 20:30:20 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed like the
public headers, but they are really *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.