pytorch/torch
rzou Delete qualname from custom_op decorator (#124092)
I forgot to delete this in an earlier PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124092
Approved by: https://github.com/albanD
ghstack dependencies: #123937, #124064, #124065, #124066, #124071, #124089
2024-04-18 12:48:04 +00:00
_awaits
_C Add OpOverload.redispatch; use it in new custom ops API (#124089) 2024-04-18 12:48:04 +00:00
_C_flatbuffer
_custom_op Revert "Switch quantized_decomposed over to new custom ops API (#123454)" 2024-04-12 13:14:59 +00:00
_decomp Refactored implementation for upsample_nearest decompostions (#122783) 2024-04-17 23:05:40 +00:00
_dispatch
_dynamo [dynamo][cpp-guard] Reland Attempt 1 - Enable cpp guard manager (#124231) 2024-04-18 06:36:20 +00:00
_export [export] Restore original placeholder names (part 3: constant input de/serialization) (#123590) 2024-04-15 19:09:41 +00:00
_functorch [aot] trim refcount for subclass runtime wrapper (#124155) 2024-04-18 02:34:52 +00:00
_higher_order_ops Rename impl_abstract to register_fake, part 1/2 (#123937) 2024-04-17 12:46:01 +00:00
_inductor Revert "Re-land precompile triton templates (#124030)" 2024-04-18 07:21:41 +00:00
_lazy
_library Delete qualname from custom_op decorator (#124092) 2024-04-18 12:48:04 +00:00
_logging [TORCH_TRACE] Record stack when no compile context is available (#122644) 2024-03-26 19:30:52 +00:00
_numpy [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_prims [effects] Add inductor support for tokens (#122347) 2024-04-09 03:22:32 +00:00
_prims_common [effects] Add inductor support for tokens (#122347) 2024-04-09 03:22:32 +00:00
_refs Fixed arange decomp for float dtype (#121013) 2024-04-11 09:02:31 +00:00
_subclasses Add fake impl for aten.unique2 (#124306) 2024-04-17 22:55:27 +00:00
_vendor
amp [BE][Ez]: Fix minor potential perf regression from #123960 (#124013) 2024-04-15 16:51:45 +00:00
ao [quant] Do not decompose choose_qparams_per_token_asymmetric (#124178) 2024-04-16 22:58:48 +00:00
autograd Revert "[Profiler][PrivateUse1] Profiler support PrivateUse1 key (#120556)" 2024-04-17 15:38:14 +00:00
backends [BE]: Optimize min/max/sum comprehensions C419 (#123960) 2024-04-12 23:54:15 +00:00
compiler [torch.export] Support is_compiling() flag for non-strict mode (#119602) 2024-02-29 05:52:51 +00:00
contrib
cpu
csrc Add OpOverload.redispatch; use it in new custom ops API (#124089) 2024-04-18 12:48:04 +00:00
cuda [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
distributed [FSDP2] Made unshard return type consistent (#124293) 2024-04-17 23:33:46 +00:00
distributions
export Fix derived dim bugs in ep.run_decomp (#123326) 2024-04-17 04:00:55 +00:00
fft
func Let torch dynamo inline torch.func.grad (#118407) 2024-02-28 20:05:00 +00:00
futures
fx [sym_shapes][perf] _find not update unchanged replacements (#124274) 2024-04-18 08:32:02 +00:00
jit [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
legacy
lib [codemod][lowrisk] Remove unused exception parameter from caffe2/caffe2/image/image_input_op.h (#123056) 2024-04-04 17:24:43 +00:00
linalg Move doc links to point to main (#121823) 2024-03-15 19:49:37 +00:00
masked
monitor
mps
multiprocessing
nested [NT] Fix typo in declared strides variable (#123856) 2024-04-13 19:55:57 +00:00
nn Revert "Add swap_tensors path to nn parametrizations (#124130)" 2024-04-18 06:12:54 +00:00
onnx [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
optim Small Adamax fix (#123498) 2024-04-18 00:50:03 +00:00
package Back out "Support triton.language.dtype with torch.compile (#121690)" (#122108) 2024-03-18 20:50:28 +00:00
profiler Revert "[Profiler][PrivateUse1] Profiler support PrivateUse1 key (#120556)" 2024-04-17 15:38:14 +00:00
quantization
signal
sparse Revert "[sparse] Add fast semi-structured spasification kernels (#122350)" 2024-04-17 11:47:02 +00:00
special
testing [ATen] Add CPU fp16 support for nll_loss and cross_entropy_loss (#123256) 2024-04-18 11:44:38 +00:00
utils [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
xpu Support gpu trace on XPU (#121795) 2024-03-30 13:07:53 +00:00
__config__.py
__future__.py Update nn.Module._apply to not gate on should_use_set_data when swap_tensors is set (#120659) 2024-02-28 00:59:34 +00:00
__init__.py Update compile doc to suggest Module.compile (#123951) 2024-04-12 20:13:21 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_deploy.py
_guards.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_jit_internal.py Adjust logging content for TS usage logging (#123133) 2024-04-03 18:54:26 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py Fix svd_lowrank parameter M (#122681) 2024-03-29 18:06:38 +00:00
_meta_registrations.py Extend int[48]mm ops to float32 input (#124287) 2024-04-17 23:10:49 +00:00
_namedtensor_internals.py
_ops.py Add OpOverload.redispatch; use it in new custom ops API (#124089) 2024-04-18 12:48:04 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
_tensor_docs.py Fix doc example of masked_scatter (#123664) 2024-04-09 22:15:12 +00:00
_tensor_str.py Add sparse compressed meta tensor support (#120707) 2024-03-01 13:28:47 +00:00
_tensor.py Disallow {FakeTensor,FunctionalTensor}.data_ptr (#122514) 2024-03-26 23:55:42 +00:00
_torch_docs.py Graph-Safe RNG State Exchange for Tensor Parallelism (#114068) 2024-03-27 01:14:38 +00:00
_utils_internal.py Stop requiring a pystub for register_fake by default (#124064) 2024-04-17 23:51:20 +00:00
_utils.py Refactor gpu trace to be device-agnostic (#121794) 2024-03-30 13:04:38 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Add testing and fix issues for weights_only load for LRScheduler (#123775) 2024-04-16 20:29:27 +00:00
abi-check.cpp
CMakeLists.txt
custom_class_detail.h
custom_class.h
extension.h
functional.py Fix ouput typos (#120870) 2024-02-29 08:29:14 +00:00
hub.py Add verbose parameter to torch.hub.list (#120717) 2024-03-01 07:39:48 +00:00
library.h Rename impl_abstract to register_fake, part 1/2 (#123937) 2024-04-17 12:46:01 +00:00
library.py Add OpOverload.redispatch; use it in new custom ops API (#124089) 2024-04-18 12:48:04 +00:00
overrides.py [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) 2024-04-17 19:29:34 +00:00
py.typed
quasirandom.py
random.py [2/2] Intel GPU Runtime Upstreaming for Generator (#118613) 2024-02-28 05:28:11 +00:00
README.txt
return_types.py register torch.return_types in torch.fx._pytree (#120027) 2024-02-23 21:52:42 +00:00
script.h
serialization.py Add support to save safetensors checkpoint directly into onnx (#121001) 2024-03-11 15:21:59 +00:00
storage.py Add hpu device support in storage/resize (#119761) 2024-02-17 01:04:27 +00:00
torch_version.py Enable UFMT on torch_version.py and types.py (#123131) 2024-04-09 15:03:17 +00:00
types.py Enable UFMT on torch_version.py and types.py (#123131) 2024-04-09 15:03:17 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  Although they get installed alongside the public headers, they
are *internal implementation detail* headers, whose contents should
largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.