pytorch/torch
cyy d0ad848aa5 Enable misc clang-tidy checks (#110283)
This PR enables the misc-* checks in clang-tidy, excluding those that would require extensive code changes for no immediate benefit. Some additional fixes and suppressions were also applied.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110283
Approved by: https://github.com/albanD
2023-09-30 10:39:52 +00:00
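
A clang-tidy suppression of the kind the commit message refers to is just an
inline NOLINT comment in the C++ sources.  The snippet below is a minimal
illustration only; the check name and the function are invented for the
example and are not sites actually touched by this PR.

    // Silence one misc-* check on the next line only; "misc-unused-parameters"
    // is a standard clang-tidy check used here purely as an example.
    // NOLINTNEXTLINE(misc-unused-parameters)
    void example_callback(int unused_flag) {}
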
_awaits
_C Rename torch._C._TensorBase to TensorBase (#109940) 2023-09-25 19:10:22 +00:00
_C_flatbuffer
_custom_op Add torch.library.impl_abstract (#109912) 2023-09-26 01:59:50 +00:00
_decomp [decomp] Fix baddbmm decomposition (#109714) 2023-09-28 21:23:44 +00:00
_dispatch Fix some fake mode confusion between inner/outer fake mode in export (#106515) 2023-08-04 15:42:23 +00:00
_dynamo Skip launching kernels with zero grid in AOT Inductor (#110312) 2023-09-30 09:12:56 +00:00
_export Skip launching kernels with zero grid in AOT Inductor (#110312) 2023-09-30 09:12:56 +00:00
_functorch Revert "Reland "Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)" (#109906)" 2023-09-26 12:10:25 +00:00
_higher_order_ops [Dynamo] Add functional triton kernel wrapper (#110185) 2023-09-30 04:20:20 +00:00
_inductor Skip launching kernels with zero grid in AOT Inductor (#110312) 2023-09-30 09:12:56 +00:00
_lazy
_library [torch.library] Fix some docstrings (#110214) 2023-09-29 01:44:49 +00:00
_logging [BE]: Replace undocumented constant in logging (#109434) 2023-09-16 20:17:32 +00:00
_numpy BUG: fix torch._numpy.arange(5, dtype="float32") (#110005) 2023-09-28 18:21:18 +00:00
_prims fix infinite loop with primtorch and .to(meta) (#109632) 2023-09-22 07:09:04 +00:00
_prims_common Use _check_is_size for validate_dim_length (#109849) 2023-09-26 23:33:31 +00:00
_refs [refs] Fix size check from #108360 (#109083) 2023-09-27 23:59:29 +00:00
_subclasses Add masked_select abstract impl (#110103) 2023-09-27 04:07:58 +00:00
amp Unblock float16 dtype for xla autocasting (#109554) 2023-09-21 03:19:44 +00:00
ao [quant] Enable quantization for wav2letter (#109830) 2023-09-29 00:47:34 +00:00
autograd Support inference_mode decorator (#109274) 2023-09-27 22:21:42 +00:00
backends [BE]: enable ruff rules PLR1722 and PLW3301 (#109461) 2023-09-18 02:07:21 +00:00
compiler
contrib [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
cpu Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
csrc Enable misc clang-tidy checks (#110283) 2023-09-30 10:39:52 +00:00
cuda Improve torch.cuda.amp type hints (#108630) 2023-09-08 06:06:25 +00:00
distributed Log usage of optimizer in backward (#110206) 2023-09-29 11:00:07 +00:00
distributions Spelling fix (#108490) 2023-09-04 16:59:35 +00:00
export deprecate constraints in favor of dynamic_shapes (#110143) 2023-09-28 10:26:21 +00:00
fft
func [pt2] support vmap (#101707) 2023-08-09 03:39:33 +00:00
futures
fx Add support for item() and nonzero() codegen in Inductor (#109893) 2023-09-28 23:37:31 +00:00
jit Python 3.10 Union operator | support for JIT (#109293) 2023-09-25 15:35:54 +00:00
legacy
lib [RELAND] Remove some unnecessary <iostream> includes from headers (#108150) 2023-09-20 21:55:15 +00:00
linalg fix matrix_power documentation bug (#108585) 2023-09-05 22:08:46 +00:00
masked [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
monitor
mps [MPS] Introduce torch.mps.Event() APIs (#102121) 2023-08-08 03:45:45 +00:00
multiprocessing Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
nested Add NestedTensor python subclass (#108314) 2023-09-11 18:29:20 +00:00
nn Log usage of optimizer in backward (#110206) 2023-09-29 11:00:07 +00:00
onnx Revert "[ONNX] Remove the depreacated function _export (#109763)" 2023-09-25 17:47:21 +00:00
optim Simplify the conditionals used for learning rate calculation for ConstantLR learning rate scheduler (#109785) 2023-09-29 23:11:23 +00:00
package removing some redundant str splits (#106089) 2023-09-01 00:22:58 +00:00
profiler [profiler] move _enable_dynamo_cache_lookup_profiler (#107720) 2023-08-23 23:41:35 +00:00
quantization Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
signal [BE] Enable ruff's UP rules and autoformat optim/ (#105426) 2023-07-18 21:07:43 +00:00
sparse Minor fixes in semi-structured sparse code (#105595) 2023-09-25 14:06:08 +00:00
special
testing Fix aminmax on CUDA when input shape contains 0 (#107564) 2023-09-29 16:18:08 +00:00
utils Revert "Reland "Update AOTAutograd to use FunctionalTensorMode instead of C++ functionalization (#106406)" (#109906)" 2023-09-26 12:10:25 +00:00
__config__.py
__future__.py
__init__.py Add torch.library.impl_abstract (#109912) 2023-09-26 01:59:50 +00:00
_appdirs.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_classes.py
_compile.py
_custom_ops.py Add torch.library.impl_abstract (#109912) 2023-09-26 01:59:50 +00:00
_deploy.py
_guards.py [RELAND] Force synced KJT to trace unbacked SymInt (#108960) (#109216) 2023-09-18 14:39:44 +00:00
_jit_internal.py Python 3.10 Union operator | support for JIT (#109293) 2023-09-25 15:35:54 +00:00
_linalg_utils.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_lobpcg.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_lowrank.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_meta_registrations.py Fix python decomps for OpOverloadPackets and add tests (#107707) 2023-09-25 20:53:30 +00:00
_namedtensor_internals.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_ops.py Add torch.ops.import_module (#110090) 2023-09-27 13:56:47 +00:00
_python_dispatcher.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_sources.py
_storage_docs.py
_tensor_docs.py Rename torch._C._TensorBase to TensorBase (#109940) 2023-09-25 19:10:22 +00:00
_tensor_str.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_tensor.py Pickle support for NT (#110219) 2023-09-29 15:30:06 +00:00
_torch_docs.py Improved the docs for torch.std, torch.var, torch.std_mean, torch.var_mean and torch.cov (#109326) 2023-09-19 20:47:24 +00:00
_utils_internal.py [TORCH_LIBRARY] Add impl_abstract_pystub (#109529) 2023-09-22 04:55:36 +00:00
_utils.py Hide __getattr__ from type checkers (#109683) 2023-09-21 17:01:23 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt Clean up CMake target linking (#109959) 2023-09-25 01:37:14 +00:00
custom_class_detail.h
custom_class.h [RELAND] Remove some unnecessary <iostream> includes from headers (#108150) 2023-09-20 21:55:15 +00:00
extension.h reduce header file to boost cpp_wrapper build. (#107585) 2023-08-22 11:58:47 +00:00
functional.py Improve docs for torch.unique dim argument (#108292) 2023-09-02 11:09:09 +00:00
hub.py Default permissions for torch.hub downloads (#82869) 2023-08-24 15:48:24 +00:00
library.h [TORCH_LIBRARY] Add impl_abstract_pystub (#109529) 2023-09-22 04:55:36 +00:00
library.py [torch.library] Fix some docstrings (#110214) 2023-09-29 01:44:49 +00:00
overrides.py Disabled UserWarnings for some public functions in torch.overrides (#109890) 2023-09-23 20:40:04 +00:00
py.typed
quasirandom.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
random.py Integrate xpu into torch.Generator and torch.seed (#109866) 2023-09-27 17:44:45 +00:00
README.txt
return_types.py
script.h
serialization.py Fix hpu deserialization bug (#109499) 2023-09-19 00:10:51 +00:00
storage.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
torch_version.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
types.py [BE]: Apply PYI autofixes to various types (#107521) 2023-08-20 02:42:21 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers do double duty: they are installed along with the
public headers, but they are really *internal implementation detail* headers
whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
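
As a concrete sketch of the rule above: code in torch/csrc should manipulate
tensors through the public C API declared in headers such as `THTensor.h`,
not by reaching into the struct internals exposed by `THTensor.hpp`.  The
helper below is hypothetical, and the THFloatTensor_* functions are drawn
from the legacy TH generic API purely for illustration; they may not match
what the current tree ships.

    #include <cstddef>
    #include <TH/TH.h>  // umbrella header for the legacy public TH C API

    // Hypothetical helper: touch the tensor's contents only through public
    // accessors declared in THTensor.h.
    static void fill_with_ones(THFloatTensor* t) {
      ptrdiff_t n = THFloatTensor_nElement(t);  // public size accessor
      float* data = THFloatTensor_data(t);      // public data accessor
      for (ptrdiff_t i = 0; i < n; ++i) {
        data[i] = 1.0f;
      }
    }

    // The abstraction violation this note warns about would instead
    // #include <TH/THTensor.hpp> and read or write the struct's internal
    // fields directly; the sites in torch/csrc that still do this carry a
    // pointer back to this note.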