Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.