pytorch/torch
Kurt Mohler 2bfae07a79 Enable dim=None for torch.mean (#81286)
Part of #79525

This will require coordination with XLA before merging, just like #79881
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81286
Approved by: https://github.com/albanD
2022-07-28 22:34:56 +00:00
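For reference, this change lets `dim=None` be passed explicitly to `torch.mean`, in which case the reduction runs over every dimension, matching a call with no `dim` argument at all. A minimal sketch of the behavior, assuming a PyTorch build that already includes #81286:

    import torch

    x = torch.arange(6, dtype=torch.float32).reshape(2, 3)

    # Reducing over an explicit dimension works as before.
    per_row = torch.mean(x, dim=1)   # tensor([1., 4.])

    # With #81286, dim=None is accepted explicitly and means
    # "reduce over all dimensions", the same as torch.mean(x).
    full = torch.mean(x, dim=None)   # tensor(2.5000)
    assert torch.equal(full, torch.mean(x))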
_C Rename SymbolicIntNode to SymIntNodeImpl (#82350) 2022-07-28 18:27:45 +00:00
_C_flatbuffer
_decomp prevent python view impls from getting registered to the meta key (#82007) 2022-07-27 17:15:05 +00:00
_lazy Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_masked Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_prims Rename SymbolicIntNode to SymIntNodeImpl (#82350) 2022-07-28 18:27:45 +00:00
_prims_common Revert "Revert "Added dynamic shape POC (#81093)"" (#82063) 2022-07-23 22:35:50 +00:00
_refs Make factory functions CompositeExplicitAutograd (#82251) 2022-07-28 18:18:51 +00:00
_subclasses Extend fake tensor tests to cuda, add support for index put (#82281) 2022-07-28 16:07:15 +00:00
amp Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
ao Modify LinearAPoT matrix multiplication bitshift to support all k (#82409) 2022-07-28 20:40:26 +00:00
autograd Improve autograd custom function docs (#81340) 2022-07-21 19:54:30 +00:00
backends [RFC] enable oneMKL & oneDNN on-demand verbose functionality (#63212) 2022-07-27 23:29:35 +00:00
contrib
cpu
csrc Enable dim=None for torch.mean (#81286) 2022-07-28 22:34:56 +00:00
cuda Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
distributed Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
distributions Fix typo in dirichlet.py example (#82062) 2022-07-23 22:30:12 +00:00
fft
futures Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
fx Rename SymbolicIntNode to SymIntNodeImpl (#82350) 2022-07-28 18:27:45 +00:00
jit Enable dim=None for torch.mean (#81286) 2022-07-28 22:34:56 +00:00
legacy
lib Make language std configurable. (#75519) 2022-07-13 14:21:27 +00:00
linalg [Array API] Add linalg.vecdot (#70542) 2022-07-12 14:28:54 +00:00
monitor
multiprocessing Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
nested
nn Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
onnx [ONNX] exporter native_layer_norm (#81754) 2022-07-27 00:30:19 +00:00
optim Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
package Copy black config to ufmt and run lintrunner -a (#82043) 2022-07-23 06:04:21 +00:00
profiler Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
quantization fx quant: refactor qconfig setting out of find_matches 2022-06-17 18:52:00 +00:00
sparse Add spdiags sparse matrix initialization (#78439) 2022-07-01 01:11:54 +00:00
special torch.special.scaled_modified_bessel_k0 (#78900) 2022-06-29 14:53:37 +00:00
testing Enable dim=None for torch.mean (#81286) 2022-07-28 22:34:56 +00:00
utils Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
__config__.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
__future__.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
__init__.py Update outdated nondeterministic error examples (#82003) 2022-07-26 16:58:12 +00:00
_appdirs.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_classes.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_deploy.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_jit_internal.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_linalg_utils.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_lobpcg.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_lowrank.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_meta_registrations.py Enable complex for meta tensors (#79975) 2022-07-27 22:19:14 +00:00
_namedtensor_internals.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_ops.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_python_dispatcher.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_six.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_sources.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_storage_docs.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_tensor_docs.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_tensor_str.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_tensor.py cudagraphs dynamo backend (#80566) 2022-07-22 14:06:07 +00:00
_torch_docs.py Enable dim=None for torch.mean (#81286) 2022-07-28 22:34:56 +00:00
_utils_internal.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_utils.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_VF.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
_vmap_internals.py Apply ufmt to torch internal (#81643) 2022-07-22 02:19:50 +00:00
abi-check.cpp
CMakeLists.txt Back out "Revert D37720837: Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto"" (#81450) 2022-07-15 18:25:40 +00:00
custom_class_detail.h
custom_class.h
deploy.h
extension.h
functional.py Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
hub.py Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
library.h Add CppFunction::makeFromBoxedKernel (#82268) 2022-07-27 15:08:52 +00:00
library.py Add doc string for Library.impl (#81047) 2022-07-08 18:18:14 +00:00
overrides.py get rid of push_torch_{dispatch, function}_mode (#78215) 2022-07-22 18:56:37 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py Simplify and optimize linalg.solve 2022-06-11 04:06:40 +00:00
script.h
serialization.py Avoid temporary buffers for tensors with torch.save. (#80404) 2022-06-30 00:19:42 +00:00
storage.py Remove remaining eval calls from torch/storage.py (#81701) 2022-07-19 20:04:41 +00:00
torch_version.py Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520) 2022-07-08 14:31:24 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers are installed like the public headers, but they
serve double duty as *internal implementation detail* headers, whose
contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.