pytorch/torch
Edward Z. Yang b7215de32f prod ref
It turns out the prim is implemented incorrectly, as torch.prod does not accept
a dim list, so I added a little stub for this.
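
For context, a minimal sketch of what such a stub might look like (hypothetical,
not the exact code from this PR): reduce over a dim list by applying torch.prod
one dimension at a time, since the underlying op only accepts a single int dim.

    import torch

    def prod(a: torch.Tensor, dims, keepdim: bool = False) -> torch.Tensor:
        # Hypothetical stub: canonicalize negative dims, then reduce from the
        # highest index down so remaining indices stay valid as dims vanish.
        dims = sorted(d % a.ndim for d in dims)
        for d in reversed(dims):
            a = torch.prod(a, dim=d, keepdim=keepdim)
        return a

    # e.g. prod(torch.arange(1., 7.).reshape(2, 3), dims=(0, 1)) -> tensor(720.)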

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78461

Approved by: https://github.com/ngimel
2022-05-31 14:18:49 +00:00
_C [PyTorch] Integrate Execution Graph Observer into PyTorch Profiler (#75358) 2022-05-26 08:06:27 +00:00
_C_flatbuffer [4/5]Testing jit module in flatbuffer in Python. (#74387) 2022-03-24 23:29:47 +00:00
_decomp Populate the torch._decomp table on import (#78476) 2022-05-31 03:46:38 +00:00
_lazy Revert "Revert "[LT] Codegen ReuseNode for supported ops"" 2022-05-16 20:14:42 +00:00
_masked Revert "masked logsumexp/logaddexp" 2022-05-24 16:12:35 +00:00
_prims prod ref 2022-05-31 14:18:49 +00:00
_refs prod ref 2022-05-31 14:18:49 +00:00
amp Update amp document with CPU Training/Inference Examples (#77244) 2022-05-11 15:42:45 +00:00
ao Make PyTorch importable on python-3.7.0 (#78500) 2022-05-31 06:11:30 +00:00
autograd Fix gradcheck when outputs that don't require grad precede those that do 2022-05-24 22:41:49 +00:00
backends Revert "[cuDNN V8 API] (reopen) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (Cudnnv8 benchmark limit) (#77002)" 2022-05-24 21:52:35 +00:00
contrib
cpu add autocast cpu doc 2022-03-22 02:02:43 +00:00
csrc [NNC] channels last propagation within NNC fusion group (#76948) 2022-05-30 18:31:49 +00:00
cuda Fix jiterator doc format (#78471) 2022-05-31 03:44:52 +00:00
distributed Clean prefixes when searching for params / buffers to ignore (#78278) 2022-05-26 02:43:03 +00:00
distributions Add mode property to distributions. (#76690) 2022-05-11 18:26:56 +00:00
fft [complex32] fft support (cuda only) (#74857) 2022-05-12 04:28:55 +00:00
futures
fx Add strictness check and made tensors into leaves if input tensors were leaves (#77474) 2022-05-21 01:16:39 +00:00
jit Adding SSA support for convolution_backward 2022-05-20 18:39:47 +00:00
legacy
lib
linalg Update linalg.*norm 2022-05-18 11:46:50 +00:00
monitor
multiprocessing Restore old names for private funcs in legacy storages (#77861) 2022-05-20 02:03:34 +00:00
nested [PyTorch] Delete NestedTensor Python wrapper (#74691) 2022-03-29 19:13:40 +00:00
nn [docs][nn] conv: complex support note (#78351) 2022-05-26 20:33:36 +00:00
onnx [ONNX] Fix check_training_mode in symbolic_helper (#78376) 2022-05-27 00:38:16 +00:00
optim Adding maximize to Adamax (#77409) 2022-05-16 17:34:44 +00:00
package torch: Fix black linter 2022-05-20 01:14:08 +00:00
profiler [PyTorch] Integrate Execution Graph Observer into PyTorch Profiler (#75358) 2022-05-26 08:06:27 +00:00
quantization [quant][fx][improvement] Renamed default_affine_fixed_qparams_observer and default_symmetric_fixed_qparams_observer (#76637) 2022-05-04 02:39:20 +00:00
sparse Compressed sparse layout conversion stubs (#77489) 2022-05-16 18:37:42 +00:00
special Laguerre polynomial (#78366) 2022-05-30 17:24:00 +00:00
testing prod ref 2022-05-31 14:18:49 +00:00
utils [DataPipe] Refactor 'mux' to have buffer as an instance variable 2022-05-19 19:55:27 +00:00
__config__.py
__future__.py
__init__.py [quant] Reordering the imports in the torch/__init__.py 2022-05-20 03:51:15 +00:00
_appdirs.py
_classes.py
_deploy.py [lint] upgrade mypy to latest version 2022-05-03 20:51:34 +00:00
_jit_internal.py Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)" 2022-03-31 04:17:33 -07:00
_linalg_utils.py Remove deprecated torch.solve (#70986) 2022-05-10 13:44:07 +00:00
_lobpcg.py remove inverse from LOBPCG 2022-04-20 19:03:00 +00:00
_lowrank.py
_meta_registrations.py addr ref 2022-05-25 01:40:11 +00:00
_namedtensor_internals.py
_ops.py Return all overloads for an operator in _jit_get_operation 2022-05-04 23:49:47 +00:00
_python_dispatcher.py Lint fix 2022-05-05 05:52:40 +00:00
_six.py
_sources.py Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)" 2022-03-31 04:17:33 -07:00
_storage_docs.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
_tensor_docs.py reland of as_strided support for functionalization; introduce as_strided_scatter 2022-05-24 22:40:44 +00:00
_tensor_str.py Support str for Sparse Compressed tensors 2022-05-18 12:58:54 +00:00
_tensor.py Updates floor_divide to perform floor division (#78411) 2022-05-29 21:28:45 +00:00
_torch_docs.py Fix asarray documentation formatting (#78485) 2022-05-30 19:28:10 +00:00
_utils_internal.py
_utils.py Enhance _rebuild_qtensor to support device types other than CPU (#78234) 2022-05-26 01:36:37 +00:00
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Revert "[cuDNN V8 API] (reopen) Allow the number of kernels profiled under torch.backends.cudnn.benchmark = True to be limited (Cudnnv8 benchmark limit) (#77002)" 2022-05-24 21:52:35 +00:00
custom_class_detail.h
custom_class.h Fix some typos. 2022-04-11 21:55:59 +00:00
deploy.h
extension.h
functional.py Revert "stft: remove non-center overload and python functional wrapper" 2022-05-09 19:59:46 +00:00
hub.py Minor torchhub docs 2022-05-10 11:01:02 +00:00
library.h Back out Dispatcher change that makes Messenger Desktop crash on M1 devices (#77414) 2022-05-13 17:33:53 +00:00
library.py Allow specifying alias analysis while registering new ops 2022-05-19 21:11:40 +00:00
overrides.py Laguerre polynomial (#78366) 2022-05-30 17:24:00 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py Register torch.return_types.* as pytree nodes 2022-04-19 13:46:20 +00:00
script.h
serialization.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
storage.py Add meta device support to _UntypedStorage and _TypedStorage (#78008) 2022-05-28 15:33:45 +00:00
torch_version.py
types.py Introducing SymInt to Pytorch (for tracing size arithmetic) (master rebase) (#74861) 2022-03-31 21:59:59 +00:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
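
For illustration, a hedged sketch of the intended usage, not a verbatim excerpt
from torch/csrc: go through the public C API declared in THTensor.h and never
reach into the struct layout that THTensor.hpp exposes.  The THFloatTensor_*
names follow the classic TH macro-expanded API; treat the exact signatures as
an assumption rather than a reference.

    #include <stdint.h>
    #include <TH/THTensor.h>

    void example(void) {
      /* Create and fill a tensor through the public API only. */
      THFloatTensor *t = THFloatTensor_newWithSize2d(2, 3);
      THFloatTensor_fill(t, 1.0f);
      /* Query metadata through accessor functions; do NOT dereference
         fields of the struct that THTensor.hpp happens to define. */
      int64_t rows = THFloatTensor_size(t, 0);
      (void)rows;
      THFloatTensor_free(t);
    }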