pytorch/torch
Serhat Yilmaz 4ca4640bae [torch][repeat_interleave] remove stream syncronization if output size is given (#58417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58417

Same as title.

Test Plan:
Rely on CI signal.

Update unit test to exercise new code path as well.

Reviewed By: ngimel

Differential Revision: D28482927

fbshipit-source-id: 3ec8682810ed5c8547b1e8d3869924480ce63dcd
2021-05-22 20:53:28 -07:00
_C enable torch.cpu.amp.autocast (#57386) 2021-05-20 17:48:36 -07:00
ao [sparsity] Moving the sparsity python files to OSS (#56617) 2021-04-22 14:07:31 -07:00
autograd [profiler][small] CUDA synchronize guard, minor fix (#58254) 2021-05-13 19:21:56 -07:00
backends NNAPI: flex size support for upsample_nearest2d op (#57563) 2021-05-05 13:54:43 -07:00
contrib
cpu enable torch.cpu.amp.autocast (#57386) 2021-05-20 17:48:36 -07:00
csrc [Pytorch Delegated Backend] Add python binding for (#57156) 2021-05-22 08:34:19 -07:00
cuda [1/n][torch/elastic] Move torchelastic docs *.rst (#148) 2021-05-04 00:57:56 -07:00
distributed Document monitored barrier (#58322) 2021-05-21 19:04:57 -07:00
distributions Deprecate symeig (#57732) 2021-05-12 02:21:35 -07:00
fft Fix variable names in torch.fft examples (#57290) 2021-05-01 15:56:19 -07:00
for_onnx Un-ignore F403 in .flake8 (#55838) 2021-04-13 09:24:07 -07:00
futures Update docs to mention CUDA support for Future (#50048) 2021-05-11 08:26:33 -07:00
fx [fx][graph_drawer] Improve graph drawer coloring and tensor_meta handling (#58699) 2021-05-20 21:26:04 -07:00
jit [JIT] improve documentation (#57991) 2021-05-19 11:47:32 -07:00
legacy
lib [c10d] Fix monitored_barrier with wait_all_ranks (#58702) 2021-05-21 09:40:50 -07:00
linalg Clarifies cholesky_ex role and makes batched support a common string (#58217) 2021-05-17 05:23:06 -07:00
multiprocessing Fix typo in warning for spawn method (#57927) 2021-05-10 13:12:38 -07:00
nn [DDP] Remove train call to module copies (#58595) 2021-05-20 22:34:20 -07:00
onnx [torch][repeat_interleave] remove stream syncronization if output size is given (#58417) 2021-05-22 20:53:28 -07:00
optim refactor ASGD to use functional API (#58410) 2021-05-19 18:55:52 -07:00
package [torch][package] Fix importlib.resources.path for python <3.8.8 (#58718) 2021-05-21 19:16:54 -07:00
profiler [profiler][small] CUDA synchronize guard, minor fix (#58254) 2021-05-13 19:21:56 -07:00
quantization [quant][fx] Validate qconfig_dict keys (#58566) 2021-05-21 15:20:05 -07:00
sparse
special [special] Add xlog1py (#55138) 2021-04-30 05:51:13 -07:00
testing OpInfo: clone, contiguous (#58390) 2021-05-22 18:25:31 -07:00
utils Fix typo. (#58728) 2021-05-21 11:45:10 -07:00
__config__.py
__future__.py
__init__.py enable torch.cpu.amp.autocast (#57386) 2021-05-20 17:48:36 -07:00
_appdirs.py Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
_autograd_functions.py
_classes.py
_deploy.py
_jit_internal.py torch.jit.ignore as a context manager (#55172) 2021-05-14 01:53:50 -07:00
_linalg_utils.py Deprecate symeig (#57732) 2021-05-12 02:21:35 -07:00
_lobpcg.py Deprecate symeig (#57732) 2021-05-12 02:21:35 -07:00
_lowrank.py Deprecate in docs torch.svd and change svd -> linalg_svd (#57981) 2021-05-11 18:04:10 -07:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_storage_docs.py
_tensor_docs.py cfloat and cdouble functions (#58137) 2021-05-13 21:13:37 -07:00
_tensor_str.py Add CSR (compressed sparse row) layout for sparse tensors (#50937) 2021-04-12 10:09:12 -07:00
_tensor.py Fix some tensor operators to return NotImplemented for invalid inputs (#58216) 2021-05-19 13:09:57 -07:00
_torch_docs.py [torch][repeat_interleave] remove stream syncronization if output size is given (#58417) 2021-05-22 20:53:28 -07:00
_utils_internal.py
_utils.py move flatten_dense_tensors and unflatten_dense_tensors to Native (#58006) 2021-05-12 18:18:34 -07:00
_VF.py
_vmap_internals.py Add note about improved vmap prototype to vmap docstring (#57677) 2021-05-06 06:47:18 -07:00
abi-check.cpp
CMakeLists.txt Fix return type of getDeviceMap (#57487) 2021-05-03 15:01:24 -07:00
custom_class_detail.h
custom_class.h
deploy.h
extension.h
functional.py Added sublist support for torch.einsum (#56625) 2021-05-21 08:36:45 -07:00
hub.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
library.h HABANA Device registration key and Autograd key addition (#57094) 2021-05-12 13:07:33 -07:00
overrides.py enable torch.cpu.amp.autocast (#57386) 2021-05-20 17:48:36 -07:00
py.typed
quasirandom.py Port NumPy typing testing style to PyTorch (#54234) 2021-04-15 01:25:16 -07:00
random.py Port NumPy typing testing style to PyTorch (#54234) 2021-04-15 01:25:16 -07:00
README.txt
script.h
serialization.py Ensure torch.save() deterministic output (#57536) 2021-05-10 11:51:55 -07:00
storage.py
types.py Add type annotations to nnapi (#48142) 2021-04-26 19:08:07 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These are *internal implementation detail* headers, whose
contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.