pytorch/torch
commit 908fd3d78b by kshitij12345: [fix] composite compliance: quantile and nanquantile (#70894)
Summary:
Reference https://github.com/pytorch/pytorch/issues/69991

Refactored so that only the `out` variant copies the result into `out`; otherwise we return the result of the composite functions as-is.
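The pattern described above can be sketched roughly as follows. This is a hypothetical, simplified illustration (invented names, plain `std::vector` instead of tensors, not the actual ATen code): the composite computation allocates and returns a fresh result, and only the `out` variant performs a copy into the caller-provided output.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Composite computation: returns a fresh result that callers can use
// as-is. Assumes a non-empty input and 0 <= q <= 1.
double quantile_impl(std::vector<double> v, double q) {
    std::sort(v.begin(), v.end());
    double idx = q * (v.size() - 1);
    std::size_t lo = static_cast<std::size_t>(idx);
    std::size_t hi = std::min(lo + 1, v.size() - 1);
    double frac = idx - lo;
    // Linear interpolation between the two nearest order statistics.
    return v[lo] * (1.0 - frac) + v[hi] * frac;
}

// The `out` variant is the only place that copies into `out`; it simply
// delegates to the composite computation.
double& quantile_out(const std::vector<double>& v, double q, double& out) {
    out = quantile_impl(v, q);
    return out;
}
```

Keeping the copy confined to the `out` variant means the non-`out` path never routes its result through an intermediate output buffer, which is what composite compliance requires.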

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70894

Reviewed By: samdow

Differential Revision: D33641742

Pulled By: zou3519

fbshipit-source-id: 671be13b31a7fff3afc0b7976706a5ecfc51ccac
(cherry picked from commit e7d5ac9af3)
2022-01-19 17:54:00 +00:00
_C backout D33469839 (#71443) 2022-01-18 23:51:51 +00:00
_masked
ao backout D33469839 (#71443) 2022-01-18 23:51:51 +00:00
autograd Enable nested default hooks (#70932) 2022-01-11 15:03:49 -08:00
backends NNAPI: quant logistic fix (#70847) 2022-01-07 13:36:33 -08:00
contrib
cpu
csrc Adapt to llvm marking SmallVector::set_size private (#71434) 2022-01-19 00:54:03 +00:00
cuda Document torch.cuda.ExternalStream, torch.cuda.caching_allocator_alloc and torch.cuda.caching_allocator_delete (#70126) 2022-01-12 15:44:40 -08:00
distributed Support multiple input dims for sharded linear. (#70266) 2022-01-19 08:07:14 +00:00
distributions [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
fft
for_onnx
futures torch.futures doc formatting (#70630) 2022-01-07 15:22:22 -08:00
fx [fx2trt] Export some options out (#71315) 2022-01-19 02:13:31 +00:00
jit backout D33469839 (#71443) 2022-01-18 23:51:51 +00:00
legacy
lib
linalg Simplify forward / backward AD for linalg.eigh and add checks (#70528) 2022-01-12 08:35:52 -08:00
monitor torch/monitor: add pybind (#69567) 2022-01-12 13:35:11 -08:00
multiprocessing make ProcessException pickleable (#70118) 2021-12-30 09:09:55 -08:00
nn Fix docstring for nn.MultiHeadAttention (#71100) 2022-01-14 10:29:18 -08:00
onnx [ONNX] minor clarifications of docstrings (#69260) (#69549) 2022-01-13 18:03:27 -08:00
optim fix loading of older models that don't have maximize (#71023) 2022-01-10 06:01:24 -08:00
package [Codemod][FBSourceBlackLinter] Daily arc lint --take BLACK 2022-01-11 04:20:46 -08:00
profiler Add low level torch.profiler.kineto_profile base class (#63302) 2021-12-14 14:47:43 -08:00
quantization [quant][fx][graphmode][be] Change the type for output of convert to be torch.nn.Module (#69959) 2021-12-29 20:33:32 -08:00
sparse
special
testing [fix] composite compliance: quantile and nanquantile (#70894) 2022-01-19 17:54:00 +00:00
utils Lazy load pandas when importing pytorch (#71316) 2022-01-19 17:02:50 +00:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions" 2021-12-27 09:11:46 -08:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py backout D33469839 (#71443) 2022-01-18 23:51:51 +00:00
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor_docs.py Fixes doc errors in Tensor.triu(), Tensor.tril(), Tensor.ravel(). (#71057) 2022-01-11 07:34:59 -08:00
_tensor_str.py added set_printoptions examples (#68324) 2021-12-14 07:40:52 -08:00
_tensor.py Document torch.cuda.ExternalStream, torch.cuda.caching_allocator_alloc and torch.cuda.caching_allocator_delete (#70126) 2022-01-12 15:44:40 -08:00
_torch_docs.py Fix torch.dsplit docs dim specification (#70557) 2022-01-13 19:04:51 -08:00
_utils_internal.py
_utils.py
_VF.py
_vmap_internals.py
abi-check.cpp
autocast_mode.py
CMakeLists.txt [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#70201) 2022-01-12 16:30:39 -08:00
custom_class_detail.h
custom_class.h [jit] Decouple ivalue.h from jit_type.h (#70119) 2022-01-07 18:34:17 -08:00
deploy.h
extension.h
functional.py Add linalg.lu_factor (#66933) 2022-01-05 20:32:12 -08:00
hub.py
library.h [PyTorch] Outline destructor of CppFunction (#63688) 2022-01-07 09:16:23 -08:00
overrides.py add channels last support for ChannelShuffle (#50247) 2022-01-14 11:55:21 -08:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py
storage.py
torch_version.py Lazy import packaging in torch_version (#71345) 2022-01-18 22:12:41 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
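The split the note describes can be sketched as follows. This is a hypothetical illustration (invented `Fake*` names, not the real TH code): the struct layout conceptually belongs in an internal `.hpp` header that external clients should not depend on, while well-behaved code goes through public accessor functions like those declared in `THTensor.h`.

```cpp
#include <cstdint>

// Internal detail -- conceptually lives in something like THTensor.hpp.
// External clients should never rely on this layout; it may change when
// the guts of THTensor are refactored.
struct FakeTHTensor {
    int64_t sizes[2];
    int64_t strides[2];
};

// Public API -- conceptually declared in THTensor.h. Clients call this
// instead of reading t->sizes directly.
int64_t FakeTHTensor_size(const FakeTHTensor* t, int dim) {
    return t->sizes[dim];
}
```

A call site like `FakeTHTensor_size(&t, 0)` respects the abstraction; reading `t.sizes[0]` directly compiles only because the internal header happens to be visible, and it is exactly this kind of access that the note's markers flag in torch/csrc.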