pytorch/torch
Hong Xu 0ae0fac1bb Clarify, make consistent, and test the behavior of logspace when dtype is integral (#47647)
Summary:
The torch.logspace documentation does not explain how integral dtypes are handled.
Add some clarification and tests for the case where dtype is integral.

The CUDA implementation is also updated to be consistent with the CPU implementation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47647

Reviewed By: gchanan

Differential Revision: D25843351

Pulled By: walterddr

fbshipit-source-id: 45237574d04c56992c18766667ff1ed71be77ac3
2021-01-15 12:31:20 -08:00
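
For context, a minimal sketch of the call that the commit above documents. This is an illustration only: torch.logspace, its steps and dtype parameters, and the floating-point outputs shown are standard PyTorch API, but the description of how integral dtypes are produced is an assumption based on the commit summary, not taken from the PR text.

    import torch

    # Floating-point dtype: points are base**start ... base**end.
    x = torch.logspace(0, 3, steps=4)   # tensor([1., 10., 100., 1000.])

    # Integral dtype: the same points requested as integers.  The commit above
    # clarifies and tests how these values are produced (assumed here: computed
    # in floating point, then cast to the requested integer dtype), and makes
    # the CUDA result match the CPU result.
    y = torch.logspace(0, 3, steps=4, dtype=torch.int64)

    print(x)
    print(y)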
_C Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference 2021-01-15 05:58:35 -08:00
autograd Revert D25563542: Add batched grad testing to gradcheck, turn it on in test_autograd 2021-01-14 19:19:02 -08:00
backends
contrib Add type annotations to _tensorboard_vis.py and hipify_python.py (#49834) 2021-01-04 09:29:51 -08:00
csrc Remove optional for view_fn during View Tracking (#50067) 2021-01-15 08:29:28 -08:00
cuda Add torch.cuda.can_device_access_peer (#50446) 2021-01-12 20:30:45 -08:00
distributed Enable GPU-to-GPU comm in TensorPipeAgent (#44418) 2021-01-14 13:55:41 -08:00
distributions Validate args in HalfCauchy and HalfNormal (#50492) 2021-01-14 10:16:56 -08:00
fft Remove deprecated spectral ops from torch namespace (#48594) 2020-12-05 04:12:32 -08:00
for_onnx
futures [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
fx [FX] Add wrap() docstring to docs and add decorator example (#50555) 2021-01-14 21:31:51 -08:00
jit Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
legacy
lib Change watchdog timeout logging from INFO to ERROR. (#50455) 2021-01-12 20:15:39 -08:00
linalg Added linalg.pinv (#48399) 2021-01-12 06:52:06 -08:00
multiprocessing Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
nn add type annotations to torch.nn.modules.conv (#49564) 2021-01-15 11:16:11 -08:00
onnx Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference 2021-01-15 05:58:35 -08:00
optim [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
package [package] mangle imported module names (#50049) 2021-01-13 16:32:36 -08:00
profiler [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
quantization [quant][refactor] Minor refactor of some typos (#50304) 2021-01-12 15:23:13 -08:00
sparse [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
testing Enable GPU-to-GPU comm in TensorPipeAgent (#44418) 2021-01-14 13:55:41 -08:00
utils MAINT: char class regex simplify (#50294) 2021-01-13 08:48:17 -08:00
__config__.py Expose CXX_FLAGS through __config__ (#47861) 2020-12-01 19:58:29 -08:00
__future__.py
__init__.py [quant] Quantizable LSTM (#49671) 2020-12-30 15:21:38 -08:00
_appdirs.py
_autograd_functions.py
_classes.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_jit_internal.py Unused exception variables (#50181) 2021-01-08 13:33:18 -08:00
_linalg_utils.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
_lobpcg.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_lowrank.py Drop unused imports (#49972) 2021-01-13 12:26:17 -08:00
_namedtensor_internals.py
_ops.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
_six.py Remove redundant code for unsupported Python versions (#49486) 2021-01-06 12:45:46 -08:00
_storage_docs.py
_tensor_docs.py [numpy] torch.{all/any} : output dtype is always bool (#47878) 2021-01-08 11:05:39 -08:00
_tensor_str.py Reland: Add base forward grad logic (#49734) 2020-12-22 12:11:27 -08:00
_torch_docs.py Clarify, make consistent, and test the behavior of logspace when dtype is integral (#47647) 2021-01-15 12:31:20 -08:00
_utils_internal.py
_utils.py add type annotations to torch._utils (#49705) 2021-01-07 16:20:16 -08:00
_VF.py
_vmap_internals.py Fix return value of _vmap_internals._get_name (#49951) 2021-01-05 07:00:48 -08:00
abi-check.cpp
CMakeLists.txt pyi codegen update - remove Declarations.yaml (#48754) 2020-12-07 10:39:38 -08:00
custom_class_detail.h
custom_class.h
extension.h
functional.py Treat has_torch_function and object_has_torch_function as static False when scripting (#48966) 2021-01-10 19:23:38 -08:00
hub.py add close() method to tqdm mock (#46040) 2020-12-21 12:24:30 -08:00
library.h Remove .impl_UNBOXED() and functionalities associated with it (#49220) 2021-01-06 14:22:46 -08:00
overrides.py [StaticRuntime][ATen] Add out variant for narrow_copy (#49502) 2021-01-12 19:35:32 -08:00
py.typed
quasirandom.py
random.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
README.txt
script.h
serialization.py Remove redundant code for unsupported Python versions (#49486) 2021-01-06 12:45:46 -08:00
storage.py
tensor.py move has_torch_function to C++, and make a special case object_has_torch_function (#48965) 2021-01-10 19:23:35 -08:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers do double duty: although they are installed
alongside the public headers, they are *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use the public functions (declared in headers like `THTensor.h`, NOT
`THTensor.hpp`) to manipulate the structs these headers define.  However,
there are a few places in torch/csrc where we violate this abstraction.
They are marked with a pointer to this note.  Each of those sites will
have to be refactored when we refactor the guts of THTensor and related
structures.