pytorch/torch
Frank Zhang d4712ee218 Added correct isinf handling for Integral tensors (#15489)
Summary:
Currently, calling torch.isinf on an integral tensor raises `RuntimeError: value cannot be converted to type int16_t without overflow: inf`.
This PR suppresses the error and instead returns false (0) for all integral tensors, making the behavior consistent with np.isinf.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15489
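The NumPy behavior this change matches can be checked directly (a minimal sketch; it assumes only that NumPy is installed, and does not touch torch itself):

```python
import numpy as np

# np.isinf on an integer array never raises: integers cannot hold inf,
# so the result is all-False. This is the behavior the PR gives
# torch.isinf on integral tensors.
a = np.array([1, -7, 32767], dtype=np.int16)
print(np.isinf(a).tolist())   # [False, False, False]

# For floating-point input, actual infinities are still detected.
b = np.array([1.0, float("inf"), float("-inf")])
print(np.isinf(b).tolist())   # [False, True, True]
```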

Reviewed By: zou3519

Differential Revision: D13540786

Pulled By: flashhack

fbshipit-source-id: e730dea849da6a59f3752d347bcfbadfd12c6483
2018-12-26 06:36:09 -08:00
_thnn Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
autograd Mention Jacobian-vector product in the doc of torch.autograd (#15197) 2018-12-15 00:10:30 -08:00
backends Add support for torch.backends.cudnn.enabled (#13057) 2018-10-31 09:31:09 -07:00
contrib Remove stages from IR, they are no longer used 2018-10-05 13:58:15 -07:00
csrc Trivial comment update in autograd/function.h (#15529) 2018-12-26 02:25:54 -08:00
cuda Bicubic interpolation for nn.functional.interpolate (#9849) 2018-12-17 15:31:48 -08:00
distributed Making dist.get_default_group private for PT1 release (#14767) 2018-12-04 19:22:24 -08:00
distributions Add at::one_hot (#15208) 2018-12-20 14:24:58 -08:00
for_onnx
jit Add self to Python printer reserved words (#15318) 2018-12-21 16:02:07 -08:00
legacy Remove torch/legacy (#11823) 2018-09-20 14:00:54 -07:00
lib Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248) 2018-12-12 11:24:26 -08:00
multiprocessing Fix cuda multiprocessing cached memory (#14736) 2018-12-05 10:55:43 -08:00
nn Port replication_pad1d to ATen (#15507) 2018-12-24 06:34:02 -08:00
onnx Revert D13494873: [pytorch][PR] Fixing ONNX export of logical ops to have correct output datatype 2018-12-20 15:56:31 -08:00
optim Redefine scheduler to set learning rate using recursive formula (#14010) 2018-12-18 16:44:31 -08:00
sparse sparse.mm(), reland #14526 (#14661) 2018-12-03 10:39:27 -08:00
testing Codemod to update our codebase to 0.4 standard (#6641) 2018-04-17 22:06:54 -04:00
utils Enable running collect_env.py without building PyTorch (#15468) 2018-12-21 11:37:43 -08:00
__init__.py Skip all builtin functions when importing names from _C._VariableFunctions to torch (#13884) 2018-11-15 13:23:57 -08:00
_jit_internal.py Don't enforce docstrings on bool dispatch (#15306) 2018-12-17 14:41:05 -08:00
_ops.py Use realpath for loaded libraries (#13936) 2018-11-15 11:23:20 -08:00
_six.py Refactor dataloader.py (#15331) 2018-12-19 12:36:03 -08:00
_storage_docs.py [ready] General documentation improvements (#5450) 2018-03-08 13:21:12 -05:00
_tensor_docs.py Rename potrs to cholesky_solve (#15334) 2018-12-19 12:31:24 -08:00
_tensor_str.py Fix tensor printing bug in Python 2 (#12732) 2018-12-17 13:17:51 -08:00
_torch_docs.py Implementing cuda kernel for tril_indices and triu_indices (#15203) 2018-12-20 10:23:38 -08:00
_utils_internal.py Use fixed MASTER_PORT in test_distributed (#13109) 2018-10-25 08:51:34 -07:00
_utils.py Don't serialize hooks (#11705) 2018-10-16 20:11:03 -07:00
abi-check.cpp Fixes for Torch Script C++ API (#11682) 2018-09-17 09:54:50 -07:00
CMakeLists.txt allow non-final returns (#15463) 2018-12-21 14:01:33 -08:00
extension.h Remove deprecated variable_tensor_functions (#15003) 2018-12-11 17:16:11 -08:00
functional.py Added correct isinf handling for Integral tensors (#15489) 2018-12-26 06:36:09 -08:00
hub.py Improve hub documentation (#14862) 2018-12-07 14:59:01 -08:00
random.py [ready] General documentation improvements (#5450) 2018-03-08 13:21:12 -05:00
README.txt Make all of TH and THC C++. (#6913) 2018-04-28 07:45:02 -04:00
script.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
serialization.py Support torch.load with encoding (#14743) 2018-12-10 08:07:36 -08:00
storage.py Storage.clone maintains original device (#14751) 2018-12-05 08:33:56 -08:00
tensor.py Change default value of unique to 'sorted=True' 2018-12-20 17:09:08 -08:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
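The convention the note describes can be sketched generically (a hypothetical Python analogue with made-up names, not actual TH code): clients manipulate an opaque handle only through public functions, never by reaching into its internal fields.

```python
class _THTensorImpl:
    """Internal representation -- the moral equivalent of THTensor.hpp.

    Its layout is an implementation detail and may change when the guts
    of the tensor structure are refactored.
    """
    def __init__(self, numel):
        self._numel = numel  # internal field; clients must not touch this


# Public API -- the moral equivalent of functions declared in THTensor.h.
def tensor_new(numel):
    """Create a tensor handle without exposing its internals."""
    return _THTensorImpl(numel)


def tensor_numel(handle):
    """Query the element count through the public accessor."""
    return handle._numel


t = tensor_new(8)
print(tensor_numel(t))   # 8  -- OK: goes through the public function
# t._numel               # abstraction violation: reaches into internals
```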