Commit Graph

626 Commits

Author SHA1 Message Date
Sam Gross
7c0b16c140 Add torch.take and Tensor.put_ (#3263)
* Add torch.take and Tensor.put_

These are similar to numpy.take and numpy.put. The take function allows
you to linearly index into a tensor without viewing it as a 1D tensor
first. The output has the same shape as the indices. The put function
copies values into a tensor, also using linear indices.
2017-11-01 06:04:44 -04:00
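A minimal sketch of the two new calls, with made-up values (take reads at flat indices, put_ writes at them in place):

    import torch

    x = torch.arange(1., 7.).view(2, 3)    # [[1., 2., 3.], [4., 5., 6.]]
    idx = torch.LongTensor([0, 4, 5])

    # take treats x as if it were flattened to 1D; the result
    # has the same shape as the indices.
    torch.take(x, idx)                      # [1., 5., 6.]

    # put_ copies values into x at the same linear indices, in place.
    x.put_(idx, torch.FloatTensor([10., 20., 30.]))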
SsnL
91a8d3325e test sparse dp, broadcast_coalesced, reduce_add_coalesced 2017-10-28 18:52:35 -04:00
Ozan Çağlayan
e43a63a968 tensor: Ensure that the tensor is contiguous before pinning (#3266) (#3273)
* tensor: Ensure that the tensor is contiguous before pinning (#3266)

pin_memory() was producing an out-of-order tensor when the given
tensor was transposed, i.e. in column-major order.
This commit fixes that by calling contiguous() before pinning.

* test: add contiguous test for pin_memory (#3266)
2017-10-25 13:17:54 +02:00
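A sketch of the scenario this fixes (guarded, since pinning page-locked memory needs a CUDA build):

    import torch

    if torch.cuda.is_available():
        x = torch.randn(3, 4).t()    # transposed, i.e. non-contiguous
        assert not x.is_contiguous()
        p = x.pin_memory()           # now copies through contiguous() first,
        assert p.is_pinned()         # so the pinned copy is in row-major order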
SsnL
634c8315a4 isContiguous problems (#3148)
* With the size=1 case a single-point check is impossible; replace it with isContiguousRange

* fix stride in desc; fix undef scope

* add test for this case for cudnn

* assertTrue
2017-10-20 10:20:33 -04:00
Edward Z. Yang
2dcaa40425 Add get_rng_state_all and set_rng_state_all.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-30 16:21:04 -04:00
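These mirror get_rng_state/set_rng_state but snapshot every visible GPU at once; a small sketch:

    import torch

    if torch.cuda.is_available():
        states = torch.cuda.get_rng_state_all()   # one state per device
        torch.cuda.FloatTensor(3).normal_()       # consume some random numbers
        torch.cuda.set_rng_state_all(states)      # rewind all devices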
IraKorshunova
2b9765ad02 Erf and erfinv (#2799) 2017-09-20 21:23:45 -04:00
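erfinv is the inverse of the error function erf, so a round trip recovers the input; a sketch with made-up values:

    import torch

    x = torch.linspace(-0.9, 0.9, 5)
    torch.erfinv(torch.erf(x))    # approximately x again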
Francisco Massa
1da87118cc Optimize pow for different exponents and add tests 2017-09-10 13:51:05 -04:00
Anton Osokin
0d34a6451a fixing the bug with squeezing a singleton dimension in torch.min and torch.max 2017-08-16 17:51:48 -04:00
Francisco Massa
b797ee04fc Add CUDA version of eye 2017-08-16 17:25:52 -04:00
Gregory Chanan
b3db52fe36 Support __neg__, .neg(), and neg_() for Long, Int, Short tensor types. 2017-08-15 02:51:25 -04:00
Christian Sarofeen
ac76ab5fca Increase tol. for float tensor qr big test.
The test_FloatTensor_qr_big test is still a bit flaky on K80, so increase the tolerance to improve reliability as tests are moved around and results for this test change.
2017-07-27 14:23:06 -04:00
ngimel
3c275fe7a0 Increase flaky test tolerance (#2185) 2017-07-22 11:37:34 -04:00
Sam Gross
71ce3448d9 Fix torch.inverse when magma is not available
Fixes #2156
2017-07-21 15:57:43 -04:00
Francisco Massa
82143487b3 Add CUDA support for arange
Also enables CUDA for range
2017-07-19 15:48:20 -04:00
Trevor Killeen
a45ad7cfba Advanced Indexing Part 1 -- Purely Integer Array Indexing 2017-06-22 17:21:50 -04:00
Gregory Chanan
5b81746767 Simplify python warning settings and cleanup tests. 2017-06-11 05:37:59 -04:00
Gregory Chanan
69287250d1 Add a broadcast parameter to copy_; use it in the library where there are non-broadcasting calls exposed by the tests. 2017-06-11 05:37:59 -04:00
Gregory Chanan
5af46cb352 Add broadcasting support for matmul. 2017-06-11 05:37:59 -04:00
Gregory Chanan
a36f95fe26 Add broadcast support for fused-matmul functions: addmm, addbmm, addr, addmv, baddbmm. 2017-06-11 05:37:59 -04:00
Gregory Chanan
85d838a028 Testing over the following:
1) CPU tensor out-of-place functions
2) CPU tensor in-place functions
3) GPU tensor out-of-place functions
4) GPU tensor in-place functions
5) torch. functions
6) Fallback semantics (use pointwise nElem matching rather than broadcasting)
2017-06-11 05:37:59 -04:00
Edward Z. Yang
ba690d5607 Add support for NVTX functions. (#1748) 2017-06-10 18:26:58 +02:00
Alykhan Tejani
5f1a16a018 Torch manual seed to seed cuda devices (#1762) 2017-06-10 12:37:21 +02:00
Adam Paszke
7b578dd68e Add scatterAdd 2017-05-25 16:49:48 -04:00
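scatterAdd is the backend name; in Python it surfaces as the in-place Tensor.scatter_add_ (the snake_case spelling from the earlier codemod). A sketch with made-up values:

    import torch

    x = torch.zeros(3)
    index = torch.LongTensor([0, 1, 0])
    src = torch.FloatTensor([1., 2., 3.])

    # Accumulate src into x at the positions given by index along dim 0:
    # x[0] += 1 + 3, x[1] += 2.
    x.scatter_add_(0, index, src)    # [4., 2., 0.]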
Alexander Matyasko
33b3968660 add larger tests for qr 2017-05-08 16:58:54 -07:00
Trevor Killeen
f273377d19 add device asserts in scatter/gather kernels 2017-05-03 11:12:26 -04:00
Soumith Chintala
77035d151e make topk test unique 2017-04-28 07:30:25 -04:00
Adam Paszke
01a35dcace Fix coalesced CUDA collectives for nonhomogeneous lists 2017-04-11 14:48:54 -07:00
Rudy Bunel
b16a352a3b Fix remainder and cremainder for integer types 2017-04-07 17:17:44 -07:00
albanD
f0c7124420 Allow support for negative dimension argument for all functions 2017-04-06 16:37:00 -07:00
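As in NumPy, a negative dim now counts from the end; a quick sketch:

    import torch

    x = torch.randn(2, 3, 4)
    x.unsqueeze(-1).size()        # torch.Size([2, 3, 4, 1])
    x.transpose(-2, -1).size()    # torch.Size([2, 4, 3])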
Adam Paszke
91c4ba7980 Add torch.arange and deprecate torch.range 2017-04-03 10:38:58 -04:00
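The difference: torch.range includes its endpoint, while torch.arange excludes it, matching numpy.arange.

    import torch

    torch.range(0, 4)     # [0., 1., 2., 3., 4.]  -- inclusive end, now deprecated
    torch.arange(0, 4)    # [0., 1., 2., 3.]      -- exclusive end, like numpy.arange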
Brandon Amos
bb353ccc17 Add batch triangular factorization and solves, add IntegerTensor to cwrap (#903) 2017-03-23 15:06:00 -04:00
Sam Gross
e50a1f19b3 Use streams in scatter to overlap copy with compute 2017-03-14 22:46:07 +01:00
soumith
7ad948ffa9 fix tests to not call sys.exit(); also fix a fatal error on THC initialization 2017-03-01 17:37:04 -05:00
Sam Gross
b190f1b5bc Add another pinned memory test.
Checks that pinned memory freed on a different GPU than the one it was
allocated on isn't re-used too soon.
2017-03-01 12:22:31 +01:00
Luke Yeager
61bd5a0643 [Lint] Address F811 2017-02-27 19:33:00 -05:00
Adam Paszke
4c474a9939 Improve prodall CUDA test 2017-02-20 23:28:31 -08:00
Adam Paszke
a1534cc37d Fix auto-gpu in cat 2017-02-14 21:28:50 +01:00
Sam Gross
712686ce91 Add cat, contiguous, squeeze, and unsqueeze to THPP
Use unsqueeze and view from TH/THC
2017-02-11 17:49:31 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
a1fa995044 Fixes and improvements (#593)
* Fix error in ELU backward

* Add --seed flag for tests

* Add test for BatchNorm eval

* Fix autograd.backward docs

* Support cc flags in cuDNN search

* Fix IndexSelect backward formula
2017-01-25 22:21:49 -05:00
Sam Gross
d951d5b1cd Fix tensor.cuda(0) when on non-zero device. (#472) 2017-01-18 01:08:37 -05:00
Adam Paszke
f91bb96071 Remove cmin, cmax and cinv 2017-01-16 19:07:37 -05:00
soumith
b07358b329 renaming test to avoid dot in test name 2016-12-27 13:34:09 -08:00
soumith
2aea8077f9 renaming test to avoid dot in test name 2016-12-27 13:17:04 -08:00
Soumith Chintala
f45d75ed22 make the CUDA-aware tests back off if CUDA is not available 2016-12-24 15:36:00 -05:00
soumith
93ed476e7d adding LAPACK double bindings, adding fmod and remainder 2016-12-22 17:36:47 -08:00
Adam Paszke
59b9eeff49 Expose gather and equals for CUDA tensors 2016-12-19 20:35:08 -05:00
Sam Gross
20fffc8bb7 Fix torch.is_tensor for half tensors (#322)
Fixes #311
2016-12-19 15:27:47 +01:00
Sam Gross
0d7d29fa57 Enable caching allocator for CUDA pinned memory (#275)
Also add binding for CUDA "sleep" kernel
2016-12-02 01:33:56 -05:00
Adam Paszke
88d9fdec2e Add torch.cuda.set_device 2016-12-01 23:14:41 +01:00
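A short sketch of the new call (assumes at least one CUDA device): set_device changes which GPU subsequent allocations land on.

    import torch

    if torch.cuda.is_available():
        torch.cuda.set_device(0)         # make device 0 the current device
        x = torch.cuda.FloatTensor(3)    # allocated on the current device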
Sam Gross
6322cf3234 Allow device=None in Tensor constructor
Setting device=None is the same as not specifying the device (use the
current active device).
2016-12-01 20:09:19 +01:00
Soumith Chintala
103e70ccc5 adding cuda types for tensor methods (#194) 2016-11-02 10:25:58 -04:00
Sam Gross
f2d7e94948 Use torch.Size for Tensor sizes and tuple for strides
See issue #20

The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
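A sketch of the distinction this enables:

    import torch

    x = torch.Tensor(2, 3)
    s = x.size()            # torch.Size([2, 3]) -- a tuple subclass
    isinstance(s, tuple)    # True

    torch.Tensor(s)         # new 2x3 tensor: s is interpreted as a size
    torch.Tensor([2, 3])    # 1-D tensor holding the data 2 and 3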
Adam Paszke
19f2f1a9d3 Buffer values when constructing a CUDA tensor from a sequence 2016-10-24 22:30:11 +02:00
Sam Gross
79ead42ade Add CUDA Stream and Event API (#133) 2016-10-18 12:15:57 -04:00
Sam Gross
ee14cf9438 Add support for pinned memory: (#127)
torch.Storage/Tensor.pin_memory()
torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
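A minimal sketch of the new API (assumes a CUDA build, since pinning page-locks host memory):

    import torch

    if torch.cuda.is_available():
        x = torch.FloatTensor(1024).pin_memory()
        assert x.is_pinned()
        # Copies from page-locked host memory to the GPU can run
        # asynchronously with respect to the host.
        y = x.cuda()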
Soumith Chintala
3d6ebde756 qr and ormqr tests and bugfix 2016-10-14 03:10:16 -04:00
Adam Paszke
0c9670ddf0 Allow remapping storages at load time and serialize data in little endian order 2016-10-04 12:54:55 -07:00
Adam Paszke
3f7ab95890 Finish implementation of prng related functions 2016-09-29 11:33:25 -07:00
Adam Paszke
3eac7164f4 Add data parallel functions to nn 2016-09-27 15:45:45 -07:00
Adam Paszke
1ed488da4f Make custom precision of CUDA tests work in inplace mode as well 2016-09-25 12:26:00 -07:00
Adam Paszke
5030d76acf Reduce precision of CUDA blas tests 2016-09-23 21:10:28 -07:00
Adam Paszke
a489884da4 Reduce precision of addmm CUDA test 2016-09-23 17:52:08 -07:00
Adam Paszke
06ab3f962f Refactor _C extension to export some utilities 2016-09-21 08:36:54 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
Adam Paszke
da5bb373e6 Type conversions now use auto gpu 2016-09-15 18:48:27 -07:00
soumith
19ec206bad reducing tolerance in cumprod unit test 2016-09-14 15:53:14 -07:00
Adam Paszke
a0fb1ab86e Reduce precision for addmm and rsqrt CUDA tests 2016-09-14 11:08:53 -04:00
Adam Paszke
75579fcabd Fix Log autograd test 2016-08-23 10:42:36 -07:00
Adam Paszke
686e8d32e2 Add torch.save and torch.load 2016-08-23 07:51:55 -07:00
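A minimal sketch, with a hypothetical file name:

    import torch

    x = torch.randn(3, 3)
    torch.save(x, 'tensor.pt')     # serialize to disk
    y = torch.load('tensor.pt')    # deserialize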
Adam Paszke
9fff8e7392 Fixes for changes in libs 2016-08-12 22:02:57 -07:00
Adam Paszke
1e905eb4d5 copy -> copy_ 2016-08-12 09:26:33 -07:00
Adam Paszke
12bed8dc0d Add CUDA device selection 2016-08-12 07:46:46 -07:00
Adam Paszke
fa6e5c5bff Update tests and fix CosineEmbeddingCriterion 2016-08-11 13:10:54 -07:00
Adam Paszke
ff00cdd728 Add cunn tests 2016-08-11 08:56:30 -07:00
Adam Paszke
1a57979f41 Add cutorch tests 2016-08-11 06:43:41 -07:00