Commit Graph

742 Commits

Author SHA1 Message Date
Luke Yeager
61bd5a0643 [Lint] Address F811 2017-02-27 19:33:00 -05:00
Adam Paszke
4c474a9939 Improve prodall CUDA test 2017-02-20 23:28:31 -08:00
Adam Paszke
a1534cc37d Fix auto-gpu in cat 2017-02-14 21:28:50 +01:00
Sam Gross
712686ce91 Add cat, contiguous, squeeze, and unsqueeze to THPP
Use unsqueeze and view from TH/THC
2017-02-11 17:49:31 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
a1fa995044 Fixes and improvements (#593)
* Fix error in ELU backward

* Add --seed flag for tests

* Add test for BatchNorm eval

* Fix autograd.backward docs

* Support cc flags in cuDNN search

* Fix IndexSelect backward formula
2017-01-25 22:21:49 -05:00
Sam Gross
d951d5b1cd Fix tensor.cuda(0) when on non-zero device. (#472) 2017-01-18 01:08:37 -05:00
Adam Paszke
f91bb96071 Remove cmin, cmax and cinv 2017-01-16 19:07:37 -05:00
soumith
b07358b329 renaming test to avoid dot in test name 2016-12-27 13:34:09 -08:00
soumith
2aea8077f9 renaming test to avoid dot in test name 2016-12-27 13:17:04 -08:00
Soumith Chintala
f45d75ed22 make the CUDA-aware tests back off if CUDA is not available 2016-12-24 15:36:00 -05:00
soumith
93ed476e7d adding LAPACK double bindings, adding fmod and remainder 2016-12-22 17:36:47 -08:00
Adam Paszke
59b9eeff49 Expose gather and equals for CUDA tensors 2016-12-19 20:35:08 -05:00
Sam Gross
20fffc8bb7 Fix torch.is_tensor for half tensors (#322)
Fixes #311
2016-12-19 15:27:47 +01:00
Sam Gross
0d7d29fa57 Enable caching allocator for CUDA pinned memory (#275)
Also add binding for CUDA "sleep" kernel
2016-12-02 01:33:56 -05:00
Adam Paszke
88d9fdec2e Add torch.cuda.set_device 2016-12-01 23:14:41 +01:00
Sam Gross
6322cf3234 Allow device=None in Tensor constructor
Setting device=None is the same as not specifying the device (use the
current active device).
2016-12-01 20:09:19 +01:00
Soumith Chintala
103e70ccc5 adding cuda types for tensor methods (#194) 2016-11-02 10:25:58 -04:00
Sam Gross
f2d7e94948 Use torch.Size for Tensor sizes and tuple for strides
See issue #20

The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
Adam Paszke
19f2f1a9d3 Buffer values when constructing a CUDA tensor from a sequence 2016-10-24 22:30:11 +02:00
Sam Gross
79ead42ade Add CUDA Stream and Event API (#133) 2016-10-18 12:15:57 -04:00
Sam Gross
ee14cf9438 Add support for pinned memory: (#127)
torch.Storage/Tensor.pin_memory()
torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
Soumith Chintala
3d6ebde756 qr and ormqr tests and bugfix 2016-10-14 03:10:16 -04:00
Adam Paszke
0c9670ddf0 Allow remapping storages at load time and serialize data in little endian order 2016-10-04 12:54:55 -07:00
Adam Paszke
3f7ab95890 Finish implementation of prng related functions 2016-09-29 11:33:25 -07:00
Adam Paszke
3eac7164f4 Add data parallel functions to nn 2016-09-27 15:45:45 -07:00
Adam Paszke
1ed488da4f Make custom precision of CUDA tests work in inplace mode as well 2016-09-25 12:26:00 -07:00
Adam Paszke
5030d76acf Reduce precision of CUDA blas tests 2016-09-23 21:10:28 -07:00
Adam Paszke
a489884da4 Reduce precision of addmm CUDA test 2016-09-23 17:52:08 -07:00
Adam Paszke
06ab3f962f Refactor _C extension to export some utilities 2016-09-21 08:36:54 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
Adam Paszke
da5bb373e6 Type conversions now use auto gpu 2016-09-15 18:48:27 -07:00
soumith
19ec206bad reducing tolerance in cumprod unit test 2016-09-14 15:53:14 -07:00
Adam Paszke
a0fb1ab86e Reduce precision for addmm and rsqrt CUDA tests 2016-09-14 11:08:53 -04:00
Adam Paszke
75579fcabd Fix Log autograd test 2016-08-23 10:42:36 -07:00
Adam Paszke
686e8d32e2 Add torch.save and torch.load 2016-08-23 07:51:55 -07:00
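The `torch.save`/`torch.load` pair added here follows the familiar serialize-to-file interface; a minimal stand-in using Python's `pickle` module (this is an illustration of the calling pattern, not the actual torch serialization format):

```python
# Stand-in sketch of the save/load interface, using pickle for the
# on-disk format. Not the real torch serialization implementation.
import io
import pickle


def save(obj, f):
    """Serialize obj to a file-like object (stand-in for torch.save)."""
    pickle.dump(obj, f)


def load(f):
    """Deserialize an object from a file-like object (stand-in for torch.load)."""
    return pickle.load(f)


buf = io.BytesIO()
save({"weights": [1.0, 2.0]}, buf)
buf.seek(0)
assert load(buf) == {"weights": [1.0, 2.0]}
```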
Adam Paszke
9fff8e7392 Fixes for changes in libs 2016-08-12 22:02:57 -07:00
Adam Paszke
1e905eb4d5 copy -> copy_ 2016-08-12 09:26:33 -07:00
Adam Paszke
12bed8dc0d Add CUDA device selection 2016-08-12 07:46:46 -07:00
Adam Paszke
fa6e5c5bff Update tests and fix CosineEmbeddingCriterion 2016-08-11 13:10:54 -07:00
Adam Paszke
ff00cdd728 Add cunn tests 2016-08-11 08:56:30 -07:00
Adam Paszke
1a57979f41 Add cutorch tests 2016-08-11 06:43:41 -07:00