Commit Graph

220 Commits

Author SHA1 Message Date
Choongwoo Han
cf71385ec9 Implement torch.isnan (#5273)
* Implement torch.isnan

* Simple python implementation

* Fix typo
2018-02-19 19:46:35 -05:00
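
The "simple python implementation" mentioned above can be sketched with the standard IEEE-754 trick that NaN compares unequal to itself; this is an illustration of the idea, not necessarily the exact code from the PR:

    import torch

    def isnan(tensor):
        """Return a mask marking NaN entries, using the fact that NaN != NaN."""
        if not isinstance(tensor, torch.Tensor):
            raise ValueError("The argument is not a tensor")
        return tensor != tensor

    x = torch.tensor([1.0, float('nan'), 2.0])
    print(isnan(x))   # marks only the middle (NaN) entry
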
gchanan
b984c0b6e9
Various testing and utility improvements including torch.testing module. (#4726)
* Various testing and utility improvements including torch.testing module.

1) Remove method definition for randn_like since ones_like, zeros_like do not have methods.
2) Add an empty_like native function for creating a tensor with uninitialized values.
3) Add an is_floating_point() native function, similar to is_signed().
4) Add a torch.testing module loosely modeled after numpy.testing; currently it contains
   make_non_contiguous (moved from test_autograd) and randn_like (wrapper around the VariableFunction).
5) Remove code from test_autograd and test_nn that is responsible for generating grad_outputs to use
   with gradgradcheck.  These now use gradgradcheck's own generating code.  This fixes
   test_nn.py with scalars because gradgradcheck does the right thing here already.

* Rename parameter.

* Fix parameter usages.
2018-01-19 10:54:41 -05:00
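
Of the helpers listed above, torch.empty_like and Tensor.is_floating_point remain part of the public API; a small usage sketch with current PyTorch:

    import torch

    x = torch.randn(3, 4)
    y = torch.empty_like(x)       # same shape and dtype, values uninitialized
    print(y.shape, y.dtype)       # torch.Size([3, 4]) torch.float32
    print(x.is_floating_point())  # True
    print(torch.zeros(2, dtype=torch.long).is_floating_point())  # False
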
Sam Gross
720c7b1e2c
Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
2018-01-17 17:30:43 -05:00
ptrblck
7c729e6321 Added size_splits to functional (#3837) 2018-01-04 09:52:47 -05:00
SsnL
9a48f8d7c3 add tests for btrifact_with_info and doc for btriunpack 2017-12-24 03:08:28 +08:00
gchanan
41c9959ef7
Enable functional torch.where. (#4298) 2017-12-21 13:55:57 -05:00
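
torch.where(condition, x, y) selects elements from x where the condition holds and from y elsewhere (the condition was a ByteTensor at the time; it is a bool tensor in current PyTorch). A minimal usage sketch:

    import torch

    cond = torch.tensor([True, False, True])
    a = torch.tensor([1.0, 2.0, 3.0])
    b = torch.tensor([10.0, 20.0, 30.0])
    print(torch.where(cond, a, b))   # tensor([ 1., 20.,  3.])
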
Tongzhou Wang
d8b2e5d091 Add Python-only default init expression; implement stft, hann/hamming/bartlett window. (#4095)
* implement stft

* addressed comments; implemented window functions; added support for Python-only default initialization
2017-12-18 12:28:23 -05:00
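
The window functions from this commit are available as torch.hann_window, torch.hamming_window, and torch.bartlett_window. A usage sketch; note that torch.stft's signature has changed since this commit, so the call below uses the current return_complex form:

    import torch

    signal = torch.randn(1, 4000)            # one 4000-sample signal
    window = torch.hann_window(256)          # also hamming_window, bartlett_window
    spec = torch.stft(signal, n_fft=256, hop_length=64,
                      window=window, return_complex=True)
    print(spec.shape)                        # (1, 129, num_frames), complex-valued
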
Tongzhou Wang
fe12ac57a4 Improve docs for torch and torch.Tensor (#3969)
* doc overhaul

* update split doc
2017-12-01 14:56:48 -05:00
Tongzhou Wang
c681b03d37 Add determinant function on variable; Add backward on svd (#3816)
* determinant on variable

* svd bwd
2017-12-01 13:22:46 -05:00
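
Together these make the determinant differentiable; a quick sketch with the current API:

    import torch

    x = torch.randn(3, 3, requires_grad=True)
    torch.det(x).backward()
    print(x.grad)   # equals det(x) * inverse(x).t()
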
lynic
54cabb8bf3 Correct negative dim behavior in torch.stack (#2084)
Fixes #1950
2017-07-13 16:29:31 -04:00
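
After the fix, a negative dim in torch.stack counts from the end of the output shape, consistent with other dim arguments; for example:

    import torch

    a = torch.zeros(2, 3)
    b = torch.ones(2, 3)
    print(torch.stack([a, b], dim=0).shape)   # torch.Size([2, 2, 3])
    print(torch.stack([a, b], dim=-1).shape)  # torch.Size([2, 3, 2]), i.e. dim=2
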
Sam Gross
8a4eb50ed1 Speed up torch.matmul for 3D+ x 2D/1D tensors (#1931)
If the left tensor is 3D+ and the right tensor is at most 2D, we can
fold the batch into the matrix dimension and use torch.mm instead of
torch.bmm. In practice, this is faster, especially if the right tensor is
column-major.
2017-06-28 17:43:21 -04:00
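
A minimal sketch of the folding idea (not the actual implementation in functional.py): collapse the batch dimensions of the left operand into its row dimension, run a single torch.mm, and restore the shape afterwards:

    import torch

    def matmul_3d_2d(left, right):
        """Multiply a (B, N, K) tensor by a (K, M) matrix with one mm call
        instead of a batched bmm, by folding the batch into the rows."""
        B, N, K = left.shape
        out = torch.mm(left.reshape(B * N, K), right)   # (B*N, M)
        return out.reshape(B, N, -1)                    # (B, N, M)

    left, right = torch.randn(4, 5, 6), torch.randn(6, 7)
    assert torch.allclose(matmul_3d_2d(left, right), torch.matmul(left, right))
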
gchanan
4e356528b4 Add torch.matmul function. (#1780)
* Add torch.matmul function.

Includes test_torch, test_autograd and docs changes.

* Add __all__ to functional so that module-level imports aren't accidentally re-exported.

* Include unbind in __all__.

* Add matmul case for when one argument is 1-dimensional and the other
at least 3-dimensional.

* Add squeeze_ to Variable.

* Use squeeze_ instead of squeeze for matmul.
2017-06-14 08:14:53 -04:00
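
torch.matmul dispatches on the dimensionality of its arguments: 1D x 1D is a dot product, 2D x 2D is a plain mm, 1D operands are promoted by inserting a size-1 dimension that is squeezed from the result, and 3D+ operands are treated as batched matrix multiplies with broadcasting. A shape-level sketch:

    import torch

    v, m, b = torch.randn(3), torch.randn(3, 3), torch.randn(10, 3, 4)

    print(torch.matmul(v, v).shape)                # torch.Size([]),  dot product
    print(torch.matmul(m, v).shape)                # torch.Size([3]), matrix-vector
    print(torch.matmul(b, torch.randn(4)).shape)   # torch.Size([10, 3])
    print(torch.matmul(torch.randn(3), b).shape)   # torch.Size([10, 4]), the 1D x 3D+ case
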
Sam Gross
3ab074b3c5 Fix torch.stack() with Variable inputs (#1345) 2017-04-24 12:20:51 -04:00
Sam Gross
24d92b5d9f Concatenate directly into shared memory when constructing batches (#1323)
This saves an extra memory copy, which speeds up data loading a bit
(5-10% with accimage).

As part of this change:

 * torch.cat accepts keyword argument out
 * specifying out=None is treated like not specifying out
2017-04-22 03:40:30 -04:00
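
A sketch of the pattern this enables in the batch-collate path (illustrative names, not the collate code from the PR): the output tensor is allocated in shared memory once and torch.cat writes directly into it via out=, saving a copy.

    import torch

    samples = [torch.randn(1, 3, 8, 8) for _ in range(16)]

    # Pre-allocate the batch in shared memory and concatenate straight into it.
    batch = torch.empty(16, 3, 8, 8).share_memory_()
    torch.cat(samples, dim=0, out=batch)

    # Per the commit, out=None behaves exactly like omitting out.
    assert torch.equal(batch, torch.cat(samples, dim=0, out=None))
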
Soumith Chintala
15267ac009 fix typo 2017-04-15 13:08:58 -04:00
Brandon Amos
be146fd721 Add btriunpack and update the btrifact test. 2017-03-29 13:42:13 +02:00
Adam Paszke
825e919eb8 Add torch.unbind 2017-02-01 21:48:11 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
8a20e22239 Add torch.stack 2016-12-31 16:25:39 -05:00
Adam Paszke
7c5014d803 Add torch.split, torch.chunk and change default dim of cat to 0 2016-12-31 16:25:39 -05:00
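
For reference, the behavior these three functions provide in current PyTorch: torch.split cuts a tensor into pieces of a given size, torch.chunk into a given number of pieces, and torch.cat concatenates along dim=0 by default.

    import torch

    x = torch.arange(10)

    print(torch.split(x, 3))        # pieces of size 3, 3, 3, 1
    print(torch.chunk(x, 3))        # 3 pieces of size 4, 4, 2
    print(torch.cat([x, x]).shape)  # torch.Size([20]); dim defaults to 0
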