Commit Graph

1821 Commits

Sam Gross
10d24d8f84
Add Tensor.slice() (#3750)
The slice function is very similar to narrow, except that it takes an
optional "step" argument. Unlike narrow, the arguments use the same
conventions as Python indexing: negative values wrap around and start
and stop are clamped to the size of the Tensor.
2017-11-20 13:58:12 -05:00
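The wrap-around and clamping behavior described above matches Python's built-in slice semantics, which `slice.indices` already implements. A minimal pure-Python sketch (hypothetical helper name, not PyTorch's actual implementation):

```python
def normalize_slice(start, stop, step, length):
    # slice.indices applies Python's indexing rules: negative values
    # wrap around, and start/stop are clamped to [0, length].
    return slice(start, stop, step).indices(length)

# negative start wraps: -3 on a length-5 tensor means index 2
assert normalize_slice(-3, None, 1, 5) == (2, 5, 1)
# an out-of-range stop is clamped to the length
assert normalize_slice(0, 100, 2, 5) == (0, 5, 2)
```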
Philipp Lang
c4b0db5079 Remove hard file offset reset in load() (#3695)
* improved file offset logic

* load offset test

* whitespace

* needless exception handling

* test integer in binary
2017-11-17 15:21:37 -05:00
SsnL
43d1405d0d Fix ld* conditions for gemv ger gemm (#3604) 2017-11-09 19:43:29 -05:00
gchanan
aabfae0503 CPU all/any should work with empty tensors. (#3581) 2017-11-08 20:18:26 -05:00
Holger Kohr
5e382894be add numpy() and from_numpy() to HalfTensor (#2953) 2017-11-08 15:01:29 +01:00
Sam Gross
ecbc4b0dc3
Fix float uniform generation in TH (#3541)
Generate random uniform floats in the range [0, 1) by generating random
uniform uint32 in the range [0, 2^24-1] and dividing by 2^24. This
ensures that the largest value is representable as a float32 less than
one.

This also changes the uniform double generation to use more bits of
randomness.
2017-11-07 16:26:11 -05:00
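The idea in this fix can be sketched in pure Python (hypothetical helper, assuming the same 24-bit construction the commit describes):

```python
import random

def uniform_float32():
    # Draw a uniform integer in [0, 2^24 - 1] and divide by 2^24.
    # Every value in that range is exactly representable in float32,
    # and the largest, (2^24 - 1) / 2^24, is a float32 strictly < 1.
    return random.getrandbits(24) / float(1 << 24)

samples = [uniform_float32() for _ in range(10000)]
assert all(0.0 <= s < 1.0 for s in samples)
```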
avmgithub
68116d7f84 Fix test_torch.py test for Power see issue #3277 (#3517) 2017-11-06 18:51:02 -05:00
Richard Zou
3d06a1e075 Make THCTensor_varInnermostDim numerically stable using Welford's algorithm (#3425)
* Use Welford's algorithm when reducing along inner dimension for THCTensor's variance fn

* Use accreals in THCTensor's varInnermostDim

* Skip cuda tests if no cuda

* Variance testing
2017-11-06 16:00:29 -05:00
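Welford's algorithm, referenced above, updates the mean and sum of squared deviations in one pass, avoiding the catastrophic cancellation of the naive E[x²] − E[x]² formula. A minimal pure-Python sketch (hypothetical helper, not the THCTensor kernel):

```python
def welford_variance(xs, unbiased=True):
    """Numerically stable variance via Welford's online algorithm."""
    mean, m2, n = 0.0, 0.0, 0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses both the old and the new mean
    return m2 / (n - 1 if unbiased else n)

# A large offset would wreck the naive formula; Welford stays accurate.
data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
assert abs(welford_variance(data) - 30.0) < 1e-6
```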
SsnL
8fd171a6fd add test_index to test_cuda 2017-11-06 14:21:31 -05:00
SsnL
0bb0ee883e relax index dim check 2017-11-06 14:21:31 -05:00
Dhanton
74d1bb54e6 Add single argument version of torch.arange (#3494) 2017-11-06 12:26:04 -05:00
Richard Zou
e11d2b9c9c Better error messages for ATen tensor types (#3449)
* Better error messages for ATen tensor types

* Address comments, add unit test
2017-11-03 07:59:05 -04:00
Sam Gross
7c0b16c140 Add torch.take and Tensor.put_ (#3263)
* Add torch.take and Tensor.put_

These are similar to numpy.take and numpy.put. The take function allows
you to linearly index into a tensor without viewing it as a 1D tensor
first. The output has the same shape as the indices. The put function
copies value into a tensor also using linear indices.
2017-11-01 06:04:44 -04:00
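The linear-indexing behavior of `take` can be illustrated on nested lists. A hedged pure-Python sketch (hypothetical helper; the real implementation works on tensors, not lists, and accepts indices of any shape):

```python
def take(tensor, indices):
    """Gather at linear (row-major) indices; the output has the same
    shape as `indices` (here restricted to 2-D lists for brevity)."""
    def flatten(t):
        return [x for row in t for x in flatten(row)] if isinstance(t, list) else [t]
    flat = flatten(tensor)
    return [[flat[i] for i in row] for row in indices]

t = [[1, 2, 3], [4, 5, 6]]
# linear indices into the flattened [1, 2, 3, 4, 5, 6]
assert take(t, [[0, 5], [2, 2]]) == [[1, 6], [3, 3]]
```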
Richard Zou
81b995514e Make THTensor_(var) and THTensor_(std) more numerically stable (#3410) 2017-10-31 18:36:26 -04:00
Filip Binkiewicz
e4a3747cd8 Add unit tests for casting onto scalars 2017-10-31 08:51:55 -04:00
albanD
1ae10a4831 add test to check zero_strided tensors in blas level 2 and 3 functions 2017-10-30 16:00:21 -04:00
Priya Goyal
129336cb06 [dlpack] Memory management for dlpack 2017-10-21 20:19:51 +02:00
Richard Zou
ed9c43774c Don't resize output in cpu torch.gels (#3204)
* Don't resize output in cpu torch.gels when m > n
2017-10-21 00:43:42 +02:00
SsnL
634c8315a4 isContiguous problems (#3148)
* with the size=1 case a single-point check is impossible; replace with isContiguousRange

* fix stride in desc; fix undef scope

* add test for this case for cudnn

* assertTrue
2017-10-20 10:20:33 -04:00
SsnL
38f87cc9c4 Limit print scale by sys.float_info (#3113)
* limit print scale by sys.float_info

* test print tiny/huge values in test_print

* fix lint
2017-10-14 08:52:01 +02:00
Priya Goyal
756ab3f24f Adding conversion from python tensor to dlpack tensor (#2933) 2017-10-04 08:35:42 -04:00
Holger Kohr
fa8044d92f Add tests for array interface 2017-10-03 10:27:56 -04:00
Taehoon Lee
5d9de014bd Fix typos 2017-10-01 03:09:25 -04:00
gchanan
805ad16924 Support "expanding" an empty tensor to an empty tensor. (#2824)
This doesn't currently support expanding the sizes to (0,), but
we can handle that eventually at the ATen level.
2017-09-22 11:58:03 -04:00
IraKorshunova
2b9765ad02 Erf and erfinv (#2799) 2017-09-20 21:23:45 -04:00
Trevor Killeen
9c39e8cecb Parity with NumPy newaxis placement in indexing (#2779) 2017-09-19 10:38:18 -04:00
Gregory Chanan
08eb88f3de Duplicate what is tested in function tests in the method tests.
Also make some function-vs-method tests uniform and change method
tests so they will pass gradchecks (i.e., avoid NaNs)
2017-09-12 21:07:48 -04:00
Francisco Massa
1da87118cc Optimize pow for different exponents and add tests 2017-09-10 13:51:05 -04:00
Trevor Killeen
8820d467d6 handle useless ellipsis in advanced indexing (#2589) 2017-09-01 14:27:47 -04:00
Trevor Killeen
26cdfcd9cf allow single non-tuple sequence to trigger advanced indexing (#2323) 2017-09-01 00:28:45 -04:00
Alykhan Tejani
bc228b2409 auto_gpu:True for ones_like and zeros_like (#2559) 2017-08-29 09:51:36 -04:00
Zhou Mo
2c07f88ea3 Fix typos. 2017-08-25 14:27:07 -04:00
Alykhan Tejani
eb58740651 add ones_like and zeros_like 2017-08-25 14:11:04 -04:00
rluo
3b155fa305 Not changing dimension size for expand when target size is -1 2017-08-25 14:04:23 -04:00
Anton Osokin
0d34a6451a fixing the bug with squeezing a singleton dimension in torch.min and torch.max 2017-08-16 17:51:48 -04:00
Luca Antiga
21d8465d8b Add test for Tensor creation from NumPy on CPU and CUDA 2017-08-16 17:44:58 -04:00
Gregory Chanan
b3db52fe36 Support __neg__, .neg(), and neg_() for Long, Int, Short tensor types. 2017-08-15 02:51:25 -04:00
Gregory Chanan
50c208a50b Revert "Fix typos."
This reverts commit 4622b33952.
2017-08-10 13:57:00 -04:00
Zhou Mo
4622b33952 Fix typos. 2017-08-08 11:05:38 -04:00
Adam Paszke
e708de37cc Allow keyword args in long_arg options 2017-07-20 01:45:57 -04:00
Trevor Killeen
31894cafdd add support for advanced indexing with less than ndim indexers, ellipsis (#2144) 2017-07-19 15:51:03 -04:00
Trevor Killeen
c4120f34bf move to model with cuda indexing tensors for cuda tensor adv indexing 2017-07-19 11:05:10 -04:00
Luca Antiga
366299f9f3 Wrap unbiased flag in var, std, varall, stdall 2017-07-14 17:29:06 -04:00
lynic
54cabb8bf3 Correct negative dim behavior in torch.stack (#2084)
Fixes #1950
2017-07-13 16:29:31 -04:00
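Fixes like the one above typically normalize a negative dim against the relevant rank (for stack, the output rank is the input rank plus one). A hedged pure-Python sketch with a hypothetical helper name:

```python
def normalize_dim(dim, ndim):
    """Map a possibly-negative dim into [0, ndim); e.g. -1 -> ndim - 1."""
    if not -ndim <= dim < ndim:
        raise IndexError("dim %d out of range for ndim %d" % (dim, ndim))
    return dim % ndim

# stacking 3-D inputs produces a 4-D output, so dim=-1 means dim 3
assert normalize_dim(-1, 4) == 3
assert normalize_dim(0, 4) == 0
```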
lynic
f98c384973 Raise error when call from_numpy on 0-dim array (#2075)
* Raise error when call from_numpy on 0-dim array

Fixes: #2055

* reword error message
2017-07-13 09:56:12 -04:00
Christian Sarofeen
27da4eafc2 Remove more advanced indexing duplicate tests (#2071) 2017-07-13 00:30:52 -04:00
Sam Gross
841173c530 Use NamedTemporaryFile to avoid filename collisions (#2069) 2017-07-12 17:14:42 -04:00
albanD
a74fb22b9a fix inplace division for python3 (#2063) 2017-07-12 11:37:55 -04:00
Sam Gross
10e23943b3 Fix missing _forward_pre_hooks in serialized modules (#2057) 2017-07-11 18:23:35 -04:00
Trevor Killeen
10a8ccf27f only test gets for advanced indexing with duplicates (#2041) 2017-07-10 16:05:55 -04:00
lynic
90d0762d14 Use torch.arange instead of torch.range in test_torch.py (#1996) 2017-07-07 00:06:31 -04:00
Alykhan Tejani
5964394a4c return empty iter when tensor is empty 2017-07-04 17:29:27 -04:00
Luca Antiga
05c2bafc9d Have median reduce over all dims and return just the value when dim is not provided 2017-07-04 14:55:37 -04:00
Ethan Luo
406040f6a9 fix torch.is_tensor not recognizing HalfTensor (#1934) 2017-07-02 10:13:44 -04:00
Sam Gross
8a4eb50ed1 Speed up torch.matmul for 3D+ x 2D/1D tensors (#1931)
If the left tensor is 3D+ and the right tensor is at most 2D, we can
fold the batch into the matrix dimension and use torch.mm instead of
torch.bmm. In practice, this is faster especially if the right tensor is
column major.
2017-06-28 17:43:21 -04:00
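The fold-into-mm trick above can be sketched on nested lists (hypothetical helpers; the real code operates on tensor views with no copy):

```python
def mm(a, b):
    # plain 2-D matrix multiply on nested lists
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def batched_matmul_folded(a3d, b2d):
    """Fold the batch dim of a (B, N, K) input into rows, run a single
    (B*N, K) @ (K, M) multiply, then unfold back to (B, N, M)."""
    B, N = len(a3d), len(a3d[0])
    folded = [row for batch in a3d for row in batch]    # (B*N, K)
    out = mm(folded, b2d)                               # (B*N, M)
    return [out[i * N:(i + 1) * N] for i in range(B)]   # (B, N, M)

a = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # (2, 2, 2) batch
b = [[1, 0], [0, 1]]                       # (2, 2) identity
assert batched_matmul_folded(a, b) == a
```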
Trevor Killeen
08648061f7 Advanced Indexing 2A - Colons + Adjacent Adv Indexers (#1890) 2017-06-28 10:01:45 -04:00
Sam Gross
7cdd018db4 Fix assertEquals for lists and tuples (#1913)
zip finishes once the first iterator is exhausted, so we were erroneously allowing things like assertEquals([1, 2], [1]) to pass.
2017-06-26 14:13:21 -04:00
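The zip pitfall described above is easy to reproduce; the fix is to compare lengths before pairwise elements. A minimal sketch (hypothetical helper name):

```python
# zip stops at the shorter iterable, so a pairwise-only comparison
# silently "passes" for sequences of different lengths:
assert all(x == y for x, y in zip([1, 2], [1]))   # bogus pass

def seq_equal(a, b):
    # compare lengths first, then elements pairwise
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

assert not seq_equal([1, 2], [1])
assert seq_equal([1, 2], [1, 2])
```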
Gregory Chanan
bb3779efe8 Add broadcasting to masked_select. 2017-06-24 09:45:21 -04:00
Trevor Killeen
a45ad7cfba Advanced Indexing Part 1 -- Purely Integer Array Indexing 2017-06-22 17:21:50 -04:00
Leonid Vlasenkov
3cecdf84f1 Storage from_file method (#1821) 2017-06-17 00:34:20 +02:00
gchanan
4e356528b4 Add torch.matmul function. (#1780)
* Add torch.matmul function.

Includes test_torch, test_autograd and docs changes.

* Add __all__ to functional so imports aren't accidentally imported.

* Include unbind in __all__.

* Add matmul case for when one argument is 1-dimensional and the other
at least 3-dimensional.

* Add squeeze_ to Variable.

* Use squeeze_ instead of squeeze for matmul.
2017-06-14 08:14:53 -04:00
Gregory Chanan
49ec984c40 Ensure warnings are repeated in python2 for tests. 2017-06-11 05:37:59 -04:00
Gregory Chanan
f4ce99fd87 Add dist, atan2, lerp to fallback functions.
They weren't documented as having those semantics, but tests on
master show they do.
2017-06-11 05:37:59 -04:00
Gregory Chanan
d5a0f97ea7 Renamed masked_copy to masked_scatter in test, fix use of break/continue. 2017-06-11 05:37:59 -04:00
Gregory Chanan
5b81746767 Simplify python warning settings and cleanup tests. 2017-06-11 05:37:59 -04:00
Gregory Chanan
7da46097fe Fix lint errors. 2017-06-11 05:37:59 -04:00
Gregory Chanan
21d9b0c9dd Ensure warnings are repeated in test, necessary in python2. 2017-06-11 05:37:59 -04:00
Gregory Chanan
69287250d1 Add a broadcast parameter to copy_, use it in the library in cases where there is non-broadcasting calls exposed by the tests. 2017-06-11 05:37:59 -04:00
Gregory Chanan
74a23c5aba Fix test_broadcast for cuda tensors, since map_, map2_ not implemented. 2017-06-11 05:37:59 -04:00
Gregory Chanan
65b23f146e Add broadcasting support for copy_, simplify code generation by moving a lot of currently generated code to expand_utils. 2017-06-11 05:37:59 -04:00
Gregory Chanan
c54e532954 Add broadcasting support for map_, map2_. 2017-06-11 05:37:59 -04:00
Gregory Chanan
ec120fac0c Add broadcasting support for masked_copy, masked_fill. 2017-06-11 05:37:59 -04:00
Gregory Chanan
5af46cb352 Add broadcasting support for matmul. 2017-06-11 05:37:59 -04:00
Gregory Chanan
a36f95fe26 Add broadcast support for fused-matmul broadcasting. Functions are: addmm, addbmm, addr, addmv, baddbmm. 2017-06-11 05:37:59 -04:00
Gregory Chanan
014372e707 Support "fused" ops: addcmul/addcdiv. 2017-06-11 05:37:59 -04:00
Gregory Chanan
e96f854ce2 Implement/test broadcasting semantics for comparison ops. 2017-06-11 05:37:59 -04:00
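The broadcasting semantics added across this series follow the NumPy rule: align shapes from their trailing dimensions, let a size-1 dim stretch to match, and require all other dims to be equal. A hedged pure-Python sketch (hypothetical helper, not the expand_utils code):

```python
def broadcast_shapes(s1, s2):
    """NumPy-style broadcast of two shapes."""
    ndim = max(len(s1), len(s2))
    a = (1,) * (ndim - len(s1)) + tuple(s1)   # left-pad with 1s
    b = (1,) * (ndim - len(s2)) + tuple(s2)
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise ValueError("shapes %r and %r are not broadcastable" % (s1, s2))
        out.append(max(x, y))
    return tuple(out)

assert broadcast_shapes((5, 1, 3), (4, 3)) == (5, 4, 3)
assert broadcast_shapes((1,), (3, 2)) == (3, 2)
```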
Gregory Chanan
e653fe2857 Test fixes for keepdim=False, suppress warnings on backwards-compatible behavior. 2017-06-11 05:37:59 -04:00
Gregory Chanan
70c33777a6 pow, fmod, remainder also should fallback.
This behavior isn't listed in the docs, but the tests depend on it.
2017-06-11 05:37:59 -04:00
Gregory Chanan
85d838a028 Testing over the following:
1) CPU tensor out-of-place functions
2) CPU tensor in-place functions
3) GPU tensor out-of-place functions
4) GPU tensor in-place functions
5) torch. functions
6) Fallback semantics (use pointwise nElem matching rather than broadcasting)
2017-06-11 05:37:59 -04:00
Gregory Chanan
e772a440cb Revert "Change keepdim default to False."
This reverts commit e124790cb2.

Note the original commit message is incorrect; this changes keepdim
back to false.
2017-06-11 05:37:58 -04:00
Adam Paszke
a53cde09b5 Rename masked_copy_ to masked_scatter_ 2017-06-06 01:06:14 -04:00
Adam Paszke
7b578dd68e Add scatterAdd 2017-05-25 16:49:48 -04:00
Adam Lerer
c39d48ea7d Fast transposed copy 2017-05-25 15:39:21 -04:00
Po-Hsien Chu
c57f0530e7 Set long_args to False for param "size" of set_ (#1568)
* fix #1524: set long_args to False for param "size" of set_
2017-05-18 19:31:36 -04:00
Gregory Chanan
c4742fd128 Explicitly pass keepdim=False for tests that require it.
If we change the default to False, reverting this commit is optional.
2017-05-09 14:49:44 -07:00
Gregory Chanan
e124790cb2 Change keepdim default to False. 2017-05-09 14:49:21 -07:00
Gregory Chanan
d95f711501 Add a keepdim test to torch_test. 2017-05-09 14:25:01 -07:00
Gregory Chanan
ae2b2cbbec Make keepdim work with autograd. 2017-05-09 14:15:59 -07:00
Alexander Matyasko
33b3968660 add larger tests for qr 2017-05-08 16:58:54 -07:00
ethanluoyc
d0504aa41d Implement lgamma function. 2017-05-08 16:21:26 -07:00
Trevor Killeen
f273377d19 add device asserts in scatter/gather kernels 2017-05-03 11:12:26 -04:00
Tejas Khot
0160438eb9 added logical not operator for ByteTensor (#1403) 2017-04-30 08:47:24 -04:00
Martin Raison
701e63107f speed improvements, fix tests 2017-04-18 12:46:54 -07:00
Rudy Bunel
b16a352a3b Fix remainder and cremainder for integer types 2017-04-07 17:17:44 -07:00
albanD
f0c7124420 Allow support for negative dimension argument for all functions 2017-04-06 16:37:00 -07:00
Adam Paszke
91c4ba7980 Add torch.arange and deprecate torch.range 2017-04-03 10:38:58 -04:00
albanD
dfa2d26830 * make random_ range correct when both lower and upper are specified 2017-03-31 15:37:24 -04:00
Brandon Amos
be146fd721 Add btriunpack and update the btrifact test. 2017-03-29 13:42:13 +02:00
Brandon Amos
95aa2af377 btrisolve: Make a Tensor method and update argument order
Also update docs for btrifact and btrisolve to the newest interface.
2017-03-27 15:46:49 -04:00
Brandon Amos
bb353ccc17 Add batch triangular factorization and solves, add IntegerTensor to cwrap (#903) 2017-03-23 15:06:00 -04:00
Adam Paszke
faac0f5c25 Fix torch.cat bugs
Always use PySequence API and disallow catting along nonexistent
dimensions.
2017-03-22 18:58:42 -04:00
Sam Gross
c4d1318662 Fix map_location in torch.load (#1006) 2017-03-15 16:54:19 -04:00
Hardik Goel
c93c884ee2 Add negative dimension to transpose and tests (#792) 2017-03-03 09:31:22 -05:00
Adam Paszke
490c15fae9 Fix slicing with step (#905) 2017-03-03 09:00:14 -05:00
Zhou Chang
f366e5fc81 Support int16 numpy conversions
issue #891
2017-03-02 09:15:57 -05:00
Alykhan Tejani
37e05485d9 added initialization schemes in torch.nn.init (#833) 2017-03-01 19:34:13 +01:00
Adam Paszke
67f94557ff Expose torch.HalfTensor 2017-02-27 19:35:47 -05:00
Luke Yeager
61bd5a0643 [Lint] Address F811 2017-02-27 19:33:00 -05:00
Luke Yeager
5d5cfe2e57 [Lint] Address E731 2017-02-27 19:33:00 -05:00
Adam Paszke
1f6f82dbcf Fall back to indexing compatible with numpy 2017-02-26 20:02:42 +01:00
Adam Paszke
1f8939937a Allow using expand to broadcast tensors 2017-02-26 20:02:42 +01:00
Francisco Massa
2dc563f1f1 Fix indexing when passing only an Ellipsis 2017-02-25 23:34:09 +01:00
Adam Lerer
e71cf20192 improved serialization (no tar copy) (#713) 2017-02-22 22:24:20 +01:00
Adam Paszke
84248690a9 Add support for indexing with None and slices with positive steps 2017-02-20 23:28:31 -08:00
Adam Paszke
b9ece39685 Make torch.Size methods return torch.Size, not tuple 2017-02-17 10:40:08 +05:30
Sam Gross
712686ce91 Add cat, contiguous, squeeze, and unsqueeze to THPP
Use unsqueeze and view from TH/THC
2017-02-11 17:49:31 +01:00
Adam Paszke
825e919eb8 Add torch.unbind 2017-02-01 21:48:11 +01:00
Adam Paszke
acb0ce8885 Add LongTensor indexing support 2017-02-01 21:48:11 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
a1fa995044 Fixes and improvements (#593)
* Fix error in ELU backward

* Add --seed flag for tests

* Add test for BatchNorm eval

* Fix autograd.backward docs

* Support cc flags in cuDNN search

* Fix IndexSelect backward formula
2017-01-25 22:21:49 -05:00
Adam Paszke
3975a2676e Fix invalid DECREF in torch.Size constructor 2017-01-24 17:30:50 -05:00
Adam Paszke
8d1a6975d2 Fix for non-contiguous from_numpy (#489) 2017-01-18 18:53:13 -05:00
Adam Paszke
f91bb96071 Remove cmin, cmax and cinv 2017-01-16 19:07:37 -05:00
Adam Paszke
1c6ff53b60 Make storages unresizable once exported to numpy 2017-01-16 12:59:47 -05:00
Sam Gross
8c14630e35 Fix Tensor.apply_() (#444)
Fixes #411
2017-01-12 21:51:18 -08:00
Zeming Lin
59d66e6963 Sparse Library (#333) 2017-01-05 00:43:41 +01:00
Adam Paszke
8a20e22239 Add torch.stack 2016-12-31 16:25:39 -05:00
Adam Paszke
7c5014d803 Add torch.split, torch.chunk and change default dim of cat to 0 2016-12-31 16:25:39 -05:00
Adam Paszke
9b7eceddc8 Accept outputs in out argument 2016-12-29 12:25:59 +01:00
Adam Paszke
5497b1babb Use TypeError in invalidArguments 2016-12-28 18:15:17 +01:00
Adam Paszke
bef70aa377 Make type checking more strict and fix topk arguments 2016-12-28 18:15:17 +01:00
Adam Paszke
cd82b2b869 Implement comparison and logical operators for tensors 2016-12-28 00:04:08 +01:00
soumith
a215e000e9 fix for out of place tests and for non standard I/O pipes 2016-12-20 16:13:24 -08:00
Adam Paszke
26516f667e Fix multinomial bug and decrease precision of normal test (#325) 2016-12-17 21:40:13 +01:00
Adam Paszke
8a70067b92 Add support for stochastic functions in autograd (#294) 2016-12-16 13:14:37 +01:00
Adam Paszke
a681f6759b Raise correct error types when indexing tensors 2016-12-01 23:14:41 +01:00
Adam Paszke
1f5951693a Change torch.randperm to return Long tensors 2016-12-01 23:14:41 +01:00
Soumith Chintala
0fecec14b8 fixing bug in indexing when given float indices 2016-11-26 11:50:56 -05:00
Adam Paszke
8b492bbc47 Return accreal as correct python types 2016-11-25 00:40:36 +01:00
Adam Paszke
40247b0382 Fix torch tests in Python 3.3 and 3.4 2016-11-08 18:12:56 +01:00
Sam Gross
e3e786e35e Move source code checks from __getstate__ to torch.load (#200)
The __getstate__ and __setstate__ functions are called from copy.copy as
well as pickling. The source code inspection currently slows down the
data parallel code because it makes a copy of the object every
iteration.
2016-11-03 16:29:14 -04:00
Adam Paszke
e867baa5f9 Accept file paths in torch.save and torch.load 2016-11-01 19:31:53 +01:00
Sam Gross
ad5fdef6ac Make every user-visible Tensor have a Storage (#179) 2016-10-31 12:12:22 -04:00
Adam Paszke
6027513574 Add support for indexing with numpy types 2016-10-30 00:16:06 +02:00
Sam Gross
f2d7e94948 Use torch.Size for Tensor sizes and tuple for strides
See issue #20

The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
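The tuple-subclass trick above lets a constructor distinguish a size from data without changing how sizes otherwise behave. A minimal sketch (hypothetical names, not the real torch.Size/torch.Tensor):

```python
class Size(tuple):
    """A tuple subclass that tags a tuple as a size."""
    def __repr__(self):
        return "Size(%r)" % (tuple(self),)

def make_tensor(arg):
    # interpret Size as a shape, any other sequence as data
    if isinstance(arg, Size):
        return {"shape": tuple(arg)}
    return {"data": list(arg)}

assert make_tensor(Size((2, 3))) == {"shape": (2, 3)}
assert make_tensor((2, 3)) == {"data": [2, 3]}
# Size is still a tuple, so existing tuple-based code keeps working
assert isinstance(Size((2, 3)), tuple)
```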
Sam Gross
30924ff1e0 Fix test_nonzero flakiness (#173) 2016-10-26 19:50:56 -04:00
Adam Paszke
383c48968f Add support for indexing with ellipsis (#172) 2016-10-26 19:50:44 -04:00
Adam Paszke
9000f40e61 Add torch.from_numpy 2016-10-24 22:30:11 +02:00
Soumith Chintala
067662d280 making .numpy return writeable arrays (#164) 2016-10-24 16:23:28 -04:00
Francisco Massa
b85fc35f9a Fix for versions compiled without CUDA support (#155)
* Fix pytorch when compiling without CUDA support
* Skip print test with CUDA types if CUDA is not available
2016-10-23 13:03:10 +02:00
Soumith Chintala
bcb466fb76 fix bug with numpy conversion and storageOffset > 0 (#154) 2016-10-22 11:56:18 -04:00
Sam Gross
ee14cf9438 Add support for pinned memory: (#127)
torch.Storage/Tensor.pin_memory()
torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
Sam Gross
0391bbb376 Fix view_as and view for empty tensors (#128) 2016-10-15 18:33:05 -04:00
Soumith Chintala
3d6ebde756 qr and ormqr tests and bugfix 2016-10-14 03:10:16 -04:00
Adam Paszke
966adc6291 Simplify torch.cat 2016-10-10 20:51:15 -07:00
Adam Paszke
96f61bff30 Add LAPACK functions 2016-10-08 20:37:37 -07:00
Adam Paszke
3f7ab95890 Finish implementation of prng related functions 2016-09-29 11:33:25 -07:00
Adam Paszke
4a8a185aa4 Preserve storage view sharing in torch.save and torch.load 2016-09-25 12:24:10 -07:00
Adam Paszke
e71204b52f Improve error messages in storage and tensor C functions 2016-09-23 17:17:35 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
Adam Paszke
a8e816f450 Fix maskedSelect test 2016-09-18 12:54:12 -04:00
Adam Paszke
5d24432322 Fix errors when printing tensors with inf and nan values 2016-09-15 18:49:20 -07:00
Adam Paszke
4bad029fd4 Add more functions to autograd 2016-09-15 13:01:24 -07:00
Adam Paszke
f646391f26 Bug fixes and test improvements
Fixed:
* tensor and storage printing
* legacy.nn module printing
* SpatialCrossMapLRN tests

Also, all fixed bugs have regression tests now.
2016-09-08 19:07:05 -07:00
Sam Gross
1486d880b0 Add Storage.from_buffer
The from_buffer function is similar to numpy's frombuffer. It decodes a Python
buffer object into a Storage object. For byte and char storages, it
simply copies the bytes.
2016-09-07 15:32:33 -07:00
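A frombuffer-style decode can be sketched with the standard struct module (hypothetical helper; the real Storage.from_buffer returns a Storage, not a list):

```python
import struct

def storage_from_buffer(buf, fmt="b"):
    """Reinterpret a bytes buffer as a flat list of elements of the
    given struct format ('b' = int8, '<i' = little-endian int32, ...)."""
    size = struct.calcsize(fmt)
    assert len(buf) % size == 0, "buffer length must be a multiple of element size"
    return [struct.unpack_from(fmt, buf, i)[0] for i in range(0, len(buf), size)]

# byte storage: effectively a straight copy of the bytes
assert storage_from_buffer(b"\x01\x02\x03") == [1, 2, 3]
# int32 storage (little-endian): 4 bytes per element
assert storage_from_buffer(b"\x01\x00\x00\x00\x02\x00\x00\x00", "<i") == [1, 2]
```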
Adam Paszke
cc62ee229e Fix torch tests 2016-08-24 10:10:52 -07:00
Adam Paszke
75579fcabd Fix Log autograd test 2016-08-23 10:42:36 -07:00
Adam Paszke
686e8d32e2 Add torch.save and torch.load 2016-08-23 07:51:55 -07:00
Adam Paszke
e6953000e8 Add tests for copy and pickle + make CUDA optional in legacy nn tests 2016-08-15 06:37:57 -07:00
Adam Paszke
1e905eb4d5 copy -> copy_ 2016-08-12 09:26:33 -07:00
Adam Paszke
1a57979f41 Add cutorch tests 2016-08-11 06:43:41 -07:00