Commit Graph

101 Commits

Tongzhou Wang
fe12ac57a4 Improve docs for torch and torch.Tensor (#3969)
* doc overhaul

* update split doc
2017-12-01 14:56:48 -05:00
SsnL
f80902c6fa update Tensor.new doc 2017-11-30 23:14:19 -05:00
Vladislav Zavadskyy
30d06218cb Solved boolean ambiguity for variables and tensors which contain one value. (#3656)
* Solved boolean ambiguity for variables and tensors which contain one value.

* Update variable.py

* Update tensor.py
2017-11-12 11:07:50 -05:00
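
A minimal sketch of the behavior this change establishes (illustrative values; assumes the torch Python API):

    import torch

    t = torch.Tensor([1.0])
    if t:                        # a one-element tensor has a well-defined truth value
        print("truthy")

    m = torch.Tensor([1.0, 0.0])
    # bool(m) raises an error: the truth value of a
    # multi-element tensor is ambiguous.
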
Filip Binkiewicz
54bfa88eec Allow casting one-element Tensors onto Python scalars 2017-10-31 08:51:55 -04:00
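
A companion sketch for the one-element scalar casts (illustrative values):

    import torch

    t = torch.Tensor([3.5])
    print(int(t))    # 3   -- truncates toward zero
    print(float(t))  # 3.5
    # int()/float() on a tensor with more than one element raise an error.
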
Ozan Çağlayan
e43a63a968 tensor: Ensure that the tensor is contiguous before pinning (#3266) (#3273)
* tensor: Ensure that the tensor is contiguous before pinning (#3266)

pin_memory() was producing an out-of-order tensor when the given
tensor was transposed, i.e. in column-major order.
This commit fixes that by calling contiguous() before pinning.

* test: add contiguous test for pin_memory (#3266)
2017-10-25 13:17:54 +02:00
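
A short sketch of the failure mode this fixes (illustrative shapes; per the message above, contiguous() is now called inside pin_memory()):

    import torch

    x = torch.randn(4, 8).t()        # transposed view: column-major, non-contiguous
    assert not x.is_contiguous()

    if torch.cuda.is_available():    # pinning requires a CUDA-enabled build
        p = x.pin_memory()           # contiguous() runs first, so the pinned
        assert p.is_pinned()         # copy keeps the correct element order
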
Holger Kohr
c488a9e9bf Add Numpy array interface to tensors 2017-10-03 10:27:56 -04:00
Gregory Chanan
08eb88f3de Duplicate what is tested in the function tests in the method tests.
Also make some function-vs-method tests uniform, and change the method
tests so they pass gradcheck (i.e. avoid NaNs)
2017-09-12 21:07:48 -04:00
albanD
a74fb22b9a fix inplace division for python3 (#2063) 2017-07-12 11:37:55 -04:00
Hugh Perkins
0d91048639 add dummy tensor.data property, to provide an interpretable error message to users (#2058) 2017-07-12 10:22:08 -04:00
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Hugh Perkins
73fead9f8f add shape alias (#1983) 2017-07-05 19:12:37 -04:00
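
The alias in one line (illustrative):

    import torch

    t = torch.randn(2, 3)
    assert t.shape == t.size()   # .shape is simply an alias for .size()
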
Alykhan Tejani
5964394a4c return empty iter when tensor is empty 2017-07-04 17:29:27 -04:00
gchanan
4e356528b4 Add torch.matmul function. (#1780)
* Add torch.matmul function.

Includes test_torch, test_autograd and docs changes.

* Add __all__ to functional so imports are not accidentally imported.

* Include unbind in __all__.

* Add matmul case for when one argument is 1-dimensional and the other
at least 3-dimensional.

* Add squeeze_ to Variable.

* Use squeeze_ instead of squeeze for matmul.
2017-06-14 08:14:53 -04:00
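
A sketch of the dispatch cases described above (illustrative shapes, assuming the documented matmul semantics):

    import torch

    a = torch.randn(3)
    b = torch.randn(3)
    print(torch.matmul(a, b))             # 1-d x 1-d -> dot product (scalar)

    m = torch.randn(2, 3)
    n = torch.randn(3, 4)
    print(torch.matmul(m, n).size())      # 2-d x 2-d -> (2, 4)

    batch = torch.randn(10, 2, 3)
    v = torch.randn(3)
    print(torch.matmul(batch, v).size())  # 3-d x 1-d -> (10, 2), broadcast over batch
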
Edward Z. Yang
865beada0e Add comment about new implementation being CPU-only.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 16:30:34 -04:00
Gregory Chanan
7da46097fe Fix lint errors. 2017-06-11 05:37:59 -04:00
Gregory Chanan
5af46cb352 Add broadcasting support for matmul. 2017-06-11 05:37:59 -04:00
gchanan
da7957c660 Fix masked_copy call to masked_scatter. (#1749) 2017-06-07 12:58:47 -04:00
Adam Paszke
a53cde09b5 Rename masked_copy_ to masked_scatter_ 2017-06-06 01:06:14 -04:00
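
A sketch of the renamed method (illustrative values):

    import torch

    dst = torch.zeros(5)
    mask = torch.ByteTensor([1, 0, 1, 0, 1])
    src = torch.Tensor([10.0, 20.0, 30.0])

    dst.masked_scatter_(mask, src)   # writes src, in order, where mask is set
    print(dst)                       # 10, 0, 20, 0, 30
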
Po-Hsien Chu
ba885a1a51 expose bitwise operators from C/CUDA (#1556)
* fix issue #1549, expose bitwise and

* expose C bitwise or of Tensor

* expose C bitwise xor of Tensor

* use built-in method for inplace and, or, xor

* expose C bitwise lshift(ilshift) and rshift(irshift) of Tensor
2017-05-15 11:36:15 -06:00
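
A sketch of the exposed operators (illustrative values, on an integral tensor type):

    import torch

    a = torch.ByteTensor([0b1100, 0b1010])
    b = torch.ByteTensor([0b1010, 0b0110])

    print(a & b)   # bitwise and -> 0b1000, 0b0010
    print(a | b)   # bitwise or  -> 0b1110, 0b1110
    print(a ^ b)   # bitwise xor -> 0b0110, 0b1100
    print(a << 1)  # left shift; >> is the right shift
    a &= b         # in-place variants (iand/ior/ixor) use the built-in methods
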
Tejas Khot
0160438eb9 added logical not operator for ByteTensor (#1403) 2017-04-30 08:47:24 -04:00
Trevor Killeen
48f9e526ea implement expand/expandAs in CPU/GPU code 2017-04-28 07:06:25 -04:00
Sam Gross
c4d1318662 Fix map_location in torch.load (#1006) 2017-03-15 16:54:19 -04:00
Christian Sarofeen
b1ae7f90d5 Added functionality for data parallel table (#843) 2017-03-05 02:35:46 +01:00
Martin Raison
f17cfe4293 sparse tensor operations (#735) 2017-03-03 18:37:03 +01:00
Adam Paszke
1f8939937a Allow using expand to broadcast tensors 2017-02-26 20:02:42 +01:00
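
A sketch of expand-based broadcasting (illustrative shapes):

    import torch

    row = torch.randn(1, 4)
    big = row.expand(3, 4)    # a view, no copy: stride 0 along the expanded dim
    print(big.size())         # (3, 4); only size-1 dims can be expanded
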
Adam Lerer
e71cf20192 improved serialization (no tar copy) (#713) 2017-02-22 22:24:20 +01:00
Sam Gross
712686ce91 Add cat, contiguous, squeeze, and unsqueeze to THPP
Use unsqueeze and view from TH/THC
2017-02-11 17:49:31 +01:00
Luke Yeager
3ed720079e [pep8] Fix most remaining lint manually 2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Sam Gross
35c2821d71 Add documentation for methods defined in TensorBase (#462) 2017-01-17 17:40:54 -05:00
Sam Gross
dcf5f8671c Add __pow__ to Tensor and list additional undocumented functions (#398) 2017-01-03 13:38:44 -05:00
Adam Paszke
7c5014d803 Add torch.split, torch.chunk and change default dim of cat to 0 2016-12-31 16:25:39 -05:00
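
A sketch of the three functions (illustrative shapes):

    import torch

    t = torch.randn(6, 4)
    parts = torch.split(t, 2)    # chunks of 2 rows each along dim 0
    thirds = torch.chunk(t, 3)   # same result, specified as a number of chunks
    back = torch.cat(parts)      # cat now defaults to dim 0
    assert back.size() == t.size()
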
Adam Paszke
62ac1b4bdd Implement missing cases of __matmul__ 2016-12-31 16:25:39 -05:00
Adam Paszke
0633c08ec9 Add is_shared() method for storages and tensors 2016-12-31 16:25:39 -05:00
Sergey Zagoruyko
62af45d99f Basic functional interface (#354) 2016-12-29 22:53:57 +01:00
Sam Gross
24af02154c Use ForkingPickler for sharing tensor/storages across processes (#344)
This hooks into the (internal) ForkingPickler class in multiprocessing
to reduce tensors, storages, and CUDA events, replacing our joblib-derived
queue. This makes it easier to use the standard multiprocessing classes
in later versions of Python.

This also exposes:

 - Tensor/Storage.share_memory_()
 - Module.share_memory()

These methods move the CPU tensors and storages to shared memory. If
you're using the "fork" method of multiprocessing, these objects can be
directly inherited instead of serialized through a queue.
2016-12-28 20:34:23 -05:00
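
A minimal fork-based sketch of the exposed methods (Unix-only; illustrative values):

    import torch
    import torch.multiprocessing as mp

    def worker(t):
        t.add_(1)                   # mutates the shared buffer in place

    if __name__ == "__main__":
        t = torch.zeros(3)
        t.share_memory_()           # moves the storage into shared memory
        assert t.is_shared()

        p = mp.get_context("fork").Process(target=worker, args=(t,))
        p.start()                   # with "fork", t is inherited directly,
        p.join()                    # not serialized through a queue
        print(t)                    # 1, 1, 1 -- the parent sees the update
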
Adam Paszke
e27bb3e993 Minor fixes 2016-12-28 18:15:17 +01:00
Adam Paszke
cd82b2b869 Implement comparison and logical operators for tensors 2016-12-28 00:04:08 +01:00
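
A sketch of the operator behavior (illustrative values; the result is a 0/1 mask tensor, a ByteTensor at the time of this commit):

    import torch

    a = torch.Tensor([1.0, 2.0, 3.0])
    b = torch.Tensor([2.0, 2.0, 2.0])
    print(a > b)     # element-wise: 0, 0, 1
    print(a.eq(b))   # method form of ==: 0, 1, 0
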
soumith
a215e000e9 fix for out of place tests and for non standard I/O pipes 2016-12-20 16:13:24 -08:00
Sam Gross
1af9a9637f Refactor copy and release GIL during copy (#286) 2016-12-11 21:54:58 +01:00
Adam Paszke
2fd78112ab Add half copy/conversions 2016-11-17 14:34:33 -08:00
Soumith Chintala
44874542c8 fix printing in console (#208) 2016-11-08 13:42:26 -05:00
Zeming Lin
86e42ba291 Adding truncated tensor printing (#202)
* Adding truncated tensor printing
2016-11-08 10:05:30 -05:00
Sam Gross
f2d7e94948 Use torch.Size for Tensor sizes and tuple for strides
See issue #20

The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
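
A sketch of the distinction the commit describes (illustrative shapes):

    import torch

    t = torch.Tensor(2, 3)
    s = t.size()
    assert isinstance(s, tuple)   # torch.Size subclasses tuple
    u = torch.Tensor(s)           # a torch.Size is interpreted as a size: new 2x3 tensor
    v = torch.Tensor([2, 3])      # a plain list is interpreted as data
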
Sam Gross
ee14cf9438 Add support for pinned memory: (#127)
 torch.Storage/Tensor.pin_memory()
 torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
Sam Gross
0391bbb376 Fix view_as and view for empty tensors (#128) 2016-10-15 18:33:05 -04:00
Adam Paszke
0325e2f646 Major autograd refactor
Improves autograd performance by more than 2x and fixes a couple
of bugs. All core functions have been moved to C.
2016-10-13 17:17:49 -07:00
Sam Gross
cb5d4e836f Lazy load CUDA and THNN modules (#64) 2016-09-28 19:29:53 -04:00
Adam Paszke
1828e7c42f Add async CUDA copy 2016-09-27 15:12:48 -07:00
Adam Paszke
06ab3f962f Refactor _C extension to export some utilities 2016-09-21 08:36:54 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00