Alykhan Tejani
eb58740651
add ones_like and zeros_like
2017-08-25 14:11:04 -04:00
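A minimal usage sketch of the two factory functions added here (assuming a build that includes this commit):

```python
import torch

x = torch.randn(2, 3)
torch.ones_like(x)   # 2x3 tensor of ones, matching x's type
torch.zeros_like(x)  # 2x3 tensor of zeros, matching x's type
```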
gchanan
c000d15058
Properly use Py_RETURN_TRUE, Py_RETURN_FALSE in backwards-compatibility warnings. (#2345)
2017-08-08 21:54:20 -04:00
Zach DeVito
9d8cff9bc1
initialize ATen and PyTorch to share the same THCState
2017-07-11 10:35:03 -04:00
Adam Paszke
714351ff39
Officially enable process-group mode
2017-06-12 22:02:11 -04:00
Gregory Chanan
4f602a52b5
Use THPUtils_assert rather than THError in torch/csrc/Module.
2017-06-11 05:37:59 -04:00
Gregory Chanan
ffd808768e
Remove raiseErrors from THTensor functions; have THStorage functions take an error_buffer to return a proper error message while still handling memory management correctly from the calling function.
2017-06-11 05:37:59 -04:00
Gregory Chanan
177785eecf
explicit Ptr constructors, fast transposed copy.
2017-06-11 05:37:59 -04:00
Gregory Chanan
be65f46c76
Add optional warning for backwards-incompatible keepdim. Setting torch.utils.backcompat.keepdim.warning.enabled=True will cause Python warnings when the default value of keepdim is used for 1-d reductions.
...
Also specify keepdim via kwargs in the library so these warnings produce less
noise.
2017-06-11 05:37:59 -04:00
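A sketch of opting into this warning; the attribute path below follows the commit message verbatim, and released versions may spell it differently (e.g. keepdim_warning):

```python
import torch

# Enable the opt-in back-compat warning (path as written in the commit message).
torch.utils.backcompat.keepdim.warning.enabled = True

x = torch.randn(4, 5)
s = x.sum(0)  # 1-d reduction relying on the default keepdim -> emits a warning
```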
Gregory Chanan
3556d1b8a3
Add optional warning for backwards-incompatible broadcast.
...
Setting torch.utils.backcompat.broadcast.warning.enabled=True
will cause Python warnings in cases where broadcasting occurs
but 1-d view-style pointwise ops previously occurred.
2017-06-11 05:37:59 -04:00
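A sketch of the case this warning targets, with the same caveat about the attribute path as above:

```python
import torch

torch.utils.backcompat.broadcast.warning.enabled = True  # path per the commit message

a = torch.randn(4, 1)
b = torch.randn(4)
c = a + b  # now broadcasts to 4x4; older releases did a 1-d view-style pointwise add
```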
Gregory Chanan
5af46cb352
Add broadcasting support for matmul.
2017-06-11 05:37:59 -04:00
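For example, broadcasting lets a single matrix apply across a whole batch:

```python
import torch

a = torch.randn(10, 3, 4)  # batch of 10 matrices
b = torch.randn(4, 5)      # one matrix, broadcast over the batch dimension
c = torch.matmul(a, b)     # shape (10, 3, 5)
```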
Sam Gross
d81da41650
Make sure the number of MKL and OpenMP threads match
...
Otherwise, on many machines, the size of the OpenMP thread pool will
change between MKL and our OpenMP-enabled functions. The constant thread
creation and destruction results in worse performance and leaks memory
on GCC 5.4.
2017-06-07 14:53:29 -04:00
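The user-visible knob here is the thread count; a minimal sketch, assuming torch.set_num_threads sizes both pools after this change:

```python
import torch

torch.set_num_threads(4)        # size the OpenMP (and, per this fix, MKL) thread pool
print(torch.get_num_threads())  # -> 4
```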
Adam Paszke
8ea7c87c29
Improve init methods
2017-06-02 23:42:11 +02:00
Adam Paszke
181d2f41bd
Add initial Python wrappers for THDTensors
2017-06-02 23:42:11 +02:00
Trevor Killeen
05bc877a05
make THPPointer have explicit constructors (#1636)
2017-05-25 15:35:54 -04:00
ethanluoyc
d0504aa41d
Implement lgamma function.
2017-05-08 16:21:26 -07:00
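A quick check of the new function against known values of log-Gamma:

```python
import torch

x = torch.tensor([0.5, 1.0, 5.0])
torch.lgamma(x)  # ~[0.5724, 0.0, 3.1781]: log(sqrt(pi)), log(1), log(4!)
```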
Sam Gross
4c1cdb6148
Refactor Python string utility function
2017-04-28 21:25:26 +02:00
Sam Gross
27990fee54
Use fully qualified name as tp_name for tensors and storages (#1379)
2017-04-27 16:26:44 -04:00
Martin Raison
cd3bbc9dfd
more operations and optimizations (hspmm, reorder, ...)
2017-04-18 12:46:54 -07:00
albanD
71303b8af4
Fix autograd deadlock with recent glibc (#1243)
2017-04-12 22:24:31 +02:00
Adam Paszke
afeeb81e79
Add support for keyword arguments in torch.cat
2017-04-11 14:48:54 -07:00
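With this change the concatenation dimension can be passed by name:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(2, 3)
torch.cat((a, b), dim=0)  # dim as a keyword -> shape (4, 3)
```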
Adam Paszke
91c4ba7980
Add torch.arange and deprecate torch.range
2017-04-03 10:38:58 -04:00
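The two differ in whether the end point is included:

```python
import torch

torch.arange(0, 5)   # tensor([0, 1, 2, 3, 4]) -- end point excluded
# torch.range(0, 5)  # deprecated: included the end point, giving 0..5
```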
albanD
dfa2d26830
make random_ range correct when both lower and upper are specified
2017-03-31 15:37:24 -04:00
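A sketch of the corrected behavior, with both bounds given explicitly:

```python
import torch

t = torch.zeros(5)
t.random_(2, 10)  # fills with uniform integers drawn from [2, 10)
```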
Sergey Zagoruyko
8dc5d2a22e
export current_blas_handle
2017-03-23 23:32:45 +01:00
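The exported handle is reachable from Python; a minimal sketch, assuming a CUDA-enabled build:

```python
import torch

if torch.cuda.is_available():
    handle = torch.cuda.current_blas_handle()  # raw cublasHandle_t as an integer
```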
Brandon Amos
bb353ccc17
Add batch triangular factorization and solves, add IntegerTensor to cwrap (#903)
2017-03-23 15:06:00 -04:00
Adam Paszke
faac0f5c25
Fix torch.cat bugs
...
Always use the PySequence API and disallow catting along nonexistent
dimensions.
2017-03-22 18:58:42 -04:00
Sam Gross
379ae6d865
Refactor out dispatchStateless (#1007)
...
Some of the error messages were incorrect due to erroneous
'tensor == THPDefaultTensorClass' checks.
2017-03-15 16:24:55 -04:00
Martin Raison
f17cfe4293
sparse tensor operations (#735)
2017-03-03 18:37:03 +01:00
Zhou Chang
f366e5fc81
Support int16 numpy conversions
...
issue #891
2017-03-02 09:15:57 -05:00
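With this fix an int16 array converts directly:

```python
import numpy as np
import torch

a = np.array([1, 2, 3], dtype=np.int16)
t = torch.from_numpy(a)  # a ShortTensor sharing memory with the numpy array
```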
Sam Gross
fc6fcf23f7
Lock the cudaFree mutex. (#880)
...
Prevents NCCL calls from overlapping with cudaFree(), which can lead to
deadlocks.
2017-03-01 11:29:25 -05:00
Adam Paszke
67f94557ff
Expose torch.HalfTensor
2017-02-27 19:35:47 -05:00
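A minimal sketch of the newly exposed type:

```python
import torch

h = torch.HalfTensor([1.0, 2.0, 3.0])  # 16-bit float tensor on the CPU
h_f = h.float()                        # upcast, since many CPU ops lack half support
```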
Sam Gross
bd5303010d
Refactor autograd package to separate Python dependencies. (#662)
...
The core autograd Variable, Function, and Engine no longer depend on the
Python API. This lets us implement functions in C++. In the future, we
can also multithread the engine and release the GIL for most of the
non-Python backward passes.
2017-02-13 16:00:16 -08:00
Sam Gross
712686ce91
Add cat, contiguous, squeeze, and unsqueeze to THPP
...
Use unsqueeze and view from TH/THC
2017-02-11 17:49:31 +01:00
Adam Paszke
79232c24e2
Fixes after rebase
2017-01-31 01:58:09 +01:00
Janusz Marcinkiewicz
76520512e7
DataChannel tests rewrite (#42); DataChannel isend and irecv implementation (#44)
2017-01-31 01:58:09 +01:00
Adam Paszke
60d1852c7b
Major improvements to master-worker mode
...
* Fixed all undefined symbol errors
* Implemented storage interface and THStorage class
* RPC improvements
* Code refactor
2017-01-31 01:58:09 +01:00
Adam Paszke
55632d81d2
Add Python wrappers for process group mode
2017-01-31 01:58:09 +01:00
Sam Gross
c414bf0aaf
Fix handling of unicode in torch._C._add_docstr (#487)
2017-01-18 17:22:30 -05:00
Sam Gross
9302f860ae
Remove unused file TensorDocstrings.cpp (#481)
...
Tensor docstrings are created in _tensor_docs.py
2017-01-18 13:34:40 -05:00
Soumith Chintala
8aa8f791fc
add more torch.* and Tensor docs (#476)
2017-01-18 08:39:33 -05:00
Sam Gross
14d5d52789
Add placeholder tensor documentation for methods that exist in torch. (#463)
2017-01-17 19:37:47 -05:00
Adam Paszke
f91bb96071
Remove cmin, cmax and cinv
2017-01-16 19:07:37 -05:00
Soumith Chintala
bdfef2975c
adding more docs for torch.* functions
2017-01-11 08:19:49 -08:00
Zeming Lin
59d66e6963
Sparse Library (#333)
2017-01-05 00:43:41 +01:00
Soumith Chintala
6b4ed52f10
adding docs for some torch.* functions, removing the all/any stateless methods
2017-01-03 18:29:50 -05:00
Sam Gross
849794cd2c
Remove deprecated and unimplemented functions (#383)
2016-12-30 18:37:44 -05:00
Sam Gross
ab5776449c
Add documentation for some torch.xxx functions (#382)
2016-12-30 17:01:47 -05:00
Adam Paszke
9b7eceddc8
Accept outputs in out argument
2016-12-29 12:25:59 +01:00
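For example, results can be written into a preallocated tensor:

```python
import torch

a = torch.randn(3)
b = torch.randn(3)
result = torch.zeros(3)
torch.add(a, b, out=result)  # writes the sum into result instead of allocating
```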
Sam Gross
24af02154c
Use ForkingPickler for sharing tensors/storages across processes (#344)
...
This hooks into the (internal) ForkingPickler class in multiprocessing
to reduce tensors, storages, and CUDA events, replacing our joblib-derived
queue. This makes it easier to use the standard multiprocessing classes
in later versions of Python.
This also exposes:
- Tensor/Storage.share_memory_()
- Module.share_memory()
These methods move the CPU tensors and storages to shared memory. If
you're using the "fork" method of multiprocessing, these objects can be
directly inherited instead of serialized through a queue.
2016-12-28 20:34:23 -05:00
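A sketch of the fork-based inheritance the commit body describes (the worker function is hypothetical, for illustration):

```python
import torch
import torch.multiprocessing as mp

def worker(t):
    t.add_(1)  # in-place update, visible to the parent via shared memory

if __name__ == '__main__':
    t = torch.zeros(3)
    t.share_memory_()  # move the underlying storage to shared memory
    p = mp.Process(target=worker, args=(t,))
    p.start()
    p.join()
    print(t)  # tensor([1., 1., 1.])
```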
Sam Gross
126a1cc398
Add Sphinx docs
2016-12-28 00:03:39 +01:00
Sam Gross
e46d942ca6
Fix double initialization of HalfStorage (#331)
2016-12-19 15:19:41 -05:00