Adam Paszke
55632d81d2
Add Python wrappers for process group mode
2017-01-31 01:58:09 +01:00
Sam Gross
c414bf0aaf
Fix handling of unicode in torch._C._add_docstr (#487)
2017-01-18 17:22:30 -05:00
Sam Gross
9302f860ae
Remove unused file TensorDocstrings.cpp (#481)
Tensor docstrings are created in _tensor_docs.py
2017-01-18 13:34:40 -05:00
Soumith Chintala
8aa8f791fc
add more torch.* and Tensor docs (#476)
2017-01-18 08:39:33 -05:00
Sam Gross
14d5d52789
Add placeholder tensor documentation for methods that exist in torch. (#463)
2017-01-17 19:37:47 -05:00
Adam Paszke
f91bb96071
Remove cmin, cmax and cinv
2017-01-16 19:07:37 -05:00
Soumith Chintala
bdfef2975c
adding more docs for torch.* functions
2017-01-11 08:19:49 -08:00
Zeming Lin
59d66e6963
Sparse Library (#333)
2017-01-05 00:43:41 +01:00
Soumith Chintala
6b4ed52f10
adding docs for some torch.* functions, removing the all/any stateless methods
2017-01-03 18:29:50 -05:00
Sam Gross
849794cd2c
Remove deprecated and unimplemented functions (#383)
2016-12-30 18:37:44 -05:00
Sam Gross
ab5776449c
Add documentation for some torch.xxx functions (#382)
2016-12-30 17:01:47 -05:00
Adam Paszke
9b7eceddc8
Accept outputs in out argument
2016-12-29 12:25:59 +01:00
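A minimal usage sketch of the out= convention described in the commit above, assuming torch.add accepts the keyword as it does in current releases:

    import torch

    x = torch.randn(3)
    y = torch.randn(3)
    result = torch.zeros(3)          # preallocated output buffer
    torch.add(x, y, out=result)      # writes the sum into `result` instead of allocating
    print(result)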
Sam Gross
24af02154c
Use ForkingPickler for sharing tensor/storages across processes (#344)
This hooks into the (internal) ForkingPickler class in multiprocessing
to reduce tensors, storages, and CUDA events instead of our queue from
joblib. This makes it easier to use the standard multiprocessing classes
in later versions of Python.
This also exposes:
- Tensor/Storage.share_memory_()
- Module.share_memory()
These methods move the CPU tensors and storages to shared memory. If
you're using the "fork" method of multiprocessing, these objects can be
directly inherited instead of serialized through a queue.
2016-12-28 20:34:23 -05:00
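A rough sketch of how the share_memory_() method exposed above might be used with the "fork" start method, assuming the current torch.multiprocessing API:

    import torch
    import torch.multiprocessing as mp

    def worker(t):
        t.add_(1)                        # in-place update; visible to the parent via shared memory

    if __name__ == '__main__':
        tensor = torch.zeros(5)
        tensor.share_memory_()           # move the CPU storage into shared memory
        p = mp.Process(target=worker, args=(tensor,))
        p.start()
        p.join()
        print(tensor)                    # reflects the child's update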
Sam Gross
126a1cc398
Add Sphinx docs
2016-12-28 00:03:39 +01:00
Sam Gross
e46d942ca6
Fix double initialization of HalfStorage (#331)
2016-12-19 15:19:41 -05:00
Adam Paszke
8e09f0590b
Make sure that C extension was compiled with cuDNN before using it
2016-12-15 00:47:55 +01:00
Adam Paszke
28f0cf6cee
Add docstring support to cwrap (#295)
2016-12-11 23:25:14 +01:00
Sam Gross
1af9a9637f
Refactor copy and release GIL during copy (#286)
2016-12-11 21:54:58 +01:00
Sam Gross
0d7d29fa57
Enable caching allocator for CUDA pinned memory (#275)
Also add binding for CUDA "sleep" kernel
2016-12-02 01:33:56 -05:00
Adam Paszke
1f5951693a
Change torch.randperm to return Long tensors
2016-12-01 23:14:41 +01:00
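A tiny illustration of the return type change above, for reference:

    import torch

    perm = torch.randperm(5)             # random permutation of 0..4
    print(perm)                          # integer (Long) tensor after this change
    x = torch.randn(5)
    print(x.index_select(0, perm))       # usable directly as an index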
Adam Paszke
3928f7740a
Implement functional interface for Variables (torch.*)
2016-11-08 16:13:25 -05:00
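A short sketch of the functional interface referenced above, assuming the pre-0.4 Variable wrapper from torch.autograd:

    import torch
    from torch.autograd import Variable  # pre-0.4 autograd wrapper

    a = Variable(torch.randn(3), requires_grad=True)
    b = Variable(torch.randn(3))
    c = torch.add(a, b)                  # torch.* functions now accept Variables
    c.sum().backward()                   # gradients flow through the functional call
    print(a.grad)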
Adam Paszke
ebc70f7919
Look for libcudart in default CUDA installation paths (#195)
2016-11-02 19:36:10 -04:00
Sam Gross
f2d7e94948
Use torch.Size for Tensor sizes and tuple for strides
See issue #20
The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
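A small illustration of the distinction described above between sizes and data:

    import torch

    x = torch.zeros(2, 3)
    size = x.size()
    print(size)                          # torch.Size([2, 3])
    print(isinstance(size, tuple))       # True: torch.Size is still a tuple subclass
    y = torch.Tensor(size)               # interpreted as a shape, not as data
    print(y.size())                      # torch.Size([2, 3])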
Sam Gross
ad2d413c0b
Add C++ bindings for cuDNN (#167)
The overhead of the Python ctypes bindings was high enough that it slowed down
multi-GPU training when using 4+ Maxwell GPUs.
2016-10-26 19:51:48 -04:00
Adam Paszke
9000f40e61
Add torch.from_numpy
2016-10-24 22:30:11 +02:00
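For illustration, the zero-copy behavior torch.from_numpy provides:

    import numpy as np
    import torch

    a = np.arange(6, dtype=np.float32).reshape(2, 3)
    t = torch.from_numpy(a)              # shares memory with the numpy array, no copy
    a[0, 0] = 42.0
    print(t)                             # the change made via numpy is visible in the tensor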
Adam Paszke
f137c0c05a
Improve error messages of stateless functions
2016-10-24 22:29:43 +02:00
Sam Gross
79ead42ade
Add CUDA Stream and Event API (#133)
2016-10-18 12:15:57 -04:00
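A brief sketch of the stream/event API added here, assuming the current torch.cuda interface and a CUDA-capable machine:

    import torch

    if torch.cuda.is_available():
        stream = torch.cuda.Stream()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)

        with torch.cuda.stream(stream):  # queue work on a non-default stream
            x = torch.randn(1024, 1024).cuda()
            start.record()
            y = x.mm(x)
            end.record()

        torch.cuda.synchronize()         # wait for the queued work to finish
        print(start.elapsed_time(end))   # elapsed time in milliseconds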
Sam Gross
3931beee81
Use THSetNumThreads instead of omp_set_num_threads
Set OMP num threads to one in the data loader.
Fixes #81
Fixes #82
2016-10-17 15:15:00 -04:00
Sam Gross
ee14cf9438
Add support for pinned memory: (#127)
torch.Storage/Tensor.pin_memory()
torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
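A minimal sketch of the pinned-memory methods listed above; the non_blocking flag follows the current API rather than the one from this era:

    import torch

    x = torch.randn(1000)
    pinned = x.pin_memory()                     # copy into page-locked (pinned) host memory
    print(x.is_pinned(), pinned.is_pinned())    # False True

    if torch.cuda.is_available():
        # Pinned memory enables asynchronous host-to-device copies.
        y = pinned.cuda(non_blocking=True)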
Soumith Chintala
3d6ebde756
qr and ormqr tests and bugfix
2016-10-14 03:10:16 -04:00
Adam Paszke
0325e2f646
Major autograd refactor
Improves autograd performance by more than 2x and fixes a couple
of bugs. All core functions have been moved to C.
2016-10-13 17:17:49 -07:00
Adam Paszke
2acee24332
Add keyword argument support to most tensor functions
2016-10-13 12:32:04 -04:00
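For illustration, the same call in positional and keyword form, using torch.clamp as an assumed example of a function that gained keyword support:

    import torch

    x = torch.randn(4)
    a = torch.clamp(x, -0.5, 0.5)            # positional form
    b = torch.clamp(x, min=-0.5, max=0.5)    # equivalent keyword form
    print(torch.equal(a, b))                 # True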
Adam Paszke
96f61bff30
Add LAPACK functions
2016-10-08 20:37:37 -07:00
Adam Paszke
dbe540e49f
Use the custom TH error handler in all threads by default
2016-09-30 14:59:50 -07:00
Adam Paszke
3f7ab95890
Finish implementation of prng related functions
2016-09-29 11:33:25 -07:00
Adam Paszke
941cf4e63d
Add ffi utils for user C extensions
2016-09-29 09:35:56 -07:00
Adam Paszke
1828e7c42f
Add async CUDA copy
2016-09-27 15:12:48 -07:00
Adam Paszke
ddf1598ef8
Add a method for catching exceptions thrown in ctypes
2016-09-25 12:25:54 -07:00
Adam Paszke
e71204b52f
Improve error messages in storage and tensor C functions
2016-09-23 17:17:35 -07:00
Adam Paszke
06ab3f962f
Refactor _C extension to export some utilities
2016-09-21 08:36:54 -07:00
Adam Paszke
8fdec15a55
Codemod to remove camel case method naming
2016-09-20 08:40:28 -07:00
soumith
1f2695e875
adding cuda driver check functions for runtime checking
2016-09-13 10:34:13 -07:00
Adam Paszke
58f507f9e3
Add file descriptor sharing mode to multiprocessing
2016-09-08 11:23:33 -07:00
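A brief sketch of selecting the new sharing mode, assuming the sharing-strategy helpers exposed by torch.multiprocessing on Linux:

    import torch.multiprocessing as mp

    print(mp.get_all_sharing_strategies())   # strategies available on this platform
    mp.set_sharing_strategy('file_descriptor')
    print(mp.get_sharing_strategy())         # 'file_descriptor'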
Adam Paszke
f9d186d33a
Add initial version of multiprocessing module
2016-08-31 19:46:08 -07:00
Adam Paszke
1902bc0bfb
Interface with numpy
2016-08-13 20:19:17 -07:00
Adam Paszke
12bed8dc0d
Add CUDA device selection
2016-08-12 07:46:46 -07:00
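A short sketch of device selection, assuming the current torch.cuda helpers and at least one visible GPU:

    import torch

    if torch.cuda.is_available():
        print(torch.cuda.device_count())     # number of visible GPUs
        torch.cuda.set_device(0)             # make GPU 0 the current device
        with torch.cuda.device(0):           # or select a device for a block of code
            x = torch.randn(3).cuda()        # allocated on the current device
        print(torch.cuda.current_device())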
Adam Paszke
e9f9fd3727
Major refactor
2016-08-10 09:24:53 -07:00
Adam Paszke
554a1d8336
Add optim
2016-07-21 16:42:06 -04:00
Adam Paszke
bc7bd7a8b3
Add unit tests and fix detected bugs
2016-07-21 13:46:59 -04:00
Adam Paszke
c574295012
Various fixes
2016-07-19 10:45:59 -04:00