Commit Graph

625 Commits

Author SHA1 Message Date
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything it can handle safely, and to disable any rules that
are tricky or controversial to address. We may want to come back and
re-enable some of these rules later, but I'm trying to keep this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Ronny
a0a95c95d4 Add Random Number Generator Docstrings (#506) 2017-01-19 11:10:01 -05:00
Sam Gross
14d5d52789 Add placeholder tensor documentation for methods that exist in torch. (#463) 2017-01-17 19:37:47 -05:00
Zeming Lin
59d66e6963 Sparse Library (#333) 2017-01-05 00:43:41 +01:00
Adam Paszke
7c5014d803 Add torch.split, torch.chunk and change default dim of cat to 0 2016-12-31 16:25:39 -05:00
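A minimal sketch of how these three calls fit together after the default-dim change, using present-day call signatures (the shapes and values are illustrative, not taken from the commit):

    import torch

    x = torch.randn(6, 4)

    # torch.chunk splits a tensor into a requested number of pieces along a dim.
    a, b, c = torch.chunk(x, 3, dim=0)   # three tensors of shape (2, 4)

    # torch.split splits into pieces of a given size along a dim.
    parts = torch.split(x, 2, dim=0)     # tuple of tensors of shape (2, 4)

    # torch.cat concatenates along dim 0 by default.
    y = torch.cat(parts)                 # back to shape (6, 4)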
Sam Gross
24af02154c Use ForkingPickler for sharing tensor/storages across processes (#344)
This hooks into the (internal) ForkingPickler class in multiprocessing
to reduce tensors, storages, and CUDA events, rather than sending them
through our queue adapted from joblib. This makes it easier to use the
standard multiprocessing classes in later versions of Python.

This also exposes:

 - Tensor/Storage.share_memory_()
 - Module.share_memory()

These methods move the CPU tensors and storages to shared memory. If
you're using the "fork" method of multiprocessing, these objects can be
directly inherited instead of serialized through a queue.
2016-12-28 20:34:23 -05:00
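A minimal sketch of the sharing workflow this commit describes, assuming torch.multiprocessing re-exports the standard Process class (an illustration under current APIs, not code from the commit):

    import torch
    import torch.multiprocessing as mp

    def worker(t):
        # In-place writes in the child are visible to the parent because the
        # tensor's storage lives in shared memory.
        t.add_(1)

    if __name__ == "__main__":
        t = torch.zeros(3)
        t.share_memory_()      # move the underlying storage to shared memory

        p = mp.Process(target=worker, args=(t,))
        p.start()
        p.join()
        print(t)               # reflects the child's update: [1., 1., 1.]

Module.share_memory() applies the same move to a module's CPU tensors.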
Sam Gross
126a1cc398 Add Sphinx docs 2016-12-28 00:03:39 +01:00
Adam Paszke
91f2946310 Import most common packages by default 2016-12-01 23:14:41 +01:00
Zeming Lin
86e42ba291 Adding truncated tensor printing (#202)
2016-11-08 10:05:30 -05:00
Sam Gross
ee14cf9438 Add support for pinned memory: (#127)
 - torch.Storage/Tensor.pin_memory()
 - torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
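A short illustration of the new pinning calls, assuming a CUDA-enabled build (pinning page-locked host memory requires one) and present-day argument names:

    import torch

    x = torch.randn(1024)
    print(x.is_pinned())          # False: ordinary pageable host memory

    x_pinned = x.pin_memory()     # returns a copy backed by page-locked memory
    print(x_pinned.is_pinned())   # True

    # Pinned buffers enable faster, asynchronous host-to-device copies, e.g.:
    # y = x_pinned.cuda(non_blocking=True)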
Adam Paszke
3f7ab95890 Finish implementation of prng related functions 2016-09-29 11:33:25 -07:00
Soumith Chintala
1cf87e8a0b OSX + Python 2 build fixes 2016-09-25 19:26:13 -04:00
Adam Paszke
4cdeae3283 Return only unique variables from parameters() 2016-09-25 12:23:43 -07:00
Adam Paszke
e66ea56bb3 Improve THNN tensor type mismatch error messages 2016-09-23 18:06:26 -07:00
Adam Paszke
7a74d3fc9e Fix dl flag module in python>=3.6 2016-09-23 17:25:10 -07:00
Adam Paszke
06ab3f962f Refactor _C extension to export some utilities 2016-09-21 08:36:54 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
Adam Paszke
f9d186d33a Add initial version of multiprocessing module 2016-08-31 19:46:08 -07:00
Adam Paszke
774a6f1093 Add in-place operations to autograd and nn 2016-08-25 09:34:54 -07:00
Adam Paszke
686e8d32e2 Add torch.save and torch.load 2016-08-23 07:51:55 -07:00
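For context, a minimal save/load round trip with the serialization API added here (a sketch using current call signatures):

    import torch

    x = torch.randn(3, 3)
    torch.save(x, "x.pt")         # serialize a tensor (or any picklable object)
    y = torch.load("x.pt")        # restore it from disk
    assert torch.equal(x, y)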
Adam Paszke
c574295012 Various fixes 2016-07-19 10:45:59 -04:00
Adam Paszke
3a44259b32 Add support for CUDA 2016-07-19 10:45:59 -04:00
Adam Paszke
3cec305524 Restructure python code 2016-06-23 22:55:05 +02:00
Adam Paszke
b0d90e3688 Add templated __init__ 2016-05-02 23:54:59 +02:00
Adam Paszke
731041cb6a Initial commit 2016-05-02 23:19:57 +02:00