Commit Graph

157 Commits

Author SHA1 Message Date
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Update all torch.Tensor references wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add an explicit clean target to the docs Makefile to prevent running the graph generation script while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument; clean up the prints in build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
Tongzhou Wang
27265503ad nn.* doc update after Variable/Tensor merge (#5459)
The nn.* counterpart of #5443. Mostly removed the Variable wrapper. Also added doc for nn.RReLU.

Notice that torch.randn(*, requires_grad=True) isn't documented until #5462 is done.
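A minimal sketch of the requires_grad-at-construction pattern mentioned above, assuming a post-merge build of torch; the shape and operations are chosen purely for illustration:

    import torch

    # Create a tensor that tracks gradients directly at construction time.
    x = torch.randn(3, 4, requires_grad=True)
    y = (x * x).sum()
    y.backward()
    print(x.grad.shape)  # torch.Size([3, 4])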
2018-03-01 18:11:39 -05:00
Tongzhou Wang
6e0d0f08a9 Improves Conv*d(Transposed) docs to have correct newline and formatting (#5139)
Improves CUDA matmul error message by basically copying the CPU error message
2018-02-08 15:34:30 -05:00
albanie
5bcacb21d5 add bias term to linear __repr__ functions, fix spacing
Adds a missing bias term to the __repr__ functions of the
Linear and Bilinear modules. Fixes the spacing in the Conv2d
__repr__ to make it consistent with other modules.
2017-12-27 22:08:17 +01:00
Emanuel Jöbstl
be1ef5e4a4 Added explicit tuple element-count to doc for Conv1d. (#4136)
2017-12-14 22:17:46 -05:00
Iaroslav Shcherbatyi
558516fcdb More docs for Conv1d Conv2d (#3870)
* Add a bit of notation explanation

For a first-time user of Conv1d, it is not clear from the documentation what N, C, and L mean exactly; this should clarify that. The same applies to Conv2d.
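A minimal sketch of the (N, C, L) convention described above, assuming the current torch API; the sizes are chosen purely for illustration:

    import torch
    import torch.nn as nn

    # N = batch size, C = number of channels, L = sequence length.
    conv = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3)
    x = torch.randn(20, 16, 50)   # N=20, C=16, L=50
    print(conv(x).shape)          # torch.Size([20, 33, 48])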
2017-11-27 11:07:48 -05:00
Soumith Chintala
25b166ed1f add depthwise convolution terminology as a note 2017-11-12 23:26:42 -05:00
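For context, a minimal sketch of the depthwise case the note describes, assuming the current torch API; channel counts are illustrative:

    import torch.nn as nn

    # With groups == in_channels, each input channel is convolved with its
    # own set of filters (here 2 filters per channel), i.e. a depthwise conv.
    depthwise = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, groups=32)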
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
vfdev
acb73c729b Space is missing in __repr__ of conv (#3229)
* Remove spaces in `__repr__` of layers
* Replace `size` with `kernel_size` in `__repr__` of a pooling layer

* Fix flake8 errors
2017-10-30 13:45:37 -04:00
SsnL
de1f4e69dd raw text (#3327) 2017-10-28 01:24:02 +05:30
SsnL
6dc67aef17 doc (#3110) 2017-10-14 10:44:35 +02:00
Martin Drawitsch
b3bcba60c7 Correct padding docs of 3D modules (#2970)
3D modules apply padding on all three sides. "Both" doesn't make sense here.
I used the wording of the AvgPool3d docstring, where it was already correct.
2017-10-04 09:52:37 -04:00
yunjey
d19ee9c182 Add comments for default value (#2282)
Added comments for default value in conv.py
2017-08-15 02:49:22 -04:00
Kongsea
53ac2d46c6 Fix typos in docstrings. (#2034) 2017-07-10 10:35:46 -04:00
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Kongsea
0025e1c776 Fix typos in the docstrings of Conv3d, AvgPool3d and MaxPool3d (#2030)
* Fix a typo in the docstring of Conv3d

* Fix typos in docstrings of 3D operations.
2017-07-09 23:20:07 -04:00
Soumith Chintala
ba56de1150 add coding UTF-8 declaration 2017-05-23 16:02:34 -04:00
Kai Arulkumaran
6e3e453ad2 Tidy up convs docs (#1602) 2017-05-23 18:32:33 +02:00
Edward Z. Yang
96a281dfab Add one more missing self.dilation parameter. (#1392)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-28 19:16:32 +02:00
Edward Z. Yang
7e8ef0e22a Actually pass dilation to the underlying operators. (#1386)
No tests for now; we'll need some sort of shape DSL to concisely
represent them.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-04-27 23:38:01 +02:00
Edward Z. Yang
34546f022a Expose dilated convolutions.
Fixes #1225.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
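A minimal sketch of the dilation keyword exposed here, assuming the current torch API; values are illustrative:

    import torch
    import torch.nn as nn

    # dilation=2 leaves a gap of one element between kernel taps, so a
    # 3x3 kernel covers a 5x5 region of the input.
    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, dilation=2)
    x = torch.randn(1, 3, 32, 32)
    print(conv(x).shape)  # torch.Size([1, 8, 28, 28])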
2017-04-18 17:13:02 -04:00
Gu Wang
170d790b66 fix doc of conv3d in conv.py (#989)
The second dimension should be height.
2017-03-13 11:30:13 -04:00
Alfredo Canziani
385913be1c Fix class torch.nn.ConvTransposeNd documentation (#739)
There is no `dilation` parameter, and the `output_padding` doc was missing.
2017-02-15 10:37:20 +05:30
Luke Yeager
79f5bf84e5 [pep8] Potentially breaking docstring changes 2017-01-28 01:15:51 +01:00
Luke Yeager
3ed720079e [pep8] Fix most remaining lint manually 2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything it can handle safely, and to disable any rules that
are tricky or controversial to address. We may want to come back and
re-enable some of these rules later, but I'm trying to make this patch
as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Ronny
26a492acf3 Update docstring for ConvTranspose functions
Transposed convolutions are often (but incorrectly) referred to as deconvolution operations. This is now mentioned in the docstring to make the operation easier to find in the documentation.
2017-01-19 13:02:58 +01:00
Sam Gross
3a07228509 Add ConvTranspose1d module (#449) 2017-01-13 15:22:57 -05:00
Soumith Chintala
39ab5bcba8 fix MaxPool1d,2d,3d docs for rst 2017-01-04 03:11:48 -05:00
Soumith Chintala
42f131c09f fixing nn.Conv* documentation for rst and adding nn docs to sphinx 2017-01-04 02:11:27 -05:00
Sam Gross
64ca584199 Fix group support in convolution modules (#374) 2016-12-29 20:01:39 -05:00
Sam Gross
c367e0b64e Support dilated 1d and 3d convolutions (#372)
Fixes #367
2016-12-29 18:20:32 -05:00
Sergey Zagoruyko
239ae94389 fix in conv repr 2016-12-29 17:30:46 -05:00
Sergey Zagoruyko
62af45d99f Basic functional interface (#354) 2016-12-29 22:53:57 +01:00
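A minimal sketch of the functional form introduced here, written against the current torch.nn.functional API; shapes are illustrative:

    import torch
    import torch.nn.functional as F

    # Weight layout for conv2d is (out_channels, in_channels, kH, kW).
    x = torch.randn(1, 3, 8, 8)
    w = torch.randn(6, 3, 3, 3)
    print(F.conv2d(x, w, padding=1).shape)  # torch.Size([1, 6, 8, 8])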
Sam Gross
8a29338837 Use cuDNN for Conv3d and ConvTranspose3d (#359)
I've also updated test_nn.py to run marked tests twice: once with cuDNN
enabled and once with it disabled.
2016-12-28 16:14:47 -05:00
Sergey Zagoruyko
45d6212fd2 default args for conv functions 2016-12-25 01:55:00 -05:00
Sam Gross
ffcc38cf05 Deterministic ordering of parameters and buffers. (#317)
Uses the assignment syntax to get deterministic ordering of parameters.
The ordering of parameters using the constructor syntax is
non-deterministic because kwargs use dict() in Python 3.5 and earlier,
which does not preserve insertion order.
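A minimal sketch of the assignment syntax referred to above, assuming current nn.Module behaviour; module sizes are illustrative:

    import torch.nn as nn

    # Parameters are registered in the order the submodules are assigned
    # as attributes, so the iteration order below is deterministic.
    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=3)
            self.fc = nn.Linear(16, 10)

    net = Net()
    print([name for name, _ in net.named_parameters()])
    # ['conv.weight', 'conv.bias', 'fc.weight', 'fc.bias']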
2016-12-16 14:45:56 -05:00
Adam Paszke
e51d0bef97 Add cuDNN bindings for 2D transposed convolution 2016-11-17 14:34:40 -08:00
Soumith Chintala
26d626a47c adding docs for loss functions, container, module and fix typos 2016-11-17 15:11:27 -05:00
soumith
071e68d99d fixing output size w / h order 2016-11-16 15:32:18 -08:00
Adam Paszke
78c1094d93 Don't override __call__ in modules 2016-11-16 15:32:18 -08:00
Soumith Chintala
513d902df1 adding __repr__ for nn 2016-11-07 16:17:40 -05:00
Adam Paszke
34bcd4c237 Rename FullConv to ConvTranspose and allow specifying output size 2016-10-10 20:51:15 -07:00
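A minimal sketch of the output-size option mentioned above, assuming the current ConvTranspose2d API; sizes are illustrative:

    import torch
    import torch.nn as nn

    # With stride > 1 the output shape is ambiguous; output_size picks
    # one of the valid shapes at call time.
    deconv = nn.ConvTranspose2d(16, 8, kernel_size=3, stride=2)
    x = torch.randn(1, 16, 10, 10)
    print(deconv(x).shape)                        # torch.Size([1, 8, 21, 21])
    print(deconv(x, output_size=(22, 22)).shape)  # torch.Size([1, 8, 22, 22])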
Adam Lerer
1213149a2f add bias option to linear; allow modules to return nested lists/tuples of tensors (#106)
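A minimal sketch of the bias flag added here, assuming the current torch API; sizes are illustrative:

    import torch.nn as nn

    # bias=False skips the additive bias term entirely.
    fc = nn.Linear(128, 64, bias=False)
    print(fc.bias)  # None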
2016-10-06 15:59:12 -04:00
Adam Paszke
3cbe66ba8c Change requires_grad default to False 2016-10-05 08:46:34 -07:00
soumith
d92b7da733 fix documentation to not use forward 2016-09-30 09:49:30 -07:00
Sam Gross
bab7f89cdc Fix no_bias constructor for conv2d (#65) 2016-09-28 19:30:43 -04:00
Adam Paszke
7f4ff0e615 Fix type conversions in nn 2016-09-27 15:45:49 -07:00
Adam Paszke
f9d25e8e72 Refactor nn (require specifying parameters explicitly) 2016-09-27 15:22:26 -07:00
Sam Gross
779a460030 Add cuDNN support for convolutions (#36) 2016-09-27 17:55:04 -04:00
Adam Paszke
eec0420eb3 Initialize nn modules' parameters with a default tensor type 2016-09-23 18:06:26 -07:00
Soumith Chintala
5114d94ad9 docstrings for conv, dropout, linear, pooling and sparse functions 2016-09-19 00:31:22 -04:00
Adam Paszke
fb39971464 Add more modules to nn 2016-09-14 11:05:56 -07:00
Sam Gross
cd0929aa5e Use chainer-style constructor for Conv2d
* Conv2d, MaxPool2d, and AvgPool2d have one argument for each of ksize,
   stride, and pad. This argument can be either a single number or a
   tuple of (h, w)
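A minimal sketch of the int-or-tuple convention described above, using the current keyword names (kernel_size, stride, padding); sizes are illustrative:

    import torch.nn as nn

    # A single number applies to both spatial dimensions; a tuple gives
    # separate (h, w) values.
    square = nn.Conv2d(3, 16, kernel_size=3)
    rect = nn.Conv2d(3, 16, kernel_size=(3, 5), stride=(1, 2), padding=(1, 2))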
2016-09-07 15:51:44 -07:00
Sam Gross
b738b09606 Clean up Module forward and __call__ (#14)
* _forward is renamed forward since users should override it

 * some __call__ overrides are changed to forward

 * functions that return a single variable are changed to return that
   variable instead of a one-element tuple
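A minimal sketch of the convention settled here, assuming current nn.Module behaviour:

    import torch
    import torch.nn as nn

    # Users override forward() and invoke the module itself;
    # __call__ dispatches to forward.
    class Doubler(nn.Module):
        def forward(self, x):
            return 2 * x

    m = Doubler()
    print(m(torch.ones(3)))  # tensor([2., 2., 2.])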
2016-09-07 15:41:39 -04:00
Sam Gross
9553e46ed7 Make bias optional in Conv2d 2016-09-01 12:38:34 -07:00
Adam Paszke
ea93fb7ac0 Add more nn modules 2016-08-23 19:15:21 -07:00