Commit Graph

159 Commits

Author · SHA1 · Message · Date
Calvin Lee
f69fb3829a Add documentation for LPPool1D (#5730) 2018-03-13 04:37:25 -04:00
Piotr Mitros
7b33ef4cff Documentation cleanup for activation functions (#5457) 2018-03-01 14:53:11 +01:00
Tongzhou Wang
8c18220a59 Fix layer_norm initialization and nn.Module docs (#5422)
* Fix LN initialization; Support single int normalized_shape

* disable docstring inheritance

* fix sphinx warnings
2018-02-26 19:32:08 -05:00
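
A minimal sketch of what the single-int normalized_shape form allows, assuming current torch.nn (shapes are illustrative):

```python
import torch
import torch.nn as nn

# With a single int, LayerNorm normalizes over the last dimension only.
ln = nn.LayerNorm(10)            # equivalent to nn.LayerNorm((10,))
x = torch.randn(20, 5, 10)
y = ln(x)                        # normalized over the trailing dim of size 10

# After the initialization fix, the affine parameters start at the identity:
assert torch.all(ln.weight == 1) and torch.all(ln.bias == 0)
```
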
Tongzhou Wang
1848cad108 [ready] Layer Normalization (#4922)
* at::maybe_data_ptr and Check.h => TensorUtils.h

* THNN support for optional BN running_*

* ATen support for optional BN running_*

* Python nn.* support for optional BN running_*; Improve IN and BN doc

* Add tests for IN and BN new option

* Layer Norm

* Fix LRN doc

* functional interface for LN and IN

* Layer norm tests

* fix BN double backward returning undefined tensors

* fix jit test using wrong dim inputs for BN

* add/improve BN, IN and LN GPU tests with half type

* Update docs to be consistent with Conv notation
Fix ONNX
Clarified ONNX symbolic wrapper

* fix typo

* Address comments
2018-02-22 11:56:41 -05:00
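
A hedged sketch of two pieces this commit describes, the functional layer-norm interface and batch norm with the optional running_* buffers disabled (argument names follow current torch.nn and may differ from the original PR):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(8, 3, 32, 32)

# Functional layer norm over the last three dims (C, H, W).
y = F.layer_norm(x, normalized_shape=x.shape[1:])

# BatchNorm with running statistics disabled: batch statistics are always
# used and nothing is tracked across steps, so the running_* buffers are None.
bn = nn.BatchNorm2d(3, track_running_stats=False)
assert bn.running_mean is None and bn.running_var is None
```
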
Martin Drawitsch
1fdb3929c9 Fixes for docstrings/sphinx rendering of CosineAnnealingLR and Local Response Normalization (#5254)
* Fix LaTeX rendering in CosineAnnealingLR

Backslashes were interpreted by Python as escape characters, so \frac
turned into frac, which is not a valid LaTeX command.
This could be fixed with double backslashes, but the easiest solution is to
just use a raw (r) docstring (see the sketch after this entry).

* Fix sphinx warnings for LRN doc headings

* Move LRN docstring from __init__ to class level

The docstring was not rendered by sphinx at
http://pytorch.org/docs/master/nn.html#torch.nn.LocalResponseNorm
because it was in the constructor.

* Remove superfluous backticks from LRN formula
2018-02-15 10:29:02 -05:00
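
The raw-docstring fix described above, in miniature (a generic example, not the actual CosineAnnealingLR docstring):

```python
def bad():
    """Renders wrong: \frac{1}{2} loses its backslash ('\f' is a form feed)."""

def good():
    r"""Renders right: \frac{1}{2} reaches Sphinx intact."""

print(repr(bad.__doc__))   # the \f has already been mangled by Python
print(repr(good.__doc__))  # backslashes preserved
```
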
Kai Arulkumaran
9f893dda5f Add LocalResponseNorm to docs (#4681) 2018-01-16 11:12:50 -05:00
Richard Zou
35c4d73bdb Deprecate nn.NLLLoss2d (#4238)
* Deprecate nn.NLLLoss2d

* Fix legacy tests

* Fix tests

* Remove NLLLoss2d from docs, add deprecation warning instead of error

* fix lint

* Add more to docs
2018-01-04 12:38:04 -05:00
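
Since this deprecation, plain nn.NLLLoss accepts the 2d case directly; a sketch assuming current behavior (shapes illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# nn.NLLLoss now handles (N, C, H, W) log-probabilities with (N, H, W)
# targets, which is what nn.NLLLoss2d used to be for.
log_probs = F.log_softmax(torch.randn(4, 10, 8, 8), dim=1)
target = torch.randint(0, 10, (4, 8, 8))
loss = nn.NLLLoss()(log_probs, target)
```
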
Sherin Thomas
492e26fbcd Pad sequences and Pack sequences (#3875) 2017-12-22 16:14:09 +01:00
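
A short sketch of the two helpers this commit added, as they live in torch.nn.utils.rnn today:

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_sequence

seqs = [torch.randn(5, 3), torch.randn(3, 3), torch.randn(1, 3)]

# pad_sequence stacks variable-length sequences into one zero-padded tensor.
padded = pad_sequence(seqs)              # shape (5, 3, 3): (T, B, *)

# pack_sequence packs them (longest first here) for consumption by RNNs.
packed = pack_sequence(seqs)             # a PackedSequence
```
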
Sam Gross
9cb8b43778 Split off in-place NN functions (#3683)
For example, this splits threshold into threshold(), which is now
never in-place, and threshold_() which is always in-place.

This simplifies the in-place vs. non-in-place logic in
gen_variable_type.py, which was bug-prone.
2017-11-14 12:59:06 -05:00
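
A hedged sketch of the resulting convention in torch.nn.functional (the trailing underscore marks the in-place variant):

```python
import torch
import torch.nn.functional as F

x = torch.randn(5)

# Out-of-place: returns a new tensor, x is untouched.
y = F.threshold(x, threshold=0.0, value=0.0)

# In-place: mutates x directly.
F.threshold_(x, 0.0, 0.0)
```
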
Sam Gross
f1f64c8d07 Generate autograd functions for NN / more refactors (#3136)
Generate autograd functions for NN and implement more derivatives in derivatives.yaml

A big refactor of gen_variable_type.py
2017-10-19 15:03:26 -04:00
SsnL
d5a7e304fa added volumetric adaptive max pooling 2017-09-30 16:57:51 -04:00
SsnL
6a4ec4f9a8 VolumetricAdaptiveAveragePool 2017-09-25 15:12:44 -04:00
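
These two kernels surface in torch.nn as the 3d adaptive pooling layers; a sketch assuming current names:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 16, 10, 12, 14)       # (N, C, D, H, W)

# Adaptive pooling fixes the *output* size; kernel and stride are derived.
max3d = nn.AdaptiveMaxPool3d(output_size=(4, 4, 4))
avg3d = nn.AdaptiveAvgPool3d(1)          # global average pool per channel

print(max3d(x).shape)                    # torch.Size([2, 16, 4, 4, 4])
print(avg3d(x).shape)                    # torch.Size([2, 16, 1, 1, 1])
```
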
Soumith Chintala
ce4932f8a4 add softmax2d docs 2017-09-14 09:41:04 -04:00
Soumith Chintala
4fec5f658b add Bilinear to docs, fix reference 2017-09-11 20:12:27 -04:00
Soumith Chintala
1794e76800 add missing bilinear docs entry 2017-09-11 20:06:44 -04:00
jekbradbury
5e088da5ba Add DistributedDataParallel to docs
DataParallel was included twice.
2017-08-15 10:01:36 +05:30
Aron Barreira Bordin
11f3ccf98f Add missing Modules to nn.functional (#1801)
* add dropout2d and dropout3d to functional

added some loss functions to functional

added tests

using dropout from backend

added docs

fixes

* edited loss modules to call functional
2017-07-19 15:55:21 -04:00
Soumith Chintala
37183e91de add normalize docs to sphinx 2017-07-13 02:31:57 -04:00
Soumith Chintala
58e4caf80f add missing docs 2017-07-13 01:01:04 -04:00
Sam Gross
2c038f2074 Add weight normalization implementation (#1945)
* Add weight normalization implementation

This adds forward "pre-hooks" which get called before the module's
forward() method. Weight norm is implemented as a hook which recalculates
the weight variable from weight_g and weight_v on every iteration
(see the sketch after this entry).

Based on @rtqichen's implementation.

* Specify return type
2017-06-30 15:41:40 -04:00
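
A sketch of the hook-based API this commit describes, as exposed in torch.nn.utils:

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# weight_norm replaces `weight` with two parameters, weight_g (magnitude)
# and weight_v (direction); a forward pre-hook recomputes weight from them
# before every forward() call.
lin = weight_norm(nn.Linear(20, 40), name='weight')
print(hasattr(lin, 'weight_g'), hasattr(lin, 'weight_v'))  # True True

y = lin(torch.randn(8, 20))
```
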
Soumith Chintala
b3e500c522 fix docs generation warnings 2017-06-30 14:39:21 -04:00
Leonid Vlasenkov
ae61f3ff42 adds poisson NLL loss (#1779) 2017-06-27 10:04:54 -04:00
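
That loss is nn.PoissonNLLLoss; a minimal sketch, assuming current defaults:

```python
import torch
import torch.nn as nn

# log_input=True means the input is interpreted as log(lambda).
loss_fn = nn.PoissonNLLLoss(log_input=True)
log_rate = torch.randn(10)
counts = torch.poisson(torch.full((10,), 3.0))
loss = loss_fn(log_rate, counts)
```
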
Soumith Chintala
1f391a42f7 fix warnings for docs generation 2017-06-27 00:18:32 -04:00
Alykhan Tejani
67968cb60b Add numerically stable BCELoss which takes logits as input (#1792) 2017-06-19 22:05:51 -04:00
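
That loss landed as nn.BCEWithLogitsLoss, which fuses the sigmoid into the BCE computation; a sketch:

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 1) * 50          # large magnitudes saturate sigmoid
target = torch.empty(8, 1).random_(2)    # random 0/1 labels

# Fusing sigmoid + BCE uses the log-sum-exp trick internally, so extreme
# logits do not overflow the way sigmoid() followed by BCELoss can.
loss = nn.BCEWithLogitsLoss()(logits, target)
```
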
Francisco Massa
76ee014d10 Add documentation to SELU and AlphaDropout 2017-06-19 18:18:01 -04:00
Soumith Chintala
f61ec2495e nn.EmbeddingBag to compute a bag of word embeddings (Embedding + Sum/Mean) 2017-06-15 12:32:47 -04:00
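
A sketch of the fused lookup-and-reduce the commit title describes:

```python
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=16, mode='mean')

# A flat list of word indices plus offsets marking where each bag starts:
# bag 0 = words[0:3], bag 1 = words[3:5].
words = torch.tensor([4, 8, 15, 16, 23])
offsets = torch.tensor([0, 3])

out = bag(words, offsets)                # shape (2, 16), one mean per bag
```
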
Aron Barreira Bordin
909f31764f Add nn.padding to docs; fixes #1127 (#1808)
* exposed nn.padding modules

* using functional
2017-06-15 07:41:38 -04:00
Sam Gross
9c53c6dcb9 Fix errors and warnings when building docs (#1806) 2017-06-14 13:50:14 -04:00
Adam Paszke
12813b88f6 Add DistributedDataParallel 2017-06-12 22:00:22 -04:00
Soumith Chintala
2a49353d5e minor fix for docs of Upsample 2017-06-07 11:42:52 -04:00
Luca Antiga
b9ab26765e Add 3D upsampling (nearest and trilinear) with tests 2017-06-07 11:29:27 -04:00
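
A sketch of the two 3d modes on nn.Upsample (mode names as in current torch.nn):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 4, 8, 8)           # (N, C, D, H, W)

up_nearest = nn.Upsample(scale_factor=2, mode='nearest')
up_tri = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)

print(up_nearest(x).shape)               # torch.Size([1, 3, 8, 16, 16])
print(up_tri(x).shape)                   # torch.Size([1, 3, 8, 16, 16])
```
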
Aron Barreira Bordin
d7db75c10f added CosineSimilarity to nn.distance and updated docs (#1672)
2017-06-06 22:53:21 -04:00
Adam Paszke
6b84dc26f0 Add F.cosine_similarity (#1502) 2017-05-15 11:12:54 -06:00
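
A one-liner in use (dim chosen for row-wise similarity; shapes illustrative):

```python
import torch
import torch.nn.functional as F

a = torch.randn(8, 128)
b = torch.randn(8, 128)

sim = F.cosine_similarity(a, b, dim=1)   # shape (8,), one value per row
```
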
Soumith Chintala
ecd51f8510 docs fixes 2017-05-02 15:42:33 -04:00
Kai Arulkumaran
48a7869b23 Doc fixes (#1409) 2017-04-30 08:28:19 -04:00
Kai Arulkumaran
cbb9f08b71 Add new init methods gain, eye and dirac (#1172) 2017-04-28 17:16:40 -04:00
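
A sketch of the three initializers (spelled with trailing underscores in current torch.nn.init):

```python
import torch
import torch.nn as nn

g = nn.init.calculate_gain('relu')       # recommended gain for ReLU

w = torch.empty(3, 3)
nn.init.eye_(w)                          # identity matrix initialization

conv_w = torch.empty(16, 16, 3, 3)
nn.init.dirac_(conv_w)                   # conv that passes channels through
```
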
Dmitry Ulyanov
fa4f363b93 Instance norm (#1283)
* instance norm

* fix whitespaces

* whitespaces

* docs

* "C" letter was cyrillic in docs, fixed

* remove force_eval, fix non contiguous case
2017-04-23 14:49:15 +02:00
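
A minimal sketch of the new module, assuming current torch.nn:

```python
import torch
import torch.nn as nn

# Instance norm normalizes each sample and channel over its spatial dims,
# independently of the rest of the batch.
inorm = nn.InstanceNorm2d(3)
y = inorm(torch.randn(4, 3, 32, 32))
```
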
Soumith Chintala
2979f4b989 add more functions to docs 2017-03-29 01:29:17 -04:00
Soumith Chintala
2fd4d088ff add Adaptive pooling methods to docs 2017-03-26 22:43:46 -04:00
Soumith Chintala
13b1580613 add F.pad to docs 2017-03-15 00:09:14 -04:00
Alykhan Tejani
01650ac9de add torch.nn.init docs to the source folder (#979) 2017-03-11 10:11:30 -05:00
Adam Paszke
da725830c2 Add support for variable length sequences in RNNs (#873) 2017-03-01 17:36:32 +01:00
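
A hedged sketch of how packed variable-length batches flow through an RNN, using today's torch.nn.utils.rnn names:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

rnn = nn.LSTM(input_size=3, hidden_size=5)

padded = torch.randn(6, 2, 3)            # (T, B, input_size), zero-padded
lengths = [6, 4]                         # true length of each sequence

packed = pack_padded_sequence(padded, lengths)
out_packed, (h, c) = rnn(packed)         # padding never enters the RNN
out, out_lengths = pad_packed_sequence(out_packed)
```
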
Adam Paszke
b3d41a5f96 Add docs for ModuleList and ParameterList 2017-02-26 20:02:42 +01:00
Soumith Chintala
38c8520adf adding unsqueeze to docs 2017-02-23 12:13:25 -05:00
Adam Paszke
c2c1710047 Add clip_grad_norm 2017-02-20 23:28:31 -08:00
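
The clipping step in a training loop (current torch.nn.utils spells the in-place version clip_grad_norm_):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 10)).sum()
loss.backward()

# Rescale all gradients in place so their combined L2 norm is <= max_norm.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```
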
Sasank Chilamkurthy
49295ebe54 Add sequential to documentation 2017-02-18 08:42:43 +05:30
Soumith Chintala
d4c9a3782b billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8 tests fix (#617)
2017-01-30 05:08:48 +05:30
Adam Paszke
57373c7c29 Fix docs 2017-01-28 01:16:04 +01:00
Soumith Chintala
0bc4246425 adding NLLLoss2d to docs 2017-01-24 09:22:51 -05:00
Adam Paszke
07ebbcbcb3 Add Parameter docs 2017-01-22 18:32:51 -05:00
Adam Paszke
ee4c77c59f Docs improvements (#512)
* Always compile .numpy() for all types

* Add torch.nn.functional docs and hidden headers

* Use sphinx to generate torchvision docs

* Remove unused import in ffi utils
2017-01-19 17:28:49 -05:00
Soumith Chintala
ac32d8b706 fix docs 2017-01-16 21:08:14 -05:00
Adam Paszke
f91bb96071 Remove cmin, cmax and cinv 2017-01-16 19:07:37 -05:00
Sam Gross
3a07228509 Add ConvTranspose1d module (#449) 2017-01-13 15:22:57 -05:00
Sam Gross
24a2f2e3a0 Add MaxUnpool1d module (#447) 2017-01-13 14:36:25 -05:00
Sam Gross
d5e45b2278 Add AvgPool1d which just uses AvgPool2d implementation (#439) 2017-01-12 15:07:11 -05:00
Soumith Chintala
42f131c09f fixing nn.Conv* documentation for rst and adding nn docs to sphinx 2017-01-04 02:11:27 -05:00
Adam Paszke
f4870ca5c6 Fix nn docs 2016-12-30 00:15:06 -05:00
Sam Gross
126a1cc398 Add Sphinx docs 2016-12-28 00:03:39 +01:00