Commit Graph

68 Commits

Author SHA1 Message Date
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete the pass up to "Other Operations" in the Tensor docs

* Add more changes
1. Modify all torch.Tensor references wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add an explicit clean target to the docs Makefile to prevent the graph generation script from running while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clean up the print statements in build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
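For reference, the erf/erfinv formulas this commit refers to are presumably the standard definitions, e.g.:

```latex
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}} \, dt,
\qquad \operatorname{erfinv}\bigl(\operatorname{erf}(x)\bigr) = x
```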
Sam Gross
54b4cdeffa
Replace all uses of 'Tensor or Variable' with 'Tensor' (#5508)
Replace all uses of 'Tensor or Variable' and 'Variable or Tensor' with 'Tensor'
2018-03-02 14:26:11 -05:00
Sam Gross
76ae03d5f1
Operate on Variables in torch.nn.init (#4964)
Once Variable and Tensor are merged, the existing Variable test would
cause infinite recursion. Instead, modify the Variables directly
inside a `no_grad()` block.
2018-02-05 11:34:05 -05:00
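A minimal sketch of the pattern this commit describes, assuming current PyTorch where `torch.no_grad()` suppresses autograd tracking during in-place initialization:

```python
import torch
import torch.nn as nn

linear = nn.Linear(4, 3)

# Mutate the parameters in-place inside a no_grad() block so the fill
# operations are not recorded by autograd.
with torch.no_grad():
    linear.weight.fill_(0.5)
    linear.bias.zero_()
```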
Soumith Chintala
99068d2e52 fix nn.init.constant example 2017-12-29 19:14:53 +09:00
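A hedged usage sketch of the initializer in question; current PyTorch spells the in-place function `constant_` (this commit predates the underscore rename):

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.constant_(w, 0.3)   # fill the tensor with the value 0.3
```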
Alykhan Tejani
7752fe5d4e remove zero padding in orthogonal initialization 2017-09-14 23:13:43 -04:00
Soumith Chintala
e4c0af8b56 revert #2708 (modify orthogonal init) for the rows < cols case 2017-09-13 18:23:43 -04:00
nadavbh12
d01adcbe0e modify orthogonal init 2017-09-13 16:54:37 -04:00
Alykhan Tejani
f0f7b39650 fix example in docs for nn.init.calculate_gain (#2600) 2017-09-02 19:23:25 -04:00
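A small sketch of how `calculate_gain` is typically combined with an initializer (underscore-suffixed names from current `torch.nn.init`; the exact docstring example fixed here is not reproduced):

```python
import torch
import torch.nn as nn

# Recommended gain for leaky ReLU with negative slope 0.2,
# i.e. sqrt(2 / (1 + 0.2**2)).
gain = nn.init.calculate_gain('leaky_relu', 0.2)
w = nn.init.xavier_uniform_(torch.empty(3, 5), gain=gain)
```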
Surag Nair
8b42308f71 Bug in line 381 (sparse) (#2130)
The function iterates over columns and sets a "sparsity" fraction of the entries in each column to 0. The number of zeros per column (num_zeros) is then ceil(rows * sparsity).
2017-07-18 22:55:06 -04:00
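A minimal sketch of the behaviour described above (a hypothetical helper, not the library code): each column gets ceil(rows * sparsity) zeros, with the remaining entries drawn from a narrow normal distribution:

```python
import math
import torch

def sparse_sketch(rows, cols, sparsity, std=0.01):
    # Non-zero entries ~ N(0, std); ceil(rows * sparsity) zeros per column.
    w = torch.randn(rows, cols) * std
    num_zeros = int(math.ceil(rows * sparsity))
    for col in range(cols):
        zero_rows = torch.randperm(rows)[:num_zeros]
        w[zero_rows, col] = 0.0
    return w
```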
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Francisco Massa
6626881e7a Add Alpha Dropout (#1775) 2017-06-13 00:39:49 +02:00
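A hedged usage sketch: Alpha Dropout is designed to be paired with SELU activations, since it aims to preserve the self-normalizing mean and variance:

```python
import torch
import torch.nn as nn

block = nn.Sequential(nn.Linear(16, 16), nn.SELU(), nn.AlphaDropout(p=0.2))
out = block(torch.randn(8, 16))
```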
GBLin5566
e50c7daaf9 Use QR factorization to get an orthogonal matrix in orthogonal init (#1453) 2017-05-04 07:11:59 -04:00
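A sketch of the QR-based approach named in this commit (a hypothetical helper, not the library implementation in `torch.nn.init.orthogonal_`): draw a Gaussian matrix, take Q from its QR factorization, and sign-correct so the result is uniformly distributed over orthogonal matrices.

```python
import torch

def orthogonal_sketch(rows, cols, gain=1.0):
    # QR needs at least as many rows as columns, so work with the tall
    # orientation and transpose back for the wide (rows < cols) case.
    flat = torch.randn(max(rows, cols), min(rows, cols))
    q, r = torch.linalg.qr(flat)
    q = q * torch.sign(torch.diagonal(r))  # sign correction per column
    if rows < cols:
        q = q.t()
    return gain * q

w = orthogonal_sketch(64, 128)
# Rows of w are orthonormal: w @ w.t() is approximately the identity.
```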
Kai Arulkumaran
48a7869b23 Doc fixes (#1409) 2017-04-30 08:28:19 -04:00
Kai Arulkumaran
cbb9f08b71 Add new init methods gain, eye and dirac (#1172) 2017-04-28 17:16:40 -04:00
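A hedged usage sketch of the new initializers named above, with the underscore-suffixed names used by current `torch.nn.init`:

```python
import torch.nn as nn

fc = nn.Linear(5, 5)
nn.init.eye_(fc.weight)      # identity matrix: preserves the input of a linear layer

conv = nn.Conv2d(16, 16, kernel_size=3, padding=1)
nn.init.dirac_(conv.weight)  # Dirac delta: preserves channels through a conv layer
```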
Alykhan Tejani
be6322e4b5 Update nn.init docstrings to correctly reference the module (#1001) 2017-03-15 11:17:59 -04:00
Alykhan Tejani
01650ac9de add torch.nn.init docs to the source folder (#979) 2017-03-11 10:11:30 -05:00
Alykhan Tejani
37e05485d9 added initialization schemes in torch.nn.init (#833) 2017-03-01 19:34:13 +01:00
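A short sketch of how such schemes are typically applied to a model's parameters (names from current `torch.nn.init`; the specific schemes added in #833 may differ):

```python
import torch.nn as nn

def init_weights(m):
    # Apply an initialization scheme per layer type.
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))
model.apply(init_weights)
```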
Adam Paszke
63893c3fa2 Fix auto-gpu semantics for indexing 2017-01-22 18:02:40 -05:00