Commit Graph

62 Commits

Author SHA1 Message Date
Richard Zou
b4cd9f2fc9
Clarify mp note about sharing a tensor's grad field. (#8688)
* Clarify mp note about sharing a tensor's grad field.

* Address comments

* Address comments
2018-06-20 14:22:38 -04:00
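For context, the note in question covers tensors moved into shared memory for torch.multiprocessing. A minimal Hogwild-style sketch of that setup (illustrative only; the grad-field semantics are exactly what the clarified note spells out):

```
import torch
import torch.multiprocessing as mp
import torch.nn as nn

def train(model):
    # Each worker computes gradients against the shared parameters.
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()

if __name__ == '__main__':
    model = nn.Linear(10, 1)
    model.share_memory()  # move parameter storage into shared memory
    workers = [mp.Process(target=train, args=(model,)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```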
Kaiyu Shi
0169ac5936 Fix sample code for cuda stream (#8319) 2018-06-10 11:41:50 -04:00
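The pattern the fixed sample illustrates, sketched here from the CUDA-semantics note (requires a CUDA device; the synchronization line is the crux):

```
import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()
    A = torch.empty(100, 100, device='cuda').normal_(0.0, 1.0)
    # Make stream s wait for the default stream's normal_() to finish,
    # otherwise sum() could race with the initialization.
    s.wait_stream(torch.cuda.default_stream())
    with torch.cuda.stream(s):
        B = torch.sum(A)
```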
Tongzhou Wang
9af3a80cff
Docs for gradcheck and gradgradcheck; expose gradgradcheck (#8166)
* Docs for gradcheck and gradgradcheck; expose gradgradcheck

* address comments
2018-06-06 13:59:55 -04:00
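gradcheck numerically compares analytical autograd gradients against finite differences, and gradgradcheck does the same for second derivatives. A minimal usage sketch (double precision is needed for the default tolerances):

```
import torch
from torch.autograd import gradcheck, gradgradcheck

def f(x, y):
    return (x * y).sum()

x = torch.randn(3, 3, dtype=torch.double, requires_grad=True)
y = torch.randn(3, 3, dtype=torch.double, requires_grad=True)

assert gradcheck(f, (x, y))      # first derivatives vs. finite differences
assert gradgradcheck(f, (x, y))  # second derivatives
```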
peterjc123
108fb1c2c9 Fix the import part of the windows doc (#7979) 2018-05-30 21:51:30 -04:00
peterjc123
267fc43a96 Fix Windows doc for import error (#7704)
* Fix Windows doc for import error

* Fix doc again

* Fix wrong format
2018-05-29 22:07:00 +01:00
braincodercn
5ee5537b98 Fix typo in document (#7725) 2018-05-21 11:10:24 -04:00
Richard Zou
0430bfe40b
[docs] Update broadcasting and cuda semantics notes (#6904)
* [docs] Update broadcasting and cuda semantics notes

* Update multiprocessing.rst

* address comments

* Address comments
2018-04-24 13:41:24 -04:00
peterjc123
a4dbd37403 [doc] Minor fixes for Windows docs (#6853) 2018-04-23 13:15:33 +02:00
peterjc123
56567fe47d Add documents for Windows (#6653)
* Add Windows doc

* some minor fixes

* Fix typo

* more minor fixes

* Fixes on dataloader
2018-04-22 15:18:02 -04:00
Richard Zou
2acc247517
[docs] Update autograd notes (#6769) 2018-04-19 13:34:14 -04:00
Tongzhou Wang
6b7ec95abb Link relevant FAQ section in DataLoader docs (#6476)
* Link FAQ section on workers returning same random numbers in DataLoader docs

* explicitly mention section names
2018-04-11 13:41:46 -04:00
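The linked FAQ section concerns NumPy's RNG state being duplicated into every worker. A common fix, sketched here with a hypothetical NoisyDataset, reseeds NumPy per worker:

```
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class NoisyDataset(Dataset):  # hypothetical dataset for illustration
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # NumPy's global RNG state is duplicated into every worker,
        # so without reseeding, workers draw identical numbers.
        return np.random.rand()

def worker_init_fn(worker_id):
    # Each worker's torch seed already differs (base_seed + worker_id),
    # so reuse it to give NumPy a distinct per-worker seed.
    np.random.seed(torch.initial_seed() % 2**32)

loader = DataLoader(NoisyDataset(), num_workers=2,
                    worker_init_fn=worker_init_fn)
```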
Tongzhou Wang
4d15442ebc
Add total_length option to pad_packed_sequence (#6327)
* add total_length to pad_packed_sequence; add example on how to use pack->rnn->unpack with DP

* address comments

* fix typo
2018-04-08 20:25:48 -04:00
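A sketch of the pack -> RNN -> unpack pattern with total_length, which pads the unpacked output back to a fixed size so DataParallel can re-gather per-GPU chunks:

```
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

MAX_LEN = 10
rnn = nn.LSTM(input_size=5, hidden_size=7, batch_first=True)
padded = torch.randn(3, MAX_LEN, 5)  # (batch, seq, feature)
lengths = [10, 6, 4]                 # sorted in decreasing order

packed = pack_padded_sequence(padded, lengths, batch_first=True)
output, _ = rnn(packed)
# total_length pads the output back to MAX_LEN even when this chunk's
# longest sequence is shorter -- the case that arises under DataParallel.
output, _ = pad_packed_sequence(output, batch_first=True,
                                total_length=MAX_LEN)
```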
Kento NOZAWA
c00ee6da8f Fix typos (#6348)
* Fix typo

* Fix typo

* Update faq.rst
2018-04-06 11:06:42 -04:00
Kaiyu Shi
605307f8f3 Add support for printing extra information in Module and refactor redundant code (#5936)
This PR enables users to print extra information about their subclassed nn.Module.
For now, the user-defined string is simply appended after the module name; the exact placement should be discussed in this PR.

Before this PR, users had to redefine __repr__ and copy-paste the source code from Module.

* Add support for extra information on Module

* Rewrite the repr method of Module

* Fix flake8

* Change the __repr__ to get_extra_repr in Linear

* Fix extra new-line for empty line

* Add test for __repr__ method

* Fix bug of block string indent

* Add indent for multi-line repr test.

* Address review comments

* Update tutorial for creating nn.Module

* Fix flake8, add extra_repr of bilinear

* Refactor DropoutNd

* Change to extra_repr in some Modules

* Fix flake8

* Refactor padding modules

* Refactor pooling module

* Fix typo

* Change to extra_repr

* Fix bug for GroupNorm

* Fix bug for LayerNorm
2018-04-02 13:52:33 -04:00
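After this refactor, a subclass overrides extra_repr() rather than all of __repr__. A minimal sketch with a hypothetical Scale module:

```
import torch.nn as nn

class Scale(nn.Module):  # hypothetical module for illustration
    def __init__(self, factor):
        super(Scale, self).__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def extra_repr(self):
        # Rendered inside the parentheses when the module is printed.
        return 'factor={}'.format(self.factor)

print(Scale(2.0))  # -> Scale(factor=2.0)
```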
Peter Goldsborough
47f31cb1e6 Update FAQ to make more sense after tensor/variable merge (#6017) 2018-03-27 07:48:25 -07:00
Richard Zou
5d628db0a2 Deprecate ctx.saved_variables via python warning. (#5923)
* Deprecate ctx.saved_variables via python warning.

Advises replacing saved_variables with saved_tensors.
Also replaces all instances of ctx.saved_variables with ctx.saved_tensors in the
codebase.

Test by running:
```
import torch
from torch.autograd import Function

class MyFunction(Function):
    @staticmethod
    def forward(ctx, tensor1, tensor2):
        ctx.save_for_backward(tensor1, tensor2)
        return tensor1 + tensor2

    @staticmethod
    def backward(ctx, grad_output):
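        # Intentionally reads the deprecated attribute so the warning fires.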
        var1, var2 = ctx.saved_variables
        return (grad_output, grad_output)

x = torch.randn((3, 3), requires_grad=True)
y = torch.randn((3, 3), requires_grad=True)
# Static-method Functions are applied via the class, not an instance.
MyFunction.apply(x, y).sum().backward()
```
and asserting that the deprecation warning shows up.

* Address comments

* Add deprecation test for saved_variables
2018-03-26 14:13:45 -04:00
Tongzhou Wang
00cc962670 typo (#5847) 2018-03-17 10:26:00 -04:00
Tongzhou Wang
392fc8885c add faq on cuda memory management and dataloader (#5378) 2018-02-27 18:35:30 -05:00
Tongzhou Wang
8c18220a59 Fix layer_norm initialization and nn.Module docs (#5422)
* Fix LN initialization; Support single int normalized_shape

* disable docstring inheritance

* fix sphinx warnings
2018-02-26 19:32:08 -05:00
Junior Rojas
642e4d0762 Fix typos (#5340) 2018-02-21 16:27:12 -05:00
brett koonce
596470011b minor sp, underlyhing->underlying (#5304) 2018-02-19 22:28:17 -05:00
Edward Z. Yang
e411525f2c
Add a FAQ, for now just 'out of memory' advice. (#5251)
* Add a FAQ, for now just 'out of memory' advice.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Updates based on comments.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* minor copyedit
2018-02-15 17:38:55 -08:00
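The classic pitfall that advice targets is accumulating a value that still references the autograd graph. A sketch of the recommended pattern:

```
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

total_loss = 0.0
for _ in range(100):
    x, y = torch.randn(4, 10), torch.randn(4, 1)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    # .item() extracts a plain Python float; accumulating `loss` itself
    # would keep every iteration's autograd graph alive in memory.
    total_loss += loss.item()
```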
Thibault FEVRY
e39e86f119 Remove deprecated references to volatile (#5193) 2018-02-12 21:08:27 +01:00
Peter Goldsborough
65353f1342 Remove volatile section from autograd notes 2018-02-01 00:26:36 +01:00
Tongzhou Wang
6420c6b224 Improve torch.cuda.empty_cache documentation (#4879)
* add doc noting that empty_cache won't increase the amount of memory available

* typo
2018-01-27 04:54:25 -05:00
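Usage is a single call; as the doc clarifies, it returns cached blocks to the driver for other applications but does not increase the memory available to PyTorch itself. Sketch:

```
import torch

if torch.cuda.is_available():
    x = torch.empty(1024, 1024, device='cuda')
    del x
    # The freed block stays in PyTorch's caching allocator; this call
    # returns it to the driver so other applications can use it.
    torch.cuda.empty_cache()
```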
Yongjik Kim
dd5c195646 More documentation for CUDA stream functions. (#4756) 2018-01-21 12:58:51 +01:00
Tongzhou Wang
5918243b0c Methods for checking CUDA memory usage (#4511)
* gpu mem allocated

* add test

* addressed some of @apaszke's comments

* cache stats

* add more comments about test
2018-01-09 11:47:48 -05:00
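A sketch using two of the introspection functions this change added:

```
import torch

if torch.cuda.is_available():
    x = torch.empty(1024, 1024, device='cuda')
    print(torch.cuda.memory_allocated())      # bytes held by live tensors
    print(torch.cuda.max_memory_allocated())  # high-water mark so far
```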
SsnL
bb1b826cdc Exposing emptyCache from allocator (#3518)
* Add empty_cache binding

* cuda.empty_cache document

* update docs
2017-11-07 17:00:38 -05:00
Kaixhin
5de7f9e731 Tidy up CUDA notes 2017-11-05 14:42:06 +01:00
Kai Arulkumaran
a7c5be1d45 Document CUDA best practices (#3227) 2017-10-25 22:38:17 +02:00
Sang-gil Lee
42448cf07f Fix to make the sample code executable as-is in "Extending PyTorch" (#2621) 2017-09-05 10:19:49 -04:00
Gabriel Bianconi
cdae579c22 Fix typos in "Extending PyTorch" (#2558) 2017-08-29 09:39:29 -04:00
Soumith Chintala
b079469af0 self -> ctx in Extending note 2017-08-25 07:19:20 -04:00
Adam Paszke
4599c0c7df Update autograd notes (#2295) 2017-08-05 05:18:05 +05:30
ngimel
66bbe5d75a .creator -> .grad_fn in the code example (#2171) 2017-07-21 14:43:16 -04:00
brett koonce
16dd997239 Spelling tweaks for documentation (#2114) 2017-07-15 13:16:32 -07:00
yunjey
52a9367fa7 Fix minor typo (#2100)
Fixed minor typo in Autograd mechanics docs.
2017-07-14 10:20:13 -04:00
Hungryof
73128f7b08 fix minor typos (#2051)
* Update extending.rst

fix typo

* Update cuda.rst

fix typo
2017-07-11 11:01:41 -04:00
Soumith Chintala
ee1b7b50b3 fix docs for broadcast warning 2017-06-26 14:50:57 -04:00
Sam Gross
9c53c6dcb9 Fix errors and warnings when building docs (#1806) 2017-06-14 13:50:14 -04:00
Gregory Chanan
1ef4cc1591 Incorporate review comments:
1) Line up trailing dimensions in broadcast docs.
2) remove unnecessary expand_as in common_nn test.
3) use view in tensor_str instead of resize_.
4) remove the raiseErrors change from newExpand.
5) clarify expandedSizes/expandedStrides parameters in inferExpandGeometry.
6) simplify inferSize2/inferSizeN implementations.
7) use new-style classes for warning.
2017-06-11 05:37:59 -04:00
Gregory Chanan
deec86cc05 Clarify a number of comments. 2017-06-11 05:37:59 -04:00
Gregory Chanan
5e1a714386 Add backwards incompatibility docs. 2017-06-11 05:37:59 -04:00
Gregory Chanan
cd35091d9b Include simple broadcasting example and demonstrate lining up trailing dimensions. 2017-06-11 05:37:59 -04:00
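The trailing-dimension rule: read the two sizes right to left; each pair of dimensions must be equal, or one of them 1, or one missing. Sketch:

```
import torch

x = torch.empty(5, 1, 4, 1)
y = torch.empty(   3, 1, 1)
# Line up trailing dimensions, right to left:
#   x: 5 x 1 x 4 x 1
#   y:     3 x 1 x 1
print((x + y).size())  # torch.Size([5, 3, 4, 1])
```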
Gregory Chanan
471dfe9791 Add documentation including links to numpy broadcasting semantics. 2017-06-11 05:37:59 -04:00
Bubble
2ce5875a4d Modify the sample code of extending autograd (#1720)
The original input cannot be used as input to Linear(), because forward() takes at least 3 arguments (2 given).
2017-06-05 23:36:58 -04:00
Bubble
447fe953e5 Modify the sample code of volatile (#1694)
The original two inputs (torch.randn(5,5)) cannot be used as input to a ResNet, which expects shape (batch, channels, height, width).
2017-06-01 09:46:04 -04:00
Edward Z. Yang
2f4bf4ab39 Rewrite 'How autograd encodes the history' to accurately describe current setup. (#1580)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-17 19:21:20 -04:00
Soumith Chintala
aa506fa4d7 fix docs typo 2017-04-05 23:42:02 -04:00
Du Phan
86e40ed875 Fix a typo in docs about pinned memory buffers (#1023)
* remove misleading guide for BCELoss

* fix docs about pinned memory buffers
2017-03-17 05:08:03 -04:00
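The corrected guidance pairs pinned (page-locked) host buffers with asynchronous copies; a sketch:

```
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
# pin_memory=True returns batches in page-locked host memory,
# enabling faster, asynchronous host-to-GPU copies.
loader = DataLoader(dataset, batch_size=8, pin_memory=True)

if torch.cuda.is_available():
    for x, y in loader:
        x = x.cuda(non_blocking=True)  # async copy from pinned memory
        y = y.cuda(non_blocking=True)
```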