Adam Paszke
80a827d3da
Fix data_parallel bugs
2016-11-23 18:48:41 +01:00
Sam Gross
d40a7bf9eb
Fix Scatter.backward() (#232)
2016-11-18 13:58:09 -05:00
Adam Paszke
e51d0bef97
Add cuDNN bindings for 2D transposed convolution
2016-11-17 14:34:40 -08:00
Adam Paszke
56fc639c9f
Fix no bias mode of autogenerated THNN function
2016-11-16 15:32:18 -08:00
Adam Lerer
7f51af7cbc
Add dropout, bidirectional support, etc. to RNN (#214)
2016-11-10 13:25:14 -05:00
Soumith Chintala
469dce4a2d
Skip test_scatter_gpu when CUDA is unavailable
2016-11-05 20:10:07 -04:00
Sam Gross
15377ac391
Copy Module._buffers in nn.parallel.replicate (#180)
2016-10-31 12:12:29 -04:00
Sam Gross
f2d7e94948
Use torch.Size for Tensor sizes and tuple for strides
...
See issue #20
The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
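A minimal usage sketch of the distinction described above (hedged: written against the Python API as it stood after this commit; the variable names are illustrative):
    import torch

    x = torch.randn(2, 3)
    s = x.size()                  # torch.Size([2, 3]) -- a tuple subclass
    assert isinstance(s, tuple)   # still usable anywhere a tuple is expected
    y = torch.Tensor(s)           # a torch.Size argument is read as a shape: new 2x3 tensor
    z = torch.Tensor([2.0, 3.0])  # a plain sequence is read as data: 1-D tensor holding 2 and 3
    strides = x.stride()          # strides remain a plain tuple, e.g. (3, 1)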
Adam Paszke
e2458bce97
Add Parameter class to nn
2016-10-27 22:31:36 +02:00
Adam Lerer
b5d13296c6
addressing comments
2016-10-23 21:11:22 -07:00
Adam Lerer
86288265ad
Adding rnn cell library
2016-10-23 20:23:48 -07:00
Adam Lerer
a559d94a44
docs and such
2016-10-23 20:23:48 -07:00
Adam Lerer
1eb6870853
add nobias option to rnn
2016-10-23 20:23:48 -07:00
Adam Lerer
942ca477a6
Copying weights for cuDNN
2016-10-23 20:23:48 -07:00
Sam Gross
98f67e90d5
Fix super call in Container.modules and Container.parameters (#142)
2016-10-19 13:21:03 -04:00
Sam Gross
fee67c2e1a
Allow parameters and child modules to be assigned by attribute (#136)
...
For example:
self.linear = nn.Linear(10, 20)
self.weight = torch.autograd.Variable(torch.Tensor(10, 20))
2016-10-18 23:34:20 +02:00
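A slightly fuller sketch of the same pattern (hedged: MyModule is illustrative, and the base class and Variable-based parameter reflect the nn API of this era, before the Parameter class added later in this log):
    import torch
    import torch.nn as nn
    from torch.autograd import Variable

    class MyModule(nn.Module):
        def __init__(self):
            super(MyModule, self).__init__()
            self.linear = nn.Linear(10, 20)               # registered as a child module by assignment
            self.weight = Variable(torch.Tensor(10, 20))  # registered as a parameter by assignment

    m = MyModule()
    params = list(m.parameters())  # includes self.weight together with the Linear layer's weight and bias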
soumith
59c628803a
fixing padding_idx option
2016-10-14 15:05:21 -07:00
Adam Paszke
518cb6ec7c
Allow specifying output size in MaxUnpooling
2016-10-10 20:51:15 -07:00
Adam Paszke
34bcd4c237
Rename FullConv to ConvTranspose and allow specifying output size
2016-10-10 20:51:15 -07:00
Adam Paszke
a22af69335
Add versioning and shared storage handling to autograd (#105)
2016-10-06 17:12:58 -04:00
Adam Lerer
1213149a2f
add bias option to linear; allow modules to return nested lists/tuples of tensors (#106)
2016-10-06 15:59:12 -04:00
Adam Paszke
3cbe66ba8c
Change requires_grad default to False
2016-10-05 08:46:34 -07:00
Adam Paszke
6efefac2df
Add parameter_dict and load_parameter_dict methods for modules
2016-10-04 14:47:56 -07:00
Sam Gross
f4ebc65a12
Add Module.modules() and Module.children() (#90)
...
modules(): returns an iterator over all modules in the network
children(): returns an iterator over immediate children
Also fix __getitem__ in Sequential
2016-10-01 21:18:53 -04:00
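A short sketch of the difference between the two iterators (hedged: the nested layout is illustrative):
    import torch.nn as nn

    net = nn.Sequential(
        nn.Linear(10, 20),
        nn.Sequential(nn.ReLU(), nn.Linear(20, 5)),
    )

    children = list(net.children())  # only the two immediate children of net
    modules = list(net.modules())    # all modules in the network, including net itself and the nested ReLU and Linear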
Adam Paszke
c8a4734b97
Add RReLU to both nn packages
2016-09-29 11:33:34 -07:00
Adam Paszke
2d8c2972ae
Only allow leaf variables as module parameters
2016-09-29 11:31:26 -07:00
Sam Gross
f5a6a3b0e9
Fix torch.nn.Module._apply with None types (#66)
2016-09-28 19:31:07 -04:00
Sam Gross
bab7f89cdc
Fix no_bias constructor for conv2d (#65)
2016-09-28 19:30:43 -04:00
Soumith Chintala
412019dbe4
fixing CPU builds by making CUDA imports optional
2016-09-28 11:56:18 -04:00
Adam Paszke
7f4ff0e615
Fix type conversions in nn
2016-09-27 15:45:49 -07:00
Adam Paszke
3eac7164f4
Add data parallel functions to nn
2016-09-27 15:45:45 -07:00
Sam Gross
44481354fc
Add back support for child=None in Container constructor (#55)
...
It's often useful to have optional child modules, such as the
downsampling operation in ResNets. Add a test for this case:
nn.Container(
child=None,
)
2016-09-26 17:18:02 -04:00
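A hypothetical usage sketch of such an optional child (names and shapes are illustrative, not taken from the commit):
    import torch.nn as nn

    # A residual-style block whose downsampling branch is only sometimes present.
    block = nn.Container(
        conv=nn.Conv2d(16, 32, 3),
        downsample=None,  # optional child; accepted again without error after this fix
    )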
Sam Gross
980300b381
Combine autograd.Leaf and autograd.Variable (#52)
...
Prior to this change, there was a circular reference between Leaf and
Variable. This meant that the objects (and the Tensors they referenced) were not
collected as soon as they went out of scope, which led to higher memory
usage and out-of-memory errors.
2016-09-25 20:21:14 -04:00
Adam Paszke
4cdeae3283
Return only unique variables from parameters()
2016-09-25 12:23:43 -07:00
Adam Paszke
8fdec15a55
Codemod to remove camel case method naming
2016-09-20 08:40:28 -07:00
Adam Paszke
d1fda539b7
Fix nn serialization errors
2016-09-15 19:28:34 -07:00
Adam Paszke
fb39971464
Add more modules to nn
2016-09-14 11:05:56 -07:00
Sam Gross
cd0929aa5e
Use chainer-style constructor for Conv2d
...
* Conv2d, MaxPool2d, and AvgPool2d take one argument for each of ksize,
stride, and pad. Each of these arguments can be either a single number or an
(h, w) tuple.
2016-09-07 15:51:44 -07:00
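A brief sketch of the two equivalent call forms (hedged: channel counts are illustrative, and the arguments are passed positionally to avoid pinning down keyword names):
    import torch.nn as nn

    conv_a = nn.Conv2d(3, 16, 3, 1, 1)                 # single numbers: square kernel, stride, padding
    conv_b = nn.Conv2d(3, 16, (3, 3), (1, 1), (1, 1))  # the same layer with explicit (h, w) tuples
    pool = nn.MaxPool2d((2, 2))                        # pooling layers follow the same convention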
Sam Gross
b738b09606
Clean up Module forward and __call__ (#14)
...
* _forward is renamed forward since users should override it
* some __call__ overrides are changed to forward
* functions which return a single variable are changed to return that
variable instead of a one-element tuple
2016-09-07 15:41:39 -04:00
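A minimal sketch of the resulting convention (hedged: Net is illustrative and the call is written against the present-day API; in this era the input would be wrapped in a Variable):
    import torch
    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.linear = nn.Linear(10, 5)

        def forward(self, x):   # users override forward, not _forward or __call__
            return self.linear(x)

    net = Net()
    out = net(torch.randn(4, 10))  # __call__ dispatches to forward and returns a single variable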
Adam Paszke
774a6f1093
Add in-place operations to autograd and nn
2016-08-25 09:34:54 -07:00
Adam Paszke
24476090df
Add volatile variables
2016-08-24 08:43:11 -07:00
Adam Paszke
ea93fb7ac0
Add more nn modules
2016-08-23 19:15:21 -07:00
Adam Paszke
2bf68e72d5
Add hook system to autograd and nn
2016-08-23 13:51:34 -07:00
Adam Paszke
d467a068c2
Add tests for new modules
2016-08-19 14:57:01 -07:00