Sam Gross
18a3c62d9b
Allow NoneType for parameters in Module.load_state_dict
2016-12-01 20:12:15 +01:00
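The two commits above established the `state_dict()` / `load_state_dict()` pair on `Module`. A minimal round-trip sketch, written against the current torch API (which kept these names), might look like:

```python
import torch
import torch.nn as nn

# Build two identically-shaped modules with different random weights.
src = nn.Linear(4, 2)
dst = nn.Linear(4, 2)

# state_dict() maps parameter names to tensors; load_state_dict()
# copies them into an existing module with matching shapes.
dst.load_state_dict(src.state_dict())

# Both modules now hold equal weights.
print(torch.equal(src.weight, dst.weight))  # True
```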
Adam Paszke
2e24da2a0b
Change parameter_dict to state_dict in torch.nn
2016-11-23 18:48:41 +01:00
Soumith Chintala
26d626a47c
adding docs for loss functions, container, module and fix typos
2016-11-17 15:11:27 -05:00
Adam Paszke
78c1094d93
Don't override __call__ in modules
2016-11-16 15:32:18 -08:00
Soumith Chintala
28e3f07b63
adding apply function
2016-11-07 16:17:49 -05:00
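The `apply` function added here calls a given function on every submodule recursively, which is handy for weight initialization. A sketch against the current torch API:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 1))

def zero_bias(m):
    # apply() visits every submodule (and the container itself).
    if isinstance(m, nn.Linear):
        m.bias.data.zero_()

net.apply(zero_bias)
print(net[0].bias.sum().item())  # 0.0
```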
Adam Paszke
b4f4cca875
Rename training and evaluation methods
2016-10-30 00:16:06 +02:00
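The training/evaluation methods renamed here ended up as `train()` and `eval()`, which flip the `training` flag on a module and all of its children. A brief sketch of the behaviour as it exists in current torch:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))

net.train()  # training-mode behaviour: dropout is active
print(net.training)      # True
net.eval()   # evaluation mode: dropout becomes a no-op
print(net[1].training)   # False (the flag propagates to children)
```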
Adam Paszke
e2458bce97
Add Parameter class to nn
2016-10-27 22:31:36 +02:00
Adam Paszke
30be715900
Add training and evaluation to torch.nn
2016-10-24 22:29:43 +02:00
Adam Lerer
b5d13296c6
addressing comments
2016-10-23 21:11:22 -07:00
Adam Lerer
f88c3e9c12
fix some missing features in pytorch needed for RNNs
2016-10-23 20:23:48 -07:00
Sam Gross
fee67c2e1a
Allow parameters and child modules to be assigned by attribute (#136)
For example:
self.linear = nn.Linear(10, 20)
self.weight = torch.autograd.Variable(torch.Tensor(10, 20))
2016-10-18 23:34:20 +02:00
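Expanding the commit's own example: attribute assignment is what registers parameters and child modules with the parent. A sketch using the current torch API (where `nn.Parameter` has replaced the `torch.autograd.Variable` wrapper shown in the original message):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain attribute assignment registers the child module...
        self.linear = nn.Linear(10, 20)
        # ...and assigning an nn.Parameter registers it as a parameter.
        self.weight = nn.Parameter(torch.zeros(10, 20))

net = Net()
# linear.weight, linear.bias, and self.weight are all picked up.
print(len(list(net.parameters())))  # 3
```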
Adam Paszke
a22af69335
Add versioning and shared storage handling to autograd (#105)
2016-10-06 17:12:58 -04:00
Adam Lerer
1213149a2f
add bias option to linear; allow modules to return nested lists/tuples of tensors (#106)
2016-10-06 15:59:12 -04:00
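The bias flag added to `Linear` here survives unchanged in current torch; a minimal sketch:

```python
import torch.nn as nn

with_bias = nn.Linear(10, 20)            # bias=True is the default
no_bias = nn.Linear(10, 20, bias=False)  # drop the additive bias term

print(with_bias.bias is not None)  # True
print(no_bias.bias)                # None
```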
Adam Paszke
3cbe66ba8c
Change requires_grad default to False
2016-10-05 08:46:34 -07:00
Adam Paszke
6efefac2df
Add parameter_dict and load_parameter_dict methods for modules
2016-10-04 14:47:56 -07:00
Sam Gross
f4ebc65a12
Add Module.modules() and Module.children() (#90)
modules(): returns an iterator over all modules in the network
children(): returns an iterator over immediate children
Also fix __getitem__ in Sequential
2016-10-01 21:18:53 -04:00
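The distinction the commit body draws between the two iterators can be sketched against the current torch API, where both methods still exist:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 4), nn.Sequential(nn.Linear(4, 2)))

# children() yields only the immediate submodules...
print(len(list(net.children())))  # 2
# ...while modules() walks the whole tree, including the root itself.
print(len(list(net.modules())))   # 4
# The fixed Sequential.__getitem__ allows indexing as well.
print(type(net[0]).__name__)      # Linear
```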
Adam Paszke
2d8c2972ae
Only allow leaf variables as module parameters
2016-09-29 11:31:26 -07:00
Sam Gross
cb5d4e836f
Lazy load CUDA and THNN modules (#64)
2016-09-28 19:29:53 -04:00
Adam Paszke
7f4ff0e615
Fix type conversions in nn
2016-09-27 15:45:49 -07:00
Adam Paszke
f9d25e8e72
Refactor nn (require specifying parameters explicitly)
2016-09-27 15:22:26 -07:00
Adam Paszke
4cdeae3283
Return only unique variables from parameters()
2016-09-25 12:23:43 -07:00
Adam Paszke
eefa0c7b40
Require torch.nn.cuda automatically when calling .cuda()
2016-09-23 18:06:26 -07:00
Adam Paszke
8fdec15a55
Codemod to remove camel case method naming
2016-09-20 08:40:28 -07:00
Adam Paszke
fb39971464
Add more modules to nn
2016-09-14 11:05:56 -07:00
Sam Gross
b738b09606
Clean up Module forward and __call__ (#14)
* _forward is renamed forward since users should override it
* some __call__ overrides are changed to forward
* function which return a single variable are changed to return that
variable instead of a one-element tuple
2016-09-07 15:41:39 -04:00
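The convention this commit settles on is still how torch modules work today: users override `forward`, and invoking the module goes through `__call__`, which dispatches to `forward`. A sketch:

```python
import torch
import torch.nn as nn

class Double(nn.Module):
    # Users override forward(); it returns a single tensor,
    # not a one-element tuple.
    def forward(self, x):
        return 2 * x

m = Double()
out = m(torch.ones(3))  # call the module, not m.forward(), so
                        # __call__ machinery (e.g. hooks) still runs
print(out.tolist())     # [2.0, 2.0, 2.0]
```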
Adam Paszke
774a6f1093
Add in-place operations to autograd and nn
2016-08-25 09:34:54 -07:00
Adam Paszke
ff785e5f17
Make optimizers accept a closure
2016-08-25 09:23:39 -07:00
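The closure interface introduced here is still present in `torch.optim`: `step()` optionally takes a callable that re-evaluates the loss, which optimizers like LBFGS invoke repeatedly during a line search. A sketch with SGD, which simply calls the closure once:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(2, 1)
opt = optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 2), torch.randn(4, 1)

def closure():
    # The optimizer may call this several times per step.
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    return loss

loss = opt.step(closure)  # step() accepts an optional closure
print(loss.item() >= 0)   # True
```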
Adam Paszke
ea93fb7ac0
Add more nn modules
2016-08-23 19:15:21 -07:00
Adam Paszke
7bcb2a4081
Initial optim version
2016-08-23 19:03:30 -07:00
Adam Paszke
2bf68e72d5
Add hook system to autograd and nn
2016-08-23 13:51:34 -07:00
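The hook system added here grew into the `register_forward_hook` / `register_backward_hook` family on `Module`. A sketch of a forward hook against the current torch API:

```python
import torch
import torch.nn as nn

seen = []

def record(module, inputs, output):
    # Forward hooks fire after each forward pass of the module.
    seen.append(output.shape)

m = nn.Linear(3, 5)
handle = m.register_forward_hook(record)
m(torch.randn(2, 3))
handle.remove()  # hooks can be detached again via their handle

print(seen)  # [torch.Size([2, 5])]
```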
Adam Paszke
e055ffbdc7
Add nn
2016-08-19 14:56:55 -07:00