Commit Graph

28 Commits

Author SHA1 Message Date
Vishwak Srinivasan
89acc10f85 Adding description for Optimizers (#4371) 2017-12-28 16:55:52 +01:00
Sam Gross
d605058212 Replace Variable.volatile with torch.no_grad() (#3970)
This removes volatile from Variable. The functionality is mostly
replaced by a global (thread-local) flag, which is controlled by
torch.set_grad_enabled() and the context manager torch.no_grad().

In C++, the flag is exposed through GradMode::is_enabled() and GradMode::set_enabled()

Fixes #3627
2017-12-18 15:46:13 -05:00
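
A minimal sketch of the replacement API this commit describes, using present-day PyTorch semantics (the model and input below are placeholders):

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)
    x = torch.randn(3, 4)

    # Run inference without building the autograd graph, replacing the old
    # Variable(..., volatile=True) pattern.
    with torch.no_grad():
        y = model(x)
    print(y.requires_grad)  # False

    # The same thread-local flag can also be toggled directly.
    torch.set_grad_enabled(False)
    z = model(x)
    torch.set_grad_enabled(True)
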
Adam Paszke
af9fd35d82 Cast tensors when loading optimizer state dicts (#3658) 2017-11-28 09:56:39 -05:00
Taehoon Lee
5d9de014bd Fix typos 2017-10-01 03:09:25 -04:00
randxie
1a83c372ec address issue #1488 by using defaultdict in load_state_dict 2017-09-20 14:56:21 -04:00
Michael Dietz
e69063405e Allow param groups to be added to Optimizer dynamically (#2374) 2017-08-30 11:20:58 -04:00
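
For reference, a hedged sketch of dynamically adding a param group via Optimizer.add_param_group (the backbone/head split is illustrative):

    import torch.nn as nn
    import torch.optim as optim

    backbone = nn.Linear(8, 8)
    head = nn.Linear(8, 2)

    # Start with only the backbone parameters...
    optimizer = optim.SGD(backbone.parameters(), lr=0.01)

    # ...and attach the head later with its own hyperparameters.
    optimizer.add_param_group({'params': head.parameters(), 'lr': 0.001})
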
Yan Wang
a76098ac15 fix optimizer when given single parameters (instead of an iterable)
When using named_parameters() to set the lr and weight decay per parameter, a bug occurs because each value returned by named_parameters() is a single torch.nn.parameter.Parameter, not a generator of Parameters.
2017-06-05 23:47:56 -04:00
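
A small sketch of the case this commit handles, passing a single Parameter rather than an iterable inside each group (names and values are illustrative):

    import torch.nn as nn
    import torch.optim as optim

    model = nn.Linear(4, 2)

    # Build per-parameter groups from named_parameters(); each value is a
    # single nn.Parameter, which the optimizer accepts inside a group.
    groups = [
        {'params': param, 'lr': 0.01 if 'bias' in name else 0.1}
        for name, param in model.named_parameters()
    ]
    optimizer = optim.SGD(groups, lr=0.1, momentum=0.9)
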
Adam Paszke
feef54ec34 Don't modify non-volatile grads in zero_grad 2017-05-10 16:43:14 +02:00
Adam Paszke
20aa5b066f Convert some of the functions to new format
Also, fix a lot of issues that appeared after the previous commits.
2017-05-01 16:44:56 -04:00
Adam Paszke
2ca787fcf4 Refactor attribute names in autograd 2017-05-01 16:44:56 -04:00
Martin Raison
f17cfe4293 sparse tensor operations (#735) 2017-03-03 18:37:03 +01:00
Adam Paszke
bd7a5ad6f0 Make Optimizer.load_state_dict use __setstate__ 2017-02-26 20:02:42 +01:00
Luke Yeager
3ed720079e [pep8] Fix most remaining lint manually 2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
ecfcf39f30 Improve optimizer serialization
Also, add optimizer.load_state_dict
2017-01-24 17:30:50 -05:00
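
A hedged sketch of the serialization round trip this commit enables, using the current torch.optim API (the file name is illustrative):

    import torch
    import torch.nn as nn
    import torch.optim as optim

    model = nn.Linear(4, 2)
    optimizer = optim.Adam(model.parameters(), lr=1e-3)

    # Export the optimizer state (per-parameter buffers and param groups)...
    torch.save(optimizer.state_dict(), 'optim.pt')

    # ...and restore it into a freshly constructed optimizer.
    new_optimizer = optim.Adam(model.parameters(), lr=1e-3)
    new_optimizer.load_state_dict(torch.load('optim.pt'))
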
Adam Paszke
3238786ea1 Improve optimizer error messages 2017-01-22 18:32:51 -05:00
Adam Paszke
95f0fa8a92 Change .grad attribute of Variables to be a Variable 2017-01-16 12:59:47 -05:00
Adam Paszke
676ffee542 Check params type in optimizers 2017-01-16 12:59:47 -05:00
Adam Paszke
604e13775f Add optim docs 2017-01-16 12:59:47 -05:00
Sam Gross
162170fd7b Add optional weight decay to optim.SGD (#269) 2016-11-29 20:35:40 -05:00
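
For context, a minimal sketch of the optional weight decay argument (the model and values are placeholders):

    import torch.nn as nn
    import torch.optim as optim

    model = nn.Linear(4, 2)

    # weight_decay applies L2 regularization as part of the SGD update.
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
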
Adam Paszke
09493603f6 Change optimizer API 2016-11-08 18:12:56 +01:00
Adam Paszke
df59b89fbb Add more optimizers 2016-11-07 22:50:56 +01:00
Adam Paszke
3cbe66ba8c Change requires_grad default to False 2016-10-05 08:46:34 -07:00
Adam Paszke
99de537a2e Remove CUDA sync points from losses and trainer 2016-10-05 08:46:31 -07:00
Adam Paszke
4db6667923 Allow specifying per-parameter optimization parameters 2016-10-04 18:21:50 -07:00
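
A brief sketch of per-parameter options as exposed in the current API (the two-layer model is a placeholder):

    import torch.nn as nn
    import torch.optim as optim

    model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))

    # Each dict is its own parameter group; options omitted from a group
    # fall back to the keyword defaults given to the constructor.
    optimizer = optim.SGD(
        [
            {'params': model[0].parameters()},
            {'params': model[1].parameters(), 'lr': 0.001, 'weight_decay': 1e-4},
        ],
        lr=0.01,
        momentum=0.9,
    )
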
Adam Paszke
58b134b793 Allow exporting optimizer state as a dict 2016-10-04 17:33:49 -07:00
Adam Paszke
ff785e5f17 Make optimizers accept a closure 2016-08-25 09:23:39 -07:00
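
A short sketch of the closure form of step() that this commit introduces; LBFGS is the typical consumer, and the model, data, and lr below are illustrative:

    import torch
    import torch.nn as nn
    import torch.optim as optim

    model = nn.Linear(4, 1)
    criterion = nn.MSELoss()
    x, y = torch.randn(16, 4), torch.randn(16, 1)

    optimizer = optim.LBFGS(model.parameters(), lr=0.1)

    def closure():
        # Re-evaluates the model and returns the loss so the optimizer can
        # call it several times within a single step.
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        return loss

    optimizer.step(closure)
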
Adam Paszke
7bcb2a4081 Initial optim version 2016-08-23 19:03:30 -07:00