Tongzhou Wang
5ed75ec1d7
Fix SparseAdam consuming iterator ( #86210 )
...
Fixes https://github.com/pytorch/pytorch/issues/86209
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86210
Approved by: https://github.com/cpuhrsch
2022-10-06 23:11:25 +00:00
ProGamerGov
71d50f4f89
Change docstring type callable to Callable for consistency ( #82487 )
...
### Description
Across PyTorch's docstrings, both `callable` and `Callable` are used for variable types. `Callable` should be capitalized, as we are referring to the `Callable` type and not the Python `callable()` function.
### Testing
There shouldn't be any testing required.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82487
Approved by: https://github.com/albanD
2022-08-01 17:26:09 +00:00
Sudarshan Raghunathan
52aae5aa19
[Sparse Adam] Fix error in loading serialized models due to introduction of new parameter ( #82273 )
...
### Description
PR #80336 introduced a new parameter to the Sparse Adam optimizer, which is accessed inside the optimizer's `step` method. If we deserialize and run an optimizer that was serialized before this change was introduced, it fails when `step` tries to access the missing parameter.
I have added a workaround to set a default value in case the parameter is unavailable in the optimizer.
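A minimal sketch of the workaround described above (hypothetical helper, not the exact diff): a param group serialized before #80336 has no `maximize` entry, so fall back to a default instead of raising a `KeyError`.
```python
def get_maximize(param_group: dict) -> bool:
    # Param groups saved before #80336 have no 'maximize' key.
    return param_group.get('maximize', False)

old_group = {'lr': 1e-3, 'betas': (0.9, 0.999), 'eps': 1e-8}  # pre-#80336 state
assert get_maximize(old_group) is False   # default instead of KeyError
```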
### Testing
* Testing on PyTorch CI
* Manual validation against existing serialized models to make sure they continue to work
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82273
Approved by: https://github.com/mehtanirav , https://github.com/albanD
2022-07-27 12:48:38 +00:00
Rob Zinkov
f24c94d7ae
Adding maximize to SparseAdam ( #80336 )
...
Added the maximize flag (#68052) to the SparseAdam optimizer and updated the respective tests.
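A short usage sketch (illustrative, not from the PR): with `maximize=True` the optimizer performs gradient ascent on the objective, conceptually equivalent to negating the gradient before the usual update.
```python
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4, sparse=True)    # SparseAdam expects sparse gradients
opt = torch.optim.SparseAdam(emb.parameters(), lr=0.1, maximize=True)

reward = emb(torch.tensor([0, 3])).sum()  # quantity we want to increase
reward.backward()
opt.step()                                # ascends rather than descends
```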
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80336
Approved by: https://github.com/albanD
2022-07-08 12:17:27 +00:00
anjali411
bda04e9f5e
Add __all__ for torch.optim and torch.nn.modules modules ( #80237 )
...
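An illustrative sketch of what declaring `__all__` does (hypothetical module, not the PR's diff): it makes the public surface explicit, so `from module import *` only exports the listed names and tools treat everything else as private.
```python
# contents of a hypothetical mymodule.py
__all__ = ['SparseAdam']

class SparseAdam:            # exported by `from mymodule import *`
    pass

def _single_tensor_step():   # not in __all__, treated as private
    pass
```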
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80237
Approved by: https://github.com/albanD
2022-06-24 21:34:10 +00:00
Ilqar Ramazanli
7c2938bf67
To refactor Sparse Adam algorithm for functional form ( #59171 )
...
Summary:
Adds Functional Interface for Sparse Adam Optimizer.
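The refactor moves the per-parameter update math out of `step()` into a stateless helper that receives all state as explicit arguments. Below is a simplified dense stand-in for that functional shape (hypothetical names and signature, not the actual `torch.optim._functional` API; the real sparse version restricts the moment updates to the rows present in the sparse gradient).
```python
from typing import List

from torch import Tensor


def adam_update(params: List[Tensor], grads: List[Tensor],
                exp_avgs: List[Tensor], exp_avg_sqs: List[Tensor],
                state_steps: List[int], *, lr: float, beta1: float,
                beta2: float, eps: float) -> None:
    # Stateless: every piece of optimizer state is passed in explicitly.
    for p, grad, exp_avg, exp_avg_sq, step in zip(
            params, grads, exp_avgs, exp_avg_sqs, state_steps):
        exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
        exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
        bias1 = 1 - beta1 ** step
        bias2 = 1 - beta2 ** step
        denom = (exp_avg_sq / bias2).sqrt().add_(eps)
        p.addcdiv_(exp_avg, denom, value=-lr / bias1)
```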
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59171
Reviewed By: vincentqb
Differential Revision: D29360582
Pulled By: iramazanli
fbshipit-source-id: 5ceffd7f4b7abd1e0b758a5b8445abdf5555eba0
2021-06-25 06:35:39 -07:00
Samuel Marks
e6779d4357
[*.py] Rename "Arguments:" to "Args:" ( #49736 )
...
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.
```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args: 1095
Arguments: 0336
```
It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:
- https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md )
- https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md )
- https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst )
Therefore, only `Args:` is valid. This PR replaces `Arguments:` with `Args:` throughout the codebase.
PS: For related PRs, see tensorflow/tensorflow/pull/45420
PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch ) organisation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736
Reviewed By: albanD
Differential Revision: D25710534
Pulled By: soumith
fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
mariosasko
f2c3efd51f
Fix generator exhaustion in SparseAdam ( #47724 )
...
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47594
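The linked issue concerns passing a generator such as `model.parameters()`: the sparse-parameter check added in #43668 iterated over it, leaving the base `Optimizer` with an empty parameter list. A sketch of the general fix pattern (hypothetical helper, not the exact diff):
```python
def validate_params(params):
    params = list(params)   # materialize once, so the check below cannot
                            # exhaust a generator such as model.parameters()
    if any(getattr(p, 'is_sparse', False) for p in params):
        raise ValueError('SparseAdam requires dense parameter tensors')
    return params
```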
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47724
Reviewed By: heitorschueroff
Differential Revision: D25304131
Pulled By: albanD
fbshipit-source-id: 67c058b0836b9b4fba4f7b966396e4f3fa61f939
2020-12-07 09:38:07 -08:00
Randall Hunt
24eea364f7
Check SparseAdam params are dense on init ( #41966 ) ( #43668 )
...
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41966
Raises a `ValueError` if a user attempts to create a SparseAdam optimizer with sparse parameter tensors.
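An illustrative check of the behavior described above (not the PR's test): constructing SparseAdam with a sparse parameter tensor is rejected at init time.
```python
import torch

sparse_param = torch.eye(3).to_sparse().requires_grad_()
try:
    torch.optim.SparseAdam([sparse_param], lr=1e-3)
except ValueError as err:
    print('rejected:', err)
```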
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43668
Reviewed By: glaringlee
Differential Revision: D23388109
Pulled By: ranman
fbshipit-source-id: 1fbcc7527d49eac6fae9ce51b3307c609a6ca38b
2020-09-01 14:25:59 -07:00
albanD
6e2bb1c054
End of the .data removal in torch/optim ( #34211 )
...
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34211
Test Plan: Imported from OSS
Differential Revision: D20248684
Pulled By: albanD
fbshipit-source-id: 2294bfa41b82ff47f000bc98860780f59d7d4421
2020-03-09 06:40:39 -07:00
Eleanor Dwight Holland
6a97777f72
Remove use of .data from optimizers ( #33640 )
...
Summary:
Removes all uses of `.data` from optimizers.
Or tries to.
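A sketch of the pattern behind the `.data` removal (illustrative, not the exact diff): instead of mutating `p.data` to sidestep autograd, perform the in-place update on the parameter itself under `torch.no_grad()`.
```python
import torch

p = torch.randn(3, requires_grad=True)
p.grad = torch.ones(3)

# old style: p.data.add_(p.grad, alpha=-0.1)
with torch.no_grad():
    p.add_(p.grad, alpha=-0.1)   # same update, without bypassing autograd's view of p
```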
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33640
Reviewed By: vincentqb
Differential Revision: D20203216
Pulled By: albanD
fbshipit-source-id: 9bfe78bbed00fd4aaa690801cff0201f0bd680a0
2020-03-03 13:21:55 -08:00
Vitaly Fedyunin
877c96cddf
explicitly provide memory format when calling to *_like operators
...
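An illustrative call of what the title describes (not the specific diff): passing `memory_format` explicitly when using a `*_like` factory.
```python
import torch

p = torch.randn(8, 3, 32, 32)
state = torch.zeros_like(p, memory_format=torch.preserve_format)
```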
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30008
Test Plan: Imported from OSS
Differential Revision: D18575981
Pulled By: VitalyFedyunin
fbshipit-source-id: ec3418257089ad57913932be1a8608cd20ce054c
2019-11-19 16:19:29 -08:00
Soumith Chintala
cf235e0894
fix lint after new flake8 release added new style constraints ( #13047 )
...
Summary:
fix lint after new flake8 release added new style constraints
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13047
Differential Revision: D10527804
Pulled By: soumith
fbshipit-source-id: 6f4d02662570b6339f69117b61037c8394b0bbd8
2018-10-24 09:03:38 -07:00
Peter Goldsborough
fb4e8088f3
Remove methods that start with an underscore from at::Tensor ( #11152 )
...
Summary:
This PR cleans up the `at::Tensor` class by removing all methods that start with an underscore in favor of functions in the `at::` namespace. This greatly cleans up the `Tensor` class and makes it clearer what is the public and non-public API.
For this I changed `native_functions.yaml` and `Declarations.cwrap` to make all underscore methods `variant: function` (or add such a statement to begin with), and then fixed all code locations using the underscore methods.
ezyang colesbury gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11152
Differential Revision: D9683607
Pulled By: goldsborough
fbshipit-source-id: 97f869f788fa56639c05a439e2a33be49f10f543
2018-09-07 11:55:11 -07:00
lazypanda1
063946d2b3
Added parameter range checks for all optimizers ( #6000 )
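A sketch of the kind of range check this adds (hypothetical helper, not the exact diff): reject obviously invalid hyperparameters at construction time.
```python
def check_hyperparams(lr, betas, eps):
    if not 0.0 < lr:
        raise ValueError(f"Invalid learning rate: {lr}")
    if not 0.0 <= eps:
        raise ValueError(f"Invalid epsilon value: {eps}")
    for i, beta in enumerate(betas):
        if not 0.0 <= beta < 1.0:
            raise ValueError(f"Invalid beta parameter at index {i}: {beta}")
```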
2018-03-28 11:22:23 +02:00
Dr. Kashif Rasul
859a173502
fix AMSGrad for SparseAdam ( #4314 )
2017-12-30 13:00:17 +01:00
Dr. Kashif Rasul
68c0998cbe
added AMSgrad optimizer to Adam and SparseAdam ( #4034 )
...
* initial AMSGrad
* added test for amsgrad
* added amsgrad to adam
* fixed tests
* added option to sparse adam
* flake8
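A sketch of AMSGrad's key idea (illustrative, dense tensors, bias correction omitted): keep a running maximum of the second-moment estimate and use it in the denominator, so the effective step size never grows.
```python
import torch

def amsgrad_denom(exp_avg_sq: torch.Tensor, max_exp_avg_sq: torch.Tensor,
                  eps: float = 1e-8) -> torch.Tensor:
    torch.maximum(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)  # running max of v_t
    return max_exp_avg_sq.sqrt().add_(eps)                         # denominator for the step
```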
2017-12-18 13:24:49 -05:00
SsnL
f76d6c029c
Sparse Adam optimizer for sparse gradients ( #3137 )
...
* sparse adam
* Favor dense addition over sparse_mask
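A minimal usage sketch (illustrative, not from the PR): SparseAdam targets parameters whose gradients arrive as sparse tensors, the typical case being `nn.Embedding(..., sparse=True)`.
```python
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 16, sparse=True)
opt = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)

loss = emb(torch.tensor([1, 5, 7])).pow(2).sum()
loss.backward()     # emb.weight.grad is a sparse COO tensor
opt.step()          # only the rows touched by the batch are updated
```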
2017-11-06 14:20:51 -05:00