Aaron Orenstein
00ffeca1b1
PEP585 update - torch/distributed (#145164)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-21 04:23:29 +00:00
PyTorch MergeBot
6374332d33
Revert "PEP585 update - torch/distributed ( #145164 )"
...
This reverts commit 6cb186e279.
Reverted https://github.com/pytorch/pytorch/pull/145164 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing an inductor test ([comment](https://github.com/pytorch/pytorch/pull/145164#issuecomment-2602875679))
2025-01-20 16:46:46 +00:00
Aaron Orenstein
6cb186e279
PEP585 update - torch/distributed (#145164)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-20 00:19:01 +00:00
bobrenjc93
08be9ec312
Migrate from Tuple -> tuple in torch/distributed (#144258)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144258
Approved by: https://github.com/aorenste
2025-01-10 08:34:54 +00:00
Yifu Wang
3d26c08dda
Fix unintended deprecation warning in torch.distributed.optim (#140889)
...
We have a deprecation warning for the scripted functional optimizers at module level in `torch/distributed/optim/__init__.py`. However, not all optimizers exposed by the module are scripted functional optimizers, so importing the module produces false deprecation warnings (e.g. https://github.com/pytorch/pytorch/issues/139661).
This PR moves the deprecation warning to the `__init__` functions of the deprecated scripted functional optimizers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140889
Approved by: https://github.com/d4l3k, https://github.com/kwen2501, https://github.com/XilunWu
2024-11-18 02:34:51 +00:00
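A minimal sketch of the approach described in the commit above, with an assumed class name and warning message (not the PR's actual code): the warning is emitted only when a deprecated scripted functional optimizer is constructed, instead of at import time in `torch/distributed/optim/__init__.py`.
```
import warnings


class _ScriptedFunctionalAdamax:  # hypothetical stand-in for a deprecated optimizer
    def __init__(self, params, lr=0.002):
        # Warn at construction time rather than at module import time,
        # so users of the non-deprecated optimizers never see the warning.
        warnings.warn(
            "TorchScript-based functional optimizers are deprecated; "
            "consider using the standard torch.optim optimizers instead.",
            FutureWarning,
            stacklevel=2,
        )
        self.params = list(params)
        self.lr = lr
```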
Xuehai Pan
3b798df853
[BE][Easy] enable UFMT for torch/distributed/{fsdp,optim,rpc}/ (#128869)
...
Part of #123062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128869
Approved by: https://github.com/fegin
ghstack dependencies: #128868
2024-06-18 21:49:08 +00:00
Aaron Orenstein
7c12cc7ce4
Flip default value for mypy disallow_untyped_defs [6/11] (#127843)
...
See #127836 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127843
Approved by: https://github.com/oulgen
ghstack dependencies: #127842
2024-06-08 18:49:29 +00:00
Jon Chuang
f74d766632
feat(optim): use has_complex shortcut flag for all applicable optimizers, use _view_as_real auxiliary function (#110706)
...
Follow up to: https://github.com/pytorch/pytorch/pull/110607
CC: @lezcano @janeyx99
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110706
Approved by: https://github.com/lezcano
2023-10-31 20:33:03 +00:00
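A rough sketch of the pattern named in the commit above, under assumed names (`_view_as_real_sketch` is illustrative, not the helper added by the PR): a `has_complex` flag is computed once, and only when it is set are complex tensors replaced by real views before the update.
```
from typing import List

import torch


def _view_as_real_sketch(params: List[torch.Tensor],
                         *state_lists: List[torch.Tensor]) -> bool:
    # Compute the shortcut flag once; skip all per-tensor work if nothing is complex.
    has_complex = any(torch.is_complex(p) for p in params)
    if has_complex:
        for i, p in enumerate(params):
            if torch.is_complex(p):
                # Replace the complex tensor and its matching state tensors
                # with real views so the update math stays purely real.
                params[i] = torch.view_as_real(p)
                for state in state_lists:
                    state[i] = torch.view_as_real(state[i])
    return has_complex


# Example: parameters and their exp_avg state, one complex and one real tensor.
params = [torch.zeros(2, dtype=torch.complex64), torch.zeros(2)]
exp_avgs = [torch.zeros(2, dtype=torch.complex64), torch.zeros(2)]
print(_view_as_real_sketch(params, exp_avgs))  # True
```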
Justin Chu
232b96b6e2
[BE] Enable ruff's UP rules and autoformat distributed/ (#105433)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105433
Approved by: https://github.com/albanD
2023-07-19 14:27:11 +00:00
Aaron Gokaslan
8fce9a09cd
[BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
...
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: it removes explicit inheritance from `object` and removes unused `__future__` imports.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
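The two changes described above amount to roughly the following before/after (class name is illustrative):
```
# Before: Python 2 compatibility idioms that pyupgrade removes.
from __future__ import unicode_literals  # a no-op on Python 3


class FunctionalAdamax(object):  # explicit `object` base is redundant in Python 3
    pass


# After: the equivalent Python 3-only code.
class FunctionalAdamax:
    pass
```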
fduwjj
1a48ae96ba
[PT-D][Easy] Reformat the optim code within PTD code base (#90399)
...
Just run two commands:
```
ufmt format torch/distributed/optim/
ufmt format test/distributed/optim/
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90399
Approved by: https://github.com/awgu
2022-12-08 06:38:59 +00:00
anjali411
93912b1a73
Add __all__ to torch.distributed submodules (#80523)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80523
Approved by: https://github.com/rohan-varma
2022-07-11 06:54:24 +00:00
Rob Zinkov
2a496e2f80
Adding maximize to Adamax (#77409)
...
Added the maximize flag (#68052) to the Adamax optimizer and updated the respective tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77409
Approved by: https://github.com/albanD
2022-05-16 17:34:44 +00:00
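A simplified single-tensor sketch of the Adamax update with a `maximize` flag (function name and simplifications are mine, not the code from the PR); when `maximize=True` the gradient is negated so the optimizer ascends the objective.
```
import torch


def adamax_step_sketch(param, grad, exp_avg, exp_inf, step, *,
                       lr=0.002, betas=(0.9, 0.999), eps=1e-8, maximize=False):
    beta1, beta2 = betas
    if maximize:
        grad = -grad  # ascend the objective instead of descending
    # Standard Adamax moments: EMA of gradients and an infinity-norm estimate.
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_inf.copy_(torch.maximum(exp_inf.mul(beta2), grad.abs().add(eps)))
    bias_correction = 1 - beta1 ** step
    param.add_(exp_avg / exp_inf, alpha=-lr / bias_correction)


p, g = torch.zeros(3), torch.ones(3)
adamax_step_sketch(p, g, torch.zeros(3), torch.zeros(3), step=1, maximize=True)
print(p)  # parameters moved in the +gradient direction
```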
Mikayla Gawarecki
d9acfef831
Optim foreach cleanup for Adamax (#69982)
...
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69982
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D33767865
Pulled By: mikaylagawarecki
fbshipit-source-id: c5efd351e359825d38b71f57a2c61a2055c3c114
(cherry picked from commit 37bb80c2d7)
2022-02-09 16:52:13 +00:00
Mikayla Gawarecki
7176c92687
[optim] update step in functional and pass state_steps instead of state (#71333)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71333
Updated
- Adagrad
- Adamax
- Adam
- AdamW
- RAdam
Make the multi_tensor functionals take `state_steps: List[Tensor]` instead of `states: List[Dict]`.
Change `state_steps: List[int]` to `state_steps: List[Tensor]`, where each entry is a singleton tensor so the step can be updated within the functional.
NAdam and ASGD were updated in separate diffs to fold their handling of state into the functionals.
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D33767872
Pulled By: mikaylagawarecki
fbshipit-source-id: 9baa7cafb6375eab839917df9287c65a437891f2
(cherry picked from commit 831c02b3d0)
2022-02-08 16:51:19 +00:00
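An illustration of the calling convention described in the commit above, using a deliberately simplified SGD-like functional (not the actual Adagrad/Adamax code): each entry of `state_steps` is a singleton tensor held in the optimizer state, so the functional can advance the step count in place instead of receiving a `List[Dict]` of per-parameter state.
```
from typing import List

import torch


def _functional_step_sketch(params: List[torch.Tensor],
                            grads: List[torch.Tensor],
                            state_steps: List[torch.Tensor],
                            *, lr: float) -> None:
    for param, grad, step_t in zip(params, grads, state_steps):
        step_t += 1                 # in-place: the caller's state tensor advances
        param.add_(grad, alpha=-lr)


params = [torch.zeros(2)]
grads = [torch.ones(2)]
steps = [torch.tensor(0.0)]         # singleton step tensor kept in optimizer state
_functional_step_sketch(params, grads, steps, lr=0.1)
print(steps[0])                     # tensor(1.) -- updated inside the functional
```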
Andrew Gu
1b1f1e36b4
Add `allow_empty_param_list` to functional optimizers (#62522)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62522
Addresses https://github.com/pytorch/pytorch/issues/62481
Test Plan: Imported from OSS
Reviewed By: zou3519
Differential Revision: D30072074
Pulled By: andwgu
fbshipit-source-id: 1a5da21f9636b8d74a6b00c0f029427f0edff0e3
2021-08-09 11:18:56 -07:00
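A rough sketch of the flag under an assumed class name (not the PR's code): functional optimizers reject an empty parameter list by default, and `allow_empty_param_list=True` opts out of that check for callers that register parameters later.
```
class _FunctionalOptimSketch:
    def __init__(self, params, lr=0.01, allow_empty_param_list=False):
        if len(params) == 0 and not allow_empty_param_list:
            raise ValueError("optimizer got an empty parameter list")
        # Parameters may be added later by the wrapper that owns this optimizer.
        self.param_group = {"params": list(params)}
        self.lr = lr


# Raises by default; succeeds when the caller explicitly opts in.
opt = _FunctionalOptimSketch([], allow_empty_param_list=True)
```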
Wanchao Liang
4611387608
[optim] take kw-only argument for functional optim APIs (#56185)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56185
ghstack-source-id: 126670123
Reviewed By: albanD
Differential Revision: D27802169
fbshipit-source-id: f5e1cb2046dcdeecf5f6b0f70892828bf0adb22f
2021-04-15 20:08:04 -07:00
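A sketch of what the keyword-only convention looks like (illustrative function, not the PR's code): hyperparameters after the bare `*` must be passed by name, so call sites cannot silently swap, say, `lr` and `weight_decay` positionally.
```
from typing import List

import torch


def _functional_step_kwonly(params: List[torch.Tensor],
                            grads: List[torch.Tensor],
                            *,                        # everything below is keyword-only
                            lr: float,
                            weight_decay: float = 0.0) -> None:
    for param, grad in zip(params, grads):
        if weight_decay != 0.0:
            grad = grad + weight_decay * param
        param.add_(grad, alpha=-lr)


_functional_step_kwonly([torch.zeros(3)], [torch.ones(3)], lr=0.002)   # OK
# _functional_step_kwonly([torch.zeros(3)], [torch.ones(3)], 0.002)    # TypeError
```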
Wanchao Liang
4e9e7200f2
[dist_optim] Add distributed functional Adamax optimizer (#55833)
...
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55833
Add a distributed functional Adamax optimizer that can be used from TorchScript.
ghstack-source-id: 126325538
Reviewed By: rohan-varma
Differential Revision: D26696540
fbshipit-source-id: 6242faebd2476847831a05df7f8b0d616f2b5355
2021-04-15 15:19:43 -07:00
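A minimal sketch of what a TorchScript-compatible functional optimizer looks like (the class body is a toy under my own assumptions, not the real `_FunctionalAdamax`): a plain class with statically typed attributes and an explicit `step` that takes the gradients, so the whole update can be compiled with `torch.jit.script` and driven by the distributed optimizer.
```
from typing import List

import torch


@torch.jit.script
class _FunctionalAdamaxSketch:
    def __init__(self, params: List[torch.Tensor], lr: float = 0.002):
        # Scripted classes need statically typed attributes; the state the
        # real optimizer keeps (exp_avg, exp_inf, step) is omitted here.
        self.params = params
        self.lr = lr

    def step(self, gradients: List[torch.Tensor]):
        for i in range(len(self.params)):
            self.params[i].add_(gradients[i], alpha=-self.lr)


opt = _FunctionalAdamaxSketch([torch.zeros(3)])
opt.step([torch.ones(3)])
```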