| File | Last commit message | Last commit date |
| --- | --- | --- |
| __init__.py | [Optim in backward] API to retrieve in-backward optimizers (#105991) | 2023-07-29 01:36:25 +00:00 |
| apply_optimizer_in_backward.py | [Distributed] Small nits to apply_optimizer_in_backward (#110903) | 2023-10-11 07:45:45 +00:00 |
| functional_adadelta.py | feat(optim): Add adadelta multi_tensor support for complex, with has_complex shortcut (#110631) | 2023-10-06 03:34:41 +00:00 |
| functional_adagrad.py | [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308) | 2023-02-07 21:10:56 +00:00 |
| functional_adam.py | perf(inductor): improve Adam compile times by shortcutting for loops (via has_complex) (#110607) | 2023-10-06 05:08:49 +00:00 |
| functional_adamax.py | [BE] Enable ruff's UP rules and autoformat distributed/ (#105433) | 2023-07-19 14:27:11 +00:00 |
| functional_adamw.py | [BE] Enable ruff's UP rules and autoformat distributed/ (#105433) | 2023-07-19 14:27:11 +00:00 |
| functional_rmsprop.py | [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308) | 2023-02-07 21:10:56 +00:00 |
| functional_rprop.py | [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308) | 2023-02-07 21:10:56 +00:00 |
| functional_sgd.py | [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308) | 2023-02-07 21:10:56 +00:00 |
| named_optimizer.py | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| optimizer.py | Improve type annotations for jit.script (#108782) | 2023-09-13 19:20:25 +00:00 |
| post_localSGD_optimizer.py | [nn] zero_grad() set_to_none default True (#92731) | 2023-01-26 01:04:28 +00:00 |
| utils.py | [PT-D][Easy] Reformat the optim code within PTD code base (#90399) | 2022-12-08 06:38:59 +00:00 |
| zero_redundancy_optimizer.py | Format: fixing multiple string concatenation in single line (#106013) | 2023-07-26 18:39:18 +00:00 |
| zero_redundancy_optimizer.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |