Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-06 12:20:52 +01:00
[distributed][docs] Delete distributed optimizer section from RPC and add reference to namespace docs page (#68068)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68068

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Test Plan: Imported from OSS

Reviewed By: pritamdamania87

Differential Revision: D32286554

Pulled By: jamesr66a

fbshipit-source-id: a43fe1f0cfa74721f467b128f2e878bd02f32546
This commit is contained in:

parent 7c90bd77ec
commit eaf0457eef
@@ -4,11 +4,8 @@
 Distributed Optimizers
 ======================
 
-.. autoclass:: torch.distributed.optim.DistributedOptimizer
-    :members:
 .. warning ::
     Distributed optimizer is not currently supported when using CUDA tensors
 
-.. autoclass:: torch.distributed.optim.PostLocalSGDOptimizer
-    :members:
-
-.. autoclass:: torch.distributed.optim.ZeroRedundancyOptimizer
-    :members:
+.. automodule:: torch.distributed.optim
+    :members: DistributedOptimizer, PostLocalSGDOptimizer, ZeroRedundancyOptimizer
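For context, this hunk consolidates three per-class ``autoclass`` directives into a single ``automodule`` directive with an explicit member list. A sketch of how the docs section would read after the change, reconstructed from the hunk's added lines (the placement of the title and warning relative to the directive is an assumption, not taken verbatim from the final file):

```rst
Distributed Optimizers
======================

.. warning ::
    Distributed optimizer is not currently supported when using CUDA tensors

.. automodule:: torch.distributed.optim
    :members: DistributedOptimizer, PostLocalSGDOptimizer, ZeroRedundancyOptimizer
```

With ``automodule``, Sphinx documents the listed members from the module's own docstrings in one place, so new optimizers only need to be added to the ``:members:`` list rather than given their own directive block.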
@@ -261,11 +261,7 @@ using RPC. For more details see :ref:`distributed-autograd-design`.
 Distributed Optimizer
 ---------------------
 
-.. warning ::
-    Distributed optimizer is not currently supported when using CUDA tensors
-
-.. automodule:: torch.distributed.optim
-    :members: DistributedOptimizer
+See the `torch.distributed.optim <https://pytorch.org/docs/master/distributed.optim.html>`__ page for documentation on distributed optimizers.
 
 Design Notes
 ------------