Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29175

Updates our docs to include a design doc for distributed autograd. Currently, this doc covers only the FAST mode algorithm; the SMART mode section simply refers to the original RFC. There is a section for the Distributed Optimizer that we can complete once its API is finalized.

ghstack-source-id: 93701129

Test Plan: look at the docs.

Differential Revision: D18318949

fbshipit-source-id: 670ea1b6bb84692f07facee26946bbc6ce8c650c
| File |
|---|
| autograd.rst |
| broadcasting.rst |
| cpu_threading_torchscript_inference.rst |
| cpu_threading_torchscript_inference.svg |
| cuda.rst |
| distributed_autograd.rst |
| extending.rst |
| faq.rst |
| large_scale_deployments.rst |
| multiprocessing.rst |
| randomness.rst |
| serialization.rst |
| windows.rst |