Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-06 12:20:52 +01:00
This PR:

- Updates the `autograd.Function.forward` docs to reflect how you either define a forward with ctx or a separate forward and setup_context.
- Updates the "Extending Autograd" docs to suggest using autograd.Function with a separate forward and setup_context. This should be the default because there is a low barrier to go from this to an autograd.Function that is fully supported by functorch transforms.
- Adds a new "Extending torch.func with autograd.Function" doc that explains how to use autograd.Function with torch.func. It also explains how to use generate_vmap_rule and how to manually write a vmap staticmethod.

While writing this, I noticed that the implementations of the setup_context staticmethod, generate_vmap_rule, and the vmap staticmethod are a bit inconsistent with the other methods/attributes on autograd.Function:

- https://github.com/pytorch/pytorch/issues/91451
- I'm happy to fix those if we think it is a problem, either in this PR or a follow-up (this PR is getting long, I want some initial docs out that I can point early adopters at, and fixing the problems later isn't really BC-breaking).

Test Plan:

- View docs preview.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91452
Approved by: https://github.com/soulitzer
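The forward/setup_context split described above can be sketched as follows. This is a minimal illustration, not code from the PR: `MyExp` is a hypothetical Function name, and the example assumes a PyTorch version where `setup_context`, `generate_vmap_rule`, and `torch.func` are available (2.0 or later).

```python
import torch
from torch.autograd import Function

class MyExp(Function):
    # Opting in to an automatically generated vmap rule; this is only
    # possible with the separate forward/setup_context style.
    generate_vmap_rule = True

    @staticmethod
    def forward(x):
        # No ctx argument here: forward only computes the output.
        return x.exp()

    @staticmethod
    def setup_context(ctx, inputs, output):
        # Saving tensors for backward happens separately from forward.
        ctx.save_for_backward(output)

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        # d/dx exp(x) = exp(x)
        return grad_output * result

x = torch.tensor(1.0, requires_grad=True)
y = MyExp.apply(x)
y.backward()

# Because generate_vmap_rule is set, the Function also composes with
# torch.func transforms such as vmap:
batched = torch.func.vmap(MyExp.apply)(torch.tensor([0.0, 1.0]))
```

The low barrier the PR description mentions is visible here: the same class works under plain autograd and under functorch transforms, whereas the older combined `forward(ctx, x)` style would require extra work to support vmap.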
Files:

- amp_examples.rst
- autograd.rst
- broadcasting.rst
- cpu_threading_runtimes.svg
- cpu_threading_torchscript_inference.rst
- cpu_threading_torchscript_inference.svg
- cuda.rst
- ddp.rst
- extending.func.rst
- extending.rst
- faq.rst
- gradcheck.rst
- hip.rst
- large_scale_deployments.rst
- modules.rst
- mps.rst
- multiprocessing.rst
- numerical_accuracy.rst
- randomness.rst
- serialization.rst
- windows.rst