pytorch/torch/autograd
Richard Zou 7342251281 functorch.grad support for autograd.Function (#89860)
Happy to split this PR more if it helps.

This PR adds functorch.grad support for autograd.Function. There's a lot
going on; here is the high-level picture, and more details appear as
comments in the code.

Mechanism (PyOperator)
- Somehow, autograd.Function needs to dispatch with functorch. This is
necessary because every layer of functorch needs to see the
autograd.Function; grad layers need to preserve the backward pass.
- The mechanism for this is via PyOperator. If functorch transforms are
active, then we wrap the autograd.Function in a `custom_function_call`
PyOperator where we are able to define various rules for functorch
transforms.
- `custom_function_call` has a rule for the functorch grad transform (a
rough sketch of this dispatch mechanism follows).
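
As a rough illustration of the dispatch idea (not the actual PyOperator API;
the class, the transform stack, and the rule body below are invented
stand-ins, and only the name `custom_function_call` plus the existence of a
per-transform grad rule come from this PR):

```python
# Hypothetical sketch only: shows the shape of "one dispatch point that each
# active functorch transform can intercept with its own rule".

_transform_stack = []  # stand-in for functorch's interpreter stack, e.g. ["grad", "grad"]

class SketchPyOperator:
    def __init__(self, name):
        self.name = name
        self._rules = {}  # transform kind -> rule

    def py_impl(self, transform_kind):
        def register(rule):
            self._rules[transform_kind] = rule
            return rule
        return register

    def __call__(self, autograd_fn, *args):
        if _transform_stack:
            # Dispatch to the rule for the innermost active transform; the rule
            # handles one functorch level and re-invokes this operator so the
            # remaining levels (and finally plain autograd) also see it.
            return self._rules[_transform_stack[-1]](autograd_fn, *args)
        # No transforms active: fall through to ordinary autograd.
        return autograd_fn.apply(*args)

custom_function_call = SketchPyOperator("custom_function_call")

@custom_function_call.py_impl("grad")
def custom_function_call_grad(autograd_fn, *args):
    # In the real rule: unwrap this level's grad-tracking tensors, run the
    # autograd.Function, and register a backward pass for this level only
    # (which is what _SingleLevelFunction enables).
    raise NotImplementedError("illustrative sketch only")
```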

autograd.Function changes
- I needed to make some changes to autograd.Function to make this work.
- First, this PR splits autograd.Function into a _SingleLevelFunction
(that works with a single level of functorch transform) and
autograd.Function (which works with multiple levels). This is necessary
because functorch's grad rule needs some way of specifying a backward
pass for that level only.
- This PR changes autograd.Function's apply to either call
`custom_function_call` (if functorch is active) or super().apply (if
functorch isn't active), as sketched below.
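
Condensed sketch of that apply logic, paraphrased from the description above.
`_SingleLevelFunction` and `custom_function_call` are names introduced by this
PR; the import path for `custom_function_call` and the helper used to detect
active functorch transforms are assumptions here:

```python
import torch
from torch.autograd.function import _SingleLevelFunction
from torch._functorch.autograd_function import custom_function_call  # assumed path

class Function(_SingleLevelFunction):
    @classmethod
    def apply(cls, *args, **kwargs):
        # Assumption: a check of roughly this shape decides whether any
        # functorch transform is currently active; the exact helper may differ.
        if torch._C._are_functorch_transforms_active():
            # Route through the PyOperator so every functorch layer sees
            # this autograd.Function.
            return custom_function_call(cls, *args, **kwargs)
        # No transforms active: plain single-level autograd.Function behavior.
        return super().apply(*args, **kwargs)
```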

Testing
- Most of this PR is just testing. It creates an autograd.Function
OpInfo database that then gets passed to the functorch grad-based tests
(grad, vjp, vjpvjp).
- Since functorch transform tests are autogenerated from OpInfo tests,
this is the easiest way to test various autograd.Functions with
functorch; a minimal example of the kind of Function exercised follows.
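
For context, a minimal example of the kind of autograd.Function these tests
exercise: one with the setup_context staticmethod, differentiated via
functorch.grad. The squaring function is invented for illustration, and while
the feature flag mentioned below is still in place this may need to be enabled
explicitly:

```python
import torch
from functorch import grad  # torch.func.grad in later releases

class Square(torch.autograd.Function):
    @staticmethod
    def forward(x):
        # New-style forward: no ctx argument; saving happens in setup_context.
        return x * x

    @staticmethod
    def setup_context(ctx, inputs, output):
        x, = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        return 2 * x * grad_output

x = torch.tensor(3.0)
print(grad(Square.apply)(x))  # should print tensor(6.)
```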

Future
- jvp and vmap support coming next
- better error messages (functorch only supports autograd.Functions that
have the optional setup_context staticmethod)
- documentation to come when we remove the feature flag

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89860
Approved by: https://github.com/soulitzer
2022-12-08 19:31:04 +00:00
_functions
__init__.py Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd (#86245) 2022-10-06 03:27:42 +00:00
anomaly_mode.py Add option to run anomaly mode without nan checking (#83481) 2022-08-16 22:56:23 +00:00
forward_ad.py Make Python op registration work with torchdeploy/multipy (#87162) 2022-11-03 12:56:44 +00:00
function.py functorch.grad support for autograd.Function (#89860) 2022-12-08 19:31:04 +00:00
functional.py Add __all__ to torch.{autograd, fx, cuda} submodules (#85343) 2022-10-09 14:46:54 +00:00
grad_mode.py Deprecate decorating classes with torch.no_grad and similar (#89522) 2022-11-23 16:51:42 +00:00
gradcheck.py Fix exception causes all over the codebase (#90271) 2022-12-07 04:29:00 +00:00
graph.py Add context manager to allow mutation on saved tensors (#79056) 2022-11-11 15:18:28 +00:00
profiler_legacy.py Add __all__ to torch.{autograd, fx, cuda} submodules (#85343) 2022-10-09 14:46:54 +00:00
profiler_util.py Add __all__ to torch.{autograd, fx, cuda} submodules (#85343) 2022-10-09 14:46:54 +00:00
profiler.py record_function: update to use custom_class API (#76420) 2022-11-02 00:39:28 +00:00
variable.py Add __all__ to torch.{autograd, fx, cuda} submodules (#85343) 2022-10-09 14:46:54 +00:00