pytorch/torch/_C
Richard Zou 7342251281 functorch.grad support for autograd.Function (#89860)
Happy to split this PR more if it helps.

This PR adds functorch.grad support for autograd.Function. There's a lot
going on; here is the high-level picture, and there are more details as
comments in the code.

Mechanism (PyOperator)
- autograd.Function needs some way to dispatch with functorch. This is
necessary because every layer of functorch needs to see the
autograd.Function; grad layers need to preserve the backward pass.
- The mechanism for this is PyOperator. If functorch transforms are
active, then we wrap the autograd.Function in a `custom_function_call`
PyOperator, on which we can define rules for the various functorch
transforms.
- `custom_function_call` has a rule for the functorch grad transform
(sketched below).
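To make the idea concrete, here is a heavily simplified sketch of the
dispatch. This is not the real PyOperator machinery; `active_transforms`,
`RULES`, and `custom_function_call_sketch` are all illustrative names.

```python
# Illustrative simplification of the dispatch; the real implementation
# is a PyOperator with per-transform rules registered on it.
active_transforms = []  # stand-in for functorch's interpreter stack

def grad_rule(autograd_fn, *args):
    # A grad layer must see the autograd.Function itself so that the
    # user-defined backward pass is preserved at this level.
    return autograd_fn.apply(*args)

RULES = {"grad": grad_rule}

def custom_function_call_sketch(autograd_fn, *args):
    if active_transforms:
        # Let the topmost functorch layer's rule handle the call.
        return RULES[active_transforms[-1]](autograd_fn, *args)
    # No transforms active: plain autograd.Function application.
    return autograd_fn.apply(*args)
```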

autograd.Function changes
- I needed to make some changes to autograd.Function to make this work.
- First, this PR splits autograd.Function into _SingleLevelFunction
(which works with a single level of functorch transform) and
autograd.Function (which works with multiple levels). This is necessary
because functorch's grad rule needs some way of specifying a backward
pass for that level only.
- This PR changes autograd.Function's apply to either call
`custom_function_call` (if functorch is active) or super().apply (if
functorch is not active), as sketched below.
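Building on the sketch above (and reusing its `active_transforms` and
`custom_function_call_sketch`), the new apply control flow looks roughly
like this; the class names are illustrative, not the real
torch.autograd internals.

```python
# Illustrative class names only; this builds on the previous sketch.
class _SingleLevelFunctionSketch:
    @classmethod
    def apply(cls, *args):
        # Ordinary single-level autograd path (elided here).
        raise NotImplementedError

class FunctionSketch(_SingleLevelFunctionSketch):
    @classmethod
    def apply(cls, *args):
        if active_transforms:
            # functorch is active: route through the wrapper operator
            # so every transform layer can intercept the call.
            return custom_function_call_sketch(cls, *args)
        # functorch is not active: ordinary autograd.Function behavior.
        return super().apply(*args)
```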

Testing
- Most of this PR is just testing. It creates an autograd.Function
OpInfo database that then gets passed to the functorch grad-based tests
(grad, vjp, vjpvjp).
- Since functorch transform tests are autogenerated from OpInfo entries,
this is the easiest way to test a variety of autograd.Functions with
functorch. A sketch of such an entry follows.
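For a flavor of what an entry might look like, here is a hypothetical
sketch; the real OpInfo class takes many more arguments, and the actual
database lives in torch.testing internals, so treat every name below as
illustrative.

```python
# Hypothetical shape of a database entry, not the real OpInfo API.
import torch

class ExpSketch(torch.autograd.Function):
    # setup_context-style autograd.Function (the style this stack
    # requires for functorch support).
    @staticmethod
    def forward(x):
        return torch.exp(x)

    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(output)

    @staticmethod
    def backward(ctx, grad_out):
        out, = ctx.saved_tensors
        return grad_out * out

def sample_inputs_exp(device, dtype):
    # Generator of positional-arg tuples for the op under test.
    yield (torch.randn(3, device=device, dtype=dtype, requires_grad=True),)

autograd_function_db_sketch = [
    {"name": "ExpSketch", "op": ExpSketch.apply,
     "sample_inputs_func": sample_inputs_exp},
]
```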

Future
- jvp and vmap support coming next
- better error messages (functorch only supports autograd.Function
subclasses that have the optional setup_context staticmethod; see the
example after this list)
- documentation to come when we remove the feature flag
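Concretely, once the feature flag is on, an autograd.Function written in
the setup_context style should compose with functorch.grad. A minimal
end-to-end example follows; `MySquare` is my own illustration, and the
exact behavior behind the feature flag may differ.

```python
import torch
from functorch import grad  # this stack's functorch.grad

class MySquare(torch.autograd.Function):
    @staticmethod
    def forward(x):
        return x ** 2

    @staticmethod
    def setup_context(ctx, inputs, output):
        x, = inputs
        ctx.save_for_backward(x)

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return 2.0 * x * grad_out

x = torch.tensor(3.0)
print(grad(MySquare.apply)(x))  # expected: tensor(6.)
```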

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89860
Approved by: https://github.com/soulitzer
2022-12-08 19:31:04 +00:00
_dynamo Add mypy checking for a few files in torch/_dynamo (#89731) 2022-11-28 13:14:06 +00:00
__init__.pyi.in Add feature flag for the autograd.Function extension (#89858) 2022-12-08 19:31:01 +00:00
_autograd.pyi [Profiler][Minor] Separate standalone profilers from the main PyTorch profiler. (#85511) 2022-10-14 05:38:48 +00:00
_cudnn.pyi
_distributed_autograd.pyi
_distributed_c10d.pyi [c10d] Implement __instancecheck__ for c10d::ReduceOp (#88275) 2022-11-15 13:21:41 +00:00
_distributed_rpc_testing.pyi
_distributed_rpc.pyi Fix use-dict-literal lint (#83718) 2022-08-24 00:26:46 +00:00
_functions.pyi
_functorch.pyi functorch.grad support for autograd.Function (#89860) 2022-12-08 19:31:04 +00:00
_itt.pyi Fix ITT unit-tests if PyTorch is compiled with USE_ITT=OFF (#86199) 2022-10-04 21:57:05 +00:00
_lazy_ts_backend.pyi
_lazy.pyi Add step closures (#84300) 2022-09-06 20:55:34 +00:00
_monitor.pyi
_nn.pyi.in
_nvtx.pyi
_onnx.pyi
_profiler.pyi [Profiler] Memory profiler part 3: Schema parsing and mutable arguments (#86854) 2022-11-15 19:17:57 +00:00
_VariableFunctions.pyi.in improve annotations (#86105) 2022-10-05 10:33:26 +00:00
_verbose.pyi [RFC] enable oneMKL&oneDNN on-demands verbose functinality (#63212) 2022-07-27 23:29:35 +00:00
build.bzl
return_types.pyi.in