pytorch/torch/csrc/autograd/functions
Simon Fan a80eb84a5f [ca] support higher order gradients (create_graph=True) (#153222)
Adds create_graph support when compiled autograd runs without compilation, or when compiling only with torch.compile(backend="eager").

Using a backend that goes through AOTDispatch produces a post-dispatch AOT backward; its double backward will be silently incorrect if the forward trace involved any ops that are not composite implicit.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153222
Approved by: https://github.com/jansel
ghstack dependencies: #153193
2025-05-13 16:42:09 +00:00
accumulate_grad.cpp [ca] support higher order gradients (create_graph=True) (#153222) 2025-05-13 16:42:09 +00:00
accumulate_grad.h [reland][ca] side-effect free inital trace: compiled_args (#148376) 2025-03-11 01:57:36 +00:00
basic_ops.cpp [reland][ca] side-effect free inital trace: compiled_args (#148376) 2025-03-11 01:57:36 +00:00
basic_ops.h [reland][ca] side-effect free inital trace: compiled_args (#148376) 2025-03-11 01:57:36 +00:00
comm.cpp
comm.h
init.cpp Enable misc-use-internal-linkage check and apply fixes (#148948) 2025-03-12 14:22:56 +00:00
pybind.h
tensor.cpp [5/N] Remove unnecessary once flag usage (#147445) 2025-04-10 01:48:10 +00:00
tensor.h [reland][ca] side-effect free inital trace: compiled_args (#148376) 2025-03-11 01:57:36 +00:00
utils.cpp
utils.h