pytorch/torch/csrc/autograd/functions
Simon Fan 578160c875 [ca] don't inline accumulate grad op (#149014)
We use dummy tensors in our initial trace, so we should never inline: the subclass dispatch might not support the dummy tensor. For example, DTensor's accumulate grad checks that both the param and the grad are DTensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149014
Approved by: https://github.com/jansel
ghstack dependencies: #149064
2025-03-15 01:10:54 +00:00
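
To make the failure mode described above concrete, here is a minimal, hypothetical Python sketch. CheckedTensor is an invented stand-in for a subclass like DTensor (this is not PyTorch's actual DTensor or compiled autograd code): its dispatch rejects any accumulation where the param and grad are not both instances of the subclass, which is exactly the kind of check a dummy plain tensor from the initial trace would trip if the accumulate grad op were inlined.

```python
import torch

# Hypothetical stand-in for a tensor subclass like DTensor: its
# __torch_function__ dispatch insists that both the param and the
# incoming grad are instances of the subclass, mirroring the check
# the commit message attributes to DTensor's accumulate grad.
class CheckedTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.Tensor.add_:
            param, grad = args[0], args[1]
            if not (isinstance(param, CheckedTensor)
                    and isinstance(grad, CheckedTensor)):
                raise RuntimeError("both param and grad must be CheckedTensor")
        return super().__torch_function__(func, types, args, kwargs)

param = torch.zeros(3).as_subclass(CheckedTensor)

# Accumulating a real grad of the same subclass works.
param.add_(torch.ones(3).as_subclass(CheckedTensor))

# A dummy plain tensor -- like the placeholders used in compiled
# autograd's initial trace -- trips the subclass check, which is why
# the accumulate grad op must stay un-inlined and only be dispatched
# later, once the real tensors are available.
try:
    param.add_(torch.ones(3))
except RuntimeError as err:
    print("dispatch rejected dummy tensor:", err)
```

Keeping accumulate grad as an opaque node in the initial trace defers this subclass dispatch until real tensors are present, which is the behavior the commit preserves.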
File                 Last commit                                                            Date
accumulate_grad.cpp  [ca] don't inline accumulate grad op (#149014)                         2025-03-15 01:10:54 +00:00
accumulate_grad.h    [reland][ca] side-effect free initial trace: compiled_args (#148376)  2025-03-11 01:57:36 +00:00
basic_ops.cpp        [reland][ca] side-effect free initial trace: compiled_args (#148376)  2025-03-11 01:57:36 +00:00
basic_ops.h          [reland][ca] side-effect free initial trace: compiled_args (#148376)  2025-03-11 01:57:36 +00:00
comm.cpp             [14/N] Fix extra warnings brought by clang-tidy-17 (#141644)           2024-12-13 06:22:13 +00:00
comm.h
init.cpp             Enable misc-use-internal-linkage check and apply fixes (#148948)       2025-03-12 14:22:56 +00:00
pybind.h
tensor.cpp           [ca] support for dynamic shapes CopySlices (#148799)                   2025-03-13 17:30:20 +00:00
tensor.h             [reland][ca] side-effect free initial trace: compiled_args (#148376)  2025-03-11 01:57:36 +00:00
utils.cpp            [14/N] Fix extra warnings brought by clang-tidy-17 (#141644)           2024-12-13 06:22:13 +00:00
utils.h