pytorch/torch/csrc/autograd
# Autograd

Autograd is a hotspot for PyTorch performance, so most of the heavy lifting is implemented in C++. This implies that we have to do some shuffling between Python and C++; and in general, we want data to be in a form that is convenient to manipulate from C++.

Our general model is that for any key data type that autograd manipulates, there are two implementations: a C++ type and a Python object type. For example, consider variables in autograd: we have both `Variable` in `variable.h` (the C++ type) and `THPVariable` in `python_variable.h` (the Python type). (By the way, THP stands for TorcH Python, not to be confused with THPP, TorcH C++.) `Variable` contains the payload of a variable, while `THPVariable` just contains a `shared_ptr` reference to `Variable`, as well as references to other Python objects which the Python runtime needs to know about. A lot of data accessor implementations in `python_variable.cpp` simply reach through to the underlying `Variable` and return the appropriate value.
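
The pairing described above can be sketched with a toy, hypothetical example (these are not PyTorch's actual definitions; `Payload` and `Wrapper` stand in for `Variable` and `THPVariable`): the wrapper owns only a `shared_ptr` to the payload, and its accessors reach through.

```cpp
#include <memory>
#include <string>

// Hypothetical, simplified payload type standing in for autograd's Variable.
struct Payload {
    std::string name;
    bool requires_grad = false;
};

// Simplified stand-in for THPVariable: the "Python-side" wrapper just holds
// a shared reference to the C++ payload.
struct Wrapper {
    std::shared_ptr<Payload> cdata;

    // Accessor that simply forwards to the underlying payload, mirroring
    // the reach-through accessors in python_variable.cpp.
    bool requires_grad() const { return cdata->requires_grad; }
};
```

Because the wrapper holds a shared reference rather than a copy, several wrappers (or C++ callers) can observe the same payload without any data shuffling across the boundary.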

The most complicated application of this principle is `Function`, which also supports users implementing custom behavior in Python. We have the following classes:

- `Node` in `function.h`, the C++ type.
- `THPFunction` in `python_function.h`, the Python object type. In `python_function.cpp`, you can see the boilerplate that tells the Python interpreter about this object.
- `PyNode` in `python_function.h`, a subclass of `Node` which forwards `apply` to a Python `THPFunction`. (NOT a Python object, despite its name!)

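The forwarding trick behind `PyNode` can be sketched as ordinary virtual dispatch (a hypothetical simplification; `NodeBase` and `ForwardingNode` are made-up names, and the gradients are modeled as plain ints rather than tensors):

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical, simplified stand-in for autograd's Node: a graph node whose
// apply() consumes and produces gradients (here just ints, for brevity).
struct NodeBase {
    virtual ~NodeBase() = default;
    virtual std::vector<int> apply(std::vector<int> inputs) = 0;
};

// Simplified analogue of PyNode: a C++ subclass whose apply() forwards to an
// externally supplied callable, the way PyNode forwards to a Python
// THPFunction. Note it is a plain C++ class, not a Python object.
struct ForwardingNode : NodeBase {
    std::function<std::vector<int>(std::vector<int>)> py_apply;

    std::vector<int> apply(std::vector<int> inputs) override {
        return py_apply(std::move(inputs));
    }
};
```

The engine only ever sees the `NodeBase` interface, so user-defined behavior plugged in through the forwarding subclass participates in the graph like any native node.
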
Outside of `PyNode`, the C++ objects largely avoid referencing Python objects. There are a few exceptions: `pyobj` in `Variable`; `PyNode` itself, whose whole point is to let C++ call into Python; and `pyobj` in `Node`, which ensures uniqueness of the associated Python wrapper (if it exists).
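
The uniqueness role of that `pyobj` field can be illustrated with a minimal sketch (hypothetical names; ownership and reference counting are elided for brevity): the C++ object caches a back-pointer to its wrapper, so a second request returns the existing wrapper instead of creating a duplicate.

```cpp
struct PyWrapper;  // stands in for the Python object type

// Hypothetical C++ object that caches a non-owning pointer back to its
// wrapper, analogous to the pyobj field in Node.
struct CppObject {
    PyWrapper* pyobj = nullptr;
};

struct PyWrapper {
    CppObject* cdata;
};

// Return the existing wrapper if one was already created, else make one.
// (Real code would manage the wrapper's lifetime; this sketch does not.)
PyWrapper* get_or_create_wrapper(CppObject& obj) {
    if (obj.pyobj == nullptr) {
        obj.pyobj = new PyWrapper{&obj};
    }
    return obj.pyobj;
}
```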