# Autograd

Autograd is a hotspot for PyTorch performance, so most of the heavy lifting is implemented in C++. This implies that we have to do some shuffling between Python and C++; and in general, we want data to be in a form that is convenient to manipulate from C++.

Our general model is that for any key data type that autograd manipulates, there are two implementations: a C++ type and a Python object type. For example, consider variables in autograd: we have both `Variable` in `variable.h` (the C++ type) and `THPVariable` in `python_variable.h` (the Python type). (By the way, THP stands for TorcH Python, not to be confused with THPP, TorcH C++.) `Variable` contains the payload of a variable, while `THPVariable` just contains a `shared_ptr` reference to `Variable`, as well as references to other Python objects which the Python runtime needs to know about. A lot of data accessor implementations in `python_variable.cpp` simply reach through to the underlying `Variable` and return the appropriate value.

The most complicated application of this principle is `Function`, which also supports users implementing custom behavior in Python. We have the following classes:

* `Node` in `function.h`, the C++ type.
* `THPFunction` in `python_function.h`, the Python object type. In `python_function.cpp`, you can see the boilerplate that tells the Python interpreter about this object.
* `PyNode` in `python_function.h`, a subclass of `Node` which forwards `apply` to a Python `THPFunction`. (NOT a Python object, despite its name!)

Outside of `PyNode`, the C++ objects largely avoid referencing Python objects. There are a few exceptions: `pyobj` in `Variable`, `pyobj` in `Node` (which ensures uniqueness of the associated Python wrapper, if one exists), and `PyNode` itself, whose whole point is to let C++ call into Python.