pytorch/torch/csrc/autograd
| File | Last commit | Date |
| --- | --- | --- |
| functions/ | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| utils/ | Add qscheme() method (#20608) | 2019-06-14 16:29:29 -07:00 |
| anomaly_mode.cpp | C++ changes toward libtorch and libcaffe2 unification (#19554) | 2019-04-26 01:38:10 -07:00 |
| anomaly_mode.h | C++ changes toward libtorch and libcaffe2 unification (#19554) | 2019-04-26 01:38:10 -07:00 |
| autograd.h | Canonicalize all includes in PyTorch. (#14849) | 2018-12-08 19:38:30 -08:00 |
| custom_function.cpp | Allow empty Variables to be saved for backwards (#23618) | 2019-07-31 19:51:35 -07:00 |
| custom_function.h | Allow forward functions with single output to return Variable (#23803) | 2019-08-09 11:10:14 -07:00 |
| edge.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| engine.cpp | Thread local debug info | 2019-08-12 14:53:57 -07:00 |
| engine.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| function_hook.cpp | C++ changes toward libtorch and libcaffe2 unification (#19554) | 2019-04-26 01:38:10 -07:00 |
| function_hook.h | C++ changes toward libtorch and libcaffe2 unification (#19554) | 2019-04-26 01:38:10 -07:00 |
| function.cpp | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| function.h | Remove torch::autograd::Node::get_shared_ptr() | 2019-07-24 13:50:47 -07:00 |
| grad_mode.h | Move GradMode / AutoGradMode / NoGradGuard to ATen core (#18573) | 2019-07-05 23:41:37 -07:00 |
| init.cpp | PyTorch Profiler Shape aggregation support (#20035) | 2019-05-07 14:47:01 -07:00 |
| input_buffer.cpp | Cleanup includes in torch/csrc/autograd/* (#19923) | 2019-05-06 13:48:42 -07:00 |
| input_buffer.h | Enable autograd to recognize the XLA backend as one providing multiple devices (#17847) | 2019-03-20 13:58:36 -07:00 |
| input_metadata.h | Add ScalarType argument to Type::options() (#19270) | 2019-04-21 21:16:07 -07:00 |
| profiler_cuda.cpp | Unify cudaGetDeviceCount implementations. (#18445) | 2019-03-26 09:50:14 -07:00 |
| profiler.cpp | Fix with emit_nvtx, also allow shape information to appear in nvtx ranges. (#21691) | 2019-06-14 07:35:00 -07:00 |
| profiler.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| python_anomaly_mode.cpp | Cleanup includes in torch/csrc/* (#19924) | 2019-05-06 14:03:18 -07:00 |
| python_anomaly_mode.h | Canonicalize all includes in PyTorch. (#14849) | 2018-12-08 19:38:30 -08:00 |
| python_cpp_function.cpp | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| python_cpp_function.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| python_engine.cpp | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| python_engine.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| python_function.cpp | Support custom autograd functions in C++ (#23572) | 2019-07-31 11:30:48 -07:00 |
| python_function.h | Support custom autograd functions in C++ (#23572) | 2019-07-31 11:30:48 -07:00 |
| python_hook.cpp | Remove Variable::Impl and DifferentiableViewImpl (#17072) | 2019-05-23 21:09:04 -07:00 |
| python_hook.h | Canonicalize all includes in PyTorch. (#14849) | 2018-12-08 19:38:30 -08:00 |
| python_legacy_variable.cpp | Invert ownership between PyFunction and THPFunction. | 2019-07-22 14:13:14 -07:00 |
| python_legacy_variable.h | Canonicalize all includes in PyTorch. (#14849) | 2018-12-08 19:38:30 -08:00 |
| python_variable_indexing.cpp | Revert D15920763: Move TensorOptions to ATen/core | 2019-08-13 12:07:18 -07:00 |
| python_variable_indexing.h | Canonicalize all includes in PyTorch. (#14849) | 2018-12-08 19:38:30 -08:00 |
| python_variable.cpp | Rename tensor.is_named to has_named, expose has_named to python. | 2019-07-31 07:14:07 -07:00 |
| python_variable.h | Make PythonArgs::tensor and PythonArgs::scalar faster (#22782) | 2019-07-12 11:57:29 -07:00 |
| README.md | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| record_function.cpp | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| record_function.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| saved_variable.cpp | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| saved_variable.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| symbolic.h | Canonicalize all includes in PyTorch. (#14849) | 2018-12-08 19:38:30 -08:00 |
| type_and_shape.h | Extends type and shape tracing with device (#9796) | 2018-08-07 12:25:17 -07:00 |
| variable.cpp | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| variable.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |
| VariableTypeManual.cpp | avoid Include the same header file twice (#23418) | 2019-07-29 13:34:11 -07:00 |
| VariableTypeUtils.h | Rename torch::autograd::Function to torch::autograd::Node | 2019-07-23 20:52:22 -07:00 |

Autograd

Autograd is a hotspot for PyTorch performance, so most of the heavy lifting is implemented in C++. This implies that we have to do some shuffling between Python and C++; and in general, we want data to be in a form that is convenient to manipulate from C++.

Our general model is that for any key data type that autograd manipulates, there are two implementations: a C++ type and a Python object type. For example, consider variables in autograd: we have both Variable in variable.h (the C++ type) and THPVariable in python_variable.h (the Python type). (By the way, THP stands for TorcH Python, not to be confused with THPP, TorcH C++.) Variable contains the payload of a variable, while THPVariable just holds an owning, reference-counted handle to the underlying Variable, as well as references to other Python objects which the Python runtime needs to know about. A lot of data accessor implementations in python_variable.cpp simply reach through to the underlying Variable and return the appropriate value.
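
To make the pairing concrete, here is a minimal, hypothetical sketch of that pattern. The names below (THPVariableSketch, its requires_grad getter) are simplified stand-ins for illustration, not the actual declarations in variable.h and python_variable.cpp:

```cpp
#include <Python.h>
#include <torch/csrc/autograd/variable.h>

// Hypothetical, simplified analogue of THPVariable: a plain C struct that the
// CPython runtime understands, owning the C++ payload plus Python-only state.
struct THPVariableSketch {
  PyObject_HEAD
  torch::autograd::Variable cdata;     // the C++ payload
  PyObject* backward_hooks = nullptr;  // Python-only state (hooks registered from Python)
};

// A typical accessor simply reaches through to the underlying Variable.
static PyObject* THPVariableSketch_requires_grad(THPVariableSketch* self, void* /*closure*/) {
  if (self->cdata.requires_grad()) {
    Py_RETURN_TRUE;
  }
  Py_RETURN_FALSE;
}
```

Getters and setters registered with the Python type object are mostly thin wrappers of this shape, so the interesting logic stays on the C++ side.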

The most complicated application of this principle is Node, which also supports users implementing custom behavior in Python. We have the following classes (a sketch of how they fit together follows the list):

  • Node in function.h, the C++ type.
  • THPFunction in python_function.h, the Python object type. In python_function.cpp, you can see the boilerplate that tells the Python interpreter about this object.
  • PyNode in python_function.h, a subclass of Node which forwards apply to a Python THPFunction. (NOT a Python object, despite its name!)
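
The crossing point between the two worlds is PyNode's apply override. Below is a rough, hypothetical sketch of that forwarding; argument packing, error handling, and the actual Python call protocol are elided, and the real implementation is PyNode::apply in python_function.cpp:

```cpp
#include <pybind11/pybind11.h>
#include <torch/csrc/autograd/function.h>  // torch::autograd::Node, variable_list

// Hypothetical, stripped-down analogue of PyNode: a Node whose apply()
// re-enters the Python interpreter instead of running C++ autograd logic.
struct PyNodeSketch : torch::autograd::Node {
  PyObject* obj;  // the THPFunction this node forwards to (ownership elided in this sketch)

  torch::autograd::variable_list apply(torch::autograd::variable_list&& inputs) override {
    // Touching any PyObject requires the GIL, since the engine may invoke
    // this node from a worker thread that does not hold it.
    pybind11::gil_scoped_acquire gil;

    // Pack `inputs` into a Python tuple, invoke the THPFunction's backward
    // logic through `obj`, and unpack the returned tensors back into a
    // variable_list. (Details elided; see python_function.cpp.)
    torch::autograd::variable_list outputs;
    return outputs;
  }
};
```

This is what lets a Python-defined autograd function appear as an ordinary Node in the graph that the C++ engine executes.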

Outside of PyNode, the C++ objects largely avoid referencing Python objects. There are a few exceptions: pyobj in Variable (and the analogous pyobj in Node), which ensures uniqueness of the associated Python wrapper if one exists, and PyNode itself, whose whole point is to let C++ call into Python.
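
The uniqueness guarantee works roughly as in the following hypothetical sketch, assuming accessors along the lines of Variable::pyobj() / set_pyobj() for the cached pointer; new_python_wrapper is a made-up stand-in for the real allocation path, and the real reuse logic lives around THPVariable_Wrap in python_variable.cpp:

```cpp
#include <Python.h>
#include <torch/csrc/autograd/variable.h>

// Hypothetical helper that allocates a brand-new Python wrapper for `var`
// (stands in for the real allocation code in python_variable.cpp).
PyObject* new_python_wrapper(const torch::autograd::Variable& var);

// Sketch of the caching idea: a given Variable never has two live wrappers.
PyObject* wrap_variable(torch::autograd::Variable var) {
  // If this Variable has already been handed out to Python, return the same
  // wrapper (with a fresh reference) instead of creating a second one.
  if (PyObject* existing = var.pyobj()) {
    Py_INCREF(existing);
    return existing;
  }
  // Otherwise create the wrapper once and cache it on the Variable.
  PyObject* obj = new_python_wrapper(var);
  var.set_pyobj(obj);
  return obj;
}
```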