jit

The jit directory contains infrastructure for a just-in-time compiler for PyTorch, along with the associated 'script' subset of Python that it can execute directly.

The JIT compiler has several phases.

  1. Parsing - An AST (defined in tree_views.h) is generated either by parsing a string of Python-like code (jit/script/parser.h) or by translation from the Python AST (jit/frontend.py). This phase checks only for syntactic correctness and for use of the syntactic subset of Python that script supports.

  2. Semantic Checking/Specialization - We lower the AST into an IR Graph object. In this phase we check that variables are in scope and resolve any free variables to Python objects. When we find free variables that are Python objects, or references to non-first-class values such as modules, we temporarily represent them as SugaredValue objects. This phase then de-sugars these values, e.g. by inserting a PythonOp into the graph to call a Python function.

  3. Optimizations - A GraphExecutor works on an initial Graph object, performing optimizations, possibly differentiating it, and possibly specializing it to particular input sizes.

  4. Translation to Instructions - To execute a graph, the interpreter lowers it into a linear list of Instruction objects.

  5. Execution - The interpreter reads the instruction stream, executing ATen operations and any generated code fragments.
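Phases 4 and 5 can be illustrated with a toy sketch: lower a small expression into a linear instruction list, then run it on a stack-based interpreter. This is illustrative only, not the real JIT; it uses Python's own `ast` module in place of the script parser, and all names here are invented for the example.

```python
# Toy sketch of "translation to instructions" + "execution":
# lower an expression tree into a linear instruction list, then
# interpret the stream with an operand stack. Not a PyTorch API.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def lower(expr_src):
    """Parse an expression and emit a linear list of instructions."""
    instructions = []
    def emit(node):
        if isinstance(node, ast.Constant):
            instructions.append(("push", node.value))
        elif isinstance(node, ast.BinOp):
            emit(node.left)            # operands are emitted first,
            emit(node.right)           # so the op finds them on the stack
            instructions.append(("binop", OPS[type(node.op)]))
        else:
            raise NotImplementedError(ast.dump(node))
    emit(ast.parse(expr_src, mode="eval").body)
    return instructions

def run(instructions):
    """Read the instruction stream, executing each op against a stack."""
    stack = []
    for opcode, arg in instructions:
        if opcode == "push":
            stack.append(arg)
        else:  # binop: pop two operands, apply, push the result
            rhs, lhs = stack.pop(), stack.pop()
            stack.append(arg(lhs, rhs))
    return stack.pop()

print(run(lower("(2 + 3) * 4")))  # prints 20
```

The real interpreter works the same way in spirit, but over Graph nodes, IValues, and ATen operators rather than Python constants.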

Well-known functions

Ordinarily, when defining a compiler you want the set of functions to be user extensible; e.g., a user can add to the set of defined functions by defining an appropriate autograd Function. However, there are some functions where we want to make assumptions about their semantics, because we are going to write optimizations over them or insert them into the program. Such functions are "well-known" functions, because the JIT compiler knows about them, and a user implementation must abide by the contract (sometimes implicitly) specified by the compiler.

A well-known function is usually implemented in several parts:

  • First, we pre-intern the string (interned_strings.h) that identifies the node. This allows us to more conveniently refer to these operators without having to first do a lookup through the intern table.

  • If we generate this operator during optimizations, we will often have a helper function in Graph (ir.h) for creating the operator. This is the easiest way to find out, in code, what attributes we assume for an operator.

  • There is a runtime interpretation of the operator in torch/csrc/jit/interpreter.cpp, which specifies how we actually interpret programs that contain such an operator.
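To make the first bullet concrete, interning maps each well-known string to a stable integer symbol, so that node kinds can later be compared and dispatched on by integer rather than by string. The Python below is a minimal sketch in the spirit of interned_strings.h; the names are hypothetical, and the real table is C++ with ids pre-assigned at build time.

```python
# Minimal sketch of a string intern table. Illustrative only.
class InternTable:
    def __init__(self):
        self._to_id = {}   # string -> symbol
        self._to_str = []  # symbol -> string

    def intern(self, s):
        """Return a stable integer symbol for s, creating one if needed."""
        if s not in self._to_id:
            self._to_id[s] = len(self._to_str)
            self._to_str.append(s)
        return self._to_id[s]

    def string(self, sym):
        """Recover the original string for a symbol."""
        return self._to_str[sym]

table = InternTable()
# "Pre-interning" well-known names at startup gives them fixed symbols,
# so optimization passes can check node kinds with an integer compare.
FUSION_GROUP = table.intern("prim::FusionGroup")
assert table.intern("prim::FusionGroup") == FUSION_GROUP  # stable id
```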

So, whence the specifications? For the most part, we follow the ONNX operator specification to determine the semantics of our operators. However, there are a few other well-known functions that are specific to PyTorch.

  • FusionGroup

    A fusion group takes some number of input tensors and applies the graph stored in its Subgraph attribute to them, producing the subgraph's output tensors. Operationally, the operators inside a FusionGroup are fused into a single kernel, so that their intermediate results are never materialized. Not all operators support fusion.

    • attribute Subgraph: the graph of fused operators. Its inputs and outputs should match the number of inputs and outputs of the FusionGroup operator.
    • input: 1 - ∞ (same as the inputs of Subgraph)
    • output: 1 - ∞ (same as the outputs of Subgraph)
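The point about intermediates can be shown with a toy sketch, using plain Python lists to stand in for tensors (illustrative only, not a PyTorch API): the unfused version of an elementwise chain allocates a full buffer for the intermediate result, while the fused version streams each element through both operations in a single pass.

```python
# Sketch of why fusion avoids materializing intermediates, for the
# elementwise chain y = (a + b) * a. Lists stand in for tensors.
def unfused(a, b):
    tmp = [x + y for x, y in zip(a, b)]     # intermediate buffer materialized
    return [t * x for t, x in zip(tmp, a)]  # second full pass over memory

def fused(a, b):
    # One pass, one output buffer, no intermediate list: this is what a
    # fused kernel does for the operators inside a FusionGroup.
    return [(x + y) * x for x, y in zip(a, b)]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert unfused(a, b) == fused(a, b)  # same math, fewer buffers
```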