PyTorch JIT

This folder contains (most of) the C++ code for the PyTorch JIT, a language and compiler stack for executing PyTorch models portably and efficiently. To learn more about the JIT from a user perspective, please consult our reference documentation and tutorials.
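From the user's side, the C++ machinery in this folder is typically reached through the Python `torch.jit` API. As a minimal sketch of that entry point (assuming a standard PyTorch install), `torch.jit.script` compiles a Python function into the JIT IR described below:

```python
import torch

# torch.jit.script translates this function (via frontend/) into the
# JIT IR (ir/), which is then executed by the interpreter (runtime/).
@torch.jit.script
def scaled_relu(x: torch.Tensor) -> torch.Tensor:
    return torch.relu(x) * 2.0

out = scaled_relu(torch.tensor([-1.0, 2.0]))
print(out)          # negative inputs clamp to 0, positives are doubled
print(scaled_relu.graph)  # inspect the compiled IR
```

Printing `.graph` on the scripted function shows the IR that the passes in `passes/` operate on.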

A brief summary of the source tree:

  • OVERVIEW.md: High-level technical overview of the JIT.
  • frontend/: Taking PyTorch modules in Python and translating them into the JIT IR.
  • ir/: Core IR abstractions.
  • runtime/: Interpreter, graph execution, and JIT operators.
  • codegen/: Generating efficient, hardware-specific code for JIT subgraphs.
  • serialization/: Saving and loading modules.
  • api/: Any user-facing C++ or Python interfaces.
  • python/: Bindings into Python and utilities for accessing information from the Python environment.
  • testing/: Utilities and helpers for testing.
  • mobile/: Mobile-specific implementations of runtime components.
  • passes/: IR-to-IR passes, generally for optimization and lowering.
  • generated/: This folder is generated by the PyTorch build, and contains bindings for native PyTorch operators into the JIT.

Refer to each folder for more in-depth documentation.

Other relevant parts of the codebase not contained here:

  • aten/src/ATen/core: contains JIT code reused by other elements of the runtime system (eager, mobile, etc.).