Summary:
Separating CUDA fuser from CPU fuser.
1. New node in IR - prim::CudaFusionGroup:
This enables the CUDA fuser to coexist alongside the old fuser, allowing us
to build and expand the CUDA fuser incrementally.
2. Copied the FuseGraph optimization passes to CudaFuserGraph:
We will refactor & reuse the Chunk/Concat logic from the old fuser, which is
handled in the optimization pass at this moment. Unfortunately, much of the
code in the pass is tightly bound to the legacy fuser, which makes code
sharing difficult.
The CudaFusionGroup will support only a subset of operations compared to the
legacy fuser (CUDA only). It is registered as a custom post-fusion pass via
`torch._C._jit_register_cuda_fuser()`.
For it to take effect, you should also turn off fusion on the GPU via
`torch._C._jit_override_can_fuse_on_gpu(False)` (see the sketch after this
list).
3. We don't have codegen in this PR yet (WIP). Currently we just fall back to
the old fuser.
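Putting the two bindings together, a minimal sketch of enabling the new pass
(assuming a build that includes this PR; both calls are private `torch._C`
bindings and may change):
```
import torch

# Turn off the legacy GPU fuser so the new pass can claim the fusion work.
torch._C._jit_override_can_fuse_on_gpu(False)

# Register the CUDA fuser as a custom post-fusion pass.
torch._C._jit_register_cuda_fuser()

@torch.jit.script
def f(x, y):
    return x * y + y

if torch.cuda.is_available():
    x = torch.randn(1024, device="cuda")
    y = torch.randn(1024, device="cuda")
    # Warm-up runs give profiling and the fusion passes a chance to run.
    f(x, y)
    f(x, y)
    # Fused regions show up as prim::CudaFusionGroup nodes in the graph.
    print(f.graph_for(x, y))
```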
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33527
Differential Revision: D20171598
Pulled By: ZolotukhinM
fbshipit-source-id: 9a3c0f06f46da7eaa80ae7551c04869f5b03ef71
| Name |
|---|
| amd_build |
| autograd |
| code_analyzer |
| docker |
| jit |
| pyi |
| setup_helpers |
| shared |
| __init__.py |
| aten_mirror.sh |
| build_libtorch.py |
| build_pytorch_libs.py |
| build_variables.bzl |
| clang_format.py |
| clang_tidy.py |
| download_mnist.py |
| flake8_hook.py |
| generated_dirs.txt |
| git_add_generated_dirs.sh |
| git_reset_generated_dirs.sh |
| git-pre-commit |
| pytorch.version |
| README.md |
| update_disabled_tests.sh |
This folder contains a number of scripts which are used as
part of the PyTorch build process. This directory also doubles
as a Python module hierarchy (thus the `__init__.py`).
## Overview
Modern infrastructure:
- autograd - Code generation for autograd. This includes definitions of all our derivatives.
- jit - Code generation for JIT.
- shared - Generic infrastructure that scripts in tools may find useful.
  - module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first (see the sketch below).
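As a rough illustration of what such a helper looks like, here is a minimal
sketch built on the standard importlib machinery; the function name and
signature here are illustrative, not necessarily module_loader.py's actual API.
```
import importlib.util


def import_module(name, path):
    # Build a module spec directly from a file path, sidestepping PYTHONPATH.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


# Example: load a script by its location on disk.
# gen = import_module("gen_autograd", "tools/autograd/gen_autograd.py")
```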
Legacy infrastructure (we should kill this):
- cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.
Build system pieces:
- setup_helpers - Helper code for searching for third-party dependencies on the user system.
- build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
- build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.
Developer tools which you might find useful:
- clang_tidy.py - Script for running clang-tidy on the lines of your code that you changed.
- git_add_generated_dirs.sh and git_reset_generated_dirs.sh - Use this to force add generated files to your Git index, so that you can conveniently run diffs on them when working on code-generation. (See also generated_dirs.txt which specifies the list of directories with generated files.)
Important if you want to run on AMD GPU:
- amd_build - HIPify scripts for transpiling CUDA
into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to
do this transpilation, but have separate entry points for transpiling
either PyTorch or Caffe2 code (a toy sketch of the renaming follows this
list).
- build_amd.py - Top-level entry point for HIPifying our codebase.
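As a toy illustration of the renaming HIPify performs (the real rule set in
amd_build is far larger and also rewrites headers, launch configurations, and
Caffe2-specific code), a hedged sketch of the regex-based substitution:
```
# Illustrative only: a toy version of the regex-based renaming that the
# amd_build HIPify scripts apply to CUDA sources.
import re

CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source):
    # Replace whole tokens only, so e.g. "mycudaMalloc" is left untouched.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, CUDA_TO_HIP)) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

print(hipify("cudaMalloc(&ptr, n); cudaDeviceSynchronize();"))
# -> hipMalloc(&ptr, n); hipDeviceSynchronize();
```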
Tools which are only situationally useful:
- aten_mirror.sh - Mirroring script responsible for keeping https://github.com/zdevito/ATen up-to-date.
- docker - Dockerfile for running (but not developing) PyTorch, using the official conda binary distribution. Context: https://github.com/pytorch/pytorch/issues/1619
- download_mnist.py - Download the MNIST dataset; this is necessary if you want to run the C++ API tests.
- run-clang-tidy-in-ci.sh - Responsible for checking that C++ code is clang-tidy clean in CI on Travis.