pytorch/test/cpp_extensions/setup.py
Aaron Bockover c78ab28441 Add support for the ONNX Runtime Eager Mode backend (#58248)
Summary:
This PR implements the necessary hooks/stubs/enums/etc for complete ONNX Runtime (ORT) Eager Mode integration. The actual extension will live out of tree at https://github.com/pytorch/ort.

We have been [working on this at Microsoft](https://github.com/microsoft/onnxruntime-pytorch/tree/eager-ort/torch_onnxruntime) for the last few months, and are finally ready to contribute the PyTorch core changes upstream (nothing major or exciting, just the usual boilerplate for adding new backends).

The ORT backend will allow us to ferry [almost] all torch ops into granular ONNX kernels that ORT will eagerly execute against any devices it supports (therefore, we only need a single ORT backend from a PyTorch perspective).
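As a rough sketch of what this enables (not part of this PR's diff; it assumes the out-of-tree extension has been installed and has registered its kernels for the `ort` device type), eager execution on ORT is expected to look like ordinary device-based dispatch:

```python
import torch

# Assumption: importing the out-of-tree extension (https://github.com/pytorch/ort)
# registers ORT kernels for the 'ort' device type; the module name below is
# illustrative only.
# import torch_ort

x = torch.ones(2, 2, device='ort')  # tensor storage owned by the ORT backend
y = x + x                           # the add is dispatched to an ONNX kernel executed by ORT
print(y.cpu())                      # copy back to CPU to inspect the result
```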

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58248

Reviewed By: astaff

Differential Revision: D30344992

Pulled By: albanD

fbshipit-source-id: 69082b32121246340d686e16653626114b7714b2
2021-08-20 11:17:13 -07:00

import os
import sys

import torch.cuda
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension
from torch.utils.cpp_extension import CUDA_HOME, ROCM_HOME

# On Windows, build with MSVC's /sdl checks; '/permissive-' is omitted for the
# VC 14.16 toolset. Elsewhere, just build with debug symbols.
if sys.platform == 'win32':
    vc_version = os.getenv('VCToolsVersion', '')
    if vc_version.startswith('14.16.'):
        CXX_FLAGS = ['/sdl']
    else:
        CXX_FLAGS = ['/sdl', '/permissive-']
else:
    CXX_FLAGS = ['-g']

USE_NINJA = os.getenv('USE_NINJA') == '1'

# Plain C++ test extensions; torch_test_cpp_extension.ort builds the extension
# that exercises the ORT backend.
ext_modules = [
    CppExtension(
        'torch_test_cpp_extension.cpp', ['extension.cpp'],
        extra_compile_args=CXX_FLAGS),
    CppExtension(
        'torch_test_cpp_extension.ort', ['ort_extension.cpp'],
        extra_compile_args=CXX_FLAGS),
    CppExtension(
        'torch_test_cpp_extension.rng', ['rng_extension.cpp'],
        extra_compile_args=CXX_FLAGS),
]

# GPU test extensions are only built when a CUDA or ROCm toolkit is available.
if torch.cuda.is_available() and (CUDA_HOME is not None or ROCM_HOME is not None):
    extension = CUDAExtension(
        'torch_test_cpp_extension.cuda', [
            'cuda_extension.cpp',
            'cuda_extension_kernel.cu',
            'cuda_extension_kernel2.cu',
        ],
        extra_compile_args={'cxx': CXX_FLAGS,
                            'nvcc': ['-O2']})
    ext_modules.append(extension)

if torch.cuda.is_available() and (CUDA_HOME is not None or ROCM_HOME is not None):
    extension = CUDAExtension(
        'torch_test_cpp_extension.torch_library', [
            'torch_library.cu'
        ],
        extra_compile_args={'cxx': CXX_FLAGS,
                            'nvcc': ['-O2']})
    ext_modules.append(extension)

setup(
    name='torch_test_cpp_extension',
    packages=['torch_test_cpp_extension'],
    ext_modules=ext_modules,
    include_dirs='self_compiler_include_dirs_test',
    cmdclass={'build_ext': BuildExtension.with_options(use_ninja=USE_NINJA)})
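
A brief usage sketch for context, with the caveat that the exact build step used by PyTorch's test harness may differ: the package is built ahead of time, after which the compiled modules can be imported under the names declared in ext_modules above.

    # assumed build step, e.g. run from this directory:
    #   python setup.py install
    import torch_test_cpp_extension.cpp
    import torch_test_cpp_extension.ort
    import torch_test_cpp_extension.rng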