* Split libATen.so into libATen_cpu.so and libATen_cuda.so
Previously, ATen could be built with either CPU-only support or
CPU/CUDA support, but only via a compile-time flag, requiring
two separate builds. This means that if you have a program which
indirectly uses both a CPU-only build of ATen and a CPU/CUDA build of
ATen, you're going to have a bad time. And you might want a CPU-only
build of ATen, because it is 15M (versus the 300M of a CUDA build).
This commit splits libATen.so into two libraries, libATen_cpu.so and
libATen_cuda.so, so that it's not necessary to do a full rebuild to get
CPU-only support; instead, if you link against libATen_cpu.so only, you
are CPU-only, and if you additionally link/dlopen libATen_cuda.so,
this enables CUDA support. This makes ATen's dynamic library
structure more similar to Caffe2's. libATen.so is no more
(this is BC-BREAKING).
The general principle for how this works is that we introduce
a *hooks* interface, which adds a dynamic-dispatch indirection
between the call site and the implementation site of CUDA functionality,
mediated by a static-initialization registry. This means that we can continue
to, for example, lazily initialize CUDA from Context (a core, CPU class) without
having a direct dependency on the CUDA bits. Instead, we look up
in the registry whether, e.g., the CUDA hooks have been loaded (this loading
happens at static initialization time), and if they have been, we dynamically
dispatch to that class. We similarly use the hooks interface to handle
Variable registration.
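To make the shape of this concrete, here is a minimal sketch of the pattern, assuming simplified names (getCUDAHooks, cudaHooksRegistry, and the registry layout are illustrative; the real code uses the Registry utility imported from Caffe2, discussed further below): the CPU library defines an abstract hooks interface plus a registry, and the CUDA library registers a concrete implementation from a static initializer when it is loaded.

// Sketch only, not the actual ATen code. In libATen_cpu.so:
#include <functional>
#include <memory>
#include <stdexcept>
#include <string>
#include <unordered_map>

struct CUDAHooksArgs {};  // nullary args struct; keeps the registry's var-args non-empty

struct CUDAHooksInterface {
  virtual ~CUDAHooksInterface() = default;
  // Defaults are the "no CUDA available" behavior.
  virtual void initCUDA() const { throw std::runtime_error("ATen was loaded without CUDA support"); }
  virtual int getNumGPUs() const { return 0; }
};

// Stand-in for the Caffe2-style registry: name -> factory.
using CUDAHooksFactory = std::function<std::unique_ptr<CUDAHooksInterface>(CUDAHooksArgs)>;
std::unordered_map<std::string, CUDAHooksFactory>& cudaHooksRegistry() {
  static std::unordered_map<std::string, CUDAHooksFactory> registry;
  return registry;
}

// Call sites in the CPU library go through this accessor and never name a
// CUDA symbol directly; the result is latched the first time it is used.
const CUDAHooksInterface& getCUDAHooks() {
  static std::unique_ptr<CUDAHooksInterface> hooks = [] {
    auto it = cudaHooksRegistry().find("CUDA");
    if (it != cudaHooksRegistry().end()) return it->second(CUDAHooksArgs{});
    return std::unique_ptr<CUDAHooksInterface>(new CUDAHooksInterface());
  }();
  return *hooks;
}

// In libATen_cuda.so: a static initializer runs when the library is loaded
// (or dlopened) and registers the concrete implementation.
struct CUDAHooks : CUDAHooksInterface {
  void initCUDA() const override { /* set up THCState, generators, ... */ }
  int getNumGPUs() const override { /* cudaGetDeviceCount(&n) */ return 1; }
};
static const bool cuda_hooks_registered = [] {
  cudaHooksRegistry()["CUDA"] = [](CUDAHooksArgs) {
    return std::unique_ptr<CUDAHooksInterface>(new CUDAHooks());
  };
  return true;
}();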
We introduce a new invariant: if the backend of a type has not
been initialized (e.g., its library has not been dlopened; for
CUDA, this also includes CUDA initialization), then the Type
pointers in the context registry are NULL. If you access the
registry directly, you must maintain this invariant.
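A minimal sketch of how that invariant gets consumed (the enum and field names here are made up for illustration, not the actual Context code): the accessor checks the slot, asks the hooks to bring the backend up if the slot is still null, and only errors out if the backend really is unavailable.

// Sketch only; not the actual Context code.
#include <stdexcept>

enum class Backend { CPU, CUDA, NumBackends };
enum class ScalarType { Float, Double, NumScalarTypes };
struct Type;  // opaque here

struct ContextSketch {
  // Slots stay NULL until the owning backend has been initialized.
  Type* type_registry[static_cast<int>(Backend::NumBackends)]
                     [static_cast<int>(ScalarType::NumScalarTypes)] = {};

  Type& getType(Backend b, ScalarType s) {
    Type*& slot = type_registry[static_cast<int>(b)][static_cast<int>(s)];
    if (slot == nullptr && b == Backend::CUDA) {
      // Here the real code lazily brings CUDA up through the hooks (see the
      // sketch above); if libATen_cuda.so registered itself, that fills in
      // the CUDA slots of the registry.
      // getCUDAHooks().initCUDA();
    }
    if (slot == nullptr) {
      throw std::runtime_error("backend type not available (library not loaded?)");
    }
    return *slot;
  }
};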
There are a few potholes along the way. I document them here:
- Previously, PyTorch maintained a separate registry for variable
types, because no provision for them was made in the Context's
type_registry. Now that we have the hooks mechanism, we can easily
have PyTorch register variables in the main registry. The code
has been refactored accordingly.
- There is a subtle ordering issue between Variable and CUDA.
We permit libATen_cuda.so and PyTorch to be loaded in either
order (in practice, CUDA is always loaded "after" PyTorch, because
it is lazily initialized). This means that, when the CUDA types are
loaded, we must subsequently also initialize their Variable equivalents.
Appropriate hooks were added to VariableHooks to make this possible;
similarly, getVariableHooks() is not referentially transparent, and
will change behavior after Variables are loaded. (This is different
from CUDAHooks, which is "burned in" after you try to initialize CUDA.)
- The cmake is adjusted to separate dependencies into CPU dependencies
and CUDA dependencies. The generator scripts are adjusted to generate each
file either as a CUDA file (cuda_file_manager) or as a CPU file (file_manager).
- I changed all native functions which were CUDA-only (the cudnn functions)
to have dispatches for CUDA only (making it permissible to not specify
all dispatch options). This uncovered a bug in how we were handling
native functions which dispatch on a Type argument; I introduced a new
self_ty keyword to handle this case. I'm not 100% happy about it,
but it fixed my problem.
This also exposed the fact that set_history incompletely handles
heterogeneous return tuples combining Tensor and TensorList. I
swapped this codegen to use flatten() (at the possible cost of
a slight perf regression, since we're allocating another vector now
in this code path).
- thc_state is no longer a public member of Context; use getTHCState() instead
- This PR comes with Registry from Caffe2, for handling static initialization.
I needed to make a bunch of fixes to Registry to make it more portable.
- No more ##__VA_ARGS__ token pasting; instead, it is mandatory to pass at
least one argument to the var-args. CUDAHooks and VariableHooks pass a nullary
struct CUDAHooksArgs/VariableHooksArgs to solve the problem. We must get rid of
token pasting because it does not work with MSVC.
- It seems MSVC is not willing to generate code for constructors of template
classes at use sites which cross DLL boundaries, so we explicitly instantiate
the class to get around the problem (see the explicit-instantiation sketch
after this list). This involved tweaks to the boilerplate-generating macros,
and also required us to shuffle around namespaces a bit, because you can't
specialize a template unless you are in the same namespace as the template.
- Insertion of AT_API in appropriate places where the registry must be exported
- We have a general problem, which is that on recent Ubuntu distributions,
--as-needed is enabled for shared libraries, so a linked library whose symbols
are never referenced directly can get dropped (cc @apaszke, who was worrying
about this in #7160; see also #7160 (comment)). For now, I've hacked this up
in the PR to pass -Wl,--no-as-needed in all of the spots necessary to
make CI work, but a more sustainable solution is to attempt to dlopen
libATen_cuda.so when CUDA functionality is requested.
- The JIT tests somehow manage to try to touch CUDA without loading libATen_cuda.so. So
we pass -Wl,--no-as-needed when linking libATen_cuda.so to _C.so
- There is a very subtle linking issue with LAPACK, which is solved by making sure libATen_cuda.so links against LAPACK. There's a comment in aten/src/ATen/CMakeLists.txt about this, as well as a follow-up bug at #7353
- autogradpp used AT_CUDA_ENABLED directly. We've expunged these uses and added
a few more things to CUDAHooks (getNumGPUs)
- Added manualSeedAll to Generator so that we can invoke it polymorphically (it
only does something different for CUDAGenerator)
- There's a new cuda/CUDAConfig.h header for CUDA-only ifdef macros (AT_CUDNN_ENABLED, most prominently)
- CUDAHooks/VariableHooks structs live in the at namespace because Registry's
namespace support is not good enough to handle it otherwise (see the Registry
changes above)
- There's some modest moving around of native functions in ReduceOps and
UnaryOps to get the CUDA-only function implementations into separate files, so
they are only compiled into libATen_cuda.so. sspaddmm needed a separate CUDA
function due to object linkage boundaries.
- Some direct uses of native functions in CUDA code had to go away, since these
functions are not exported, so you have to go through the dispatcher
(at::native::empty_like becomes at::empty_like; see the dispatcher sketch after this list)
- Code in THC/THCS/THCUNN now properly uses the THC_API macro instead of TH_API
(which matters now that TH and THC are not in the same library)
- Added code debt in torch/_thnn/utils.py and other THNN parsing code to handle
both TH_API and THC_API
- TensorUtils.h is now properly exported with AT_API
- Dead uses of TH_EXPORTS and co expunged; we now use ATen_cpu_exports and
ATen_cuda_exports (new, in ATenCUDAGeneral.h) consistently
- Fix some incorrect type annotations on _cudnn_rnn_backward, where we didn't
declare a type as possibly undefined when we should have. We didn't catch this
previously because optional annotations are not tested on "pass-through" native
ATen ops (which don't have dispatch). Upstream issue at #7316
- There's a new cmake macro aten_compile_options for applying all of our
per-target compile time options. We use this on the cpu and cuda libraries.
- test/test_cpp_extensions.py can be run directly by invoking it with Python,
assuming you've set up your PYTHONPATH correctly
- type_from_string does some new funny business to only query for all valid CUDA
types (which causes CUDA initialization) when we see "torch.cuda." in the
requested string
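Two of the bullets above are easier to see in code; both sketches below use illustrative names rather than the exact ones in the tree. First, the MSVC/DLL workaround amounts to explicitly instantiating, and exporting, the registry's registration template in the library that owns it (EXPORTING_LIBRARY_BUILD is a made-up stand-in for whatever build definition the exporting library uses):

// Sketch of the explicit-instantiation workaround; names are illustrative.
#if defined(_WIN32) && defined(EXPORTING_LIBRARY_BUILD)
#define AT_API __declspec(dllexport)
#elif defined(_WIN32)
#define AT_API __declspec(dllimport)
#else
#define AT_API
#endif

struct CUDAHooksInterface;   // the registered object type
struct CUDAHooksArgs {};     // the nullary constructor-argument struct

template <class ObjectType, class... Args>
class Registerer {
 public:
  // In the real Registry the constructor inserts a factory keyed by name;
  // here it is a placeholder so the instantiation below has something to emit.
  explicit Registerer(const char* /*key*/) {}
};

// In exactly one translation unit of the exporting library: force MSVC to
// generate (and export) the template's code so use sites in other DLLs link.
template class AT_API Registerer<CUDAHooksInterface, CUDAHooksArgs>;

Second, routing CUDA-side code through the dispatcher instead of the no-longer-exported at::native:: symbols is a one-line change at each call site; a hypothetical wrapper might look like this:

#include <ATen/ATen.h>

at::Tensor make_buffer_like(const at::Tensor& self) {
  // Before the split this could call at::native::empty_like(self) directly;
  // that symbol is not exported across the library boundary any more, so we
  // go through the public dispatcher entry point instead.
  return at::empty_like(self);
}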
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Last mile libtorch fixes
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* pedantic fix
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
119 lines
3.5 KiB
Python
import os
import itertools
import importlib

THNN_H_PATH = os.path.join(os.path.dirname(__file__), '..', 'lib', 'THNN.h')
THCUNN_H_PATH = os.path.join(os.path.dirname(__file__), '..', 'lib', 'THCUNN.h')


def _unpickle_backend(backend_name):
    import torch._thnn
    return torch._thnn.type2backend[backend_name]


class THNNBackendBase(object):

    def __init__(self):
        self.methods = {}

    def __getattr__(self, name):
        method = self.methods.get(name, None)
        if method is None:
            raise NotImplementedError
        return method

    def register_method(self, name, ctypes_fn):
        self.methods[name] = ctypes_fn

    @property
    def library_state(self):
        return 0

    def __reduce__(self):
        return (_unpickle_backend, (type(self).__name__,))


class Function(object):

    def __init__(self, name):
        self.name = name
        self.arguments = []

    def add_argument(self, arg):
        assert isinstance(arg, Argument)
        self.arguments.append(arg)

    def __repr__(self):
        return self.name + '(' + ', '.join(map(lambda a: a.__repr__(), self.arguments)) + ')'


class Argument(object):

    def __init__(self, _type, name, is_optional):
        self.type = _type
        self.name = name
        self.is_optional = is_optional

    def __repr__(self):
        return self.type + ' ' + self.name


def parse_header(path):
    with open(path, 'r') as f:
        lines = f.read().split('\n')

    # Remove empty lines and preprocessor directives
    lines = filter(lambda l: l and not l.startswith('#'), lines)
    # Remove line comments
    lines = map(lambda l: l.partition('//'), lines)
    # Select line and comment part
    lines = map(lambda l: (l[0].strip(), l[2].strip()), lines)
    # Remove trailing special signs
    lines = map(lambda l: (l[0].rstrip(');').rstrip(','), l[1]), lines)
    # Split arguments
    lines = map(lambda l: (l[0].split(','), l[1]), lines)
    # Flatten lines
    new_lines = []
    for l, c in lines:
        for split in l:
            new_lines.append((split, c))
    lines = new_lines
    del new_lines
    # Remove unnecessary whitespace
    lines = map(lambda l: (l[0].strip(), l[1]), lines)
    # Remove empty lines
    lines = filter(lambda l: l[0], lines)
    generic_functions = []
    for l, c in lines:
        if l.startswith('TH_API void THNN_'):
            fn_name = l.lstrip('TH_API void THNN_')
            if fn_name[0] == '(' and fn_name[-2] == ')':
                fn_name = fn_name[1:-2]
            else:
                fn_name = fn_name[:-1]
            generic_functions.append(Function(fn_name))
        elif l.startswith('THC_API void THNN_'):
            fn_name = l.lstrip('THC_API void THNN_')
            if fn_name[0] == '(' and fn_name[-2] == ')':
                fn_name = fn_name[1:-2]
            else:
                fn_name = fn_name[:-1]
            generic_functions.append(Function(fn_name))
        elif l:
            t, name = l.split()
            if '*' in name:
                t = t + '*'
                name = name[1:]
            generic_functions[-1].add_argument(Argument(t, name, '[OPTIONAL]' in c))
    return generic_functions


def load_backend(t, lib, generic_functions, mixins=tuple()):
    backend_name = 'THNN{}Backend'.format(t)
    backend = type(backend_name, mixins + (THNNBackendBase,), {})()
    for function in generic_functions:
        full_fn_name = '{}{}'.format(t, function.name)
        fn = getattr(lib, full_fn_name)
        backend.register_method(function.name, fn)
    return backend