# csrc

The `csrc` directory contains all of the code concerned with integration with Python. This is in contrast to `lib`, which contains the Torch libraries that are Python agnostic. `csrc` depends on `lib`, but not vice versa.

There are a number of utilities for easing integration with Python which are worth knowing about; we briefly describe them here. But first, the most important gotchas:

- **DO NOT** forget to take out the GIL with `AutoGIL` before calling the Python API or bringing a `THPObjectPtr` into scope.

- Make sure you include `Python.h` first in your header files, before any system headers; otherwise, you will get an error like `error: "_XOPEN_SOURCE" redefined`. If you pay attention to warnings, you will see where you need to do this; a minimal sketch of the required include order follows this list.
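
For example, a header in this directory can satisfy this by pulling in the Python headers before anything else (a minimal sketch with a hypothetical header name; `python_headers.h` in this directory wraps `Python.h`):

```cpp
// my_header.h -- hypothetical example header.
#pragma once

// Python headers must come first, before any system headers;
// otherwise _XOPEN_SOURCE may be redefined.
#include <torch/csrc/python_headers.h>

// System and library headers come afterwards.
#include <string>
#include <vector>
```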

## Notes

### Note [Storage is not nullptr]

Historically, Torch supported nullptr storage, as a minor optimization to avoid having to allocate a storage object when it would be empty. However, this is actually a confusing special case to deal with, so by and large, PyTorch assumes that, in fact, storage is never nullptr.

One case where this assumption matters is when tracking the CUDA device a tensor is stored on: this information is stored solely in the storage, so if a storage is nullptr, we lose this information.

Although storage is never nullptr, the data field of `THStorage` may be nullptr. This mostly occurs when we want to pre-allocate an output tensor struct, but then have it be resized and filled with data by some operator: there's no point in allocating data for it in this case!
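
As a rough illustration of the invariant (a hypothetical struct, not the actual `THStorage` layout):

```cpp
#include <cstddef>

// Hypothetical sketch, not the real THStorage definition.
struct StorageSketch {
  void* data;           // may be nullptr, e.g. for a pre-allocated output
  std::ptrdiff_t size;  // number of elements currently allocated
  int device;           // device info lives here, which is why the storage
                        // object itself must never be nullptr
};
```

A tensor always points at a valid storage object, so fields like the device are always reachable, even while `data` is still nullptr.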

## Files

### Exceptions.h

Frequently when working with the Python API, you may call a function which returns an error. In this case, we want to return directly to the Python interpreter, so that this exception can be propagated accordingly; however, because the Python API is C-based, what actually will happen is it will return control to whatever C++ code called it. Similarly, if we raise a C++ exception, prior to returning to the Python interpreter, we must set the Python error flags, so that it turns into a Python exception.

`Exceptions.h` defines some useful helpers: `HANDLE_TH_ERRORS`, `END_HANDLE_TH_ERRORS` and an exception class `python_error`. You call them like this:

```cpp
// Entry point from Python interpreter
PyObject* run() {
  HANDLE_TH_ERRORS
  ...
  if (!x) throw python_error();
  ...
  END_HANDLE_TH_ERRORS
}
```

The `HANDLE_TH_ERRORS` macro will catch all exceptions and convert them into an appropriate Python signal. `python_error` is a special exception which doesn't contain any info; instead, it says, "An error occurred in the Python API; if you return to the interpreter, Python will raise that exception, nothing else needs to be done."
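
Conceptually, the pair of macros wraps the function body in a try/catch that translates C++ exceptions into Python error state. A simplified sketch (not the actual definitions in `Exceptions.h`) might look like:

```cpp
#include <Python.h>
#include <exception>

// Stand-in for the real python_error from Exceptions.h.
struct python_error : std::exception {};

// Simplified sketch of what the macros could expand to.
#define HANDLE_TH_ERRORS try {
#define END_HANDLE_TH_ERRORS                        \
  } catch (python_error&) {                         \
    /* Python error flags are already set; just */  \
    /* hand control back to the interpreter. */     \
    return nullptr;                                 \
  } catch (std::exception& e) {                     \
    PyErr_SetString(PyExc_RuntimeError, e.what());  \
    return nullptr;                                 \
  }
```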

### utils/auto_gil.h

Whenever you make any calls to the Python API, you must have taken out the Python GIL, as none of these calls are thread safe. `AutoGIL` is a RAII struct which handles taking and releasing the GIL. Use it like this:

```cpp
void iWantToUsePython() {
  AutoGIL gil;
  ...
}
```

In general, the compiler will NOT warn you if you use Python functionality without taking out the GIL, so DO NOT FORGET this call.
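
For reference, a RAII GIL guard can be as small as the following sketch (using CPython's `PyGILState_*` API; see `utils/auto_gil.h` for the real definition):

```cpp
#include <Python.h>

// Minimal sketch of a RAII GIL guard in the spirit of AutoGIL.
struct GILGuardSketch {
  GILGuardSketch() : state_(PyGILState_Ensure()) {}  // take out the GIL
  ~GILGuardSketch() { PyGILState_Release(state_); }  // release on scope exit
  GILGuardSketch(const GILGuardSketch&) = delete;    // non-copyable
  GILGuardSketch& operator=(const GILGuardSketch&) = delete;

 private:
  PyGILState_STATE state_;
};
```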

### utils/object_ptr.h

`THPPointer` is a smart pointer class analogous to `std::shared_ptr`, but which is overloaded to handle the reference-counting schemes of various objects which are not based on `shared_ptr`. The most important overloads are listed below (an illustrative sketch follows the list):

- `PyObject` (so important we've aliased it as `THPObjectPtr`), which hooks into Python reference counting. (By the way, that means you **MUST** take out the GIL before bringing one of these into scope!)

- The various TH tensor and storage types (e.g., `THTensor`), which hook into TH's reference counting. (TH's reference counting **IS** thread safe; no locks necessary.)
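
To make the ownership model concrete, here is an illustrative owner for `PyObject` in the spirit of `THPObjectPtr` (a simplified sketch, not the real template from `utils/object_ptr.h`):

```cpp
#include <Python.h>

// Simplified sketch of an owning PyObject pointer, in the spirit of
// THPObjectPtr. NOTE: Py_XDECREF touches Python refcounts, so the GIL
// must be held whenever one of these is constructed or destroyed.
struct PyObjectPtrSketch {
  explicit PyObjectPtrSketch(PyObject* p = nullptr) : ptr_(p) {}
  ~PyObjectPtrSketch() { Py_XDECREF(ptr_); }
  PyObjectPtrSketch(const PyObjectPtrSketch&) = delete;
  PyObjectPtrSketch& operator=(const PyObjectPtrSketch&) = delete;

  PyObject* get() const { return ptr_; }
  PyObject* release() {  // give up ownership without decrefing
    PyObject* p = ptr_;
    ptr_ = nullptr;
    return p;
  }

 private:
  PyObject* ptr_;
};
```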