Commit Graph

862 Commits

Edward Z. Yang
a797ab9343 Rewrite AST to a new, more functional representation.
Previously, our AST was a DAG, where shared Nodes indicated a computation
should be reused.  This commit rewrites the IR into a new functional
representation which represents sharing explicitly using variable
bindings.
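
For example, where the old IR expressed reuse by sharing a single Node
between two consumers, the new IR binds the shared computation to a
variable once and refers to it by name.  In the textual notation used
below (operator names illustrative):

  %1 = Tanh %0
  %2 = Add %1, %1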

We offer a few justifications for this new style:

1. The new representation is not all that different from the
old one; it is about as easy to construct, and the lack of an
explicit graph doesn't hurt our ability to interpret the trace,
since we've chosen, as a matter of design, NOT to have the IR
participate in the actual execution of a graph.

2. The new let-binding representation has an implicit ordering,
which we can use to conveniently keep track of the order in which
the trace was recorded.  This automatically gives us a topsort,
and gives us an easier-to-read textual representation of our
IR:

  %14 = Embedding %11, %0, -1, None, 2, False, False
  %15 = Dropout %14, 0.2, True, False
  %16 = Index %12, 0
  %17 = Index %12, 1
  %18 = Index %13, 0
  %19 = Index %13, 1
  %20 = Index %15, 0
  %21 = Linear %20, %1, %3
  %22 = Linear %16, %2, %4

3. It moves us closer to a Futhark style language
(http://futhark-lang.org/publications/pldi17.pdf).

Major aspects of the diff:

- Node is replaced with Expr and Arg, a pair of mutually recursive
  structures which represent our new language.  In BNF, the language
  looks like this:

    a ::= c | %i
    e ::= %i, ... = e; e
        | PyOp a, ...
        | Ret %i, ...

  Technically, Ret is not actually a return (no control flow is involved);
  it just tuples up a series of tensors (identified by variables).  Each
  binding scopes over the expression that follows it.

  One important invariant is that locals are always tensors; they are
  never constants (this is asymmetric with Args, which may be either).
  A minimal Python sketch of these structures appears after this list.

- Arguments support Python constants.  This is an important piece because
  many operators take extra Python literals, like integers and tuples,
  which specify how the operator should behave.  Adding this was essential
  to getting word_language_model to work.

- As both Expr and Arg have multiple variants, there is new infrastructure
  for doing case analysis on the variants using ExprVisitor and ArgVisitor.
  The strategy here is adapted from WebAssembly's visitors, although we
  have generalized it to permit arbitrary argument forwarding, which is
  necessary to support tail-recursive visitor calls.  TCO is important
  because our interpreter may recurse arbitrarily deep into a stack of
  nested lets.  If users wish, they can also manually match on the type
  tag.  (The sketch after this list includes a minimal visitor.)

- Tracing is now turned on and off using _tracer_enter/_tracer_exit in
  torch._C.  _tracer_enter accepts a list of variables which are to be
  treated as arguments; _tracer_exit accepts the list of traced variables
  which should be returned when you reexecute the trace, and returns
  the trace expression which can be reexecuted.  GlobalTracingState
  is a global variable which tracks whether or not we are tracing.

- You use run_forward to execute a trace on some set of parameters.

- While tracing, each variable keeps track, via trace_local, of the
  name of its local in the IR.
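
To make the new structures concrete, here is a minimal Python sketch of
the Expr/Arg variants and a visitor over them.  It is illustrative only:
the real structures live in C++, and names like Constant, Local, Let,
PyOp, and Ret are chosen to mirror the BNF above, not the actual classes.

  from collections import namedtuple

  # Arg variants:   a ::= c | %i
  Constant = namedtuple('Constant', ['value'])   # a Python constant
  Local    = namedtuple('Local',    ['name'])    # a reference to a local, %i

  # Expr variants:  e ::= %i, ... = e; e  |  PyOp a, ...  |  Ret %i, ...
  Let  = namedtuple('Let',  ['names', 'rhs', 'body'])
  PyOp = namedtuple('PyOp', ['op', 'args'])
  Ret  = namedtuple('Ret',  ['names'])

  def fmt_arg(a):
      return a.name if isinstance(a, Local) else repr(a.value)

  class ExprVisitor(object):
      # Case analysis on the variant; extra arguments are forwarded so
      # that subclasses can keep recursive calls in tail position.
      def visit(self, e, *args):
          return getattr(self, 'visit_' + type(e).__name__)(e, *args)

  class Printer(ExprVisitor):
      # For brevity, assumes every right-hand side is a PyOp.
      def visit_Let(self, e, out):
          out.append('%s = %s %s' % (', '.join(e.names), e.rhs.op,
                                     ', '.join(fmt_arg(a) for a in e.rhs.args)))
          return self.visit(e.body, out)   # tail call into the body
      def visit_Ret(self, e, out):
          out.append('Ret %s' % ', '.join(e.names))
          return out

  trace = Let(['%1'], PyOp('Tanh', [Local('%0')]), Ret(['%1']))
  print('\n'.join(Printer().visit(trace, [])))   # %1 = Tanh %0 / Ret %1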

Here is a simple runner which leaks memory but can be used to JIT models:

  import torch.autograd.function as F
  import torch._C
  from torch.autograd import Variable

  def jit(model):
      import types
      real_forward = model.forward
      def forward(self, *args):
          def flatten(x):
              return tuple(F._iter_variables(x))
          if not hasattr(self, "saved_trace"):
              # First call: run the real forward under the tracer and
              # save the resulting trace expression.
              torch._C._tracer_enter(tuple(self.parameters()) + flatten(args))
              out = real_forward(*args)
              self.saved_trace = torch._C._tracer_exit(flatten(out))
              self.saved_outs = out
              return out
          else:
              # Later calls: reexecute the saved trace on the new inputs.
              flat_out = Variable._execution_engine.run_forward(
                  self.saved_trace, tuple(self.parameters()) + flatten(args))
              return F._unflatten(flat_out, self.saved_outs)
      # Bind the replacement forward (the reason for the types import).
      model.forward = types.MethodType(forward, model)
      return model
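
Hypothetical usage (assuming model is an nn.Module and x is a Variable
of the right shape):

  jit(model)       # patches model.forward in place
  out1 = model(x)  # first call records the trace
  out2 = model(x)  # later calls reexecute the saved trace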

Major problems:

- Sanity checking is spotty at best, especially when users pass in variables.

- The interpreter leaks tensor memory from the store.  When we add back
  def-use information, we should be able to deallocate tensors as soon as
  we know they are no longer needed.

- The interpreter needs to reach feature parity with the old execution engine.
  From there, we need to see if backwards can be subsumed as well.

- I still have no confidence that memory is managed correctly everywhere.
  This requires a close look.

- Rather than return an *open* expression as a trace, we should return a
  *lambda* instead, which knows how many formal parameters it requires.

- The IR is not introspectable from Python at the moment, but this is simply a
  matter of implementing all the binding code.

- The tracer is NOT reentrant (you can't trace while you're inside a trace).
  Furthermore, no sanity checking is done if you try to incorrectly reuse
  things from one trace in another.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
e1b7872fc2 Make it possible to access IR from Python.
Also, add a new trace_fn field to attach forward IR to Variables.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
c5faaf69d8 Initial IR representation for forward trace.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
hongyi-zhang
bf013f4c99 fix Python 2 gloo install (#2597) 2017-09-02 20:05:37 -04:00
Edward Z. Yang
a03e5cb409 Remind users to submodule update.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-08-30 16:14:38 -04:00
Sam Gross
966fdbd93a Add commands to re-build individual libraries. (#2506)
When working on a PyTorch dependency we often want to rebuild only that
dependency and the Python extension. You can now do that by running:

  python setup.py build_thc

to re-build only THC.
2017-08-23 07:16:05 -04:00
Thomas Viehmann
7c04f11d88 search for ldconfig in /sbin for nccl detection (#2276) 2017-08-03 05:32:21 +05:30
Zachary DeVito
43c944acbd Remove dead THPP code that has been replaced with ATen objects. (#2235)
THPP usage is now isolated in THD.
2017-07-29 08:07:41 +05:30
Trevor Killeen
c304d04fc6 Replace thpp::Tensor with ATen Tensor in autograd csrc (#2170) 2017-07-28 10:18:37 -04:00
Soumith Chintala
ea6f9a26b8 fix version number 2017-07-20 13:30:53 -04:00
Soumith Chintala
09abaa2189 make keepdim backcompat warnings emit in autograd as well (#2157) 2017-07-20 01:48:05 -04:00
Soumith Chintala
a5c2546c0f version bump 2017-07-19 12:34:43 -07:00
Soumith Chintala
b660303a16 Static linking against libstdc++ in Binary Build mode 2017-07-19 12:19:36 -04:00
Soumith Chintala
169ca67a4e Adding Spatial Transformers w/CuDNN support 2017-07-12 14:32:06 -04:00
Zach DeVito
ab3d85c410 add build commands for ATen 2017-07-11 10:35:03 -04:00
Trevor Killeen
6df23b418d mark tools as excluded in find_packages (#1915) 2017-06-29 13:49:56 -04:00
Trevor Killeen
cb4eaa9c5d TensorLib/Aten --> changes required in pytorch 2017-06-22 12:55:55 -04:00
gchanan
a64560c22e Remove flattening for torch.dot (#1781) 2017-06-16 02:15:33 +02:00
Edward Z. Yang
3ada9da808 Make csrc -Werror clean. (#1795)
Primary things I had to fix:

- Suppress _XOPEN_SOURCE warnings by ensuring that Python.h is included
  first, because it unconditionally defines this macro.

- Turn off strict aliasing, because Python 2 doesn't work with strict
  aliasing.

- Work around a setuptools bug, where it incorrectly passes
  -Wstrict-prototypes to C++ compilers (where this flag doesn't make
  any sense).

To compile csrc with -Werror, run `CFLAGS="-Werror" python setup.py build_ext`

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 20:18:09 -04:00
Adam Paszke
714351ff39 Officially enable process-group mode 2017-06-12 22:02:11 -04:00
Gregory Chanan
65b23f146e Add broadcasting support for copy_, simplify code generation by moving a lot of currently generated code to expand_utils. 2017-06-11 05:37:59 -04:00
Gregory Chanan
6a40acb4f0 Add Broadcast plugin. 2017-06-11 05:37:59 -04:00
Edward Z. Yang
ba690d5607 Add support for NVTX functions. (#1748) 2017-06-10 18:26:58 +02:00
Adam Paszke
8ea7c87c29 Improve init methods 2017-06-02 23:42:11 +02:00
Adam Paszke
702a2e3bc5 Make Variables not subclass Function anymore
Because of this, Variables can no longer appear in the graph.
Every usage of a leaf Variable will leave an AccumulateGrad
function that has no outputs, but modifies var.grad as a side
effect.
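
A small sketch of the user-visible behavior (assuming the usual imports,
torch and torch.autograd.Variable):

  x = Variable(torch.ones(2), requires_grad=True)
  y = (x * 2).sum()
  y.backward()  # AccumulateGrad writes into x.grad as a side effect
  # x.grad now holds [2., 2.]; x itself never enters the graph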
2017-05-01 16:44:56 -04:00
Adam Paszke
2ca787fcf4 Refactor attribute names in autograd 2017-05-01 16:44:56 -04:00
Soumith Chintala
2197e4c766 version bump 2017-05-01 15:54:52 -04:00
Adam Paszke
9169f60a84 Parallelize TensorMethods.cpp builds (#1400) 2017-04-29 09:07:21 -04:00
Soumith Chintala
24e5a9057e Revert "Parallelize TensorMethods.cpp builds (#1364)" (#1390)
This reverts commit 060048bcd8.
2017-04-28 07:59:40 -04:00
Adam Paszke
060048bcd8 Parallelize TensorMethods.cpp builds (#1364) 2017-04-28 07:45:21 -04:00
albanD
f0c7124420 Allow support for negative dimension argument for all functions 2017-04-06 16:37:00 -07:00
Soumith Chintala
1c391f6f93 bump version 2017-03-29 10:08:34 -04:00
Sam Gross
b9379cfab7 Use cuDNN and NCCL symbols from _C library (#1017)
This ensures that we use the same library at the C++ level and with
Python ctypes. It moves the searching for the correct library from
run-time to compile-time.
2017-03-16 16:10:17 -04:00
Low Kian Seong
2f5c215d34 Update setup.py (#981)
Adding `description` to `setup.py`
2017-03-11 12:14:07 -05:00
Sam Gross
15a9fbdedb Merge pull request #881 from colesbury/parallelize_backwards
Parallelize autograd backwards
2017-03-06 16:57:19 -05:00
soumith
76f7d749e4 bump version 2017-03-05 08:49:52 -08:00
Sam Gross
34ce58c909 Parallelize backwards 2017-03-03 11:26:00 -08:00
Adam Paszke
0db9c63300 Use library_dirs in setup.py 2017-02-20 23:28:31 -08:00
Adam Paszke
1bdc28161a Add torch.__version__ 2017-02-17 10:40:08 +05:30
Dr. Kashif Rasul
8d90ab2d9b compile with cudart (#737) 2017-02-14 06:40:35 +05:30
Sam Gross
bd5303010d Refactor autograd package to separate Python dependencies. (#662)
The core autograd Variable, Function, and Engine no longer depend on the
Python API. This lets us implement functions in C++. In the future, we
can also multithread the engine and release the GIL for most of the
non-Python backwards.
2017-02-13 16:00:16 -08:00
Sam Gross
f8fb25e0a2 Add generic bindings to THNN and THCUNN (#645)
Adds bindings using thpp::Tensor to THNN and THCUNN. This allows calling
into those APIs without knowing the concrete types of the tensor
arguments.
2017-01-31 13:23:02 -05:00
Adam Paszke
79232c24e2 Fixes after rebase 2017-01-31 01:58:09 +01:00
Janusz Marcinkiewicz
76520512e7 DataChannel tests rewrite (#42); DataChannel isend and irecv implementation (#44) 2017-01-31 01:58:09 +01:00
Adam Paszke
60d1852c7b Major improvements to master-worker mode
* Fixed all undefined symbol errors
* Implemented storage interface and THStorage class
* RPC improvements
* Code refactor
2017-01-31 01:58:09 +01:00
Filip Binkiewicz
9fc3c5e4d2 THDTensor constructors implemented + some minor fixes 2017-01-31 01:58:09 +01:00
Adam Paszke
55632d81d2 Add Python wrappers for process group mode 2017-01-31 01:58:09 +01:00
Adam Paszke
9c411513bf Patch distutils crash when linking with ccache 2017-01-28 00:28:33 +01:00
Luke Yeager
2ad967dbe4 Fix pep8 in setup.py with "autopep8 -i setup.py" 2017-01-25 22:23:22 -05:00
Sam Gross
c9db9c2317 Add C++ tensor library (from THD fork) (#526) 2017-01-20 15:23:34 -05:00
Sam Gross
9302f860ae Remove unused file TensorDocstrings.cpp (#481)
Tensor docstrings are created in _tensor_docs.py
2017-01-18 13:34:40 -05:00
soumith
57a2ccf777 PYTORCH_BUILD_VERSION to setup.py 2017-01-17 17:51:16 -08:00
soumith
e4812b3903 add binary version to setup.py 2017-01-17 14:14:01 -08:00
Sam Gross
fd92470e23 Add cuDNN bindings for BatchNorm (#421) 2017-01-07 15:35:24 -05:00
Zeming Lin
59d66e6963 Sparse Library (#333) 2017-01-05 00:43:41 +01:00
Soumith Chintala
6a2785aef7 remove link_prefix from linker arguments (#395) 2017-01-02 12:37:52 -05:00
Soumith Chintala
b650a45b9c fix botched merge in setup.py 2016-12-31 16:55:53 -05:00
Soumith Chintala
b5dc36f278 explicitly linking against v1 libs to avoid lua-torch conflicts (#386) 2016-12-31 10:30:36 -05:00
Adam Paszke
08d346df9c Print libraries used for building the extension 2016-12-15 00:47:55 +01:00
Adam Paszke
28f0cf6cee Add docstring support to cwrap (#295) 2016-12-11 23:25:14 +01:00
Adam Paszke
cb849524f3 Improve cuDNN detection at build time 2016-12-01 23:14:41 +01:00
Adam Paszke
ebc70f7919 Look for libcudart in default CUDA installation paths (#195) 2016-11-02 19:36:10 -04:00
Adam Paszke
ef557761dd Allow to not use all function outputs in autograd 2016-10-31 22:47:09 +01:00
Sam Gross
ad5fdef6ac Make every user-visible Tensor have a Storage (#179) 2016-10-31 12:12:22 -04:00
Sam Gross
f2d7e94948 Use torch.Size for Tensor sizes and tuple for strides
See issue #20

The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
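
A quick illustration of the distinction (sketch, not part of the commit):

  size = torch.Size([2, 3])
  torch.Tensor(size)       # interpreted as a size: an uninitialized 2x3 tensor
  torch.Tensor([2., 3.])   # interpreted as data: a 1-D tensor holding 2 and 3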
2016-10-28 19:37:09 +02:00
Sam Gross
ad2d413c0b Add C++ bindings for cuDNN (#167)
The Python ctypes bindings overhead was high enough that it slowed down
multi-gpu training when using 4+ Maxwell GPUs.
2016-10-26 19:51:48 -04:00
Soumith Chintala
140c65e52b fixing python setup.py clean 2016-10-21 23:20:02 -04:00
Sam Gross
79ead42ade Add CUDA Stream and Event API (#133) 2016-10-18 12:15:57 -04:00
Adam Paszke
0325e2f646 Major autograd refactor
Improves autograd performance by more than 2x and fixes a couple
of bugs. All core functions have been moved to C.
2016-10-13 17:17:49 -07:00
Adam Paszke
2acee24332 Add keyword argument support to most tensor functions 2016-10-13 12:32:04 -04:00
Adam Paszke
96f61bff30 Add LAPACK functions 2016-10-08 20:37:37 -07:00
Sam Gross
e8a5f00866 Auto GPU for CUNN (#71) 2016-09-30 14:04:53 -04:00
Adam Paszke
941cf4e63d Add ffi utils for user C extensions 2016-09-29 09:35:56 -07:00
Sam Gross
cb5d4e836f Lazy load CUDA and THNN modules (#64) 2016-09-28 19:29:53 -04:00
Adam Paszke
52ed57352a Free GIL in C functions 2016-09-27 15:22:20 -07:00
Soumith Chintala
1cf87e8a0b OSX + Python 2 build fixes 2016-09-25 19:26:13 -04:00
Adam Paszke
ddf1598ef8 Add a method for catching exceptions thrown in ctypes 2016-09-25 12:25:54 -07:00
Adam Paszke
06ab3f962f Refactor _C extension to export some utilities 2016-09-21 08:36:54 -07:00
soumith
65d4055366 adding static linking on binary builds 2016-09-13 10:34:13 -07:00
Sam Gross
1486d880b0 Add Storage.from_buffer
The from_buffer method is similar to numpy's frombuffer. It decodes a Python
buffer object into a Storage object. For byte and char storages, it
simply copies the bytes.
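
A minimal sketch of the API:

  s = torch.ByteStorage.from_buffer(b'abcd')  # copies the 4 bytes
  list(s)                                     # [97, 98, 99, 100]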
2016-09-07 15:32:33 -07:00
Soumith Chintala
4cffa2219a build fixes for OSX 2016-09-06 22:06:06 -04:00
Adam Paszke
f9d186d33a Add initial version of multiprocessing module 2016-08-31 19:46:08 -07:00
Adam Paszke
686e8d32e2 Add torch.save and torch.load 2016-08-23 07:51:55 -07:00
Adam Paszke
8d933cbfc4 Fixes for OS X 2016-08-22 22:45:35 -04:00
Adam Paszke
4c51a523c8 Add super basic CUDA autodetection 2016-08-19 14:23:53 -07:00
Adam Paszke
b06c000478 Fix <3.5 compatibility and travis configuration 2016-08-16 21:11:10 -07:00
Adam Paszke
207d6ae60d Override build commands in setup.py 2016-08-14 20:47:27 -07:00
Adam Paszke
1902bc0bfb Interface with numpy 2016-08-13 20:19:17 -07:00
Adam Paszke
9fff8e7392 Fixes for changes in libs 2016-08-12 22:02:57 -07:00
Adam Paszke
ef7364b80e Fix Python 2.7 compatibility 2016-08-12 18:26:10 -07:00
Adam Paszke
12bed8dc0d Add CUDA device selection 2016-08-12 07:46:46 -07:00
Adam Paszke
e9f9fd3727 Major refactor 2016-08-10 09:24:53 -07:00
Adam Paszke
652a31b714 Add build scripts for libraries 2016-08-04 14:12:31 -07:00
Adam Paszke
6df0ae5d35 Add cunn 2016-08-02 09:20:18 -07:00
Adam Paszke
2f342af22f Move optim to legacy 2016-08-01 12:01:46 -04:00
Adam Paszke
ae40bcd58c Base for nn conversion 2016-07-22 22:21:29 -04:00
Adam Paszke
554a1d8336 Add optim 2016-07-21 16:42:06 -04:00
Adam Paszke
bc7bd7a8b3 Add unit tests and fix detected bugs 2016-07-21 13:46:59 -04:00
Adam Paszke
3a44259b32 Add support for CUDA 2016-07-19 10:45:59 -04:00
Adam Paszke
cf90bee8af Enable parallel builds 2016-07-18 23:56:50 -04:00
Adam Paszke
3cec305524 Restructure python code 2016-06-23 22:55:05 +02:00
Adam Paszke
077bfbde03 Add all constructors for Tensor and Storage 2016-06-19 23:45:41 +02:00
Adam Paszke
4f66ea42af Add random-related Tensor methods 2016-06-18 21:36:10 +02:00
Soumith Chintala
5ee3358a92 python 2 support 2016-06-08 19:14:57 -04:00
Adam Paszke
449ac4ca2a Add torch.* functions 2016-05-09 19:14:40 +02:00
Adam Paszke
7567a0bb13 Add cwrap 2016-05-07 15:28:13 +02:00
Adam Paszke
c3b3df9f22 Add utilities and cleanup Tensor wrappers 2016-05-06 15:04:57 +02:00
Adam Paszke
842e1b6358 Add exception handling 2016-05-05 20:58:13 +02:00
Adam Paszke
f4b3554d9e Refactor generic/Tensor.c and add Short objects 2016-05-03 21:20:54 +02:00
Adam Paszke
690d470c71 Add Storage.py template 2016-05-03 15:13:12 +02:00
Adam Paszke
b0d90e3688 Add templated __init__ 2016-05-02 23:54:59 +02:00
Adam Paszke
731041cb6a Initial commit 2016-05-02 23:19:57 +02:00