Commit Graph

23 Commits

Author SHA1 Message Date
Elias Ellison
59f8e8ada7 First step at adding exceptions (#12789)
Summary:
This is a first step towards adding exceptions. We need minimal support in order to begin converting the torch library to weak script mode (which is the main goal here).

Some limitations (that are documented in the tests & compiler):
1. Cannot assign exceptions to variables
2. Any name after `raise` is treated as a valid Exception
3. No control flow analysis yet. In the example below, `a` will be undefined:

if True:
     a = 1
else:
     raise Exception("Hi")
return a
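
A minimal sketch of the kind of script this enables, using the present-day `torch.jit.script` decorator (the function name is illustrative, and the limitations above still apply):

```python
import torch

@torch.jit.script
def check_nonnegative(x: int) -> int:
    # raise is now accepted in script; the exception cannot be bound to a
    # variable, and the name after `raise` is not checked against real types.
    if x < 0:
        raise Exception("expected a non-negative value")
    return x
```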
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12789

Differential Revision: D12848936

Pulled By: eellison

fbshipit-source-id: 1f60ceef2381040486123ec797e97d65b074862d
2018-10-30 20:25:50 -07:00
Elias Ellison
fed91f873f (Very small) allow trailing commas in assign or tuples (#11723)
Summary:
Allow trailing commas in assign statements or tuples, which also allows single-element tuples.
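
A hedged sketch of the single-element-tuple case this permits (illustrative name, current `torch.jit.script` API):

```python
import torch

@torch.jit.script
def make_singleton(x: int):
    # The trailing comma now parses; this is how a one-element tuple is written.
    t = (x,)
    return t
```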
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11723

Differential Revision: D10052162

Pulled By: eellison

fbshipit-source-id: 344d908a3ad942a23ebd9f341794bc9734226aa8
2018-10-01 10:10:13 -07:00
James Reed
278e304c18 Implement elif in string frontend (#11667)
Summary:
Closes #11625
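
A hedged sketch of `elif` in the string frontend, exercised through `torch.jit.CompilationUnit` (the function defined here is illustrative):

```python
import torch

cu = torch.jit.CompilationUnit("""
def sign(x: int) -> int:
    if x > 0:
        return 1
    elif x < 0:
        return -1
    else:
        return 0
""")

print(cu.sign(-5))  # -1
```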
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11667

Differential Revision: D9828145

Pulled By: jamesr66a

fbshipit-source-id: c72dc41cb310a4211b4e4c6b33f7e2c1fb3581a0
2018-09-14 10:09:46 -07:00
Wanchao Liang
739e6af869 Add remainder % to the jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11557
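
A hedged sketch of the operator in script (illustrative function name):

```python
import torch

@torch.jit.script
def mod7(x: int) -> int:
    # The % operator now lowers to a remainder op in the JIT.
    return x % 7
```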

Reviewed By: apaszke

Differential Revision: D9784642

Pulled By: wanchaol

fbshipit-source-id: b7c60c3e9534555c9d7db83769965b3f2f277cdf
2018-09-12 12:40:38 -07:00
James Reed
32bb4040dd Unified type annotation parsing for script frontends (#10279)
Summary:
After this, all combinations of {String frontend, Python AST Frontend} x {Python 3-style type annotations, MyPy-style type comments} x {Script method, Script function} should properly accept type annotations.
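
Hedged sketches of the two annotation styles on script functions, using the current decorator API (function names are illustrative):

```python
import torch

@torch.jit.script
def add_annotated(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Python 3-style annotations
    return x + y

@torch.jit.script
def add_commented(x, y):
    # type: (Tensor, Tensor) -> Tensor
    # MyPy-style type comment
    return x + y
```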

Possible TODOs:
- Clean up the functions marked HACK
- Clean up the Subscript tree-view to better match the Python AST versions
- Can we use this for Python functions? That's the only place annotations.get_signature() is still needed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10279

Differential Revision: D9319726

Pulled By: jamesr66a

fbshipit-source-id: b13f7d4f066b0283d4fc1421a1abb9305c3b28fa
2018-08-14 18:13:15 -07:00
Elias Ellison
170d29769b Strings lexing, parsing, implementation in print (#9324)
Summary:
This PR adds strings to the AST and implements them for print statements. Strings are lifted as attributes to the print node. They must be arguments to print itself, not arguments to an object that is passed to print. If they are encountered elsewhere, an NYI exception will be thrown.
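
A hedged sketch of the supported case, a string literal passed directly to print (illustrative name):

```python
import torch

@torch.jit.script
def show(x):
    # The string literal is an argument to print itself, so it is allowed.
    print("value of x:", x)
    return x
```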
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9324

Reviewed By: jramseyer

Differential Revision: D8807128

Pulled By: eellison

fbshipit-source-id: 984401ff458ed18d473c6d1bd86750e56c77d078
2018-08-02 11:09:03 -07:00
Michael Suo
191482fa39 Distinguish TupleLiteral from ListLiteral (#10128)
Summary:
Previously, the parser was emitting list literals for tuples, but the IR was representing list literals internally with TupleTypes.

For implementing most list operations, I think it will be helpful to distinguish between lists (dynamic size, homogeneous types) and tuples (fixed arity, heterogeneous types).

This diff modifies the parser logic to emit tuple literals. This frees us to represent lists as ListType in the IR, while still properly mapping tuple literals to TupleTypes.

A following diff will actually switch over list literals to emit ListTypes.
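
A hedged sketch of the distinction in script source (current API; the list side assumes the follow-up diff mentioned above):

```python
import torch
from typing import List, Tuple

@torch.jit.script
def literals(x: int):
    pair: Tuple[int, float] = (x, 1.5)   # tuple literal: fixed arity, mixed element types
    xs: List[int] = [x, x + 1, x + 2]    # list literal: dynamic size, one element type
    return pair, xs
```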
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10128

Differential Revision: D9121305

Pulled By: michaelsuo

fbshipit-source-id: e0cad07ae8bac680f7f8113d10e5129d5a1a511d
2018-08-01 19:18:31 -07:00
Jenny Ramseyer
4ed5b9267c #8518 Support for empty tuples (#10027)
Summary:
Fixing #8518

Sorry for the pile of commits; I forgot to rebase.
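
A hedged, minimal sketch of the new case (illustrative name, current decorator API):

```python
import torch

@torch.jit.script
def empty_tuple():
    # An empty tuple literal is now accepted by the script compiler.
    return ()
```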
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10027

Reviewed By: ezyang

Differential Revision: D9070028

Pulled By: jramseyer

fbshipit-source-id: 49729c9755ab8a586711e9f6d6a574f3035a7e75
2018-08-01 16:10:00 -07:00
Wanchao Liang
47c1badf90 Fix the clamp special case and gradient problem on None, add None to JIT (#9596)
Summary:
Supersedes #8925

This PR fixes #8502: it fixes the gradient problem for clamp when passing None to the function, and adds support for NoneLiteral and NoneType in script to enable the clamp tests. Now we can have corner cases like:

```python
@torch.jit.script
def func():
    x = torch.randn(3, 3, requires_grad=True)
    y = torch.clamp(x, None, 0) # max = 0
    y = torch.clamp(x, min=None, max=0)
```

In both the JIT and ATen, we use Scalar(NAN) as a sentinel value when None is passed to clamp; this is the current way we support the None type in the JIT and solve the gradient problem when the user explicitly passes None into clamp.

On the JIT side, we create a tensor(NAN) and an undefined tensor if we encounter None when matching the function schema, and later in the interpreter it is translated to Scalar(NAN) if needed.

Ideally we would not need clamp_min and clamp_max in ATen native/Autograd and could support only clamp after this change, but since a bunch of other operators (e.g. Activation.cpp, Loss.cpp) use clamp_min in several places, those functions remain available; all Python invocations, however, will only call clamp instead of clamp_min/max (which calls the underlying th_max/th_min).
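
A hedged eager-mode check of the behavior described above (not from the PR; it only illustrates that a None bound routes to the one-sided clamp and that gradients flow):

```python
import torch

x = torch.randn(3, 3, requires_grad=True)

# A None bound should behave like the corresponding one-sided clamp.
y1 = torch.clamp(x, min=None, max=0.0)
y2 = torch.clamp_max(x, 0.0)
print(torch.equal(y1, y2))  # expected: True

# Gradients flow through clamp even with a None bound.
y1.sum().backward()
print(x.grad is not None)   # expected: True
```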

zdevito jamesr66a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9596

Reviewed By: zdevito

Differential Revision: D8940839

Pulled By: wanchaol

fbshipit-source-id: c543a867b82e0ab8c99384773b173fdde2605d28
2018-07-27 22:54:33 -07:00
Zachary DeVito
a949245a86 Switch interpreter to use IValue's primitive int/floats (#9718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9718

This patch switches the interpreter to use IValue's primitive numbers rather than tensors for computing on integers and floats. In addition to preparing the interpreter for first-class support of other types, this cleans up the handling of primitive numbers, making it possible to just use the normal operator-overloading dispatch to find the right implementation for numbers. As a result of this change, a lot of other functionality needed to be updated, since this was the first time we used non-tensors in a lot of places in the code base.

Notes:
* Fixes code_template.py so that multi-line strings are indented correctly when used on a standalone line
* Cast operators (`int(x)`) are now functional (an explicit-cast sketch appears after the notes below). Some tests have additional conversions to integers because we no longer allow implicit tensor -> integer conversions, following the same convention as in Python
* prim::ListConstruct/createList has been added to the interpreter for creating lists, and this has replaced aten::stack for integer lists
* gen_jit_dispatch.py has been refactored so that non-tensor types use operators on IValues to extract
the primitives
* IValue gains a .to<T> method that is the equivalent of tensor_as but for IValue instead of at::Tensor
* `constant_as<T>` is switched over to using IValues's `.to<T>` method, to make conversion from constant->IValue->C++ type
more consistent. This functionality combined with `toIValue(Value*)` replaces the `tensor_as` and `as_tensor` family of functions.
* conditional expressions (if, loop) and operators related to them are now computed on integers rather than tensors
* IValue gains constructors for constructing from at::Scalar and converting to it. However, IValue itself will always store
the scalars as a double or int64.
* To align with python 3 syntax, TK_INT, TK_FLOAT, and TK_BOOL have been removed from the parser, and int/float/bool are just treated as special identifiers in the compiler,
along with print. These are represented as special sugared values with a `call` method implemented. For int/float/bool this implements casting behavior.
* Dropped shared_from_this from Type/Module. They were not needed, and they made debugging harder because they internally throw/catch exceptions.
* Shape propagation has been updated to support running nodes that include floating point primitive types; this required some refactoring of internal functions.
* TensorToNum and NumToTensor have actual implementations as operators now
* register_prim_ops now contains implementations of math operators for float/int primitive types, and for mixed (prim <+> tensor) versions. This removes the need for special handling in compiler.cpp
* Primitive math is now entirely handled by letting the compiler choose the right overloads. This removes tons of special casing in the compiler.
* incorporates eellison's change to allow casting from return values. Due to the addition of primitive support, the code needed slight modifications, so I just pre-merged it here.
* stack.h gains generic vararg versions of push/pop that know how to convert to/from C++ types:

```cpp
at::Tensor a;
at::Scalar b;
pop(stack, a, b);
at::Tensor c = a + b;
push(stack, c);
```
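
A hedged Python-side sketch of the explicit-cast behavior referenced in the notes (illustrative name, current decorator API):

```python
import torch

@torch.jit.script
def count_as_int(x):
    # Implicit tensor -> integer conversion is no longer allowed;
    # the cast has to be written out, matching Python semantics.
    n = int(x.sum())
    return n
```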
apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9584

Reviewed By: apaszke

Differential Revision: D8910546

Pulled By: zdevito

fbshipit-source-id: 0f3e60d4d22217f196a8f606549430e43b7e7e30
2018-07-23 14:11:11 -07:00
Zachary DeVito
2f25d1fbc1
Enable tracing and script autograd tests (#8145)
This commit turns autograd function/method tests into tests run inside of a trace, or directly written using
script. These tests have uncovered many bugs and limited functionality
in the trace/script pathway, and these failing parts of the tests
are disabled using new exclusion sets. The size of these sets will shrink
as the bugs are fixed.
2018-06-14 11:48:15 -07:00
Zachary DeVito
b8ada7380a
Tuple literal and cat support (#6691)
* Support list and tuple literals: Adds support for [a, b], (a, b) and "a, "

* Allow non-tensors to reach emitBuiltinCall, each SugaredValue::call
is now responsible for checking the types of its inputs.

Add support for calling cat with a tuple to emitBuiltinOp
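
A hedged sketch of cat called with a tuple literal in script (illustrative name; a list literal would work as well):

```python
import torch

@torch.jit.script
def cat_pair(a, b):
    # (a, b) is a tuple literal of tensors passed straight to cat.
    return torch.cat((a, b), dim=0)
```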
2018-04-23 10:58:07 -07:00
Zachary DeVito
8995ddda05
[jit][script] Check that each builtin returns the right number of values. (#6492)
* Fixes to the way script handles multiple values, and other minor fixes.

This commit improves our handling of operators that return multiple values.
Builtins are now checked so that they return the right number of values,
and support for TupleValue is extended to all things that can return
multiple values.

This resolves issues where the compiler accepted things like:

  a, b = c + c

This would cause the interpreter to crash. Now each operator knows
how many results it will produce and can check it against the number
of requested inputs.
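
A hedged sketch of the check in action under the current API: an expression that really produces two values unpacks fine, while the single-value case is rejected at compile time (illustrative names):

```python
import torch

@torch.jit.script
def ok(x):
    # torch.max with a dim argument returns (values, indices), so two names can be bound.
    vals, idxs = torch.max(x, dim=0)
    return vals, idxs

# By contrast, this is now a compile-time error rather than an interpreter crash:
#
# @torch.jit.script
# def bad(c):
#     a, b = c + c   # error: the expression produces a single value
#     return a
```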

Notes:
* Allow True/False literals in constant expressions
* make handling of keyword constants more consistent to support True/False
* make parsing constants match the way we construct constants from python
* improve the error messages when accessing bad graph attributes.
* switch findTensorOp to return an optional.
* check that attribute types are correct in findTensorOp
* Check the correct number of outputs for builtins

This also changes emitExpr to return a single SugaredValue

Rather than possibly returning multiple values, emitExpr now
always returns a single value, which _might_ be a tuple. This approach
more closely follows python making the code easier to follow.

Checks for returning the right number of values are now located in
the assignment operator, and occur when unpacking the tuple.

We still pass `n_binders` to function calls so that calls into python
know how many values they should return.
2018-04-12 10:32:49 -07:00
James Reed
1533155c4e
[JIT][script] Implement compile-time tuples & starred unpacking (#6214)
* Something that works

* Tuple sugared value

* Works with commenting out input size check

* support string frontend

* Initial starred assignment

* Fix parser

* Fixup tests

* clang-format

* fix rebase error

* lint

* move star assign test to string frontend to make py2 happy

* Py2 fix: parse starargs from Call node

* Address some comments

* Fixup merge

* Remove overloaded unary operators

* Bugfix and test case

* Address a few more comments

* asValues -> asTuple

* Remove unrolledFor stuff

* Fixup getValues

* Pass CallsiteDescriptor struct and have different behavior for different call types

* Address comments and lint

* some type checks

* Address comments

* lint

* Fix mistake
2018-04-09 19:34:51 -07:00
Adam Paszke
da6c3c90d9
Relax constraints on return statements in the script (#6070)
Script functions can now have no return statements, empty
return statements, or return one or more values.

Additionally fix the lexer to always emit TK_NEWLINE before
TK_DEDENT, which simplifies the parser.
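
Hedged sketches of the three now-accepted forms (current decorator API, illustrative names):

```python
import torch

@torch.jit.script
def no_return(x):
    # No return statement at all: the function implicitly returns None.
    print(x)

@torch.jit.script
def empty_return(x):
    # An empty return statement is also allowed.
    if bool(x.sum() > 0):
        return
    print(x)

@torch.jit.script
def returns_values(x):
    # Returning one or more values works as before.
    return x, x + 1
```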
2018-03-31 18:35:33 +02:00
James Reed
213fa61706 Implement range for loop in script (#5827)
* Implement range for loop in script

* Fix handling of boolean constants

* Use WithInsertPoint

* Allow dynamic max trip count

* fix symbols

* Fix argument order

* fix test

* Add insert{Input,Output} APIs and use them

* Factor out condition stuff

* clang-format

* Address remaining comments

* Fix tests

* Implement script in AST frontend
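
A hedged sketch of the range-based for loop this commit implements, written against the current decorator API (illustrative name):

```python
import torch

@torch.jit.script
def sum_to(n: int) -> int:
    # The trip count is dynamic: n is only known at call time.
    total = 0
    for i in range(n):
        total = total + i
    return total
```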
2018-03-23 11:55:32 -04:00
Adam Paszke
eeb90d9c95
Add a Number node to the JIT AST and unify script syntax with Python (#5716) 2018-03-15 20:56:23 +01:00
Zachary DeVito
41285edbb6 [jit] add a compiled script module (#5630)
* Add script::Module C++ class to represent script modules
* Switch AST -> IR conversion to work on Modules/Methods rather than raw graphs; function-only AST -> IR conversion is just a simplified case where there is only one module with a single method and no parameters.
* Introduce SugaredValue in compiler.h to represent values in scope in a script function that are not first-class and that get desugared. This is used to represent the module's self parameter, as well as python function calls, and method calls on tensors.
* Provide a Python ScriptModule that provides a nice API on top of script::Module, allowing for the definition of script modules with methods, parameters, and submodules.
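
A hedged sketch of the Python-side API described above, roughly as it later stabilized (module and method names here are illustrative):

```python
import torch

class MyScriptModule(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        # Parameters registered here are visible to script methods through self.
        self.weight = torch.nn.Parameter(torch.randn(3, 3))

    @torch.jit.script_method
    def forward(self, x):
        # self is the desugared module value handled via SugaredValue.
        return torch.mm(x, self.weight)

m = MyScriptModule()
out = m(torch.randn(2, 3))
```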
Not in this PR but intended for the future:

* ScriptModule actually subclasses nn.Module, with most methods implemented
* Unification of tracedmodule and script module functionality into one container class.

Detailed changelog:

* Switch compiler over to using Module, but don't
use them yet.

* Remove intermediate attribute encoding in compiler

* Create SugaredValue object to handle resolution
of compiled module.

* switch to_ir to modules, implement Select

* hacky python wrappers

* Private ScriptModule

* Add `define` to script module

* Attributes use TK_LIST_LITERAL

this anticipates adding a real list literal expression to the language.

* Add a metaclass to make sure script stubs are registered

* Add a test

* Doc createResolutionCallback

* Docs and minor editing

* Address PR comments

* Document

* Fix unicode issue
2018-03-12 09:52:40 -04:00
Adam Paszke
cdd0febd86
Fix for a confusion around grammar of Maybe (#5593) 2018-03-06 23:05:20 +01:00
Adam Paszke
5597aba868
Add return statement to the JIT AST (#5578) 2018-03-06 13:14:53 +01:00
James Reed
60415cf0d2 Big batch of fixes for JIT (#5517)
* Check if node output matches in shape propagation

* Fix list attributes and view shape propagation

* fix inferred shapes for view

* Fix shape inference for integrally typed tensors

* Fixes for concat in control flow

* Fix print
2018-03-02 15:03:44 -08:00
Adam Paszke
fedb7095d6
Make tree views statically typed in JIT script AST (#5145) 2018-02-09 22:18:31 +01:00
bddppq
3e85613751 Experimental jit script (#5074) 2018-02-07 20:43:45 +01:00