Commit Graph

214 Commits

Author SHA1 Message Date
Zachary DeVito
330990d878 Serialize first-class version of functions (#19723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19723
ghimport-source-id: 7f7ec6200c3b42d19046a3e228a3d82212697f14

Reviewed By: jamesr66a

Differential Revision: D15078533

Pulled By: zdevito

fbshipit-source-id: fe421afab9607ee942f6d200f04bb6335fc0aa97
2019-04-25 15:53:07 -07:00
Zachary DeVito
31524bda1f @torch.jit.script(fn) now is a torch.jit.Function (#19721)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19721
ghimport-source-id: b4f5024adc845a82dc5197d19aab1496bf85089f

Reviewed By: jamesr66a

Differential Revision: D15078534

Pulled By: zdevito

fbshipit-source-id: 408d3a871302c5ac5d6426dc5de567f2188ebf4c
2019-04-25 15:53:00 -07:00
Michael Suo
73c166a5ed add resolveType to Resolver (#19237)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19237
ghimport-source-id: 70777ec37155be37efef1b743d564752e4dff9de

Differential Revision: D14928289

Reviewed By: zdevito

Pulled By: suo

fbshipit-source-id: 46827da9ace16730669fc654bf781d83172d18b1
2019-04-19 13:02:02 -07:00
Michael Suo
1e94a3bc4d Turn resolver into a class (#19236)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19236
ghimport-source-id: d36705ea5ecff085d0d84ea57bb96d18d7c260dd

Differential Revision: D14928292

Reviewed By: zdevito

Pulled By: suo

fbshipit-source-id: cd038100ac423fa1c19d0547b9e5487a633a2258
2019-04-19 13:01:59 -07:00
Nikolay Korovaiko
2d0d153288 Refactor EmitLoopCommon to make it more amenable to future extensions (#19341)
Summary:
This PR paves the way for supporting more iterator types in for-in loops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19341

Differential Revision: D14992749

Pulled By: Krovatkin

fbshipit-source-id: e2d4c9465c8ec3fc74fbf23006dcb6783d91795f
2019-04-18 09:59:21 -07:00
Elias Ellison
4371cb5e01 Cast not expressions to bool (#19361)
Summary:
As part of implicitly casting condition statements, we should cast `not` expressions as well.
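A minimal sketch of the behavior (editor's illustration; the function is hypothetical, not from the PR):

```python
import torch

@torch.jit.script
def gate(x):
    # `x.sum()` is a Tensor; with this change, `not <tensor>` is implicitly
    # cast to bool, just as a bare tensor condition already is
    if not x.sum():
        return torch.zeros(1)
    return torch.ones(1)
```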
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19361

Differential Revision: D14984275

Pulled By: eellison

fbshipit-source-id: f8dae64f74777154c25f7a6bcdac03cf44cbb60b
2019-04-17 16:06:48 -07:00
Nikolay Korovaiko
ada10ad416 Ellipsis in subscript
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17763

Differential Revision: D14893533

Pulled By: Krovatkin

fbshipit-source-id: c46b4e386d3aa30e6dc03e3052d2e5ff097fa74b
2019-04-15 22:10:44 -07:00
Zachary DeVito
ef406ee925 First class modules in the compiler, round 2 (#19167)
Summary:
This PR propagates the use of first-class module objects into the compiler. This creates a transitional state where:

* compiler.cpp creates Graphs where `self` is a Module class and attributes/parameters/buffers/submodules are looked up with `prim::GetAttr`
* GraphExecutor still runs "lowered graphs" where the self object has been removed by a compiler pass `lower_first_class_method`.
* Tracing still creates "lowered graphs", and a pass "lift_lowered_method" creates a first-class method graph for them.

* This PR separates out Method and Function. A script::Function is a pure Graph with no `self` bound.  Similar to Python, a script::Method is just a bound `self` and its underlying `script::Function`.
* This PR also separates CompilationUnit from Module. A CompilationUnit is just a list of named script::Functions.  Classes have a CompilationUnit holding the class methods, and Modules also have a CompilationUnit holding their Methods. This avoids the weird circular case Module --has a-> Class -> has a -> Module ...

Details:
* In this transitional state, we maintain two copies of a Graph, first-class module and lowered. The first-class one has a self argument that is the module's class type. The lowered one is the lowered graph that uses the initial_ivalues inputs.
* When defining lowered methods using `_defined_lowered` we immediately create the first-class equivalent. The reverse is done lazily, creating lowered_methods on demand from the class.
* The two-way conversions will be deleted in a future PR when the executor itself runs first-class objects. However, this requires more changes to (1) the traces, (2) the python bindings, and (3) the onnx export pass, and would make this PR way too large.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19167

Differential Revision: D14891966

Pulled By: zdevito

fbshipit-source-id: 0b5f03118aa65448a15c7a7818e64089ec93d7ea
2019-04-11 13:55:48 -07:00
Zachary DeVito
f5165ade5b Revert D14842057: Compiler uses first-class modules**
Differential Revision:
D14842057

Original commit changeset: ca6e7b5a4380

fbshipit-source-id: e8f1862a59bf20d5f78648b2fdc53a8b3750ead3
2019-04-11 06:17:01 -07:00
Zachary DeVito
5e1f0b2a07 Compiler uses first-class modules** (#19043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19043
ghimport-source-id: 0c9e80d5f35654af6d472abd5643bff3e9eb9ddf

Differential Revision: D14842057

Pulled By: zdevito

fbshipit-source-id: ca6e7b5a43805240f40b84d30e54495061067dc0
2019-04-11 00:00:48 -07:00
Dmytro Dzhulgakov
92f70bb639 Split python_ir.h in a more sensible way (#19081)
Summary:
Files included in libtorch do depend on torch/csrc/utils/object_ptr.h, e.g. ir.cpp: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir.h#L10 (including usage in std::vector that requires destructor for THPPointer)

However, object_ptr.h depends on the Python stub: https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/object_ptr.h#L3

Whereas object_ptr.cpp depends fully on Python: https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/object_ptr.cpp#L8

`torch/csrc/utils/object_ptr.cpp` is included only in Python extension target: https://github.com/pytorch/pytorch/blob/master/torch/CMakeLists.txt#L541

The only reason it was working on master is that the compiler was aggressive enough in pruning unused inline functions. With a few changes to flags, it started breaking (like in kostmo's PR).

This PR splits out python-dependent bits more explicitly by forward declaring THPPointer for real.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19081

Reviewed By: ezyang

Differential Revision: D14860091

Pulled By: dzhulgakov

fbshipit-source-id: 4e86cb8e2ac57aedb3cd00c15270d65bb376206c
2019-04-10 10:26:50 -07:00
Thomas Viehmann
026a9c6bf2 Handle None indexing in TorchScript (#18615)
Summary:
t[None], t[None, 1:, None] and friends for unsqueezing

Fixes: #12810
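A sketch of the new indexing (editor's illustration; the function name is hypothetical):

```python
import torch

@torch.jit.script
def add_dims(t):
    # `None` inside a subscript inserts a size-1 dimension, like unsqueeze
    return t[None, 1:, None]

# a (3, 4) input yields a (1, 2, 1, 4) result, matching eager-mode indexing
print(add_dims(torch.randn(3, 4)).shape)
```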
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18615

Differential Revision: D14837039

Pulled By: wanchaol

fbshipit-source-id: ab3862c41629f087b0a46b7c59c93dac4018e6fe
2019-04-08 13:44:49 -07:00
Elias Ellison
b80a4fa201 Allow ints, floats, and tensors in conditional (#18755)
Summary:
Per our offline discussion, allow Tensors, ints, and floats to be cast to bool when used in a conditional

Fix for https://github.com/pytorch/pytorch/issues/18381
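A minimal sketch of the casts (editor's illustration; names are hypothetical):

```python
import torch

@torch.jit.script
def describe(n, f, t):
    # type: (int, float, Tensor) -> str
    if n:   # int: true when non-zero
        return "int"
    if f:   # float: true when non-zero
        return "float"
    if t:   # Tensor: true when its single element is non-zero
        return "tensor"
    return "all falsy"
```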
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18755

Reviewed By: driazati

Differential Revision: D14752476

Pulled By: eellison

fbshipit-source-id: 149960c92afcf7e4cc4997bccc57f4e911118ff1
2019-04-03 17:12:17 -07:00
Michael Kösel
46a68c1b37 add 'abs' builtin
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18502

Differential Revision: D14750173

Pulled By: eellison

fbshipit-source-id: 359cf08938ada442ca1a3b3ea14022ce10229499
2019-04-03 12:47:13 -07:00
David Riazati
be01c90797 Add string index/slice operations (#18247)
Summary:
Adds support for string indexing (`"a"[0]`) and slicing (`"abc"[1:3]`)
to script.
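A minimal sketch (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def head_and_mid(s):
    # type: (str) -> str
    return s[0] + s[1:3]   # indexing yields a 1-char string, slicing a substring

print(head_and_mid("abcd"))  # prints "abc"
```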
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18247

Differential Revision: D14574486

Pulled By: driazati

fbshipit-source-id: 4b42aa0881e5398ea7f112be46c0335e6e19dced
2019-04-01 12:11:35 -07:00
David Riazati
7b0ef31780 Add hash() global (#18258)
Summary:
This adds `hash()` which supports `int`, `str`, and `float`. It relies on `std::hash` which is implementation defined, so the result of `hash()` in TorchScript is not the same as in Python, but should satisfy the same properties.
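A minimal sketch (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def hashes(i, f, s):
    # type: (int, float, str) -> Tuple[int, int, int]
    # backed by std::hash, so the values differ from CPython's hash()
    return hash(i), hash(f), hash(s)
```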
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18258

Differential Revision: D14692317

Pulled By: driazati

fbshipit-source-id: 909df5d024bb3feea157d5a203b7de53c72261c9
2019-03-29 18:29:34 -07:00
eellison
dc6b5b2a52 Optimize boolean expressions & unwraps (#18259)
Summary:
Simplify or eliminate boolean and/or expressions, optimize unwrapping a value that cannot be None, and optimize using `is` with a None and a non-None value

Since peephole optimize is now introducing constants, i added another constant propagation pass after running it.

Previously I had a PR that did this and optimized shape ops; I will add the shape optimizations in a separate PR.
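A sketch of the kind of expression this targets (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def redundant(x, flag):
    # type: (Tensor, bool) -> Tensor
    # `flag and True` can be peephole-simplified to `flag`; the constants this
    # introduces are then cleaned up by the extra constant-propagation pass
    if flag and True:
        return x + 1
    return x
```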
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18259

Differential Revision: D14602749

Pulled By: eellison

fbshipit-source-id: 1c3f5a67067d8dfdf55d7b78dcb616472ea8a267
2019-03-25 21:50:57 -07:00
Michael Suo
ff3ecfec89 Turn script_type_parser into a class (#18211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18211
ghimport-source-id: 73b81e9ec631937b14db1da10991831788a6894b

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18296 [jit] Add namespacing for ScriptClasses
* #18284 [jit] make test module hook use save/load
* **#18211 [jit] Turn script_type_parser into a class**
* #18148 [jit] python interop for script classes

If we are namespacing classes, the type parser will need to carry around
some state about which namespaces to look in. This PR just wraps it in a
class in preparation.

Also, subscriptToType can no longer be static, since parseTypeFromExpr
may give different results depending on the namespaces available, so
it's been made a regular function instead of a static map lookup.

Reviewed By: eellison

Differential Revision: D14581128

fbshipit-source-id: 711315472ccde1920abf9fdb5a871ac27fb86787
2019-03-22 16:30:05 -07:00
Nikolay Korovaiko
2ad2b2c7b1 Support for basic list comprehensions (#17267)
Summary:
Supports the following syntax:
```
        @torch.jit.script
        def comp(l):
            # type: (List[float]) -> List[float]

            n = [x * 3 for x in l]
            return n
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17267

Differential Revision: D14581119

Pulled By: Krovatkin

fbshipit-source-id: 6fd091a8a9ab607386ac58fda6ad88bf8aea380e
2019-03-22 15:25:13 -07:00
Sebastian Messmer
be364ac8d7 Specify overload name in function schema (#18037)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18037

The FunctionSchema can now store an overload name and the parser knows how to parse it. Specify like this:

    my_func.overload1(arg1: Tensor) -> Tensor
    my_func.overload2(arg1: Tensor, arg2: Tensor) -> Tensor

Reviewed By: zdevito

Differential Revision: D14467497

fbshipit-source-id: 8832b32f07351bb61090357b17b77a6a2fed3650
2019-03-15 16:58:13 -07:00
Michael Suo
18f721fb9a support serialization of classes (#17856)
Summary:
Stack:
      **#17856 [jit] support serialization of classes**  [💛](https://our.intern.facebook.com/intern/diff/D14402599/)

Add support for saving/loading TorchScript modules that depend on user-defined classes.

We track class dependencies the same way we track tensor constants, then write them
all out such that we can just compile them in order before compiling the module
hierarchy.
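A minimal sketch under the ScriptModule API of the time (editor's illustration; class and file names are hypothetical):

```python
import torch

@torch.jit.script
class Pair(object):
    def __init__(self, first, second):
        self.first = first
        self.second = second

class M(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, a, b):
        p = Pair(a, b)            # the class is a dependency of the module...
        return p.first + p.second

m = M()
m.save("m.pt")                    # ...so it is written out along with it
loaded = torch.jit.load("m.pt")
```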
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17856

Reviewed By: shannonzhu

Differential Revision: D14461599

Pulled By: suo

fbshipit-source-id: 7115f87e069fd00dc8381d7de9997864fef7ea9f
2019-03-15 12:06:23 -07:00
Michael Suo
f9820e55af initializing class value (#17585)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17585

Create a sugared value that represents a class during initialization. This is
so that assignments to attributes correctly define attributes in __init__ but
raise an error elsewhere.
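A minimal sketch of the rule (editor's illustration; the class is hypothetical):

```python
import torch

@torch.jit.script
class Box(object):
    def __init__(self, contents):
        # assignments here define the attributes of the class
        self.contents = contents

    def get(self):
        return self.contents

    # def stash(self, x):
    #     self.extra = x   # would be rejected: attributes may only be
    #                      # defined inside __init__
```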

Reviewed By: shannonzhu

Differential Revision: D14263403

fbshipit-source-id: 09b2feeb272302f00a79c2a0302fbdf5483aed6a
2019-03-11 19:13:52 -07:00
Ailing Zhang
fefaebabba fix dropout AD & rename range to rangelist (#17691)
Summary:
fixes #17669
Address apaszke 's comments in #17523
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17691

Differential Revision: D14328083

Pulled By: ailzhang

fbshipit-source-id: 9ec4a54f13bfd1aaf4b1821dd00c31793ac07a44
2019-03-05 20:50:10 -08:00
Michael Suo
e6a9062335 usertype -> class (#17528)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17528

as title. register_prim_ops is messy because someone ruined clang-format, but I figured it's okay to include here since this is such a mechanical change

Reviewed By: driazati

Differential Revision: D14236943

fbshipit-source-id: c2b22845837b7f830015510e48ec2ee5202fa407
2019-03-01 10:08:23 -08:00
Ailing Zhang
03132c1f56 convolution/matmul/dropout (#17523)
Summary:
* Add AD formula for _convolution & matmul & dropout
* add prim::range, fixes #17483
Example:
```
dim = 3
x = range(dim)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17523

Differential Revision: D14254002

Pulled By: ailzhang

fbshipit-source-id: ba60d77b047db347929b72beca2623fb26aec957
2019-02-27 21:41:59 -08:00
Michael Suo
2cdbb140e6 user defined types (#17314)
Summary:
First pass at user defined types. The following is contained in this PR:
- `UserType` type, which contains a reference to a module with all methods for the type, and a separate namespace for data attributes (map of name -> TypePtr).
- `UserTypeRegistry`, similar to the operator registry
- `UserObject` which is the runtime representation of the user type (just a map of names -> IValues)
- `UserTypeValue` SugaredValue, to manage getattr and setattr while generating IR, plus compiler.cpp changes to make that work.
- Frontend changes to get `torch.jit.script` to work as a class decorator (see the sketch after these lists)
- `ClassDef` node in our AST.
- primitive ops for object creation, setattr, and getattr, plus alias analysis changes to make mutation safe.

Things that definitely need to get done:
- Import/export, python_print support
- String frontend doesn't understand class definitions yet
- Python interop (using a user-defined type outside TorchScript) is completely broken
- Static methods (without `self`) don't work

Things that are nice but not essential:
- Method definition order shouldn't matter (right now you can only reference a method that's already been defined)
- Class definitions can only contain defs, no other expressions are supported.

Things I definitely won't do initially:
- Polymorphism/inheritance
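A minimal sketch of the class-decorator frontend mentioned above (editor's illustration; given the limitations listed, such classes are usable only inside TorchScript at this point):

```python
import torch

@torch.jit.script
class Point(object):
    def __init__(self, x, y):
        self.x = x   # data attributes live in the object's namespace
        self.y = y

@torch.jit.script
def dot(a, b):
    p = Point(a, b)   # object creation and getattr/setattr are primitive ops
    return p.x * p.y
```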
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17314

Differential Revision: D14194065

Pulled By: suo

fbshipit-source-id: c5434afdb9b39f84b7c85a9fdc2891f8250b5025
2019-02-26 01:34:07 -08:00
Elias Ellison
81b43202ae Refactor Type Parser b/w Schemas & IRParser into a type common parser (#17383)
Summary:
Creates a new common type parser shared between the IR parser and the schema parser.

Also adds parsing of CompleteTensorType and DimensionedTensorType, and feature-gates that for the IRParser.

Renames the existing type parser for Python annotations to python_type_parser, and names the new one jit_type_parser.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17383

Differential Revision: D14186438

Pulled By: eellison

fbshipit-source-id: bbd5e337917d8862c7c6fa0a0006efa101c76afe
2019-02-22 13:43:55 -08:00
David Riazati
2370c989d8 Add LSTM to standard library (#15744)
Summary:
**WIP**

Attempt 2 at #14831

This adds `nn.LSTM` to the jit standard library. Necessary changes to the module itself are detailed in comments. The main limitation is the lack of a true `PackedSequence`; instead this PR uses an ordinary `tuple` to stand in for it.

Most of the new code in `rnn.py` is copied to `nn.LSTM` from `nn.RNNBase` to specialize it for LSTM, since `hx` is a `Tuple[Tensor, Tensor]` rather than just a `Tensor` as in the other RNN modules.

As a hack it adds an internal annotation `@_parameter_list` to mark that a function returns all the parameters of a module. The weights for `RNN` modules are passed to the corresponding op as a `List[Tensor]`. In Python this has to be gathered dynamically since Parameters could be moved from CPU to GPU or be deleted and replaced (i.e. if someone calls `weight_norm` on their module, #15766), but in the JIT parameter lists are immutable, hence a builtin to handle this differently in Python/JIT.
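A minimal sketch under the ScriptModule/script_method API of the time (editor's illustration; module name and sizes are hypothetical):

```python
import torch
import torch.nn as nn

class Tagger(torch.jit.ScriptModule):
    def __init__(self):
        super(Tagger, self).__init__()
        # nn.LSTM is now in the jit standard library, so it can be compiled
        # as a submodule of a ScriptModule
        self.lstm = nn.LSTM(input_size=8, hidden_size=16)

    @torch.jit.script_method
    def forward(self, x):
        out, (h, c) = self.lstm(x)   # hx is a Tuple[Tensor, Tensor] for LSTM
        return out

y = Tagger()(torch.randn(5, 3, 8))   # (seq_len, batch, input_size)
```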
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15744

Differential Revision: D14173198

Pulled By: driazati

fbshipit-source-id: 4ee8113159b3a8f29a9f56fe661cfbb6b30dffcd
2019-02-21 16:24:19 -08:00
Zachary DeVito
4c6da649e5 Partial support for kwarg_only arguments in script (#17339)
Summary:
This provides the minimum necessary to allow derivative formulas for things that have a kwarg-only specifier in their schema. Support for non-parser frontend default arguments for kwargs is not complete.
Fixes #16921
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17339

Differential Revision: D14160923

Pulled By: zdevito

fbshipit-source-id: 822e964c5a3fe2806509cf24d9f51c6dc01711c3
2019-02-21 15:27:06 -08:00
Michael Suo
2c302b6ea6 allow lists to contain any tensor type (#17321)
Summary:
If something is a TensorList, it should be a list of `TensorType`, not a list of some specialized type.
Fixes #17140, #15642
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17321

Differential Revision: D14158192

Pulled By: suo

fbshipit-source-id: ba8fe6ae8d618c73b23cd00cbcb3111c390c5514
2019-02-21 00:18:50 -08:00
Michael Suo
501d346da8 batched cleanups (#17288)
Summary:
Bunch of random stuff I came across while doing UDT stuff. Putting in a separate PR to avoid noise
- fix up the alias analysis list ops to include fork/wait
- improve dump() for aliasDb to print writes
- Move BuiltinFunction::call() to sugaredvalue with the rest of the methods
- formatting and includes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17288

Differential Revision: D14147105

Pulled By: suo

fbshipit-source-id: 62e2a922a1726b684347365dc42c72188f154e9c
2019-02-20 18:31:53 -08:00
eellison
82aa511146 move prim::None to prim::Constant (again) (#17186)
Summary:
Trying to land again: make prim::None a case of prim::Constant. The previous landing was reverted because it broke an important ONNX export test.

https://github.com/pytorch/pytorch/pull/16160
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17186

Differential Revision: D14115304

Pulled By: eellison

fbshipit-source-id: 161435fc30460b4e116cdd62c7b2e5b94581dcb7
2019-02-19 11:45:50 -08:00
Elias Ellison
91c1d728ac Revert D14109636: [pytorch][PR] move prim::None to a case in prim::Constant
Differential Revision:
D14109636

Original commit changeset: d26fd3839761

fbshipit-source-id: c8c8113e2bff49ea93235732603e6ebc89356533
2019-02-15 16:38:12 -08:00
Elias Ellison
7caa21f5ca move prim::None to a case in prim::Constant (#16160)
Summary:
This change simplifies analysis done on constants since prim::None does not need to be handled separately now.  To check if a constant node is None, use node->isNone().

Next step will be to remove prim::Undefined.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16160

Differential Revision: D14109636

Pulled By: eellison

fbshipit-source-id: d26fd383976163a2ddd4c24984bd672a541cc876
2019-02-15 16:27:57 -08:00
Nikolay Korovaiko
82b269060c Add support for simpler for-in-list + tests (#16726)
Summary:
This PR adds support for simple for-in-list loops such as the example below:

```python
@torch.jit.script
def sum_list(a):
    # type: (List[int]) -> int
    sum = 0
    for i in a:
        sum += i

    return sum
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16726

Differential Revision: D14070007

Pulled By: ezyang

fbshipit-source-id: b4d971ee647729a6caa3099ceac34ec5c4f143de
2019-02-15 11:41:20 -08:00
Wanchao Liang
ac00e85e36 Remove undefined tensor in jit script (#16379)
Summary:
This PR is a follow-up to #15460; it does the following:

* remove the undefined tensor semantic in jit script/tracing mode
* change ATen/JIT schema for at::index and other index related ops with `Tensor?[]` to align with what at::index is really doing and to adopt `optional[tensor]` in JIT
* change python_print to correctly print the exported script
* register both TensorList and ListOfOptionalTensor in JIT ATen ops to support both
* Backward compatibility for `torch.jit.annotate(Tensor, None)`
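A sketch of the last bullet (editor's illustration; the function is hypothetical):

```python
import torch
from torch import Tensor
from typing import Optional

@torch.jit.script
def pick(x, use_x):
    # type: (Tensor, bool) -> Tensor
    # `Tensor?` instead of an undefined tensor; annotate(Tensor, None)
    # keeps working for backward compatibility
    y = torch.jit.annotate(Optional[Tensor], None)
    if use_x:
        y = x
    if y is not None:
        return y
    return torch.zeros(1)
```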

List of follow ups:

* remove the undefined tensor semantic in jit autograd, autodiff and grad_of
* remove prim::Undefined fully

For easy reviews, please turn on `hide white space changes` in diff settings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16379

Differential Revision: D13855677

Pulled By: wanchaol

fbshipit-source-id: 0e21c14d7de250c62731227c81bfbfb7b7da20ab
2019-02-07 11:02:14 -08:00
Zachary DeVito
f34192db0f Rename DynamicType -> TensorType (#16787)
Summary:
```
import json
from subprocess import check_call
from pprint import pprint
renames = {
    'c10::TensorType': 'DimensionedTensorType',
    'c10::DynamicType': 'TensorType',
    'c10::TensorTypePtr': 'DimensionedTensorTypePtr',
    'c10::DynamicTypePtr': 'TensorTypePtr',
    'c10::TypeKind::DynamicType': 'TensorType',
    'c10::TypeKind::TensorType': 'DimensionedTensorType',
}

entries = json.loads(open('compile_commands.json', 'r').read())

build = None
sources = []

for e in entries:
    name = e['file']
    if not ('jit' in name or 'ATen/core' in name):
        continue
    build = e['directory']
    sources.append(name)

args = ['clang-rename', '-i', '-force', '-pl']
for name in sorted(renames.keys()):
    args += ['-qualified-name={}'.format(name), '-new-name={}'.format(renames[name])]

for source in sources:
    cmd = args + [source]
    pprint(args)
    check_call(cmd, cwd=build)
    check_call(['git', 'stash', 'push', '-m', 'rename'])
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16787

Differential Revision: D13974132

Pulled By: zdevito

fbshipit-source-id: 8368fd53e17cff83707bbe77f2d7aad74f8ce60e
2019-02-06 17:31:07 -08:00
David Riazati
e2d3a3fd6a dict values(), keys(), and len() (#16629)
Summary:
Adds some operations for dicts to match Python and tests
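A minimal sketch (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def summarize(d):
    # type: (Dict[str, int]) -> int
    ks = d.keys()     # List[str]
    vs = d.values()   # List[int]
    return len(d) + len(ks) + vs[0]
```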
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16629

Differential Revision: D13961144

Pulled By: driazati

fbshipit-source-id: b31f27a4320ff62cd118b508fb0a13056535dc7c
2019-02-05 13:55:25 -08:00
David Riazati
3f8fd19a86 Add immutable dict support (#16208)
Summary:
This PR adds basic support (creation and indexing) for immutable dictionaries in Script. This includes Python/string frontend support and a `IValue::GenericDict` type backed by a `std::unordered_map`. Only `str`, `int`, and `float` are supported as keys, any type can be a value. Structure is pretty similar to list.
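A minimal sketch of creation and indexing (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def lookup():
    # type: () -> int
    d = {"one": 1, "two": 2}   # str, int, or float keys; any type as value
    return d["two"]
```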
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16208

Differential Revision: D13881686

Pulled By: driazati

fbshipit-source-id: 29ce9835b953c3456f57bcc2bbdf7fe0cbf941c0
2019-01-31 14:29:23 -08:00
James Reed
d1ed0176df Trace fork and join calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16232

Differential Revision: D13772974

Pulled By: jamesr66a

fbshipit-source-id: b2db370271809e26d3301f8cc98eec567db5e62b
2019-01-26 14:42:45 -08:00
Mikhail Zolotukhin
47bf30661f Directly include headers from ATen.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16287

Differential Revision: D13792949

Pulled By: ZolotukhinM

fbshipit-source-id: d627d8dc469df048063c70d0b5b8d33fede809a3
2019-01-24 11:22:27 -08:00
Elias Ellison
d4f6befc93 Add implicit optional unwrapping (#15587)
Summary:
Add support for type inference for optional type refinement.

If a conditional is of the form "x is None" or "x is not None", or is a boolean expression containing multiple None checks, the proper type refinements are inserted in each branch.

For example:
```python
if optional_tensor is not None and len(optional_tensor) < 2:
    # optional_tensor is a Tensor

if optional_tensor1 is not None and optional_tensor2 is not None:
    # both optional_tensor1 and optional_tensor2 are Tensors
```

TODO:

- not run an op for unchecked unwrap optional in the interpreter

- potentially refine types to prim::None (omitted for now to simplify things & because it's not an actual use case).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15587

Differential Revision: D13733810

Pulled By: eellison

fbshipit-source-id: 57c32be9f5a09ab5542ba0144a6059b96de23d7a
2019-01-18 11:25:01 -08:00
Elias Ellison
bebf1f7463 Torch tensor (#15224)
Summary:
Support torch.tensor in script. It was already accepted once; trying to reland.
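A minimal sketch (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def build():
    # type: () -> Tuple[Tensor, Tensor]
    a = torch.tensor([1.0, 2.0, 3.0])   # list literal -> 1-D float tensor
    b = torch.tensor(4)                 # int scalar   -> 0-D long tensor
    return a, b
```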
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15224

Differential Revision: D13466616

Pulled By: eellison

fbshipit-source-id: f7850da07b0eb11af98f255fc15bd3cf861f2a40
2019-01-03 17:35:17 -08:00
Zachary DeVito
b0cf780ecc Add min/max on numbers to JIT
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15680

Differential Revision: D13568806

Pulled By: zdevito

fbshipit-source-id: ef0f33cc12a057184293bc31d28cc7b24f73eb94
2019-01-02 20:10:38 -08:00
Michael Suo
f636dc9276 clang format world (#15524)
Summary:
The PR clang-formats everything in `torch/csrc/jit/` and adds it to the pre-commit hook.

Here is a list of non-mechanical changes:
- I went over each file and fixed up whenever I could tell that clang-format was clobbering comment formatting.
- Made the macros in register_prim_ops a little more clang-format friendly by omitting trailing commas
- Refactored autodiff.cpp to use a helper class with explicit state rather than a bunch of capturing lambdas
- Small improvements to the precommit hook clang-format
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15524

Differential Revision: D13547989

Pulled By: suo

fbshipit-source-id: 3ff1541bb06433ccfe6de6e33f29227a2b5bb493
2018-12-26 06:55:01 -08:00
Zachary DeVito
f3a588fede add len to nativeResolver (#15488)
Summary:
(otherwise len is not resolvable using torch::jit::compile)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15488

Differential Revision: D13539991

Pulled By: zdevito

fbshipit-source-id: 3ba85fa7b1adb163f9229c568f7997d22321903d
2018-12-21 16:47:15 -08:00
Zachary DeVito
6bf05bfde6 allow non-final returns (#15463)
Summary:
This PR allows a subset of programs that have return statements that are not final in the graph.

`final_returns.h` contains a comment describing how this is accomplished.
To minimize complexity in `compiler.cpp`, this pass is done as an AST-to-AST rewrite before the compiler runs.
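A minimal sketch of a newly allowed program (editor's illustration; the function is hypothetical):

```python
import torch

@torch.jit.script
def clamp01(v):
    # type: (float) -> float
    if v < 0.0:
        return 0.0   # early, non-final returns are rewritten away by the
    if v > 1.0:
        return 1.0   # AST-to-AST pass before compilation
    return v
```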
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15463

Differential Revision: D13538962

Pulled By: zdevito

fbshipit-source-id: 67105ca873351825b4a364092ab1873779f3e462
2018-12-21 14:01:33 -08:00
Zachary DeVito
1a2ec10bd4 Support enough of closures to write autograd functions (#15411)
Summary:
This PR adds enough of the infra for supporting closures (inner script functions) to allow us to express symbolic gradients using them. We do not actually ever run graphs that contain these closures. The symbolic_script infrastructure just extracts them out of the original forward graph and turns them into discrete forward/backward pairs. This cuts down on the type annotations necessary to write forward/backward pairs and aligns closely with the "differentiator" function approach to expressing reverse-mode AD.

Example:

This code:
```
import torch

r = torch.jit.CompilationUnit(
'''
def mul_forward(self, other):
    def backward(grad_output):
        grad_self = (grad_output * other).sum_to_size(self.size())
        grad_other = (grad_output * self).sum_to_size(other.size())
        return grad_self, grad_other
    return self * other, backward
''')

print(r.module.code)
```

Will produce this graph (pretty printed for clarity):

```
def mul_forward(self,
    self: Tensor,
    other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
  backward = (self.__lambda, (other, self))
  return (torch.mul(self, other), backward)

def __lambda(self,
    context: Tuple[Tensor, Tensor],
    grad_output: Tensor) -> Tuple[Tensor, Tensor]:
  other, self, = context
  grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
  grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
  return (grad_self, grad_other)
```

symbolic_script will then do some modifications to remove the unsupported prim::Function node, yielding:

```
def mul_forward(self,
    self: Tensor,
    other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
  return (torch.mul(self, other), (other, self))

def backward(self,
    context: Tuple[Tensor, Tensor],
    grad_output: Tensor) -> Tuple[Tensor, Tensor]:
  other, self, = context
  grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
  grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
  return (grad_self, grad_other)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15411

Differential Revision: D13523340

Pulled By: zdevito

fbshipit-source-id: 4d4a269460e595b16802c00ec55ae00e3e682d49
2018-12-20 14:39:11 -08:00
Zachary DeVito
0368054a6d Split up compiler.cpp (#15355)
Summary:
This separates the different parts of compiler.cpp to make their relationship more clear. In particular it adds:

* sugared_value.{h,cpp} - all the public SugaredValues that the compiler defines and a few that were inside compiler.cpp
* type_parser.{h, cpp} - Turns TreeRef's defining types into TypePtr
* schema_matching.{h, cpp} - infrastructure for matching arguments against overloaded schema and emitting builtin operators with a particular schema.
Retains:
* compiler.{h, cpp} - now responsible simply for the `defineMethodsInModule` infrastructure.

Some utility functions like inlineCallTo have moved to ir.h.

The only thing that is not a move is some changes in module.h/cpp that remove multiple returns from `Method::emit_call_to`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15355

Reviewed By: suo, wanchaol

Differential Revision: D13507524

Pulled By: zdevito

fbshipit-source-id: 69ec936a9ff1a383c12a883616346b219c72e393
2018-12-18 19:43:35 -08:00
Zachary DeVito
056cfaf3ff Method returns a single argument (#15289)
Summary:
This PR changes Method (just Method not all graphs) to always have a single
return argument.

This is part 1 in a set of changes that will enable us to have better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.

This change makes it so that Method and Python handle multiple returns in the same way (see the sketch after this list):
* 0 - None
* 1 - <single value>
* many - Tuple[...]
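A sketch of that mapping from the script side (editor's illustration; functions are hypothetical):

```python
import torch

@torch.jit.script
def zero(x):
    print("side effects only")   # 0 values  -> None

@torch.jit.script
def one(x):
    return x + 1                 # 1 value   -> the value itself

@torch.jit.script
def many(x):
    return x + 1, x - 1          # many      -> a single Tuple[Tensor, Tensor]
```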

The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed values.

Notes:
* inferTypeFrom is renamed to be more accurate and discourage use.
* This has uncovered some bugs in other components, which are noted in
  the diff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15289

Differential Revision: D13481649

Pulled By: zdevito

fbshipit-source-id: 0e2242a40bb28cca2d0e8be48bede96195e4858c
2018-12-18 10:44:09 -08:00