Commit Graph

302 Commits

Author SHA1 Message Date
Elias Ellison
e90adf59a0 Make assertions refine types (#23949)
Summary:
Make assertions like `x is not None` refine the type of x. This is easy to do now that typing understands [exits](https://github.com/pytorch/pytorch/pull/23565).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23949
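
For illustration, a minimal sketch of the refinement this enables (function name and types are made up):

```python
from typing import Optional

import torch

@torch.jit.script
def add_one(x: Optional[torch.Tensor]) -> torch.Tensor:
    # Before the assert, `x` is Optional[Tensor]; the assert refines it to
    # Tensor, so `x + 1` compiles in the code that follows.
    assert x is not None
    return x + 1
```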

Differential Revision: D16692772

Pulled By: eellison

fbshipit-source-id: 540f28e65a784c72c7c555e0aed0765d5035bc37
2019-08-07 13:06:52 -07:00
davidriazati
9d1acd6dc2 Disable optimizer for __setstate__ (#23698)
Summary:
Before calling `__setstate__` when loading a module, we need to disable
the optimizer since the module's type does not match the values on the
stack (all the tensors will be `UndefinedTensor`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23698
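
For context, a minimal sketch of a module whose `__setstate__` runs at load time, assuming the `@torch.jit.export` pattern for scripted `__getstate__`/`__setstate__`; class, attribute, and file names are made up:

```python
import torch
import torch.nn as nn

class Scaler(nn.Module):
    def __init__(self, scale: float):
        super().__init__()
        self.scale = scale

    @torch.jit.export
    def __getstate__(self):
        # Captured when the scripted module is saved.
        return self.scale

    @torch.jit.export
    def __setstate__(self, state: float):
        # Runs while the module is being loaded, i.e. the point at which the
        # optimizer now has to be disabled.
        self.scale = state

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale

m = torch.jit.script(Scaler(2.0))
torch.jit.save(m, "scaler.pt")
loaded = torch.jit.load("scaler.pt")  # calls the scripted __setstate__
```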

Pulled By: driazati

Differential Revision: D16690935

fbshipit-source-id: 71e2238fd25cd16271af478ef21a3cf4e514a462
2019-08-07 12:37:24 -07:00
Elias Ellison
ed4ee093cb Make typing understand exceptions (#23565)
Summary:
When we're emitting an if node, if one branch exits, we allow variables in the other branch to escape scope. This reuses the machinery that already exists for early returns, so there are minimal changes to the compiler. Most of the changes are in the exit_transform pass so we don't create terrible graphs when exceptions exist. In a follow-up PR I will add a writeup of the transform pass to the docs, since this should be the last change made to it for a while.

This will allow assertions to refine Optional types, as well as allow the JIT to understand things like:
```
def foo(x):
    if x == 1:
        raise Exception()
    else:
        a = 1
    return a
```

If you look in nn/functional.py, roughly three quarters of the TODOs are this issue. One note: if a function always throws, I accept whatever the return type annotation is, if it exists, and otherwise set it to None. This is consistent with what mypy does.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23565
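
As a hedged illustration of the always-throws case described above (names are made up):

```python
import torch

@torch.jit.script
def fail() -> int:
    # The body always raises, so the annotated return type (int) is accepted
    # as-is, matching mypy; without an annotation it would default to None.
    raise RuntimeError("unreachable")

@torch.jit.script
def checked(x: int) -> int:
    if x < 0:
        return fail()
    return x
```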

Differential Revision: D16679572

Pulled By: eellison

fbshipit-source-id: e58c9e9ddaeb13144c803d90e2beae253c851f7f
2019-08-07 09:06:07 -07:00
Michael Suo
8fc349f7be fix some compiler warnings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23816

Test Plan: Imported from OSS

Differential Revision: D16654126

Pulled By: suo

fbshipit-source-id: addf3d24df514a17a521f8584cd5e142c8a3aec4
2019-08-05 17:52:56 -07:00
Michael Suo
65a89472c4 Put all modules in the global Python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23154

Test Plan: Imported from OSS

Differential Revision: D16441913

Pulled By: suo

fbshipit-source-id: a79f2c3e06a33cbd79b2e3333f16c069f356f451
2019-07-29 16:38:20 -07:00
Elias Ellison
3497891c14 add sorted keyword for lists and dicts (#23274)
Summary:
Add the `sorted` keyword to the JIT for lists and dicts. It desugars to a list copy followed by a call to `list.sort()`. Since we don't have interfaces yet, it is implemented in terms of `list.sort()`; once we do, we can revisit implementing this op differently.

The test fails because of a fix to specialized lists, which is landing here: https://github.com/pytorch/pytorch/pull/23267

Ignore the first commit; it is formatting only (please use clang-format).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23274
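
A minimal sketch of what the keyword enables (values are arbitrary):

```python
from typing import List

import torch

@torch.jit.script
def sort_copy(xs: List[int]) -> List[int]:
    # `sorted` leaves `xs` untouched: it copies the list and calls
    # `list.sort()` on the copy, as described above.
    return sorted(xs)

print(sort_copy([3, 1, 2]))  # [1, 2, 3]
```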

Differential Revision: D16527323

Pulled By: eellison

fbshipit-source-id: aed8faef23cb790b9af036cd6c1b9b1d7066345d
2019-07-26 17:44:15 -07:00
Elias Ellison
ca76c82ce3 Add early returns to JIT (#19179)
Summary:
Add early returns to JIT with minimal changes to compiler.cpp and an IR->IR pass that will transform the graph so that there is only one return value.

In compiler.cpp, record when a block will exit so that the following example works:
```
if cond:
    a = torch.zeros([2])
else:
    return 2
a += 2
...
```
To match block outputs with values that will not be used, like `a` in the above example, I add a Bottom type that is a subtype of everything else. This allows shape propagation to continue to work, and means we don't need many extra nodes filling up the graph.

The IR transform currently doesn't work on loops; I didn't add that to this PR to avoid too much complexity, but will add it in a stacked PR (and it should be very little extra code). The IR transform is documented in a comment at the top of the file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19179

Differential Revision: D16519819

Pulled By: eellison

fbshipit-source-id: 322a27f69966d1fd074ebe723c3e948b458b0e68
2019-07-26 16:42:43 -07:00
Michael Suo
711be82951 Make optimize a thread_local flag
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23170

Test Plan: Imported from OSS

Differential Revision: D16441912

Pulled By: suo

fbshipit-source-id: a33485178a329d54e41e364c4f14950f88481c55
2019-07-24 23:09:21 -07:00
davidriazati
2915d53096 Move OptionalType wrapping out of constants.cpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23234

Pulled By: driazati

Differential Revision: D16460880

fbshipit-source-id: d4e6b747615dbfe73a92ce571d3b2aaae7179f1b
2019-07-24 14:35:26 -07:00
Elias Ellison
91bef6c168 format sugared_value & compiler.cpp (#23283)
Summary:
There are a lot of formatting changes, which make other diffs to these files noisy and hard to read.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23283

Differential Revision: D16453590

Pulled By: eellison

fbshipit-source-id: 97b4bf1dbbbfb09c44c57402f61ea27287060044
2019-07-23 22:29:22 -07:00
Michael Suo
2a37740a86 make RHS of assignment optional
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23033

Test Plan: Imported from OSS

Differential Revision: D16383330

Pulled By: suo

fbshipit-source-id: 63c55fae06f0cd534eb5053f91a773431ad052d4
2019-07-23 12:21:19 -07:00
Horace He
a24f6c13a3 Fix broken indexing when using None and ellipses indexing together (#22905)
Summary:
https://github.com/pytorch/pytorch/issues/20153

I believe you need two passes for this. Take this example:
```python
@torch.jit.script
def f():
    x = torch.ones(10, 9, 8, 7, 6)
    return x[..., None, None].shape
```
which results in `[10, 9, 8, 7, 6, 1, 1]`
vs
```python
@torch.jit.script
def f():
    x = torch.ones(10, 9, 8, 7, 6)
    return x[..., None, None, :].shape
```
which results in `[10, 9, 8, 7, 1, 1, 6]`
After only processing `x[..., None, None` we don't know whether we should be creating a new dimension at the end of the dimension list or somewhere in the middle. What we do depends on the elements to the right of it.

Thus, I do two passes: one to collect all the dimensions that the index operations operate on, and another that executes the index operations.

This still doesn't work for an ellipsis index followed by a tensor index, but it wasn't working previously either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22905

Differential Revision: D16433558

Pulled By: Chillee

fbshipit-source-id: c1b303cb97b1af8b6e405bad33495ef3b4c27c4a
2019-07-22 18:11:23 -07:00
davidriazati
fad3031b5c Fix type hints for None constants (#23029)
Summary:
The type hint was being ignored when emitting `None` constants; this also de-dups some testing code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23029
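
For illustration, a sketch of the kind of annotated `None` constant this affects (function and variable names are made up):

```python
from typing import List, Optional

import torch

@torch.jit.script
def first_positive(xs: List[int]) -> Optional[int]:
    # The Optional[int] hint on the None constant is respected rather than
    # the constant being typed as plain None.
    result: Optional[int] = None
    for x in xs:
        if result is None and x > 0:
            result = x
    return result
```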

Pulled By: driazati

Differential Revision: D16364572

fbshipit-source-id: 64f3abd3e37ee49c209480a85ed4f1b8802e5d93
2019-07-22 11:55:05 -07:00
davidriazati
79c4f83fbe Include module names in recursive error stacks (#22921)
Summary:
Following up on #22280, this adds module names so they're included in
the call stacks of an error message (e.g. so it appears as `M.forward`
instead of `forward`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22921

Pulled By: driazati

Differential Revision: D16287925

fbshipit-source-id: 6f31d72caa87ba2dc527805d36f7d62eb94c0808
2019-07-19 16:09:14 -07:00
davidriazati
ef36046ad7 Better error message for using Python builtin_function_or_method (#22935)
Summary:
* better error in `toSugaredValue`
* removes a bunch of periods from error messages; `ErrorReport` already adds a `:` at the end of the message
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22935

Pulled By: driazati

Differential Revision: D16291079

fbshipit-source-id: 478724fc7d1ae79093f4ede18553ffeafa2c7964
2019-07-16 16:49:04 -07:00
Michael Suo
16aa235f43 _script_compile and _script_class_compile add to the python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22724

Differential Revision: D16197609

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: e12b31f8c8ce14b0968f4ac9445e7d225126b210
2019-07-15 13:13:08 -07:00
Elias Ellison
cf2889ad8f add support for breaks and continues (#21692)
Summary:
Add support for breaks and continues in the JIT. We do this with a graph transform pre-SSA.

A graph of the form
```
def test():
    i = 0
    while i < 5:
        if i == 3:
            break
        i += 1
        print(i)
```
has the body of the loop transformed to
```
if i == 3:
    did_break = True
else:
    did_break = False
if did_break:
    loop_exit = True
else:
    i += 1
    print(i)
    loop_exit = i < 5
```

I am going to add more tests but I think it is ready for review now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21692
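
A hedged, runnable sketch of the kind of loop this enables (function name is made up):

```python
import torch

@torch.jit.script
def first_multiple_of_three(limit: int) -> int:
    found = -1
    for i in range(limit):
        if i == 0:
            continue  # skip zero
        if i % 3 == 0:
            found = i
            break     # lowered via the did_break machinery sketched above
    return found

print(first_multiple_of_three(10))  # prints 3
```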

Differential Revision: D16215807

Pulled By: eellison

fbshipit-source-id: 365102f42de4861d9323caaeb39a96de7619a667
2019-07-12 15:02:44 -07:00
Michael Suo
291570e085 make CompilationUnit::define return defined functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22723

Test Plan: Imported from OSS

Differential Revision: D16197604

Pulled By: suo

fbshipit-source-id: b22491a58aa9ea476acab06614093ff004291407
2019-07-11 14:55:43 -07:00
Michael Suo
de819be93e refactor self to be a class again
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22722

Test Plan: Imported from OSS

Differential Revision: D16197607

Pulled By: suo

fbshipit-source-id: b4dd96b3f9cc46b48678aab0ff89afc3666e2185
2019-07-11 14:55:39 -07:00
Michael Suo
22d70e0d4b Give functions qualified names
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22721

Test Plan: Imported from OSS

Differential Revision: D16197606

Pulled By: suo

fbshipit-source-id: 94718fcdb0d3b651f16674af3cfd6249ed4533ae
2019-07-11 14:55:34 -07:00
Karl Ostmo
1ecc945ab2 Revert D15998762: [jit] Give functions qualified names
Differential Revision:
D15998762

Original commit changeset: bc2b734f626a

fbshipit-source-id: a118cc4e9a34233279e8380529a8d8120a25839d
2019-07-10 16:10:28 -07:00
Karl Ostmo
a1ca32409f Revert D15998758: [jit] refactor self to be a class again
Differential Revision:
D15998758

Original commit changeset: 14bad87bb6e4

fbshipit-source-id: f2c29974d4afc4d8f88a36e9c266e6d5a22a6191
2019-07-10 16:10:24 -07:00
Karl Ostmo
e6eb17303f Revert D16184799: [jit] make CompilationUnit::define return defined functions
Differential Revision:
D16184799

Original commit changeset: 9f77a7ca2223

fbshipit-source-id: a0e08220d924a6ca55bf2f1f77754553d0133595
2019-07-10 16:10:20 -07:00
Michael Suo
c49a71f91f make CompilationUnit::define return defined functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22667

Test Plan: Imported from OSS

Differential Revision: D16184799

Pulled By: suo

fbshipit-source-id: 9f77a7ca2223237fbcb4b12a4734b7d334f7be13
2019-07-10 15:19:11 -07:00
Michael Suo
ee9c8a75f4 refactor self to be a class again (#22207)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22207
ghimport-source-id: 36ee8bd17411a2e220665ad2a27364653061070e

Test Plan: Imported from OSS

Differential Revision: D15998758

Pulled By: suo

fbshipit-source-id: 14bad87bb6e44bf1a43ae86339d8cc7b311c76dd
2019-07-10 15:19:07 -07:00
Michael Suo
c0674cebf1 Give functions qualified names (#22206)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22206
ghimport-source-id: d453219d907e048f24eb7f63c096b2c300307c83

Test Plan: Imported from OSS

Differential Revision: D15998762

Pulled By: suo

fbshipit-source-id: bc2b734f626ab07f97dc50ddf1b021e8b46de312
2019-07-10 15:19:03 -07:00
Wanchao Liang
edeb4dbdcb register __getitem__ builtin
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22276

Test Plan: Imported from OSS

Differential Revision: D16060595

Pulled By: wanchaol

fbshipit-source-id: e1e27d6be8d62fc1a841860a783aff108980d9d3
2019-07-10 14:53:35 -07:00
davidriazati
8a233b99cb Report errors through call stack (#22280)
Summary:
The error for `test_error_stack_module`:

```
Traceback (most recent call last):
  File "../test.py", line 35, in <module>
    scripted = torch.jit.script(M())
  File "/home/davidriazati/other/pytorch/torch/jit/__init__.py", line 1119, in script
    return _convert_to_script_module(obj)
  File "/home/davidriazati/other/pytorch/torch/jit/__init__.py", line 1825, in _convert_to_script_module
    raise e
RuntimeError:

d(int x) -> int:
Expected a value of type 'int' for argument 'x' but instead found type 'str'.
:
at ../test.py:11:12
def c(x):
    return d("hello") + d(x)
           ~ <--- HERE

'c' is being compiled since it was called from 'b'
at ../test.py:14:12
def b(x):
    return c(x)
           ~~~ <--- HERE

'b' is being compiled since it was called from 'forward'
at ../test.py:22:16
    def forward(self, x):
        return b(x)
               ~~~ <--- HERE

'forward' is being compiled since it was called from 'forward'
at ../test.py:31:20
    def forward(self, x):
        return x + self.submodule(x)
                   ~~~~~~~~~~~~~~~~ <--- HERE
```

This also unifies our error reporting in the front end with `ErrorReport`

TODO
* Include module names in the message; #22207 should make this easy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/22280

Pulled By: driazati

Differential Revision: D16060781

fbshipit-source-id: c42968b53aaddb774ac69d5abbf7e60c23df8eed
2019-07-09 16:41:22 -07:00
Michael Suo
3b2844eeea Make CompilationUnit own Functions (#22202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22202
ghimport-source-id: de6c963af1df76d2d6357155e64a5913ab879f76

Test Plan: Imported from OSS

Differential Revision: D15998761

Pulled By: suo

fbshipit-source-id: 5414a6424953738d823b265d20dc67dde6e5b2d8
2019-07-04 17:12:00 -07:00
Wanchao Liang
799633e4cd move casting ops from prim to aten
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22275

Test Plan: Imported from OSS

Differential Revision: D16060597

Pulled By: wanchaol

fbshipit-source-id: a11d8ad3b037e15bd670cc7cd3fefd4f0abd0bba
2019-07-03 22:22:28 -07:00
Horace He
c9626a11cc Made a += b for lists do an in place add (#21896)
Summary:
In talks with smessmer, we decided that it'd be better to put the logic in `list`, as optimal behavior requires knowing `.capacity()`

Results on my CPU (for the benchmark here: https://twitter.com/VahidK/status/1138674536679821312) now look like this:
```
Pytorch batch_gather took 0.018311 seconds.
Pytorch batch_gather jit took 0.013921 seconds.
Pytorch vectorized batch_gather took 0.001384 seconds.
```
Previously, `batch_gather jit` took 3x as long as `batch_gather`.

Some logic is taken from https://github.com/pytorch/pytorch/pull/21690. Note that these two PRs are somewhat orthogonal: that PR handles this benchmark by looking at the alias analysis, while this PR specializes for `+=`.

Note that we can't jit the vectorized version as we think `torch.arange` returns a float tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21896
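
A minimal sketch of the pattern that benefits (function name is made up):

```python
from typing import List

import torch

@torch.jit.script
def concat(xs: List[torch.Tensor], ys: List[torch.Tensor]) -> List[torch.Tensor]:
    out: List[torch.Tensor] = []
    out += xs  # now extends `out` in place rather than building a new list
    out += ys
    return out
```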

Differential Revision: D15998628

Pulled By: Chillee

fbshipit-source-id: b0085960da4613578b94deb98ac62c0a4532a8c3
2019-06-27 10:59:24 -07:00
davidriazati
2dc9643080 Better error message for mismatched dict key type (#22231)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22231

Pulled By: driazati

Differential Revision: D15993936

fbshipit-source-id: 6822ef01477a3b32beb8c037a621fa71abd022c8
2019-06-26 10:46:45 -07:00
Wanchao Liang
d96ce9b9fe add for in dict support (#22006)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22006
ghimport-source-id: d9686c0b61b0eea3787f48adce567249e4e8faf0

Test Plan: Imported from OSS

Differential Revision: D15948548

Pulled By: wanchaol

fbshipit-source-id: 4227502ca050099085ad481aef725ac2cab06d74
2019-06-23 20:49:35 -07:00
Wanchao Liang
c9344fc9c4 add for in string support (#21990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21990
ghimport-source-id: 69b4882f8602c4088e7a833c43fd3cd37501a3c0

Test Plan: Imported from OSS

Differential Revision: D15948547

Pulled By: wanchaol

fbshipit-source-id: 057e7f4fb67c6dca98458ceb14414368e1a86260
2019-06-23 20:49:30 -07:00
Wanchao Liang
eab35756d8 support iteration tuple unpacking (#21985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21985
ghimport-source-id: 1f20a8db7b6bad23b18ac1caefcb46b3fa141697

Test Plan: Imported from OSS

Differential Revision: D15948549

Pulled By: wanchaol

fbshipit-source-id: 758c9c3dfad40c4158aee21ddebcd25b711111d7
2019-06-23 20:49:26 -07:00
Wanchao Liang
e0f5ab2c2e Tree based Iterator infrastructure: for in range/list/tensor/zip/enumerate (#21801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21801
ghimport-source-id: b019d3e9a6f9bf152991a01b40e424dff176ffaa

Test Plan: Imported from OSS

Differential Revision: D15948545

Pulled By: wanchaol

fbshipit-source-id: 6110a0f3ab08cbbb398441e8330f56083ecd2d99
2019-06-22 01:00:42 -07:00
James Reed
f7b2778cb1 s/uniqueName/debugName/ (#22096)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22096
ghimport-source-id: 8f1d994b98432942b5beeb10bf6d30e447d51997

Test Plan: Imported from OSS

Differential Revision: D15956004

Pulled By: jamesr66a

fbshipit-source-id: 319d2d20ef0863249a8a2bdd228b4f792d37bfab
2019-06-21 20:54:53 -07:00
Ailing Zhang
856268c716 Revert D15947873: [JIT] s/uniqueName/debugName
Differential Revision:
D15947873

Original commit changeset: 31a2b30d0ce9

fbshipit-source-id: ef1c0f120c1835184d8106d176cea58ec6ad40b7
2019-06-21 18:51:03 -07:00
James Reed
36e4b54420 s/uniqueName/debugName (#22048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22048
ghimport-source-id: a82d80ceec1d8055ce4cf62df10ade4a224109f8

Test Plan: Imported from OSS

Differential Revision: D15947873

Pulled By: jamesr66a

fbshipit-source-id: 31a2b30d0ce911edf5791ca10040a1e968750b06
2019-06-21 17:59:38 -07:00
Wanchao Liang
5ff06a7b0b more complete tuple assignments (#21949)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21949
ghimport-source-id: 458793d74af3728bf0338867b081157905a7635a

Test Plan: Imported from OSS

Differential Revision: D15948550

Pulled By: wanchaol

fbshipit-source-id: 9ed69e0859e052816f06fc9c288b905551b2e48c
2019-06-21 14:49:38 -07:00
James Reed
74104f383e Some small fixes for NamedTuple (#21813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21813
ghimport-source-id: a1edca8ad0384a9e493ef2f3b0aa5005a668a8f3

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D15860005

Pulled By: jamesr66a

fbshipit-source-id: 4a43432d2dacebde1a676a93ac57f675db857154
2019-06-19 10:43:43 -07:00
Michael Suo
a388c78350 fix bug in CompilationUnit::define (#21886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21886
ghimport-source-id: fefbd758bbe2fbcaaad84a376ac5f69c40bccb80

Test Plan: Imported from OSS

Differential Revision: D15867647

Pulled By: suo

fbshipit-source-id: 3e0f5bbc98ec93ccf26442c4c574626e45e53888
2019-06-18 15:41:55 -07:00
davidriazati
5eb25c3704 Support in membership checks (#21527)
Summary:
This PR adds support for `in` checks like `key in my_dict`

For now, lists are left as a follow-up due to the changes around `IValue` lists and the need for an `IValue` equality op.

For objects, it uses the magic method `__contains__(self, key)`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21527
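
A minimal sketch of the dictionary case (key and value types are arbitrary):

```python
from typing import Dict

import torch

@torch.jit.script
def lookup(d: Dict[str, int], key: str) -> int:
    if key in d:  # dict membership checks are now supported
        return d[key]
    return -1

print(lookup({"a": 1}, "b"))  # -1
```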

Pulled By: driazati

Differential Revision: D15811203

fbshipit-source-id: 95745060394f8a9450efaaf8ab09d9af83bea01e
2019-06-18 09:49:12 -07:00
Horace He
08a0ac84d7 Removed unused variable from closure in range (#21897)
Summary:
This was some code I added :^)

Time for me to remove it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21897

Differential Revision: D15873213

Pulled By: Chillee

fbshipit-source-id: 769c3bd71c542be4afddc02dc2f65aa5c751b10d
2019-06-18 02:21:50 -07:00
Zachary DeVito
972ec676b2 Remove lowered execution (#21674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21674
ghimport-source-id: b8e27f0ce9b8b362daf73556ee67457fb5355062

Reviewed By: eellison

Differential Revision: D15777726

Pulled By: zdevito

fbshipit-source-id: 718ac676c9a1bcf99b856862fd29631d825645da
2019-06-16 14:29:18 -07:00
Ailing Zhang
ff1172d705 high pri Jit builtins (#21451)
Summary:
Adds support for the builtins bin/hex/oct/round/chr.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21451
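
For illustration, a small sketch exercising these builtins inside scripted code (function names are made up):

```python
import torch

@torch.jit.script
def describe(n: int) -> str:
    # bin/hex/oct/chr are now usable inside scripted code.
    return bin(n) + " " + hex(n) + " " + oct(n) + " " + chr(65 + n)

@torch.jit.script
def nearest(x: float):
    return round(x)  # round is likewise available

print(describe(10))  # "0b1010 0xa 0o12 K"
print(nearest(2.4))
```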

Differential Revision: D15702863

Pulled By: ailzhang

fbshipit-source-id: 9f69896b79e7584f12353e9f2ee2969dbe1ec6d6
2019-06-16 09:48:38 -07:00
James Reed
4bcc72fe95 Support for NamedTuple (#21428)
Summary:
Resolves https://github.com/pytorch/lockdown/issues/18

This implements NamedTuple by taking advantage of the existing `names` field in `TupleType`.

TODO: This currently doesn't retain the NamedTuple-ness through serialization. As discussed with suo offline, we can probably add a way to define an anonymous NamedTuple in script (e.g. `NamedTuple('Foo', [('a', int), ('b', float), ('c', List[float])])`) and serialize that.
TODO: implement support for calling the constructor with kwargs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21428
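
A hedged sketch of NamedTuple use in script, written with the class-based syntax supported in current PyTorch (type and field names are made up):

```python
from typing import NamedTuple

import torch

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.jit.script
def flip(p: Point) -> Point:
    # Field names ride along on the TupleType, so attribute access works.
    return Point(p.y, p.x)  # positional construction; kwargs are a TODO above

p = flip(Point(torch.zeros(2), torch.ones(2)))
print(p.x)  # tensor([1., 1.])
```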

Differential Revision: D15741564

Pulled By: jamesr66a

fbshipit-source-id: c077cbcea1880675ca6deb340a9ec78f824a136c
2019-06-14 16:45:56 -07:00
David Riazati
0481a7710d Support for type annotations instead of torch.jit.annotate() (#21390)
Summary:
This adds support for PEP 526 style annotations on assignments in place of
`torch.jit.annotate()`, so

```python
a = torch.jit.annotate(List[int], [])
```

turns into

```python
a : List[int] = []
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21390

Differential Revision: D15790937

Pulled By: driazati

fbshipit-source-id: 0cc204f7209a79839d330663cc6ba8320d3a4120
2019-06-12 15:51:46 -07:00
Sebastian Messmer
b527e48588 Use c10::List (#21177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21177

- Integrate c10::ListPtr into IValue and the c10 dispatcher.
- Streamline conversion to/from IValue. Before, we had IValue::to<> and kernel_functor.h had its own ivalue_to_arg_type and return_type_to_ivalue. They are now unified. Also, this means that nested types like Dicts of Lists of Optional of Dict of ... now work as expected.

Differential Revision: D15476433

fbshipit-source-id: bde9df80df20091aa8e6ae17ba7e90abd149b954
2019-06-12 13:58:24 -07:00
Elias Ellison
aa7e27fa70 Emit Loop Condition as Separate Block (#21611)
Summary:
Emit the loop condition as a separate block in loops, then inline it before conversion to SSA. This is needed for breaks and continues, where we will inline the condition block after the continue pass and before the break pass.

I also considered emitting a prim::For and a prim::While, but I think it's easier to just have one pathway.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21611

Differential Revision: D15775820

Pulled By: eellison

fbshipit-source-id: de17c5e65f6e4a0256a660948b1eb630e41b04fb
2019-06-11 22:03:26 -07:00