Commit Graph

339 Commits

Elias Ellison
451fc51d8d add support for overloading functions (#23886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23886

This is a series of PRs that will allow us to support adding [padding to conv](https://github.com/pytorch/pytorch/pull/22484) and also reduce the friction of adding method overloads that was brought up in  https://github.com/pytorch/pytorch/pull/23266.

Support for overloaded functions following the specification in [PEP 484](https://www.python.org/dev/peps/pep-0484/#function-method-overloading).

The usage is:
```
@torch.jit.overload
def add(x: int, y: int) -> int: ...
@torch.jit.overload
def add(x: float, y: float) -> float: ...

def add(x, y):
    return x + y
```
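
A hedged usage sketch, assuming the `add` overloads declared above are in scope when the caller is scripted (the caller's name is illustrative):
```python
import torch

@torch.jit.script
def use(a: int, b: float) -> float:
    # each call is resolved by argument types: int overload, then float overload
    return float(add(a, a)) + add(b, b)
```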

Follow up PRs:

- Add same API for methods
- A couple of cleanups for functions:
     - don't require default params to be specified on the overload as well
     - potentially error if an invocation could be matched to multiple overloads; right now it just chooses the first one, which is what mypy currently does

Test Plan: Imported from OSS

Differential Revision: D16694863

Pulled By: eellison

fbshipit-source-id: f94f2933bc1c97fa58f31846acfe962b0630068c
2019-08-07 19:18:19 -07:00
Wanchao Liang
c74216d396 add NotIn support in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23637
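
A minimal sketch of what this enables, assuming `in` is already supported for the container and element types (`not in` now also compiles in TorchScript):
```python
from typing import List

import torch

@torch.jit.script
def absent(x: int, xs: List[int]) -> bool:
    return x not in xs

print(absent(3, [1, 2, 4]))  # True
```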

Test Plan: Imported from OSS

Differential Revision: D16683558

Pulled By: wanchaol

fbshipit-source-id: 27d79850d76506255ba954601fae751e07ad7cd1
2019-08-07 16:07:21 -07:00
Jianyu Huang
2635b6262e Remove K and N function arguments for fbgemm_pack_quantized_matrix (#22956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22956

As the title says: remove the extra function arguments for better engineering.

Differential Revision: D16297724

fbshipit-source-id: a31be17708d13508c4ce9a3ce7eb5238e8d17984
2019-08-07 08:50:13 -07:00
Jianyu Huang
78cc9b92a5 Change fbgemm_linear_{int8,fp16}_weight to fbgemm_linear_{int8,fp16}_weight_fp32_activation (#22955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22955

Following the comment in https://github.com/pytorch/pytorch/pull/22891, change the fbgemm wrapper function name to indicate whether it is dynamic quantization or static quantization.

Differential Revision: D16297512

fbshipit-source-id: 498678e2af27070628be11a6d724ce17c2a3cde5
2019-08-06 23:19:26 -07:00
davidriazati
b0a27278bd Recursive script migration guide
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23892

Pulled By: driazati

Differential Revision: D16677532

fbshipit-source-id: 40f506b1c770e60309c0628d4745047996a05295
2019-08-06 21:43:28 -07:00
Michael Suo
e2f5bc5c08 Properly mangle nn.Module.__construct (#23779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23779

Name mangling requires two leading underscores, not one :(. We want this method to be
private so that subclasses that define their own `__construct` do not interfere
with Module initialization.
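
A quick plain-Python illustration of why the double underscore matters (class and method names here are illustrative):
```python
class Base(object):
    def __init__(self):
        self.__construct()        # mangled to _Base__construct

    def __construct(self):
        print("Base setup")

class Child(Base):
    def __construct(self):        # mangled to _Child__construct; no clash
        print("Child helper")

Child()  # prints "Base setup": the subclass's method no longer interferes
```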

Test Plan: Imported from OSS

Differential Revision: D16645156

Pulled By: suo

fbshipit-source-id: b9060cb35bfaa0391ff200b63fb78b1ac15fee39
2019-08-05 17:58:34 -07:00
Wanchao Liang
8fb0d198e9 make nn.LSTM accept PackedSequence instead of Tuples
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23643
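
A hedged sketch of the kind of code this unblocks (module layout, shapes, and names assumed, not taken from the PR):
```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=16)

    def forward(self, x: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
        packed = pack_padded_sequence(x, lengths)
        out, hidden = self.lstm(packed)   # PackedSequence goes straight into nn.LSTM
        return hidden[0]                  # final hidden state

scripted = torch.jit.script(Encoder())
```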

Differential Revision: D16615531

fbshipit-source-id: af508838cac21d271d3470f0f16fd75473a6e68d
2019-08-05 17:16:18 -07:00
Michael Suo
cbf05305c0 don't try to set training after ScriptModule has been initialized. (#23680)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23680

When initializing a ScriptModule during the torch.jit.load()
process, there is already a C++ module backing it. That means
that setting `training` would overwrite whatever the initialized
ScriptModule had.

This PR splits apart the common "set up internal state" part of
Module.__init__ and calls that from ScriptModule.__init__ and
Module.__init__, leaving the nn.Module-specific part (setting
`self.training`) to the nn.Module __init__.
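
A hedged sketch of the shape of the split (the helper name below is hypothetical, not the actual method added):
```python
class Module(object):
    def __init__(self):
        self._setup_internal_state()
        self.training = True              # nn.Module-specific default

    def _setup_internal_state(self):      # hypothetical shared helper
        self._parameters = {}
        self._buffers = {}
        self._modules = {}

class ScriptModule(Module):
    def __init__(self):
        # shared setup only: don't touch `training`, which the already
        # initialized C++ module owns
        self._setup_internal_state()
```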

Test Plan: Imported from OSS

Differential Revision: D16606959

Pulled By: suo

fbshipit-source-id: f7ea6b36551ff4e4472b7685f65731d5cfab87fd
2019-08-04 15:04:55 -07:00
Mingzhe Li
29881c7f02 Fix LSTM int8 quantization model size issue (#23577)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23577

This diff fixes a model size issue introduced in #23291. After that PR, the model size after int8 quantization is the same as that of the original unquantized model. The reason is that we save the original weight for int8 quantization even when it is no longer needed. This diff fixes that by only saving the original weight for the fp16 quantization path.
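
A hedged sketch of the idea behind the fix (hypothetical helper, not the actual code): keep the unquantized weight only on the fp16 path, where it is still needed.
```python
def state_to_save(packed_weight, original_weight, dtype):
    # int8 keeps only the packed weight, so the serialized model stays small
    state = {"packed_weight": packed_weight}
    if dtype == "fp16":
        state["original_weight"] = original_weight
    return state
```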

Reviewed By: llyfacebook

Differential Revision: D16557619

fbshipit-source-id: f924ae8d155a0d525b86a7440b3c7147d5bead0a
2019-08-02 13:38:30 -07:00
davidriazati
995920ae2c Fix frontend error message
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23576

Pulled By: driazati

Differential Revision: D16611640

fbshipit-source-id: 4a6937e779dc43b3f043aca33e66d2b84376501c
2019-08-02 11:37:21 -07:00
Nikolay Korovaiko
3d15ee1b34 Remove more uses of DimensionedTensorType
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23060

Differential Revision: D16460391

Pulled By: Krovatkin

fbshipit-source-id: b50ee87d22ad18b8cbfff719b199ea876ef172f1
2019-08-01 21:19:28 -07:00
Elias Ellison
029c8e7754 allow forward hooks in tracing (#23613)
Summary:
As far as I could tell, forward hooks work out of the box, so allow them in tracing. We don't have any way of supporting backward hooks, though.

Fixes https://github.com/pytorch/pytorch/issues/20862 and fixes https://github.com/pytorch/pytorch/issues/17571
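
A minimal sketch of the newly allowed pattern (the hook body is illustrative):
```python
import torch
import torch.nn as nn

m = nn.Linear(4, 2)
# Registering a forward hook no longer blocks tracing; the hook runs while the
# trace is recorded. Backward hooks remain unsupported.
m.register_forward_hook(lambda mod, inp, out: print("hook saw", out.shape))

traced = torch.jit.trace(m, torch.randn(1, 4))  # previously raised here
```
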
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23613

Differential Revision: D16601437

Pulled By: eellison

fbshipit-source-id: ecf5dc6201ca08b3b9afdb9fcdb0fda8741133a9
2019-08-01 09:51:19 -07:00
davidriazati
756bdcbca4 Include recursive class compilations in error call stack (#23454)
Summary:
Previously these were left out, which would lead to confusing messages;
now it looks something like:

```
torch.jit.frontend.UnsupportedNodeError: import statements aren't
supported
:
at ../test.py:13:9
    def bad_fn(self):
        import pdb
        ~~~~~~ <--- HERE
'__torch__.X' is being compiled since it was called from 'fn'
at ../test.py:16:12
def fn(x):
    return X(10)
           ~~~~ <--- HERE
```
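
A sketch of a `test.py` that would produce a report like the above (definitions reconstructed from the messages, so the details are assumed):
```python
import torch

class X(object):
    def __init__(self, n: int):
        self.n = n

    def bad_fn(self):
        import pdb  # not supported in TorchScript

def fn(x: int):
    return X(10)

torch.jit.script(fn)  # compiles X recursively and reports both call sites
```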

Fixes #23453

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23454

Pulled By: driazati

Differential Revision: D16567930

fbshipit-source-id: 251b6f91f37a2816e06bb4c803f9bc172fa1d91b
2019-07-30 17:29:54 -07:00
Michael Suo
c8817f9436 fix default value for script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23542

Test Plan: Imported from OSS

Differential Revision: D16557122

Pulled By: suo

fbshipit-source-id: c86578aa2c55f44ed5d573d33874a82244df3d09
2019-07-29 19:51:26 -07:00
Michael Suo
6314af6e57 Revert D16526027: [jit] Include recursive class compilations in error call stack
Differential Revision:
D16526027

Original commit changeset: 109f2968430d

fbshipit-source-id: c27252540ec6b7da60739eb7dcc8b1650672c226
2019-07-29 19:02:39 -07:00
davidriazati
52b95fd4be Include recursive class compilations in error call stack (#23454)
Summary:
Previously these were left out, which would lead to confusing messages;
now it looks something like:

```
torch.jit.frontend.UnsupportedNodeError: import statements aren't
supported
:
at ../test.py:13:9
    def bad_fn(self):
        import pdb
        ~~~~~~ <--- HERE
'__torch__.X' is being compiled since it was called from 'fn'
at ../test.py:16:12
def fn(x):
    return X(10)
           ~~~~ <--- HERE
```

Fixes #23453
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23454

Pulled By: driazati

Differential Revision: D16526027

fbshipit-source-id: 109f2968430dbf51ee91b1b3409badfd557d19a4
2019-07-29 18:00:05 -07:00
davidriazati
696642ae8d Change docs to use recursive script API (#21612)
Summary:
Use the recursive script API in the existing docs

TODO:
* Migration guide for 1.1 -> 1.2
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21612

Pulled By: driazati

Differential Revision: D16553734

fbshipit-source-id: fb6be81a950224390bd5d19b9b3de2d97b3dc515
2019-07-29 17:51:22 -07:00
Michael Suo
65a89472c4 Put all modules in the global Python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23154

Test Plan: Imported from OSS

Differential Revision: D16441913

Pulled By: suo

fbshipit-source-id: a79f2c3e06a33cbd79b2e3333f16c069f356f451
2019-07-29 16:38:20 -07:00
Wanchao Liang
c384fbf4c8 support torch._C._get_tracing_state in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23248

Test Plan: Imported from OSS

Differential Revision: D16466588

Pulled By: wanchaol

fbshipit-source-id: 3c3d5dec2cea2f9cb080eadaef457cc62ac3fbe0
2019-07-29 15:05:50 -07:00
Nikolay Korovaiko
d6ff78fd00 fix an over-indented return in trace_module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23358

Differential Revision: D16519010

Pulled By: Krovatkin

fbshipit-source-id: a7e4225b70e915d91c74874e3eca9bcb87baf84c
2019-07-29 11:15:55 -07:00
Michael Suo
711be82951 Make optimize a thread_local flag
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23170

Test Plan: Imported from OSS

Differential Revision: D16441912

Pulled By: suo

fbshipit-source-id: a33485178a329d54e41e364c4f14950f88481c55
2019-07-24 23:09:21 -07:00
Mingzhe Li
b3980f46a2 Replace uint8 with int8 in Linear and LSTM quantization path (#23347)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23347

This diff replaces uint8 with int8 to match the underlying kernel implementation. When we do int8 quantization, we compute uint8 (input activation) * int8 (weight) -> uint8 (output activation). The weight is quantized to int8.
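
A conceptual dtype sketch of that scheme (plain NumPy, not the FBGEMM kernel; the requantization step is illustrative):
```python
import numpy as np

x_q = np.random.randint(0, 256, size=(2, 4)).astype(np.uint8)    # uint8 activation
w_q = np.random.randint(-128, 128, size=(3, 4)).astype(np.int8)  # int8 weight
acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T              # int32 accumulator
y_q = np.clip(acc >> 8, 0, 255).astype(np.uint8)                 # requantize to uint8
```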

Reviewed By: jianyuh

Differential Revision: D16469435

fbshipit-source-id: a697655b0e97833fc601e5980970aec4dba53c39
2019-07-24 22:25:12 -07:00
davidriazati
48ca64dbf7 Better error for compiling a module type
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23312

Pulled By: driazati

Differential Revision: D16461299

fbshipit-source-id: 11e56c44d561c3fbf70a96c22c5fd494eea0cf19
2019-07-24 14:24:50 -07:00
Mingzhe Li
8fdbe1e10b Support LSTM with FP16 weight (#23291)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23291

This diff implements LSTM with FP16 weights based on FBGEMM.

At a high level, here are the steps:
1. Quantize and pack the weight in every layer of the LSTM
2. Pass the weights from step 1 to the ATen `quantized_lstm` function, which does the matrix multiplication with the FP16 weight. The dtypes of the variables in the MM are:
Y (fp32) = X (fp32) * W (fp16) + B (fp32)
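
A conceptual sketch of those dtypes in plain PyTorch (not the FBGEMM path itself; shapes assumed):
```python
import torch

X = torch.randn(4, 8)                      # fp32 activation
W = torch.randn(16, 8).to(torch.float16)   # fp16 weight (conceptually packed)
B = torch.randn(16)                        # fp32 bias

Y = X @ W.to(torch.float32).t() + B        # compute in fp32; Y is fp32
```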

Reviewed By: jianyuh

Differential Revision: D16389595

fbshipit-source-id: c26ae4e153c667a941f4af64e9d07fc251403cee
2019-07-24 12:40:11 -07:00
Michael Suo
017870a633 kill module_lookup
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23097

Test Plan: Imported from OSS

Differential Revision: D16383329

Pulled By: suo

fbshipit-source-id: 282f8bac2245d584b66139daf4e5ea7b2b317295
2019-07-23 12:21:23 -07:00
Michael Suo
3be0a2b4be Parse all stmts in class defs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23031

Test Plan: Imported from OSS

Differential Revision: D16383327

Pulled By: suo

fbshipit-source-id: 6485109a66e653b7f26d30b91a97af8d71594e22
2019-07-23 12:21:15 -07:00
davidriazati
2891784a72 Resolve with closed over variables instead of stack frame (#22270)
Summary:
Previously we looked at the stack frame of the function that called
`script` to resolve variables. This doesn't work if someone calls `script`
with a function defined somewhere else that references captured
variables. We already have a mechanism to look at the closed-over
variables of a function, so this changes the `rcb` to use that.
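
A hedged sketch of the case this fixes (file names and values are hypothetical):
```python
# helpers.py
import torch

SCALE = 2.0  # value referenced by `scale`, invisible to callers' stack frames

def scale(x: torch.Tensor) -> torch.Tensor:
    return x * SCALE

# main.py
import torch
from helpers import scale

# Resolution now uses `scale`'s own closed-over/global names (where SCALE
# lives) rather than the stack frame of the code calling torch.jit.script.
scripted = torch.jit.script(scale)
```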

Pull Request resolved: https://github.com/pytorch/pytorch/pull/22270

Pulled By: driazati

Differential Revision: D16391346

fbshipit-source-id: ad9b314ae86c249251b106079e76a5d7cf6c04c2
2019-07-22 11:44:36 -07:00
Zachary DeVito
c09e92255c Add initial support for serializing classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22953

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D16340214

Pulled By: zdevito

fbshipit-source-id: 70fb1968eca34e14492e0d2be52e28b27813f821
2019-07-19 14:51:59 -07:00
davidriazati
9897ec4701 Recursively compile class types (#22475)
Summary:
Try to compile class types encountered in recursive script
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22475

Pulled By: driazati

Differential Revision: D16340717

fbshipit-source-id: 5e1a46db517be2412f57156efbc4eb3347b01a8a
2019-07-18 15:43:16 -07:00
Michael Suo
5911cb8e5c Make load() create only one CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22727

Differential Revision: D16197603

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 3eaefe6f229032b109d63a151fe0a20268b5cf56
2019-07-16 20:08:10 -07:00
davidriazati
7a370dbb41 Enable recursive script mode as the default (#22887)
Summary:
This fixes up the test suite (mostly just adding `ignore` decorators
to tests that need to call Python functions) so that it all passes with
recursive script enabled.

The main user-facing result of this change is that Python functions are
compiled without any decorators, so non-TorchScriptable code must be
decorated with `torch.jit.ignore` (or
`torch.jit.ignore(drop_on_export=True)` to maintain the functionality of
the current `ignore`).

Details can be found in #20939
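
A minimal sketch of the new default behavior (the module and function names are illustrative):
```python
import torch
import torch.nn as nn

@torch.jit.ignore
def log_stats(x):
    # arbitrary Python that TorchScript can't compile stays out of compilation
    print("mean:", x.mean().item())

class M(nn.Module):
    def forward(self, x):
        log_stats(x)      # allowed because of the decorator
        return x + 1

scripted = torch.jit.script(M())  # forward is compiled recursively
```
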
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22887

Pulled By: driazati

Differential Revision: D16277608

fbshipit-source-id: 0abd0dc4291cf40651a1719bff813abb2b559640
2019-07-16 13:00:08 -07:00
Michael Suo
b6a88b3344 Make traced fns also go into the global python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22901

Test Plan: Imported from OSS

Differential Revision: D16278160

Pulled By: suo

fbshipit-source-id: f3e7d83b48d5f5b5cb1548ccc5b9bd382a3c411a
2019-07-16 12:04:16 -07:00
Michael Suo
c5afdd0b55 Revert D16197605: [jit] Make traced fns also go into the global python CU
Differential Revision:
D16197605

Original commit changeset: d32c975486b0

fbshipit-source-id: a00f0490cc23824792f3e745d7b5a003b1a33d20
2019-07-15 22:31:33 -07:00
davidriazati
6ffacd5f02 Use original module's class name for ScriptModules (#22873)
Summary:
Since recursive script creates a ScriptModule from an `nn.Module`,
there are no ties to the original module to pull a type name from, so we
have to explicitly pass it in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22873

Pulled By: driazati

Differential Revision: D16268547

fbshipit-source-id: 902a30e6e36427c6ba7033ded027a29d9dcbc1ee
2019-07-15 15:27:29 -07:00
Michael Suo
5fc1260e0a Make traced fns also go into the global python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22725

Differential Revision: D16197605

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: d32c975486b0cb4808687f0aa89325571f2817c4
2019-07-15 13:13:12 -07:00
Michael Suo
16aa235f43 _script_compile and _script_class_compile add to the python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22724

Differential Revision: D16197609

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: e12b31f8c8ce14b0968f4ac9445e7d225126b210
2019-07-15 13:13:08 -07:00
Lingyi Liu
1a93b96815 Revert da315a4
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22837

Differential Revision: D16239667

Pulled By: llyfacebook

fbshipit-source-id: 1a625d78d633927129dd2791e65b333b3902f94f
2019-07-13 01:54:20 -07:00
Karl Ostmo
da315a4e2a Revert D16037021: Support GRU module quantization in Pytorch
Differential Revision:
D16037021

Original commit changeset: 71145c67d869

fbshipit-source-id: 33cd2e57eba30ea33cc4f3116732a721c26f6efb
2019-07-12 21:05:34 -07:00
Lingyi Liu
d8c1b86135 Support GRU module quantization in Pytorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22498

Reviewed By: BIT-silence

Differential Revision: D16037021

fbshipit-source-id: 71145c67d8696e525b686cd3313033e5b6771718
2019-07-12 18:31:08 -07:00
Mingzhe Li
9eb039334f Use Linear Operator with fp16 weights in JIT (#22323)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22323

This diff adds an interface to use quantized Linear op in JIT.

Reviewed By: jamesr66a

Differential Revision: D16040724

fbshipit-source-id: 90e90aff9973c96ea076ed6a21ae02c349ee2bcf
2019-07-12 15:59:17 -07:00
Elias Ellison
cf2889ad8f add support for breaks and continues (#21692)
Summary:
Add support for breaks and continues in the JIT. We do this with a graph transform pre-SSA.

A graph of the form
```
def test():
    i = 0
    while i < 5:
        if i == 3:
            break
        i += 1
        print(i)
```
has the body of the loop transformed to
```
if i == 3:
    did_break = True
else:
    did_break = False
if did_break:
    loop_exit = True
else:
    i += 1
    print(i)
    loop_exit = i < 5
```
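
A small runnable example of what now compiles (a sketch, not a test from the PR):
```python
import torch

@torch.jit.script
def count_until_three() -> int:
    i = 0
    while i < 5:
        if i == 3:
            break
        i += 1
    return i

print(count_until_three())  # 3
```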

I am going to add more tests but I think it is ready for review now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21692

Differential Revision: D16215807

Pulled By: eellison

fbshipit-source-id: 365102f42de4861d9323caaeb39a96de7619a667
2019-07-12 15:02:44 -07:00
Michael Suo
de819be93e refactor self to be a class again
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22722

Test Plan: Imported from OSS

Differential Revision: D16197607

Pulled By: suo

fbshipit-source-id: b4dd96b3f9cc46b48678aab0ff89afc3666e2185
2019-07-11 14:55:39 -07:00
Michael Suo
22d70e0d4b Give functions qualified names
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22721

Test Plan: Imported from OSS

Differential Revision: D16197606

Pulled By: suo

fbshipit-source-id: 94718fcdb0d3b651f16674af3cfd6249ed4533ae
2019-07-11 14:55:34 -07:00
Karl Ostmo
1ecc945ab2 Revert D15998762: [jit] Give functions qualified names
Differential Revision:
D15998762

Original commit changeset: bc2b734f626a

fbshipit-source-id: a118cc4e9a34233279e8380529a8d8120a25839d
2019-07-10 16:10:28 -07:00
Karl Ostmo
a1ca32409f Revert D15998758: [jit] refactor self to be a class again
Differential Revision:
D15998758

Original commit changeset: 14bad87bb6e4

fbshipit-source-id: f2c29974d4afc4d8f88a36e9c266e6d5a22a6191
2019-07-10 16:10:24 -07:00
Michael Suo
ee9c8a75f4 refactor self to be a class again (#22207)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22207
ghimport-source-id: 36ee8bd17411a2e220665ad2a27364653061070e

Test Plan: Imported from OSS

Differential Revision: D15998758

Pulled By: suo

fbshipit-source-id: 14bad87bb6e44bf1a43ae86339d8cc7b311c76dd
2019-07-10 15:19:07 -07:00
Michael Suo
c0674cebf1 Give functions qualified names (#22206)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22206
ghimport-source-id: d453219d907e048f24eb7f63c096b2c300307c83

Test Plan: Imported from OSS

Differential Revision: D15998762

Pulled By: suo

fbshipit-source-id: bc2b734f626ab07f97dc50ddf1b021e8b46de312
2019-07-10 15:19:03 -07:00
davidriazati
8a233b99cb Report errors through call stack (#22280)
Summary:
The error for `test_error_stack_module`:

```
Traceback (most recent call last):
  File "../test.py", line 35, in <module>
    scripted = torch.jit.script(M())
  File "/home/davidriazati/other/pytorch/torch/jit/__init__.py", line 1119, in script
    return _convert_to_script_module(obj)
  File "/home/davidriazati/other/pytorch/torch/jit/__init__.py", line 1825, in _convert_to_script_module
    raise e
RuntimeError:

d(int x) -> int:
Expected a value of type 'int' for argument 'x' but instead found type 'str'.
:
at ../test.py:11:12
def c(x):
    return d("hello") + d(x)
           ~ <--- HERE

'c' is being compiled since it was called from 'b'
at ../test.py:14:12
def b(x):
    return c(x)
           ~~~ <--- HERE

'b' is being compiled since it was called from 'forward'
at ../test.py:22:16
    def forward(self, x):
        return b(x)
               ~~~ <--- HERE

'forward' is being compiled since it was called from 'forward'
at ../test.py:31:20
    def forward(self, x):
        return x + self.submodule(x)
                   ~~~~~~~~~~~~~~~~ <--- HERE
```
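
A sketch of a test module that would produce a stack like the above (definitions reconstructed from the trace, so details are assumed):
```python
import torch
import torch.nn as nn

def d(x: int) -> int:
    return x + 1

def c(x):
    return d("hello") + d(x)   # wrong argument type, reported at this frame

def b(x):
    return c(x)

class Submodule(nn.Module):
    def forward(self, x):
        return b(x)

class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.submodule = Submodule()

    def forward(self, x):
        return x + self.submodule(x)

scripted = torch.jit.script(M())   # fails, printing the full call stack
```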

This also unifies our error reporting in the front end with `ErrorReport`

TODO
* Include module names in the message; #22207 should make this easy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/22280

Pulled By: driazati

Differential Revision: D16060781

fbshipit-source-id: c42968b53aaddb774ac69d5abbf7e60c23df8eed
2019-07-09 16:41:22 -07:00
Michael Suo
3b2844eeea Make CompilationUnit own Functions (#22202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22202
ghimport-source-id: de6c963af1df76d2d6357155e64a5913ab879f76

Test Plan: Imported from OSS

Differential Revision: D15998761

Pulled By: suo

fbshipit-source-id: 5414a6424953738d823b265d20dc67dde6e5b2d8
2019-07-04 17:12:00 -07:00
David Riazati
10c4b98ade Remove weak script (#22212)
Summary:
* Deletes all weak script decorators / associated data structures / methods
   * In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
   * Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing, using `ignore` and `export` to continue supporting the overloaded `forward` methods
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand

This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212

Differential Revision: D15988346

Pulled By: driazati

fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
2019-07-03 17:28:25 -07:00