Commit Graph

631 Commits

Author SHA1 Message Date
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
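A minimal sketch of what this enables (the exact flags shown are illustrative):

```python
import torch

@torch.jit.script
def unique_with_counts(t: torch.Tensor):
    # boolean_dispatch picks a statically-typed variant based on the
    # compile-time value of return_counts, so the return type is fixed.
    return torch.unique(t, sorted=True, return_counts=True)
```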
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Elias Ellison
c2ac2127be [JIT] recursively compile class types (#38050)
Summary:
Make it so that non-nn Module classes do not need to be annotated with `torch.jit.script`
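A hedged sketch of the new behavior (class and function names are made up for illustration):

```python
import torch

class Accumulator(object):  # plain class: no annotation needed anymore
    def __init__(self, start: int):
        self.total = start

    def add(self, x: int) -> int:
        self.total += x
        return self.total

@torch.jit.script
def run(n: int) -> int:
    acc = Accumulator(0)  # the class is compiled recursively on first use
    for i in range(n):
        acc.add(i)
    return acc.total
```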
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38050

Differential Revision: D21482654

Pulled By: eellison

fbshipit-source-id: 22689e4d7a33f6e1574b9495cff29a1fe6abb910
2020-05-12 17:16:28 -07:00
Meghan Lele
1456515f15 [JIT] Disallow plain List type annotation without arg (#38130)
Summary:
**Summary**
This commit detects and prohibits the case in which `typing.List` is
used as an annotation without a type argument (i.e. `typing.List` rather
than `typing.List[T]`). At present, `typing.List` is always assumed to
have one argument, and when it is used without one,
`typing.List.__args__[0]` is set to some `typing.TypeVar` instance, which
has no JIT type equivalent.
Consequently, trying to convert `typing.List` to a JIT type results in
a `c10::ListType` with `nullptr` for its element type, which can cause
a segmentation fault.

This is fixed by returning a `ListType` from
`jit.annotations.try_ann_to_type` only if the element type is converted
successfully to a JIT type and returning `None` otherwise.
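A minimal sketch of the now-required spelling (scripted free function for illustration):

```python
import torch
from typing import List

@torch.jit.script
def cat_boxes(box_lists: List[torch.Tensor]) -> torch.Tensor:
    # The element type must be spelled out; a bare `List` is now rejected
    # with a proper error instead of segfaulting.
    return torch.cat(box_lists)
```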

**Test Plan**
I ran the code from the issue (https://github.com/pytorch/pytorch/issues/37530) that reported this problem and also ran some unit tests.

*Before*
```
$ python3 segfault.py
Segmentation fault (core dumped)
```

*After*
```
$ python3 segfault.py
Traceback (most recent call last):
...
RuntimeError:
Unknown type name 'List':
  File "segfault.py", line 9
    @classmethod
    def cat(cls, box_lists: List):
                            ~~~~ <--- HERE
        return cls(torch.cat([x for x in box_lists]))
'Boxes.cat' is being compiled since it was called from 'Boxes'
  File "segfault.py", line 13
def f(t: torch.Tensor):
    b = Boxes(t)
        ~~~~~ <--- HERE
    c = Boxes(torch.tensor([3, 4]))
    return Boxes.cat([b, c])
'Boxes' is being compiled since it was called from 'f'
  File "segfault.py", line 13
def f(t: torch.Tensor):
    b = Boxes(t)
    ~~~~~~~~~~~ <--- HERE
    c = Boxes(torch.tensor([3, 4]))
    return Boxes.cat([b, c])
```

**Fixes**
This pull request fixes https://github.com/pytorch/pytorch/issues/37530.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38130

Differential Revision: D21485284

Pulled By: SplitInfinity

fbshipit-source-id: 9b51ef6340485a24c8b7cfb85832d4668b8ac51a
2020-05-11 14:15:54 -07:00
ycao
ae534dc978 [TorchScript] Explicitly disallow del with more than 1 operand. (#38089)
Summary:
`del` in Python supports multiple operands, but the PyTorch C++ frontend doesn't. To be consistent across frontends, we decided to throw an exception when `del` is found with multiple operands inside TorchScript.
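A short sketch of what is and isn't accepted:

```python
import torch
from typing import List

@torch.jit.script
def drop_first(xs: List[int]) -> List[int]:
    del xs[0]            # single operand: supported
    # del xs[0], xs[0]   # multiple operands: now raises an exception
    return xs
```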
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38089

Test Plan: Unit tests in test/jit/test_builtins.py

Differential Revision: D21478900

Pulled By: SplitInfinity

fbshipit-source-id: 1cbd61301680c5d6652ef104996178cefcdd3716
2020-05-08 17:56:36 -07:00
James Reed
c1e7758b5e Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38101

Original commit changeset: 29e8a4d3b8bf
ghstack-source-id: 103730417

Test Plan: waitforsadcastle

Differential Revision: D21471381

fbshipit-source-id: a922cdf31ba32021e7264ae1454c646c0bfd7ef4
2020-05-08 10:53:06 -07:00
Nikita Shulga
4bc0a7f86a Revert D20229168: [quantization] Use torchbind for Linear PackedParams
Test Plan: revert-hammer

Differential Revision:
D20229168

Original commit changeset: 3607cac9aa5b

fbshipit-source-id: 29e8a4d3b8bffd95ff6a58b46c4f1c1e23770304
2020-05-07 19:47:45 -07:00
James Reed
eaf9b28c55 [quantization] Use torchbind for Linear PackedParams (#34140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34140

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D20229168

Pulled By: jamesr66a

fbshipit-source-id: 3607cac9aa5b4b044572329742baed03350491c6
2020-05-07 19:03:44 -07:00
Ralf Gommers
46ed3349f3 Add --check-untyped-defs to mypy.ini and test suite (#37594)
Summary:
Also move the ignores for imports to the bottom in `mypy.ini`; those are much less interesting, so the file starts with the stuff people want to work on.

Second commit tests the instructions: remove an ignore, fix the issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37594

Differential Revision: D21434858

Pulled By: ezyang

fbshipit-source-id: 4f1a6868cdb4cb59d072bcf105f48c3a5ba3ff98
2020-05-07 06:36:01 -07:00
Elias Ellison
28ed04c620 [JIT] remove list_with_default op (#37886)
Summary:
We can implement this as a builtin instead of as a registered op.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37886

Differential Revision: D21414329

Pulled By: eellison

fbshipit-source-id: 6e130fa83fbf7ba4d4601f509cb169a2fa804108
2020-05-06 17:32:11 -07:00
Ailing Zhang
dd618216c5 [JIT]Support adv indexing using list. (#37848)
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`

This PR adds support for indexing through list which is equivalent to tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`

Note: for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616). Now that it's fixed, we probably want to mark it as BC-breaking.
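For example, the following now compiles and matches eager semantics (a minimal sketch):

```python
import torch

@torch.jit.script
def pick_rows(x: torch.Tensor) -> torch.Tensor:
    return x[[0, 1]]  # a list index now behaves like a tensor index

x = torch.arange(9).reshape(3, 3)
assert torch.equal(pick_rows(x), x[torch.tensor([0, 1])])
```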
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848

Reviewed By: suo

Differential Revision: D21409840

Pulled By: ailzhang

fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
2020-05-06 10:44:48 -07:00
Michael Suo
bd220b336b [jit] fix trace checking reporting divergent names (#37842)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842

Fixes https://github.com/pytorch/pytorch/issues/23993.

Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```

This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

Test Plan: Imported from OSS

Differential Revision: D21406478

Pulled By: suo

fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
2020-05-05 13:39:41 -07:00
Edward Yang
4fef3763dd Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
Summary:
Original PR: https://github.com/pytorch/pytorch/pull/37419

cc mattip suo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37778

Differential Revision: D21385774

Pulled By: ezyang

fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be
2020-05-04 14:32:35 -07:00
Michael Suo
20f7e62b1d Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings
Test Plan: revert-hammer

Differential Revision:
D21337640

Original commit changeset: d4ad198780c3

fbshipit-source-id: fa9ba6ac542173a50bdb45bfa12f3fec0ed704fb
2020-05-04 10:57:55 -07:00
mattip
f10fbcc820 Split up documentation into subpages and clean up some warnings (#37419)
Summary:
xref gh-32838, gh-34032

This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which will build out `autofunction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents pointing to the actual single-class or single-function documentation pages.

Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is to add names to `__all__` when adding them to `globals()` in `torch.__init__.py`

I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419

Differential Revision: D21337640

Pulled By: ezyang

fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
2020-05-04 09:39:22 -07:00
Michael Voznesensky
91e74fd843 [JIT] Adds a code_with_constants method to module printing (#37586)
Summary:
Closes https://github.com/pytorch/pytorch/issues/36625
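A usage sketch, assuming `code_with_constants` is exposed as a property returning the code string plus a table of the constants it references:

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 4

sm = torch.jit.script(M())
code, consts = sm.code_with_constants
print(code)    # source in which constants appear by name
print(consts)  # mapping from those names to their values
```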
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37586

Differential Revision: D21331385

Pulled By: suo

fbshipit-source-id: 752e63eac8bdd06c6719efb972cdc832ad7c1535
2020-04-30 20:44:01 -07:00
Michael Suo
896f8130a6 Revert D21297549: [jit] fix trace checking reporting divergent names
Test Plan: revert-hammer

Differential Revision:
D21297549

Original commit changeset: 981d5879a4a2

fbshipit-source-id: 9be6e88007c644914973a305f9e7a961ef11a815
2020-04-29 16:16:44 -07:00
Michael Suo
4bfa51d405 [jit] fix trace checking reporting divergent names (#37464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464

Fixes https://github.com/pytorch/pytorch/issues/23993.

There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.
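The aliasing point in (2) can be seen with plain `copy.deepcopy` (a minimal sketch):

```python
import copy
import torch

sample = torch.ones(1)
inputs = (sample, sample, sample)

# Per-element cloning produces three non-aliased tensors:
wrong = tuple(t.clone() for t in inputs)
assert wrong[0] is not wrong[1]

# copy.deepcopy memoizes already-copied objects, so aliasing is preserved:
right = copy.deepcopy(inputs)
assert right[0] is right[1] is right[2]
```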

Test Plan: Imported from OSS

Differential Revision: D21297549

Pulled By: suo

fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
2020-04-28 23:52:57 -07:00
James Reed
fd4a09ea73 [WIP] Bind in CellParams for RNN (#35787)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35787

Test Plan: Imported from OSS

Differential Revision: D20784118

Pulled By: jamesr66a

fbshipit-source-id: 5d8f7e1502f707bff9a9aefa90e3edfb3429549b
2020-04-28 21:47:06 -07:00
moto
5a27ec09b8 Add Inverse Short Time Fourier Transform in ATen native (#35569)
Summary:
Ported `torchaudio`'s implementation (test, and documentation as well) to ATen.

Note
 - Batch packing/unpacking is performed in Python. The ATen implementation expects a 4D input tensor.
 - `hop_length` is initialized in the same way as in the `stft` implementation. [Torchaudio's version tried to mimic the same behavior but was slightly different](7da61a4bee/torchaudio/functional.py (L152-L157)).

Closes https://github.com/pytorch/pytorch/issues/34827
Relates https://github.com/pytorch/pytorch/issues/3775
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35569

Differential Revision: D21178090

Pulled By: mthrok

fbshipit-source-id: 2701a8b241a36a6fb1b740c2fb2b07cb938185d4
2020-04-24 12:14:55 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Nikolay Korovaiko
b5483b8286 [pytorch][PR] Re-enable a failing test (#36763)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36763

Differential Revision: D21083309

Pulled By: Krovatkin

fbshipit-source-id: 4fb5b95bd3e01bd83a406d4394f266d7fd168f21
2020-04-16 22:21:47 -07:00
Sebastian Messmer
e9b4580411 Revert D20839674: [pytorch][PR] Re-enable a failing test
Test Plan: revert-hammer

Differential Revision:
D20839674

Original commit changeset: 68f41610a823

fbshipit-source-id: b69ccfd49bbde566870fa53cd3fe2931721db4ea
2020-04-16 15:26:34 -07:00
Nikolay Korovaiko
487dc0f961 Re-enable a failing test (#35847)
Summary:
This test was failing because caching resulted in a function with multiple execution plans, rather than multiple functions with a single execution plan each, as the test writer intended.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35847

Differential Revision: D20839674

Pulled By: Krovatkin

fbshipit-source-id: 68f41610a823d94c1e744c85ac72652c741d73ae
2020-04-16 11:46:02 -07:00
davidriazati
8d66f88eb1 [jit] Fix bound method copying (#36546)
Summary:
Previously we were copying the bound method of the original class to the
new script module class, which causes `self` to be wrong. This PR
changes it so we fetch the unbound function, then bind it to the new
script module, then attach it to the module.

Fixes #28280
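The underlying Python mechanics, as a standalone sketch (names are illustrative):

```python
import types

class Original:
    def name(self):
        return type(self).__name__

class ScriptedCopy:
    pass

src, dst = Original(), ScriptedCopy()

# Copying the bound method carries the old `self` along:
bad = src.name
assert bad() == "Original"

# Fetching the unbound function and re-binding it works as intended:
good = types.MethodType(Original.name, dst)
assert good() == "ScriptedCopy"
```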
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36546

Pulled By: driazati

Differential Revision: D21023329

fbshipit-source-id: 6b3f8404700860151792f669a9c02fbd13365272
2020-04-15 17:38:20 -07:00
Wanchao Liang
999d7f6ab2 [jit] tracer flag to guard risky behaviors (#36277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277

This PR introduces a flag to the tracer that guards risky behaviors
like adding a list/dict as the output of the tracer. Currently, to avoid
breaking BC for users, we throw a warning if the tracer output is a list,
and will throw an error when the tracer output is a dict to enforce using
this flag (next PR)
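A hedged sketch of the opt-in; the `strict=False` spelling is an assumption about the flag's name:

```python
import torch

def f(x):
    # Returning a dict from a traced function is one of the "risky"
    # behaviors the new flag guards.
    return {"out": x + 1}

traced = torch.jit.trace(f, (torch.ones(1),), strict=False)  # assumed flag
```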

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
2020-04-13 22:35:03 -07:00
Zachary DeVito
967cdc2baf Simplify replicate logic (#36174)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36174

Test Plan: Imported from OSS

Differential Revision: D20903301

Pulled By: zdevito

fbshipit-source-id: 714a32fe417b7d1615886936c41505d1ba538f47
2020-04-13 11:21:43 -07:00
Orion Reblitz-Richardson
8240db11e1 [pytorch] Remove python2 support from tests and torch.jit (#35042)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35042

Removing Python 2 tests and some compat code in torch.jit. Check whether dependent projects and external tests have any issues after these changes.

Test Plan: waitforsandcastle

Reviewed By: suo, seemethere

Differential Revision: D18942633

fbshipit-source-id: d76cc41ff20bee147dd8d44d70563c10d8a95a35
2020-03-26 21:29:51 -07:00
Wanchao Liang
4a84ac5f5d [jit] make Future type annotation available in Python (#27637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27637

Fixes https://github.com/pytorch/pytorch/issues/26578
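A usage sketch, assuming the annotation is exposed as `torch.jit.Future`:

```python
import torch
from torch.jit import Future  # assumed import location

@torch.jit.script
def wait_and_double(fut: Future[torch.Tensor]) -> torch.Tensor:
    return fut.wait() * 2
```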

Test Plan: Imported from OSS

Differential Revision: D20626866

fbshipit-source-id: 20d6a3a719fddcb33e0e17a56d7123535fa20d65
2020-03-24 14:36:05 -07:00
davidriazati
44622bbda9 [jit] Add lazy script decorator (#34935)
Summary:
Stacked PRs
 * #34938 - [jit] Remove stray `script`
 * **#34935 - [jit] Add lazy script decorator**

Some users maintain libraries of code that is largely traceable but not
scriptable. However, some functions may need to be `torch.jit.script`ed if
they contain control flow, so the tracer will use the compiler version.
This, however, impacts library startup time as in #33418, so this PR adds
a workaround in the form of a `torch.jit._lazy_script_while_tracing`
decorator that will only initialize the compiler if the function is called
while actually tracing.
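A short usage sketch of the decorator named above:

```python
import torch

@torch.jit._lazy_script_while_tracing
def positive_part(x: torch.Tensor) -> torch.Tensor:
    # Control flow needs the compiler, but the compiler is only
    # initialized if this function is actually reached while tracing.
    if bool(x.sum() > 0):
        return x
    return torch.zeros_like(x)

def f(x):
    return positive_part(x) + 1

traced = torch.jit.trace(f, (torch.ones(2),))
```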

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34935

Pulled By: driazati

Differential Revision: D20569778

fbshipit-source-id: d87c88c02b1abc86b283729ab8db94285d7d4853
2020-03-24 13:43:18 -07:00
davidriazati
e35dd4f603 [jit] Include call stack in OSError message (#34669)
Summary:
Previously there was no indication of why you would get an `OSError` for something (such as the generated methods of a `dataclass`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34669

Pulled By: driazati

Differential Revision: D20426570

fbshipit-source-id: 45d63631984fa26a87c03de5523fb10d8abbc6db
2020-03-18 15:10:23 -07:00
Michael Suo
a57f92e4de [jit] copy unused/ignored methods to ScriptModule during compilation (#33981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33981

Okay it turns out that https://github.com/pytorch/pytorch/pull/29342
deletes actually useful things from the resulting Python module. In
particular, people like having `ignore`'d methods attached so that they
can invoke them from python.
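A minimal sketch of what is preserved:

```python
import torch

class M(torch.nn.Module):
    @torch.jit.ignore
    def debug_info(self) -> str:
        return "not compiled, but still reachable"

    def forward(self, x):
        return x + 1

sm = torch.jit.script(M())
print(sm.debug_info())  # ignored methods remain callable from Python
```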

Test Plan: Imported from OSS

Differential Revision: D20171650

Pulled By: suo

fbshipit-source-id: 71862e932c6a56cd055d0cff6657887ee0ceb9a8
2020-03-16 10:38:59 -07:00
Tugrul Ince
c9023e3b12 Support left and right shift operators in JIT (#34563)
Summary:
With this PR, we can now support left and right shift operators in the JIT engine for <int, int> and <Tensor, int>.

Updated tests pass as expected:
```
> python test/test_jit.py
...
Ran 2427 tests in 84.861s

OK (skipped=139, expected failures=1)
```

Running the following code with Python results in the output below:
```
> cat ~/expressions.py
import torch

@torch.jit.script
def fn(a, b):
    # type: (int, int)
    return (
        a << b,  # supported
        b >> a,  # supported
        a & b,
        a | b,
        a ^ b
    )
print(fn.graph)
```

```
> python ~/expressions.py
graph(%a.1 : int,
      %b.1 : int):
  %4 : int = aten::leftshift(%a.1, %b.1) # /home/ince/expressions.py:7:8
  %7 : int = aten::rightshift(%b.1, %a.1) # /home/ince/expressions.py:8:8
  %10 : int = aten::__and__(%a.1, %b.1) # /home/ince/expressions.py:9:8
  %13 : int = aten::__or__(%a.1, %b.1) # /home/ince/expressions.py:10:8
  %16 : int = aten::__xor__(%a.1, %b.1) # /home/ince/expressions.py:11:8
  %17 : (int, int, int, int, int) = prim::TupleConstruct(%4, %7, %10, %13, %16)
  return (%17)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34563

Differential Revision: D20434209

Pulled By: tugrulince

fbshipit-source-id: 886386c59755106e17b84778b8e495b80a6269cd
2020-03-13 13:00:33 -07:00
Elias Ellison
514cba0661 [JIT] remove builtin interpolate functions (#34514)
Summary:
`torch.nn.functional.interpolate` was written as a builtin op when we scripted the standard library, because it has four possible overloads. As a result, whenever we make a change to `interpolate`, we need to make changes in two places, and it also makes it impossible to optimize the interpolate op. The builtin is tech debt.

I talked with ailzhang, and the symbolic script changes are good to remove (I guess that makes a third place we needed to re-implement interpolate).

I'm trying to get rid of unnecessary builtin operators because we're standardizing mobile bytecode soon, so we should try to get this landed as soon as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34514

Differential Revision: D20391089

Pulled By: eellison

fbshipit-source-id: abc84cdecfac67332bcba6b308fca4db44303121
2020-03-12 09:21:33 -07:00
Michael Suo
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
davidriazati
23b2fba79a [jit] Add type tags to lists/dicts in pickle (#33255)
Summary:
Stacked PRs
 * #33474 - [jit] Remove list specializations from pickler
 * **#33255 - [jit] Add type tags to lists/dicts in pickle**

This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255

Pulled By: driazati

Differential Revision: D20346780

fbshipit-source-id: c8534954ef4adb2e3c880401acbee30cd284f3db
2020-03-10 19:17:01 -07:00
James Reed
2de4fa702b [JIT] Preserve qualified names on traced modules (#34395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34395

fixes: https://github.com/pytorch/pytorch/issues/33913

Test Plan: Imported from OSS

Differential Revision: D20347778

Pulled By: jamesr66a

fbshipit-source-id: 7b5a35b6f9678c34cb6127d531fa3bfe65703116
2020-03-09 19:23:53 -07:00
Elias Ellison
fea618b524 [JIT] remove list with default builtin (#34171)
Summary:
I think this was added when we couldn't compile the function itself. Now we can.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34171

Differential Revision: D20269960

Pulled By: eellison

fbshipit-source-id: 0a60458d639995d9448789c249d405343881b304
2020-03-09 16:02:26 -07:00
Leah Dickstein
c5e822b7bb Back out "[jit] Add type tags to lists/dicts in pickle" (#34406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34406

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34405

Original commit changeset: 2f1826e6679a

Test Plan: reverting, see S197156

Reviewed By: akyrola, volkhin

Differential Revision: D20317456

fbshipit-source-id: 89298a9c022edba1d54bcdc7541804cb919e33f5
2020-03-06 20:02:16 -08:00
Elias Ellison
479c3b0aa5 [JIT] add support for torch.norm (#33783)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33783

Fix for https://github.com/pytorch/pytorch/issues/20113

Test Plan: Imported from OSS

Differential Revision: D20121917

Pulled By: eellison

fbshipit-source-id: ffedcc40678cd80f5529ff9323088eed544e5158
2020-03-05 14:46:24 -08:00
Adam Paszke
3a4bac5c76 Throw a proper error when parsing local variable annotations without assignments (#34133)
Summary:
Currently, putting `outputs: List[Tensor]` instead of `outputs: List[Tensor] = []` in your JITed code results in:
```
Traceback (most recent call last):
  File "custom_lstms.py", line 453, in <module>
    test_script_stacked_bidir_rnn(5, 2, 3, 7, 4)
  File "custom_lstms.py", line 404, in test_script_stacked_bidir_rnn
    rnn = script_lstm(input_size, hidden_size, num_layers, bidirectional=True)
  File "custom_lstms.py", line 62, in script_lstm
    other_layer_args=[LSTMCell, hidden_size * dirs, hidden_size]))
  File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1267, in script
    return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 305, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
    init_fn(script_module)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
    init_fn(script_module)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
    init_fn(script_module)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
    init_fn(script_module)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 317, in create_script_module_impl
    stubs = stubs_fn(nn_module)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 511, in infer_methods_to_compile
    stubs.append(make_stub_from_method(nn_module, method))
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 41, in make_stub_from_method
    return make_stub(func)
  File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 34, in make_stub
    ast = torch.jit.get_jit_def(func, self_name="RecursiveScriptModule")
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 173, in get_jit_def
    return build_def(ctx, py_ast.body[0], type_line, self_name)
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 206, in build_def
    build_stmts(ctx, body))
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 129, in build_stmts
    stmts = [build_stmt(ctx, s) for s in stmts]
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 129, in <listcomp>
    stmts = [build_stmt(ctx, s) for s in stmts]
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 181, in __call__
    return method(ctx, node)
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 294, in build_AnnAssign
    rhs = build_expr(ctx, stmt.value)
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 180, in __call__
    raise UnsupportedNodeError(ctx, node)
  File "/home/apaszke/pytorch/torch/jit/frontend.py", line 116, in __init__
    source_range = ctx.make_range(offending_node.lineno,
AttributeError: 'NoneType' object has no attribute 'lineno'
```

This patch makes the error message more reasonable:
```
torch.jit.frontend.UnsupportedNodeError: annotated assignments without assigned value aren't supported:
  File "custom_lstms.py", line 221
        # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
        inputs = reverse(input.unbind(0))
        outputs: List[Tensor]
        ~ <--- HERE
        for i in range(len(inputs)):
            out, state = self.cell(inputs[i], state)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34133

Differential Revision: D20249076

Pulled By: ezyang

fbshipit-source-id: 40ec34ad38859f9fe56f379d3f8d08644b00fab9
2020-03-05 11:23:07 -08:00
davidriazati
99e211e661 [jit] Add type tags to lists/dicts in pickle (#33255)
Summary:
Stacked PRs
 * #33474 - [jit] Remove list specializations from pickler
 * **#33255 - [jit] Add type tags to lists/dicts in pickle**

This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255

Pulled By: driazati

Reviewed By: xman1979, Tianshu-Bao

Differential Revision: D19868637

fbshipit-source-id: 2f1826e6679a786ca209198690269f399a542c04
2020-03-03 16:48:21 -08:00
davidriazati
2f6ffe8c39 [jit] Resolve type annotation names to types (#29623)
Summary:
This adds some machinery so that we use Python to resolve types to a value and the corresponding resolution logic in `annotations.py` instead of using the string.

This PR also marks a random test as `slowTest` since it was taking > 1 min, whereas all the other tests take < 10 seconds.

Fixes #31864
Fixes #31950
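For instance, aliased annotation names now resolve through Python rather than by string matching (a minimal sketch):

```python
import torch
from typing import List as TList  # an alias a string-based lookup would miss

@torch.jit.script
def total(xs: TList[int]) -> int:
    s = 0
    for x in xs:
        s += x
    return s
```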
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29623

Pulled By: driazati

Differential Revision: D20144407

fbshipit-source-id: ef3699f6b86039d8b4646ffc42c21bd1132d1681
2020-02-28 18:35:10 -08:00
Michael Suo
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
Wanchao Liang
d494986171 [jit] make RRef type annotation available in Python (#33526)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33526

Test Plan: Imported from OSS

Differential Revision: D19988848

Pulled By: wanchaol

fbshipit-source-id: aeebc946d08b38dac0b656617bf395e86bcea558
2020-02-26 18:44:35 -08:00
Elias Ellison
857eb4145e [JIT] add support for torch.cdist (#33737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33737

Test Plan: Imported from OSS

Differential Revision: D20121916

Pulled By: eellison

fbshipit-source-id: b0427bbfd3ade1f3129c4a95a542fbc32c3abd76
2020-02-26 18:37:37 -08:00
Elias Ellison
f31b1d3453 [JIT] add support for lu_unpack (#33736)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353
2020-02-26 18:37:33 -08:00
Elias Ellison
4543cf4eb1 [JIT] add support for torch.lu to torchscript (#33724)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33724

Fix for https://github.com/pytorch/pytorch/issues/33381, partial fix of https://github.com/pytorch/pytorch/issues/30786

Test Plan: Imported from OSS

Differential Revision: D20077321

Pulled By: eellison

fbshipit-source-id: a1e6a0370712b36c9f66979098ac2f9d500ca5f6
2020-02-26 18:37:28 -08:00
Elias Ellison
fddf73250d [JIT] fix resolving of functions in torch/functional. fix compilation of torch.stft (#33504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33504

Fix resolution of functions that are bound onto torch in torch/functional.py. This does not fix compilation of all of those functions; those will be done in follow-ups. Does torch.stft as a start.

Fixes #21478
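A usage sketch of what now compiles (parameter values are illustrative):

```python
import torch

@torch.jit.script
def spectrogram(x: torch.Tensor) -> torch.Tensor:
    # torch.stft is bound onto torch in torch/functional.py and now
    # resolves to its Python implementation under the script compiler.
    return torch.stft(x, n_fft=128, hop_length=64)
```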

Test Plan: Imported from OSS

Differential Revision: D20014591

Pulled By: eellison

fbshipit-source-id: bb362f1b5479adbb890e72a54111ef716679d127
2020-02-26 18:35:43 -08:00
Jerry Zhang
8527ba8b70 [jit] Add None parameter as parameter instead of attributes (#32964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32964

att

Test Plan:
.

Imported from OSS

Differential Revision: D19913188

fbshipit-source-id: 9cdd93cbaf9892f4311656c786637765a675a68c
2020-02-19 16:06:56 -08:00
Michael Suo
416413dec4 [jit] add inlined_graph method to ScriptFunctions (#33508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33508

Ever since we switched to not inlining by default, some users have
complained since they relied on inlining occurring to, e.g., process the
graph with some other tool. Add an `inlined_graph` property for
convenience in those cases.
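A short usage sketch:

```python
import torch

@torch.jit.script
def helper(x: torch.Tensor) -> torch.Tensor:
    return x + 1

@torch.jit.script
def main(x: torch.Tensor) -> torch.Tensor:
    return helper(x) * 2

print(main.graph)          # keeps the call to helper un-inlined
print(main.inlined_graph)  # shows helper's body inlined, as before
```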

Test Plan: Imported from OSS

Differential Revision: D19977638

Pulled By: suo

fbshipit-source-id: fe1fa92ff888959203d5d1995930d488b5f9e24c
2020-02-19 15:41:25 -08:00
davidriazati
53ad596342 [jit] Remove `torch.jit._dump_trace` (#33453)
Summary:
This was old code that isn't tested and is broken; it should have been
deleted in #24874.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/33453

Pulled By: driazati

Differential Revision: D19961403

fbshipit-source-id: 94c52360460194d279dad5b0ea756ee366f525e1
2020-02-19 07:49:44 -08:00
Mikhail Zolotukhin
806e7daa1f Rename TorchScript compiler to IR emitter to better reflect its function. (#33127)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33127

Test Plan: Imported from OSS

Differential Revision: D19806503

Pulled By: ZolotukhinM

fbshipit-source-id: ab78bdbbac5f12dbcc6c2e2573f5862a16ffcf3d
2020-02-12 18:45:13 -08:00
Zafar Takhirov
acd51e13f7 TorchScript add check if quantized
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32890

Test Plan: Imported from OSS

Differential Revision: D19673463

Pulled By: z-a-f

fbshipit-source-id: 453ff662810845fcaeb8e6d5919afa8e2d395768
2020-02-11 17:38:49 -08:00
James Reed
bc4790b3aa [JIT] Trace uses of torchbind classes as module attributes (#32833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32833
ghstack-source-id: 97736046

Test Plan: Imported from OSS

Differential Revision: D19645714

fbshipit-source-id: 10a7271f13c3588aea666b44b916e90ba7b3c666
2020-02-04 19:28:37 -08:00
Pierre Fenoll
37953d92d1 raise when jit-load.ing a folder (#27836)
Summary:
Very similar to https://github.com/pytorch/pytorch/issues/16267 but handling directories.

Stoked to contribute!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27836

Differential Revision: D19698398

Pulled By: ezyang

fbshipit-source-id: eabc3a44d258124f860babb47ab91e22c2c3d6cc
2020-02-03 11:19:57 -08:00
Martin Yuan
03557a9838 Make save_for_lite_interpreter private (#32771)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32771

It's a patch to #32621 that makes the API private.

Test Plan: Imported from OSS

Differential Revision: D19657307

Pulled By: iseeyuan

fbshipit-source-id: e604a0cbed6a1e61413daaafc65bea92b90f1f5d
2020-01-30 21:01:54 -08:00
Michael Suo
3552be1090 [jit] fix the NoneType param/buffer hack (#32745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32745

Some parameters (like `bias` in conv) are optional. To achieve this
previously, you had to add `bias` as a constant, which would invoke some
pretty weird behavior in the frontend, summarized as:
```
if bias is not None:
  add it as a parameter normally
else: # bias is None
  add it as a constant with the value None
```

There are several things bad about this:
1. Bias is not a constant. Marking it `__constants__` is confusing.
2. It basically relies on an implementation detail (the frontend
processes parameters before constants) to work.

Okay, whatever. I don't even know why we did this originally, but
getting rid of it doesn't break anything, so I assume improved NoneType
refinement has made this a non-issue.

Note on perf: this will make no difference; if bias was `None` it's still
folded out today, if bias is a Tensor it would be added as a parameter
both before and after this change
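A minimal sketch of the optional-parameter pattern that now works without the constant hack:

```python
import torch

class TinyLinear(torch.nn.Module):
    def __init__(self, bias: bool):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))
        # Optional parameter: no __constants__ entry needed anymore.
        self.bias = torch.nn.Parameter(torch.zeros(4)) if bias else None

    def forward(self, x):
        out = x.matmul(self.weight)
        if self.bias is not None:  # NoneType refinement handles this branch
            out = out + self.bias
        return out

sm = torch.jit.script(TinyLinear(bias=False))
```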

Test Plan: Imported from OSS

Differential Revision: D19628634

Pulled By: suo

fbshipit-source-id: d9128a09c5d096b938fcf567b8c23b09ac9ab37f
2020-01-29 17:04:39 -08:00
Martin Yuan
c64dec1993 Python binding to export bytecode format for lite interpreter (#32621)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32621

Export the "_save_for_mobile" method to Python so that the bytecode format for lite interpreter can be added or updated to the original script model.

It's the first step of python binding for lite interpreter, as discussed in this [internal post](https://fb.workplace.com/groups/1144215345733672/permalink/1478900738931796/) and offline.

Next step is to export the `load_for_mobile` and run methods of the mobile module, so that users can verify the mobile model from Python.

Test: use the following Python script to display the bytecode part of the updated model file.
```
#!/usr/bin/env python3
import sys
import pickle
import pprint
import zipfile

class FakeObject(object):
    def __init__(self, module, name, args):
        self.module = module
        self.name = name
        self.args = args
        self.state = None

    def __repr__(self):
        state_str = "" if self.state is None else f"(state={self.state!r})"
        return f"{self.module}.{self.name}{self.args!r}{state_str}"

    def __setstate__(self, state):
        self.state = state

class FakeClass(object):
    def __init__(self, module, name):
        self.module = module
        self.name = name
        self.__new__ = self.fake_new

    def __repr__(self):
        return f"{self.module}.{self.name}"

    def __call__(self, *args):
        return FakeObject(self.module, self.name, args)

    def fake_new(self, *args):
        return FakeObject(self.module, self.name, args)

class DumpUnpickler(pickle._Unpickler):
    def find_class(self, module, name):
        return FakeClass(module, name)

    def persistent_load(self, pid):
        return FakeObject("pers", "obj", (pid,))

def main(argv):
    zfile = zipfile.ZipFile(argv[1])
    names = [i for i in zfile.namelist() if "bytecode.pkl" in i]
    if not names:
        print("bytecode.pkl not found.")
        return
    with zfile.open(names[0], "r") as handle:
        value = DumpUnpickler(handle).load()
    pprint.pprint(value)

if __name__ == "__main__":
    sys.exit(main(sys.argv))

```

Test Plan: Imported from OSS

Differential Revision: D19596359

Pulled By: iseeyuan

fbshipit-source-id: 19a4a771320f95217f5b0f031c2c04db7b4079a8
2020-01-28 08:30:20 -08:00
James Reed
d68592a440 [JIT] Fix classes as attributes in recursive scripting
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32594

Test Plan: Imported from OSS

Differential Revision: D19562951

Pulled By: jamesr66a

fbshipit-source-id: 3d5491c1c23456f107390a78be16da687de951e6
2020-01-27 20:37:48 -08:00
Michael Suo
ef5637f85e [jit] allow compilation using optional modules (#32539)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32539

Before: if something in `_modules` was `None`, we would barf. This is
incorrect because users are allowed to put `None` there, in case a
module is optional.

This case ought to be handled correctly during scripting. Fixes https://github.com/pytorch/pytorch/issues/32469
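A minimal sketch of the case that now scripts cleanly:

```python
import torch

class Block(torch.nn.Module):
    def __init__(self, use_proj: bool):
        super().__init__()
        # None is a legal entry in _modules for an optional submodule.
        self.proj = torch.nn.Linear(4, 4) if use_proj else None

    def forward(self, x):
        if self.proj is not None:
            x = self.proj(x)
        return x

sm = torch.jit.script(Block(use_proj=False))  # previously errored
```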

Test Plan: Imported from OSS

Differential Revision: D19552346

Pulled By: suo

fbshipit-source-id: aba7fdc19fd84d195c81cdaca8a75013a8626a8b
2020-01-24 11:51:47 -08:00
davidriazati
53429680d5 Remove stray @script (#32235)
Summary:
This should be covered under recursive script now
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32235

Pulled By: driazati

Differential Revision: D19414889

fbshipit-source-id: 85f8132401dbe44c9dbaef7c0350110f90eb9843
2020-01-17 19:22:09 -08:00
davidriazati
f3b67bf750 Fix frontend kwarg defaults error (#32146)
Summary:
This was not tested before; fixes #32139 (which was actually a false positive: functions with kwargs but without defaults on those kwargs are supported). This PR adds testing for both cases and cleans up the error reporting.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32146

Pulled By: driazati

Differential Revision: D19385828

fbshipit-source-id: 5eab74df6d02f8e1d7ec054cafb44f909f9d637e
2020-01-14 14:59:36 -08:00
Alban Desmaison
c036fbdc5c remove .data from torch/jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31480

Test Plan: Imported from OSS

Differential Revision: D19303244

Pulled By: albanD

fbshipit-source-id: ec66b32353f2f9b16072185ecde3ae8abbe09a35
2020-01-14 07:30:37 -08:00
davidriazati
883fb5434a Use real argument names for Python functions (#29300)
Summary:
This hooks up `inspect` so that Python functions get their parameters
names attached instead of naming them `0, 1, 2, ...`. This also fixes
issue #28537 where `ignore` functions were improperly typing `self`.
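A quick illustration of the naming change:

```python
import torch

@torch.jit.script
def scaled_add(base: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    return base + delta

# Graph inputs now carry the real parameter names from `inspect`,
# e.g. %base.1 and %delta.1 instead of %0 and %1:
print(scaled_add.graph)
```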
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29300

Pulled By: driazati

Differential Revision: D19256434

fbshipit-source-id: 6a1fe7bd0afab708b8439517798955d0abfeb44c
2020-01-08 15:41:28 -08:00
Martin Yuan
11854bcd38 Add test to torch.jit.export_opnames, make the _C function private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31446

Test Plan: Imported from OSS

Differential Revision: D19172851

Pulled By: iseeyuan

fbshipit-source-id: f06d8766ed73c9abe4ebf41c402ee64880d745be
2019-12-20 13:38:43 -08:00
davidriazati
06dbef663d Add support for del (#31273)
Summary:
Adds the `del` keyword to the parser and corresponding `aten::Delete` op for lists and dicts

Fixes #20615
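A minimal sketch of the new keyword in TorchScript:

```python
import torch
from typing import Dict, List

@torch.jit.script
def prune(xs: List[int], d: Dict[str, int]):
    del xs[0]       # lowered to the new aten::Delete op for lists
    del d["gone"]   # works for dicts as well (key must exist at runtime)
    return xs, d
```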
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31273

Pulled By: driazati

Differential Revision: D19181473

fbshipit-source-id: c42a2d43ec361a98e0c425232981edc9c39388c4
2019-12-19 21:48:11 -08:00
davidriazati
57caeb3fc1 Fix builtins table (#31492)
Summary:
Fixes a bad merge that is breaking distributed tests on master
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31492

Pulled By: driazati

Differential Revision: D19180978

fbshipit-source-id: f69f525e2c7f61194686f07cf75db00eb642882f
2019-12-19 13:33:15 -08:00
davidriazati
28376e826d Fix lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31463

Pulled By: driazati

Differential Revision: D19173580

fbshipit-source-id: 6e5bb24949ec357c4d5b29a16d1733b664f21e05
2019-12-19 10:17:01 -08:00
David Riazati
1e116a5089 Revert D19054937: Add support for del
Test Plan: revert-hammer

Differential Revision:
D19054937

Original commit changeset: c535ea16a9e6

fbshipit-source-id: e57d31811441947b7ee38c8c2b16eecde5005792
2019-12-18 22:39:41 -08:00
davidriazati
e1509cb468 Add support for del (#31273)
Summary:
Adds the `del` keyword to the parser and corresponding `aten::Delete` op for lists and dicts

Fixes #20615
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31273

Pulled By: driazati

Differential Revision: D19054937

fbshipit-source-id: c535ea16a9e62d176f8ad45947670fc3535af77c
2019-12-18 18:19:22 -08:00
Michael Suo
e7d25a3e4d add a suggested alternative to _get_trace_graph
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31441

Test Plan: Imported from OSS

Differential Revision: D19165646

Pulled By: suo

fbshipit-source-id: 96a264bc55ceafd798d92b986d319cddbb0d9c69
2019-12-18 17:34:25 -08:00
davidriazati
148bcd3ee5 Add support for builtins as attributes (#31269)
Summary:
Fixes #27495

This adds builtins as another piece of a concrete type. They're separate from normal functions since they represent the `BuiltinFunction` sugared value (which is a direct call to a builtin op). It also moves the builtins related logic from `jit/__init__.py` to `jit/_builtins.py` so it can be used from `jit/_recursive.py` to look up functions in the builtins table.
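A minimal sketch of the attribute pattern this enables:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.op = torch.add  # a builtin stored as a module attribute

    def forward(self, x):
        return self.op(x, x)

sm = torch.jit.script(M())  # the builtin is now part of the concrete type
```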
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31269

Pulled By: driazati

Differential Revision: D19149779

fbshipit-source-id: d4e5e5d7d7d528b75a2f503e6004394251a4e82d
2019-12-18 15:24:45 -08:00
davidriazati
503a4e9019 Cleanup after moving language reference (#31146)
Summary:
Stacked PRs
 * **#31146 - [jit] Cleanup after moving language reference**
 * #31138 - [jit] Move TorchScript language reference to its own page

Preview: https://driazati.github.io/pytorch_doc_previews/jit.html#torchscript-language

Pull Request resolved: https://github.com/pytorch/pytorch/pull/31146

Pulled By: driazati

Differential Revision: D19167390

fbshipit-source-id: f28daed36754a553264fc8ac142ed22c3e26d63e
2019-12-18 15:09:35 -08:00
Elias Ellison
fb30a48b4e add unsupported section (#31329)
Summary:
Add a section for unsupported ops and modules. Automatically generate the properties and attributes that aren't bound, and for ops that have semantic mismatches, set up tests so the docs stay up to date.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31329

Differential Revision: D19164472

Pulled By: eellison

fbshipit-source-id: 46290bb8a64d9de928cfb1eda5ff4558c3799c88
2019-12-18 13:56:02 -08:00
Michael Suo
3c8892aa0c avoid doing quadratic work in concrete type inference (#31020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31020

Before, the recursive scripting process re-did the concrete type
inference process for every submodule call. This changes things so that
the concrete type inference process only occurs once (at the top level),
and we re-use all the inferred concrete types while recursively
compiling submodules.

This is both more efficient (we don't do n^2 work inferring concrete
types) and less bug-prone (since we infer the concrete type only once,
there is no possibility of a mismatch).
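A conceptual sketch only; the names are hypothetical, not the real API. The point is inferring each submodule's concrete type once, cached by identity, instead of re-deriving it at every recursive call:

```python
import torch

def infer_concrete_types(module: torch.nn.Module, memo=None):
    if memo is None:
        memo = {}
    if id(module) in memo:           # reuse the inferred type: O(n) overall
        return memo[id(module)]
    info = {"class": type(module).__name__, "children": []}
    memo[id(module)] = info
    for child in module.children():
        info["children"].append(infer_concrete_types(child, memo))
    return info
```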

Test Plan: Imported from OSS

Differential Revision: D18904110

Pulled By: suo

fbshipit-source-id: 6560b85ae29fe5e9db1ee982dbf8bc222614b8d8
2019-12-17 21:55:55 -08:00
Michael Suo
878b0e35f7 Simplify recursive script compilation flow. (#31019)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31019

No more `recurisve_script`, just direct calls to `create_script_module`.
This reduces the number of pathways through the frontend, and the
uniformity is useful for a future PR.

Test Plan: Imported from OSS

Differential Revision: D18904113

Pulled By: suo

fbshipit-source-id: 7de061dfef0cbdfc9376408fc6c1167b81803f01
2019-12-17 21:55:50 -08:00
Michael Suo
82d52bc718 remove remnants of properties hack (#31018)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31018

Properties are now disallowed so this hack is no longer necessary

Test Plan: Imported from OSS

Differential Revision: D18904112

Pulled By: suo

fbshipit-source-id: 83448da677082d59355729bb72d9f9f4c31ea756
2019-12-17 21:55:45 -08:00
Michael Suo
7e81d72d12 remove unnecessary arg from create_script_module (#31017)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31017

This arg is now derivable from another one. So we don't need to pass
both

Test Plan: Imported from OSS

Differential Revision: D18904111

Pulled By: suo

fbshipit-source-id: ea74ea9c2ae83d9e0e6977b0eb6629f53545e2e4
2019-12-17 21:55:41 -08:00
Wanchao Liang
e3fecabdcb Setup operator registration for distributed package (#31214)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31214

This sets up the basic infrastructure for distributed autograd and rpc to
bind their operators to TorchScript. Since the whole distributed package
is gated behind the `USE_DISTRIBUTED` flag, we separate the
registration and build it only when the flag is on.

Test Plan: Imported from OSS

Differential Revision: D19137160

fbshipit-source-id: ff47dc4c380ebe273fe0eea9e5e3fccfbd6466d7
2019-12-17 17:26:43 -08:00
Alexander Stante
f30b14dead Fix handling of type comments in body (#30590)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/30477. Any type comment after `# type: (...) -> ` is ignored.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30590

Differential Revision: D18887351

Pulled By: driazati

fbshipit-source-id: 162c652f6d7610d14609bbcb25aaa27cdd947a76
2019-12-12 18:19:30 -08:00
Elias Ellison
56de8853da Resubmit overload v2 (#31123)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/30356 and https://github.com/pytorch/pytorch/pull/31014 :'(

The last commit contains the fix. There was an internal fbcode build error: it could not compile the previous `impl_default->second.equal(default_val.second))` line. I tried various fixes in C++ internally but couldn't figure anything out. This is a good example of the programming costs of going from Python to C++ for different types of objects, because the conceptual overhead has expanded in scope from (Python) to (Python, C++, pybind).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31123

Differential Revision: D18936128

Pulled By: eellison

fbshipit-source-id: 7d8fd66a6dd4a3e9838f3a0b68c219b6565a9462
2019-12-12 07:54:23 -08:00
David Riazati
1f87e823b8 Make nn.Transformer TorchScript compatible (#28561)
Summary:
This makes `nn.Transformer` usable from TorchScript. It preserves backwards compatibility via `__setstate__` on the encoder/decoder.

Fixes https://github.com/pytorch/pytorch/issues/24173
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28561

Differential Revision: D18124753

Pulled By: driazati

fbshipit-source-id: 7314843e5aa9c9bf974c4672e4edb24ed8ef4a6f
2019-12-11 10:57:31 -08:00
Pieter Noordhuis
78a00d72b4 Revert D18899127: resubmit polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18899127

Original commit changeset: 9049b8718926

fbshipit-source-id: c70a8aa4120aa757dce0926a8ab3cc5c92cd6041
2019-12-10 10:51:07 -08:00
Elias Ellison
af4040d808 resubmit polish up overloads on free functions (#31014)
Summary:
Resubmitting https://github.com/pytorch/pytorch/pull/30356

Second commit has reintroduces deleted function which caused revert previously.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31014

Differential Revision: D18899127

Pulled By: eellison

fbshipit-source-id: 9049b8718926c329d9cb46bb96eac6c278e9b866
2019-12-10 07:57:47 -08:00
Wanchao Liang
73dd8c005a Revert D18864774: polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18864774

Original commit changeset: 6c566738bd6f

fbshipit-source-id: 669192605a3bc1a6ba06bbb5cae54f61637a45ae
2019-12-09 15:41:45 -08:00
Elias Ellison
446488960a polish up overloads on free functions (#30356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30356

This finishes up the `torch.jit.overload` api for free-functions.
- defaults now required on the implementation function itself
- fully follows [overload spec](https://mypy.readthedocs.io/en/latest/more_types.html#function-overloading) such that the following is supported

```
@overload
def mouse_event(x1: int, y1: int) -> ClickEvent: ...
def mouse_event(x1: int,
                y1: int,
                x2: Optional[int] = None,
                y2: Optional[int] = None): ...
```

Note: `jit.overload` isn't supported yet for UDTs, but is supported for modules. This PR doesn't make the same changes for modules; if reviewers think I should include them, I could do so in a follow-up PR or wait to land this. Since that's still an internal API I think it's fine, and the changes here would allow us to expose `torch.jit.overload` on free functions.

Test Plan: Imported from OSS

Differential Revision: D18864774

Pulled By: eellison

fbshipit-source-id: 6c566738bd6f0551a000a9ea8d56e403636b7856
2019-12-09 15:12:18 -08:00
Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
davidriazati
2308a0ec1b Improve documentation around builtin functions (#30347)
Summary:
This breaks the builtins page into some more sections and adds details about Python built-in functions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30347

Pulled By: driazati

Reviewed By: wanchaol

Differential Revision: D18718166

fbshipit-source-id: bf43260ab7bcf92cccef684a5ce68cb16020771d
2019-12-04 13:50:40 -08:00
Elias Ellison
d38f9117fd Cache compilation of free functions (#30503)
Summary:
We don't have to recompile free functions if we've already compiled them.

Improved compilation of resnet18 by 27%.
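A minimal sketch of the situation the cache helps:

```python
import torch

def shared_helper(x: torch.Tensor) -> torch.Tensor:
    return x + 1

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    return shared_helper(x)

@torch.jit.script
def g(x: torch.Tensor) -> torch.Tensor:
    # shared_helper was already compiled for f and is reused here.
    return shared_helper(x)
```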
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30503

Differential Revision: D18796501

Pulled By: eellison

fbshipit-source-id: 2dee0fc5fcf9adc5b92213f8cb813730d71b376f
2019-12-04 12:45:35 -08:00
Martin Yuan
b26401f965 Dump operator names of a script module (#30467)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30467

Introduce the function `torch.jit.export_opnames(module)`, which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, to save on mobile binary size.

Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))

The outputs are in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']

Test Plan: Imported from OSS

Differential Revision: D18801619

Pulled By: iseeyuan

fbshipit-source-id: f9b198d3e82b095daf704ee595d8026ad889bb13
2019-12-03 20:20:33 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
davidriazati
9c02b88791 Add pickler support for Device (#30131)
Summary:
This PR adds (un)pickling support for `c10::Device`. It also adds `torch.device` as a type annotation for device attributes.
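A usage sketch of a device-typed attribute surviving save/load:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.dev = torch.device("cpu")  # device-typed attribute

    def forward(self) -> torch.device:
        return self.dev

sm = torch.jit.script(M())
torch.jit.save(sm, "m.pt")       # c10::Device now pickles...
loaded = torch.jit.load("m.pt")  # ...and unpickles correctly
```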
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30131

Pulled By: driazati

Differential Revision: D18664421

fbshipit-source-id: 64378fb42b2d1bbe2bd86259e5ed10f24b5d1e49
2019-12-02 17:43:08 -08:00
Jerry Zhang
1bba0eb35b Add clone_instance for Module (#30168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30168

The previous implementation of `clone` in `script::Module` copies both the module instance and the
class type. After we enabled type sharing in https://github.com/pytorch/pytorch/pull/26666, we also
need a function that clones only the instance and shares the underlying class type.

Test Plan:
tbd

Imported from OSS

Differential Revision: D18631324

fbshipit-source-id: dbadcf19695faee0f755f45093b24618c047b9d1
2019-11-21 13:00:34 -08:00
Mikhail Zolotukhin
2c1c6de122 Represent the original python name the same way in traced and scripted modules.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29912

Test Plan: Imported from OSS

Differential Revision: D18533135

Pulled By: ZolotukhinM

fbshipit-source-id: 080dbafa5dcd8c1fb12fec0c956e52fceec430e7
2019-11-21 11:55:40 -08:00
James Reed
449828378d Serialize ClassType as its qualname
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30058

Test Plan: Imported from OSS

Differential Revision: D18584269

Pulled By: jamesr66a

fbshipit-source-id: 5f1d0142bd7cd94eecbd2ed9250a0de47639040b
2019-11-20 16:17:26 -08:00
Michael Suo
93db2b86d1 Fix type sharing on loaded ScriptModules (#29826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29826

After save/load, we lose concrete type information. So if you tried to
script something that contained a loaded ScriptModule as a submodule,
the following sequence happened:
1. During ConcreteType inference, the loaded submodule got a new
inferred type.
2. But it already has a type! So there was a type mismatch.

To fix this, we should generate a ConcreteType directly from the loaded
submodule type (similar to what we do for interfaces). This makes sense
too--the ConcreteModuleType should be empty, since all the "sugaredness"
was stripped out during the save/load process.
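A minimal sketch of the previously failing pattern (names are illustrative; assumes "saved_submodule.pt" holds a previously saved ScriptModule):

```python
import torch

class Wrapper(torch.nn.Module):
    def __init__(self, sub):
        super().__init__()
        self.sub = sub  # a ScriptModule loaded from disk

    def forward(self, x):
        return self.sub(x)

loaded = torch.jit.load("saved_submodule.pt")
# Scripting a module holding a loaded ScriptModule used to hit the type
# mismatch above; the ConcreteType is now generated directly from the
# loaded submodule's type.
scripted = torch.jit.script(Wrapper(loaded))
```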

Test Plan: Imported from OSS

Differential Revision: D18575009

Pulled By: suo

fbshipit-source-id: 4d329b7e9b7e7624f459e50092e35ab0ab813791
2019-11-20 01:13:09 -08:00
Michael Suo
558a777615 Re-unify module and interface in ConcreteModuleType (#29825)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29825

We made `ModuleInfo` a union initially to represent the idea that a
submodule could either be a regular module or a module interface.

This PR represents module interfaces as a ConcreteModuleType with no
info (e.g. no "sugaredness"), and with the interface type as the
underlying `jitType_`. This has the effect of reducing the special
casing around adding/maintaining module info.

Test Plan: Imported from OSS

Differential Revision: D18575011

Pulled By: suo

fbshipit-source-id: 53e297b39aa1a03bcdadd795ff225aa68fec9d70
2019-11-20 01:13:06 -08:00
Michael Suo
63e66fd267 Split ConcreteModuleType into two types (#29824)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29824

We have two distinct phases/uses for ConcreteModuleType:
1. We are building it up and using it to check whether we can
reuse JIT types. (RawConcreteModuleType)
2. We are using it to satisfy ModuleValue::attr queries.
(ConcreteModuleType)

These types share an underlying `ConcreteModuleTypeData` which
actually stores the relevant info.

Previously they were the same type because I was lazy, but it's been the
source of a bug. So split them to formalize the differing invariants for
the two phases.

Test Plan: Imported from OSS

Differential Revision: D18575010

Pulled By: suo

fbshipit-source-id: 3e4ebcd36e78b947150d8f0dbb74ecccad23e7c4
2019-11-20 01:13:02 -08:00
James Reed
18bdf97dbb Factor Module into Object and Module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29500

Test Plan: Imported from OSS

Differential Revision: D18463064

Pulled By: jamesr66a

fbshipit-source-id: d37bef242a8626593d4b8754042152cfc0f0acb2
2019-11-17 22:58:50 -08:00
Elias Ellison
681b610f35 use new overload mechanism for rnns (#29614)
Summary:
Uses the new overload mechanism for RNNs, making it so that Python & TorchScript go through the same path, with an API in line with the one specified
in https://docs.python.org/3/library/typing.html#typing.overload (see the sketch below).

This brings the TorchScriptable RNNs closer to the base implementation; unifying them should be done in a follow-up PR, but there are still a few limitations that make that difficult.
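For reference, a sketch of the typing.overload pattern being matched (signatures are illustrative, not the exact ones from this PR):

```python
from typing import Optional, Tuple, overload

from torch import Tensor
from torch.nn.utils.rnn import PackedSequence

class LSTMLike:
    # forward accepts either a padded Tensor or a PackedSequence,
    # and the return type follows the input type
    @overload
    def forward(self, input: Tensor,
                hx: Optional[Tuple[Tensor, Tensor]] = None
                ) -> Tuple[Tensor, Tuple[Tensor, Tensor]]: ...

    @overload
    def forward(self, input: PackedSequence,
                hx: Optional[Tuple[Tensor, Tensor]] = None
                ) -> Tuple[PackedSequence, Tuple[Tensor, Tensor]]: ...

    def forward(self, input, hx=None):
        ...  # single implementation dispatching on the input type
```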
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614

Differential Revision: D18486982

Pulled By: eellison

fbshipit-source-id: aaaea66a4a7f12d2e46199ca254f9e8f7475500e
2019-11-13 15:44:25 -08:00
Edward Yang
715e951e3c Revert D18458751: use new overload mechanism for rnns
Test Plan: revert-hammer

Differential Revision:
D18458751

Original commit changeset: 07c71838f21c

fbshipit-source-id: 86acb02f3e022e93ea6c1ef23fe39c80ad43978f
2019-11-13 07:21:31 -08:00
Elias Ellison
8e7b406773 use new overload mechanism for rnns (#29614)
Summary:
Uses the new overload mechanism for RNNs, making it so that Python & TorchScript go through the same path, with an API in line with the one specified
in https://docs.python.org/3/library/typing.html#typing.overload

This brings the TorchScriptable RNNs closer to the base implementation; unifying them should be done in a follow-up PR, but there are still a few limitations that make that difficult.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614

Differential Revision: D18458751

Pulled By: eellison

fbshipit-source-id: 07c71838f21cb5425e8d6dbd4a512f774c8c2970
2019-11-12 16:12:04 -08:00
Elias Ellison
fbe90b65fa Cleanup special handling of Containers, allowing custom forwards (#28988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28988

Make ModuleList, Sequential, ModuleDict go through the same pathway as other modules, cleaning up a bunch of code and allowing them to define custom forwards and other methods.

EDIT: Previously, we would ignore an nn.Sequential attribute if it was not in `__constants__` ("did you forget to add it to Constants"). This PR scripts it even if it is not in `__constants__`. Is that what we want?
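A minimal sketch of what this allows (the subclass and its forward are illustrative):

```python
import torch
import torch.nn as nn

class ResidualSequential(nn.Sequential):
    # containers may now define a custom forward and still be scripted
    def forward(self, x):
        for layer in self:
            x = x + layer(x)
        return x

m = torch.jit.script(ResidualSequential(nn.Linear(4, 4), nn.Linear(4, 4)))
```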

Test Plan: Imported from OSS

Differential Revision: D18402821

Pulled By: eellison

fbshipit-source-id: dd4f28fb0df0d1ba4ad1b3bc34ba141959a433f7
2019-11-12 14:10:38 -08:00
Zachary DeVito
627f2823e0 remove _register_* bindings from python (#29499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29499

This changes how DataParallel and trace-module creation work so that
we no longer need to mutate the Module class after it has been created.

The only remaining usages of the register_* functions are now inside C++
tests.

Test Plan: Imported from OSS

Differential Revision: D18413652

Pulled By: zdevito

fbshipit-source-id: f039e5400cd016632768be4547892f6a69645c20
2019-11-11 13:52:46 -08:00
Zachary DeVito
4e4e29a511 Simplify ScriptModule bindings. (#29432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29432

This removes a lot of the private methods on torch._C.ScriptModule,
and instead implements `_parameters`, `_buffers`, and `_modules` in
nn.Module in terms of slot_dict_impl views.

A followup PR should also remove the _register_attribute,
_register_module, and _register_parameter methods, but this requires
more refactoring of the way tracing creates modules and replication
for data parallel works.

Test Plan: Imported from OSS

Differential Revision: D18387963

Pulled By: zdevito

fbshipit-source-id: f10d47afeb30c1e05d704ae5ac4166830933125c
2019-11-11 13:52:36 -08:00
Michael Suo
5b1a1a17ed remove FunctionType as an allowed constant (#29405)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29405

We never actually used this (function attributes are a separate pathway
in ConcreteModuleType).

Test Plan: Imported from OSS

Differential Revision: D18378392

Pulled By: suo

fbshipit-source-id: b06c4b6d70f0b2534be78a215125cffd22ab44f0
2019-11-08 13:38:02 -08:00
Michael Suo
255b2340fc don't copy ignored/unused methods to ScriptModule (#29342)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29342

This is not necessary, as we use `lazy_bind` to retrieve those methods
from the class anyway.

Test Plan: Imported from OSS

Differential Revision: D18383381

Pulled By: suo

fbshipit-source-id: e8b7c9e696087cc1e707ac38f7ae85f569f08371
2019-11-07 22:54:29 -08:00
James Reed
782e80e6e7 Make jit.trace_module reentrant (#29411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29411

Fixes https://github.com/pytorch/pytorch/issues/29367

Test Plan: Imported from OSS

Differential Revision: D18380559

Pulled By: jamesr66a

fbshipit-source-id: 5caf606ccbc5dc79dac14e3c28cc02dec19ce695
2019-11-07 16:29:06 -08:00
Igor Fedan
75309b45f3 explicitly provide memory format when calling to clone() at Indexing.cpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28660

Test Plan: Imported from OSS

Differential Revision: D18333346

Pulled By: ifedan

fbshipit-source-id: 06590205d883a5096388a4ae318389244130972d
2019-11-07 05:38:32 -08:00
Michael Suo
58005382c8 fix @property (#28395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28395

Currently property methods are broken in TorchScript because we
basically treat it as an attribute in the existing path: we'll evaluate
the method once and store that as the value forever.

Since lack of property support is easily worked around (just make it
a method), I've opted to just explicitly error to avoid confusion. If
people want it, they can file an issue and we can look at their use
case.

This also helps us nicely clean up some parts of the ScriptModule conversion
path.
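The suggested workaround, as a minimal sketch (names illustrative):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = 2.0

    # instead of `@property def gain(self)`, spell it as a plain method
    def gain(self) -> float:
        return self.scale * 2

    def forward(self, x):
        return x * self.gain()

m = torch.jit.script(M())
```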

Test Plan: Imported from OSS

Reviewed By: shannonzhu

Differential Revision: D18054946

Pulled By: suo

fbshipit-source-id: 7e927836ae687cd2f13a94b9f0af399437fae422
2019-11-06 23:51:07 -08:00
Zachary DeVito
796363147f Implement more of of the nn.Module API (#28828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28828

This updates torch::script::Module to more closely match the behavior
of nn.Module. In particular, it implements the (optionally recursive)
iterators that retrieve submodules, parameters, and buffers, and makes
their names match the Python versions.

This also removes the individual accessors for Parameter, Module, Buffer, etc.
and replaces them with a single `attr` function which is equivalent to
writing `a.foo` in Python (`setattr` emulates `a.foo = v`).
As we build out the user-facing API for TorchScript values, this will
end up matching how an attribute is accessed on general objects.

This PR preserves the Python bindings for script::Module by emulating the
old API at the binding level. A followup will clean up the usage to more
directly match the C++ API.

Test Plan: Imported from OSS

Differential Revision: D18197611

Pulled By: zdevito

fbshipit-source-id: 7ee4dcbb258605d1c988314b05d938423f1ccee5
2019-11-06 22:58:25 -08:00
James Reed
309b28ee3a Trace module calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29261

Test Plan: Imported from OSS

Differential Revision: D18343363

Pulled By: jamesr66a

fbshipit-source-id: 0c6394205e2c0ea8708028d20df83fe17b466ff4
2019-11-06 15:05:49 -08:00
James Reed
6e38c3b89e Make get_trace_graph private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29149

Test Plan: Imported from OSS

Differential Revision: D18307559

Pulled By: jamesr66a

fbshipit-source-id: 0b6aec2a1d10810d4e7f6b30b256cca79fc4e854
2019-11-05 17:04:36 -08:00
Wanchao Liang
9492994feb submodule swapping via module interface (#28409)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28409

This PR enables submodule swapping via module interfaces. A user can
declare a submodule as a module interface type in the ScriptModule;
during compilation we will record the module interface type in the
ModuleInfo of the ConcreteModuleType, the associated JIT type will have
the correct ModuleInterfaceType, and the CppModule will get the correct
module list.

Given that we still keep the module interface type in the type system,
the graph is not inlined when we call Module::attr; it will instead use
prim::CallMethod to call the method. This allows us to do module swapping
for any ScriptModule that also meets the same module interface, and we
only allow module swapping through the module interface approach (see
the sketch below).
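A minimal sketch of the swapping pattern this enables (module names and the toy forwards are illustrative, assuming the torch.jit.interface decorator):

```python
import torch
from torch import Tensor, nn

@torch.jit.interface
class Backbone(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        pass

class Small(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        return x + 1

class Big(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        return x * 2

class Net(nn.Module):
    backbone: Backbone  # declared as a module interface type

    def __init__(self):
        super().__init__()
        self.backbone = Small()

    def forward(self, x: Tensor) -> Tensor:
        return self.backbone(x)

net = torch.jit.script(Net())
net.backbone = torch.jit.script(Big())  # swap in another conforming module
```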

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D18284309

fbshipit-source-id: 2cb843e4b75fa3fcd8c6020832a81014dbff4f03
2019-11-05 11:31:40 -08:00
Zak Hassan
a02681f804 Cleaned up func removed unused variable (#29179)
Summary:
I don't see `_frames_up` being used anywhere. Just to clean up the code, I thought it should be removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29179

Differential Revision: D18319876

Pulled By: suo

fbshipit-source-id: 5e612ff94ccc88fc85288ffc26213e1d11580c36
2019-11-05 08:48:45 -08:00
Zak Hassan
7434da2c3f value assigned but never used in _recursive.py (#29181)
Summary:
# Description
I'm new to this project and wanted to start with small bug fixes. I found some unused local variables and removed them in this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29181

Differential Revision: D18319893

Pulled By: suo

fbshipit-source-id: e4f9f13b6db2ca213015569deb12d3fd9beb74a8
2019-11-05 08:48:41 -08:00
Wanchao Liang
e95dc9814e introduce module interface declaration (#28408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28408

This enables an interface to be defined on an nn.Module, and InterfaceType
now has an is_module_ field to distinguish whether it's a module interface
or a normal interface (similar to how ClassType distinguishes between
modules and TorchScript classes).

A module interface can be assigned any ScriptModule that has compatible
signatures on its schemas. A normal object that is not a ScriptModule
cannot be assigned to a module interface and will error out when the
user explicitly tries to do so. Assigning a ScriptModule to a class
interface makes it available only in the attribute_list, not the
module_list. More details on the subtyping relationship are documented
in jit_type.h.

If you declare a module interface inside an nn.Module that is being
compiled to a ScriptModule, our internal compilation behaves as follows:

1. ConcreteModuleType will record it as a module attribute and add it to
   the attributes_ list.
2. The JitType created from the ConcreteModuleType will record it as an
   attribute and pre-generate the slot. The slot will still be marked as
   EntityType::MODULE to make sure the JitType records it as a Module
   slot.
3. cpp_module will also register it as a Module, as the Slot type is the
   source of truth.

Since the JitType records it as an attribute and stores its type, it
behaves the same way class interface attributes behave now. This means
the submodule assigned to this module interface is not inlined into the
graph the way normal `Module::attr` access behaves; instead an interface
callMethod is generated, which allows us to later swap this with another
ScriptModule that implicitly implements this module interface.

Test Plan: Imported from OSS

Differential Revision: D18284311

fbshipit-source-id: e0b8f6e8c34b2087fab337a969e5ea3fb37ec209
2019-11-02 16:39:00 -07:00
Wanchao Liang
1e904049ca guard against inheritance on torchscript classes (#28407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28407

Given that we do not yet have support for inheritance or any polymorphism
strategy, we should guard against users relying on it until we have full
support, so that users won't be confused by the weird behaviors.

Test Plan: Imported from OSS

Differential Revision: D18284310

fbshipit-source-id: f55a224f4190d57926d91ed98f6168d787387eb8
2019-11-02 16:38:56 -07:00
Jianyu Huang
9e64c54c01 Add the warning message for API with linear modules (#28766)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28766

Add the warning message to explicitly ask the users to upgrade the deprecated `torch.jit.quantized` API to the new `torch.quantization.quantize_dynamic` API.
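For reference, a minimal sketch of the suggested replacement API (the model and dtype choices are illustrative):

```python
import torch
import torch.nn as nn

# dynamically quantize the Linear layers of an existing float model
float_model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
quantized_model = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)
```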
ghstack-source-id: 92711620

Test Plan: CI

Differential Revision: D18164903

fbshipit-source-id: e6aff2527f335c2d9f362e6856ce8597edb52aaa
2019-10-28 14:24:44 -07:00
James Reed
f782500ee0 Abstract tracer::enter and tracer::exit into a function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28473

Test Plan: Imported from OSS

Differential Revision: D18121007

Pulled By: jamesr66a

fbshipit-source-id: 4c4a4344ad9bcc4630b945d2a645a0b05928933c
2019-10-26 18:41:14 -07:00
Michael Suo
2181dd516e fix handling of function attributes. (#28569)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28569

Previously, the inclusion of function attributes would "poison" a
ConcreteModuleType, because we did not have a way of checking whether
they are actually the same function. This PR uses the Python function
object to perform that check. This improves our ability to reuse JIT
types between modules.

Also this PR fixes a bug where we weren't properly adding modules as
attributes when converting from ConcreteType -> JIT type (we were adding
them after the fact--another reason to switch from using `register_x` to
`set_x` during module construction, which is on my to-do list after
this).

Fixes https://github.com/pytorch/pytorch/issues/28559

Test Plan: Imported from OSS

Differential Revision: D18111331

Pulled By: suo

fbshipit-source-id: ec2cccf832d3ddd4cd4d28fe19cb265f1275325a
2019-10-24 22:23:37 -07:00
なるみ
d83389d327 Ignore F401 in all __init__.py without putting noqa (#25823)
Summary:
By adding `per-file-ignores = __init__.py: F401` to `.flake8` (with `flake8>=3.7`), we can ignore F401 in all `__init__.py` files without putting a `# noqa: F401` on every line.

http://flake8.pycqa.org/en/latest/user/options.html?highlight=per-file-ignores#cmdoption-flake8-per-file-ignores
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25823

Differential Revision: D17252182

Pulled By: soumith

fbshipit-source-id: 87b174075b79e4078953a7521bd1a8f82405646b
2019-10-23 15:28:13 -07:00
davidriazati
618cb40e30 Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17859654

fbshipit-source-id: f3a116cddb5393bdfbef670c56efb2ee62ccf252
2019-10-17 11:12:35 -07:00
Zachary DeVito
fb4517132f Allow 'Any' to appear as a type argument. (#26572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572

Combined with isinstance specialization this allows a degree of polymorphic
functions to work without needing to use our weirder overload hacks.

We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.
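A minimal sketch of what this permits (the function and its logic are illustrative):

```python
import torch
from typing import Any, List

@torch.jit.script
def first_tensor(xs: List[Any]) -> torch.Tensor:
    # Any appears as a type argument; isinstance refines each element
    for x in xs:
        if isinstance(x, torch.Tensor):
            return x
    return torch.zeros(1)
```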

Test Plan: Imported from OSS

Differential Revision: D17530643

Pulled By: zdevito

fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
2019-10-16 11:07:08 -07:00
Hiroshi Ogawa
97b39a296f Fix error report highlight for unmatched type annotation (#27195)
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/25801 (see there for my verbose analysis).

As an example, for the following code:

```
import torch

torch.jit.script
def f1(x):
    # type: (int, int) -> None
    pass
```

this PR will change the error message from this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
# type: (int, int) -> None
```

to this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
at __scratch__/example.py:4:0
torch.jit.script
def f1(x):
~~~~~~~~ <--- HERE
    # type: (int, int) -> None
    pass
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27195

Differential Revision: D17910902

Pulled By: driazati

fbshipit-source-id: af5c6353069d005752d6c7f0bd6a0c6db8437e55
2019-10-16 10:39:36 -07:00
zou3519
e5d6b75319 Bag of documentation fixes; fix more sphinx warnings (#27850)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850

Many of these are real problems in the documentation (e.g., a link or
bullet point that doesn't display correctly).
Test Plan: - built and viewed the documentation for each change locally.

Differential Revision: D17908123

Pulled By: zou3519

fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
2019-10-15 07:31:14 -07:00
Michael Suo
a4a5b6fcaa Revert D17913708: [pytorch][PR] [JIT] throw on custom forward for module containers
Test Plan: revert-hammer

Differential Revision:
D17913708

Original commit changeset: 1cc2a8a4b573

fbshipit-source-id: 19ad68a1b0fd8e0f17e1b7ab92879106517e13d2
2019-10-14 17:48:31 -07:00
Elias Ellison
cd6b37afa7 throw on custom forward for module containers (#27763)
Summary:
Previously, custom forwards of containers would silently not be compiled. Throw an error now instead.

Fix for https://github.com/pytorch/pytorch/issues/26671
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27763

Differential Revision: D17913708

Pulled By: eellison

fbshipit-source-id: 1cc2a8a4b57356ba7f007a95ede0a31e5d61aa82
2019-10-14 13:08:10 -07:00
Michael Suo
341262754f module dedupe (#26666)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666

Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
  cache, and as the source of truth for `ModuleValue::attr` queries. It needs
  to do both jobs because that's how we ensure correctness (if the types are
  different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
  pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
  `ScriptModule`) are now rewritten to go through `create_script_module`, so
  that we have only a single place where construction happens.

Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
  recursively scripted if possible, matching recursive scripting semantics.
  This makes it hard to keep something from being scripted (for example, a
  Python submodule). Possibly we'll need an `ignore()` type thing for
  attributes. In particular, this adds `self.training` to *every* ScriptModule, since
  it's present on every `nn.Module`.
- I believe this change to be transparent to existing users of the inheritance API, since if you had an attribute that is unscriptable that you never used, there is no error. In some cases, we will create new attributes (even if they are unused), which will increase serialized model size from before.

Test Plan: Imported from OSS

Differential Revision: D17551196

Pulled By: suo

fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
2019-10-12 09:51:57 -07:00
Michael Suo
ffa422a8b3 kill _parameter_list (#27399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27399

This was devised in a time when we didn't have module attributes. They
are essentially just tensor lists, so represent them that way. This has
the additional benefit of making the RNN forward pass faster because we
effectively cache the flattened weights.

The only complicated part is that someone may come along and do:
```
my_rnn_mod.w_ih_l0 = torch.nn.Parameter(...)
```

This means we need to override setattr to keep the flattened weights
cache up to date.

Test Plan: Imported from OSS

Differential Revision: D17785658

Pulled By: suo

fbshipit-source-id: 7789cd1d0d4922bfd5eba1716976442fbf150766
2019-10-12 09:51:53 -07:00
Elias Ellison
5d495a11cb add unused and is_scripting to docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27630

Differential Revision: D17868856

Pulled By: eellison

fbshipit-source-id: 7cf183d5c0d5436fbaa549a02e6b8fd47fa15b67
2019-10-10 17:02:17 -07:00
Michael Suo
971f773886 Revert D17750005: [jit] Add doc copy-edits from review
Test Plan: revert-hammer

Differential Revision:
D17750005

Original commit changeset: 230d1d33efb0

fbshipit-source-id: 12d22567b99286a8c4f719c3a384cb3665f7ba54
2019-10-09 19:12:58 -07:00
davidriazati
e7c9c8098a Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17750005

fbshipit-source-id: 230d1d33efb015e40327373a05a1d3eced7c5c00
2019-10-09 14:16:48 -07:00
Zachary DeVito
eb9000be4e always use the closure to resolve variable names (#27515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27515

Resolving variable names using the local activation frames does not work
when using recursive scripting, but our current code tries to do it
(incorrectly) anyway. The reason it appears to work is only that the
script call is in the same local frame as the definition. This will not
be true in practice and makes it seem like the API works in more cases
than it really does. This change forces us to always use closure-based
annotations (see the sketch below), documents this, and fixes the tests
so that they still pass.
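A minimal sketch of closure-based resolution (the alias name is illustrative): names used in annotations must be resolvable through the function's own closure or globals, not the frame that happens to call torch.jit.script.

```python
import torch
from typing import List

Vec = List[float]  # resolved through fn's globals/closure, not the call site

def fn(x: Vec) -> int:
    return len(x)

scripted = torch.jit.script(fn)
```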

Test Plan: Imported from OSS

Differential Revision: D17803403

Pulled By: zdevito

fbshipit-source-id: e172559c655b05f0acf96c34f5bdc849f4e09ce2
2019-10-09 12:16:15 -07:00
davidriazati
725810f42c Set existing attributes under recursive script (#27514)
Summary:
This is related to #27109: `training` was being skipped since modules
have it as an attribute by default, but it should be copied anyway.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27514

Pulled By: driazati

Differential Revision: D17802544

fbshipit-source-id: 9e8f068903b67073c509c2c598b27622fcada2d7
2019-10-08 10:12:04 -07:00
davidriazati
0046092178 Reduce special casing around 'training' (#27109)
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.

Fixes #26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109

Pulled By: driazati

Differential Revision: D17728129

fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
2019-10-07 13:52:59 -07:00
Wanchao Liang
827a00cf63 Support interface python assignment as an attribute (#26734)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26734

This PR adds Python assignment of an interface as an attribute in the
module; it enables any object that implicitly implements the specific
interface to be assigned to the interface-typed attribute in Python.

Serialization support for interface/class assignment will be done in a
follow-up PR

Test Plan: Imported from OSS

Differential Revision: D17742708

Pulled By: wanchaol

fbshipit-source-id: a0a2d8c74b60ed3fa6c05e1b0d49b7ad1abc670b
2019-10-03 17:18:37 -07:00
albanD
5b5f398dd4 Make cpp-backed jit classes appear as being in torch.jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27220

Test Plan: Imported from OSS

Differential Revision: D17715305

Pulled By: albanD

fbshipit-source-id: 574704ad23ece6da7aa2780b78867307bef523cc
2019-10-03 08:28:36 -07:00
albanD
17b1faa2bf Rename jit Function to ScriptFunction
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27219

Test Plan: Imported from OSS

Differential Revision: D17715306

Pulled By: albanD

fbshipit-source-id: d11a7634dbee6a885c7177b240958e5aed2544f3
2019-10-03 08:28:32 -07:00
Lara Haidar
614edfce81 Add Support to Dicts and Strings in ONNX for Inputs and Outputs (#25889)
Summary:
ONNX does not support dictionaries as inputs or outputs. The reason is that the argument flattening and unflattening does not handle dictionary types.
This PR adds flattening/unflattening support for dictionaries and strings.
However, this feature should be used with caution for input dictionaries: users need to verify their dict inputs carefully, and keep in mind that dynamic lookups are not available.

This PR allows exporting cases where models have dictionary outputs (detection and segmentation models in torchvision), and where dictionary inputs are used for model configuration (MultiScaleRoiAlign in torchvision). A sketch follows.
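A hedged sketch of what can now be exported (the module name and output keys are illustrative; the dict output is flattened to its values in the exported graph):

```python
import torch

class Stats(torch.nn.Module):
    def forward(self, x):
        # dictionary output, now supported by the flattening logic
        return {"sum": x.sum(), "mean": x.mean()}

torch.onnx.export(Stats(), (torch.randn(3, 4),), "stats.onnx")
```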
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25889

Reviewed By: hl475

Differential Revision: D17613605

Pulled By: houseroad

fbshipit-source-id: c62da4f35e5dc2aa23a85dfd5e2e11f63e9174db
2019-09-26 22:31:09 -07:00
davidriazati
ef8d1c50c4 Fix builtin lookup for Python functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26688

Pulled By: driazati

Differential Revision: D17560634

fbshipit-source-id: e1c50d1ca24e0313c2b7d704c488a29ef6a47cad
2019-09-24 18:02:36 -07:00
davidriazati
4c40dbcb75 Resolve NamedTuple types in Python (#26443)
Summary:
When used as annotations on Python functions, `NamedTuple`s go through our Python annotation -> type mapping, which previously had no way of looking up `NamedTuple`s (they are created lazily by checking whether the type has certain properties, so the lookup creates the `TupleType` from scratch). This PR threads through the necessary data to make them work; see the sketch below.
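A minimal sketch of the now-working pattern (names illustrative):

```python
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.jit.script
def flip(p: Point) -> Point:
    # the NamedTuple annotation now maps to a proper TupleType
    return Point(x=p.y, y=p.x)
```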

Fixes #26437
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26443

Pulled By: driazati

Differential Revision: D17486441

fbshipit-source-id: a6bbb543ff05a5abe61f1a7f68db9ecdb652b358
2019-09-20 10:53:25 -07:00
Elias Ellison
1f2fa8d4d8 Make jit dicts ordered (#26465)
Summary:
Makes c10::Dict ordered and binds the OrderedDict() and dict() constructors into TorchScript. For the empty constructor dict(), I typed it as Dict[str, Tensor] because:
• we're almost dropping support for Python 2, at which point all dicts are ordered
• it's more conventional to write x: Dict[int, int] = {}, which is already supported
• it is possible to construct an arbitrarily typed empty OrderedDict through
OrderedDict(torch.jit.annotate(List[Tuple[key, value]], [])) (see the sketch below)

We could consider dropping the zero-input aten::dict constructor, since then the types would be more explicit.
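A minimal sketch of the annotate pattern mentioned above (key/value types illustrative):

```python
import torch
from collections import OrderedDict
from typing import List, Tuple

@torch.jit.script
def build():
    # arbitrarily typed empty OrderedDict via torch.jit.annotate
    d = OrderedDict(torch.jit.annotate(List[Tuple[str, int]], []))
    d["one"] = 1
    return d
```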

This replaces https://github.com/pytorch/pytorch/issues/26170 and https://github.com/pytorch/pytorch/pull/26372 because ghstack was poisoned and I had to resubmit
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26465

Differential Revision: D17481604

Pulled By: eellison

fbshipit-source-id: d2d49795a518c3489881afac45d070e5262c5849
2019-09-19 15:09:02 -07:00
Halil Akin
31960e8872 Add missing argument for failing function call (#26311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26311

We are currently unable to deploy models due to D16955662 changing the function signature of `quantized_lstm`, while the function call here (https://fburl.com/diffusion/e4wrmx83) does not pass the newly added `use_dynamic` param.

Here are the details of the error: P111215482

```
E0916 12:36:16.423516 1149877 ExceptionTracer.cpp:214] exception stack complete
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
  what():
Arguments for call are not valid.
The following operator variants are available:

  aten::quantized_lstm(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None) -> (Tensor, Tensor, Tensor):
  Keyword argument use_dynamic unknown.
```

This diff fixes that.

Test Plan:
Running quantization tests after.

```buck test mode/dev caffe2/test:jit -- 'test_quantization_modules \(test_jit\.TestScript\)'```

https://our.intern.facebook.com/intern/testinfra/testrun/5910974518872494

Also, currently building a package (language_technology.translation.jedi.scripts:35c3643) and testing this (f138747078).

f138771702

Reviewed By: jhcross

Differential Revision: D17404451

fbshipit-source-id: 390d2ce1ecbdd63a07a8f16c80e4c3ac25ab0a99
2019-09-16 17:04:14 -07:00
Dmytro Dzhulgakov
df338f80a6 Add a wrapper for inspect in JIT to produce better error message (#25415)
Summary:
If source code is not available due to packaging (e.g. sources are compiled to .pyc), TorchScript produces a very obscure error message. This tries to make it nicer, and allows customizing the message by overriding _utils_internal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25415

Test Plan: Really hard to unittest properly. Did one off testing by compiling to .pyc and checking the message.

Differential Revision: D17118238

Pulled By: dzhulgakov

fbshipit-source-id: 3cbfee0abddc8613000680548bfe0b8ed52a36b0
2019-09-14 21:27:51 -07:00
davidriazati
d546c069a4 Preserve module names in recursive script (#24505)
Summary:
Turns
```
ScriptModule(
  (conv): ScriptModule()
  (lin): ScriptModule()
  (sub): ScriptModule()
)
```

into

```
ScriptModule(
  original=MyModule
  (conv): ScriptModule(original=Conv2d)
  (lin): ScriptModule(original=Linear)
  (sub): ScriptModule(original=Submodule)
)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24505

Pulled By: driazati

Differential Revision: D16862032

fbshipit-source-id: 76dc4e5252bbf746f5cc26450b577dab10477732
2019-09-11 14:07:04 -07:00
Elias Ellison
8f7020bbdb add support for ModuleDict (#25715)
Summary:
Add support for nn.ModuleDict in script. This is needed to support torchvision.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25715

Differential Revision: D17301826

Pulled By: eellison

fbshipit-source-id: 541b5477e980f519a8c3bbb1be91dac227f6d00f
2019-09-10 18:43:49 -07:00
Elias Ellison
1897440e02 add torch.jit.is_scripting api (#25955)
Summary:
The PR this was based on, https://github.com/pytorch/pytorch/pull/25263, got reverted and ghimport got confused. Relanding here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25955

Differential Revision: D17296727

Pulled By: eellison

fbshipit-source-id: 96200d3ef4c86f0d9907dc41b05619cb33bf2bab
2019-09-10 17:28:59 -07:00
Elias Ellison
7ab4ad7b6d add torch.jit.is_scripting() api (#25263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263

This adds an API that returns True in script and False in eager, which together with `ignore` allows guarding of not-yet-supported JIT features. Bikeshedding requested, please.

cc zou3519

```
def foo():
   if not torch.jit.is_scripting():
      return torch.nn.functional.linear(...)
   else:
      return torch.addmm(...)
```

Test Plan: Imported from OSS

Differential Revision: D17272443

Pulled By: eellison

fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
2019-09-09 20:24:36 -07:00
davidriazati
0be29ee2ba Finish testing code examples in the docs (#25668)
Summary:
All of the code examples should now run as unit tests, save for those
that require interaction (e.g. those showing `pdb` usage) and those that
use CUDA.

`save` had to be moved before `load` in `jit/__init__.py` so `load`
could use the file generated by `save`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25668

Pulled By: driazati

Differential Revision: D17192417

fbshipit-source-id: 931b310ae0c3d2cc6affeabccae5296f53fe42bc
2019-09-05 16:13:37 -07:00