Commit Graph

77 Commits

Author SHA1 Message Date
Yanan Cao
bdcf320bed Support custom exception message (#41907)
Summary:
`raise` and `assert` used to have a hard-coded error message, "Exception"; the user-provided error message was ignored. This PR adds support for representing the user's error message in TorchScript.

This breaks backward compatibility because we now actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can break when scripted, but previously saved models continue to work.

Increased an op count in test_mobile_optimizer.py because aten::format is now needed to build the actual exception message.

This is built upon a WIP PR: https://github.com/pytorch/pytorch/pull/34112 by driazati
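
For illustration, a minimal sketch (not taken from this PR or its tests; names are illustrative) of the kind of program whose user-provided message is now carried through, using `str.format`, which is why the aten::format op count changes:

```py
import torch

@torch.jit.script
def checked_sqrt(x: float) -> float:
    if x < 0.0:
        # The user-provided message (including the formatted value) is now
        # preserved instead of being replaced by a generic "Exception".
        raise ValueError("expected a non-negative input, got {}".format(x))
    return x ** 0.5
```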

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907

Reviewed By: ngimel

Differential Revision: D22778301

Pulled By: gmagogsfm

fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
2020-08-01 13:03:45 -07:00
Yanan Cao
4a3aad354a [1/N] Implement Enum JIT support (#41390)
Summary:
* Add EnumType and AnyEnumType as first-class JIT types
* Add Enum-typed IValue
* Enhance aten::eq to support Enum

Supported:
Enum-typed function arguments
Using Enum types and comparing them (see the sketch after the TODO list below)

TODO:
Add Python sugared value for Enum
Support getting name/value attrs of enums
Support Enum-typed return values
Support enum values of different types in same Enum class
Support serialization and deserialization
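
For illustration, a hedged sketch of the "Supported" usage above (illustrative names, not from the PR's tests); since this is only the first PR in the stack and Python sugared values are still TODO here, the exact snippet may require the rest of the stack to run:

```py
from enum import Enum

import torch

class Color(Enum):
    RED = 1
    GREEN = 2

@torch.jit.script
def is_red(c: Color) -> bool:
    # Enum-typed argument plus aten::eq support for Enum values.
    return c == Color.RED
```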

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41390

Reviewed By: eellison

Differential Revision: D22524388

Pulled By: gmagogsfm

fbshipit-source-id: 1627154a64e752d8457cd53270f3d14aea4b1150
2020-07-18 22:15:06 -07:00
Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
Shihao Xu
0ecea2d64d [JIT x RPC] Consolidate Future type class and Future impl class (#40406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40406

Same motivation as https://github.com/pytorch/pytorch/issues/35110.

`Future` and `RRef` are two important types for the `rpc` module; they should be easy for users to work with.

Reference, https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#directive-autoclass

Follow https://github.com/pytorch/pytorch/pull/35694.
ghstack-source-id: 106484664

Test Plan:
```
buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork

buck build mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/jit/rpc_fork\#binary.par \
-r test_rref_local_value
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc/tensorpipe:rpc_fork_tensorpipe
```

```
pyre -l caffe2/torch/fb/training_toolkit
pyre -l caffe2/torch/fb/distributed
pyre -l aiplatform
```

Differential Revision: D7722176

fbshipit-source-id: f3b9ccd7bccb233b2b33ad59dd65e178ba34d67f
2020-06-24 01:44:49 -07:00
Lu Fang
8315bb2359 Back out "[pytorch][PR] [JIT] Infer NamedTuple type attributes of nn.Modules correctly" (#40270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40270

Original commit changeset: 1227e243ab94

D22082806 (1e03d603c6) broke model generation for PyPer models (we trace the namedtuple as input). To unblock development of the PyPer project, let's revert the diff first.

Sorry about the inconvenience, SplitInfinity
ghstack-source-id: 106217609

Test Plan: buck run dper3/dper3_models/experimental/pytorch/feed:feed_generation_script -- --model_files_dir=/tmp/

Reviewed By: alyssawangqq

Differential Revision: D22132960

fbshipit-source-id: ce9278c8462602a341e231ea890e46f74e743ddf
2020-06-19 02:58:31 -07:00
Shihao Xu
f3f30d4354 [JIT x RPC] Consolidate RRef type class and RRef impl class (#35694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35694

Closes https://github.com/pytorch/pytorch/issues/35110

Differential Revision: D7881729

fbshipit-source-id: eedda8f1b7510491886d469efeed4e002bb8b991
2020-06-18 07:46:38 -07:00
Meghan Lele
1e03d603c6 [JIT] Infer NamedTuple type attributes of nn.Modules correctly (#39116)
Summary:
**Summary**
This commit modifies type inference for `nn.Module` instance attributes
such that the type of a `NamedTuple` attribute is inferred correctly and
such that the field names of this `NamedTuple` instance can be used in
scripted methods. At present, the type of this attribute is inferred to be
`Tuple[T, U, ..., V]`, so the field must be referred to by index and
cannot be referred to by name.
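
For illustration, a hedged sketch (mirroring the unit test described below, but not the test itself; names are illustrative) of what correct inference enables:

```py
from typing import NamedTuple

import torch

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.p = Point(torch.ones(2), torch.zeros(2))

    def forward(self) -> torch.Tensor:
        # The attribute keeps its NamedTuple type, so its fields can be
        # accessed by name instead of only by index.
        return self.p.x + self.p.y

scripted = torch.jit.script(M())
```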

**Test Plan**
This commit adds a unit test to test that a field of a `NamedTuple`
attribute can be referred to by name in a scripted method.

**Fixes**
This commit fixes https://github.com/pytorch/pytorch/issues/37668.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39116

Differential Revision: D22082806

Pulled By: SplitInfinity

fbshipit-source-id: 1227e243ab941376cd5e382fb093751e88dc8846
2020-06-17 13:58:15 -07:00
Will Constable
a1071e5d36 Fix parsing of subscript expressions using python resolver (#39269)
Summary:
- add call out to python resolver in parseArgsFromDecl, parserReturnFromDecl
- add support in python resolver for nested subexpressions
- wrap python resolver call in exception handling to fall back to c++ path
- add tests for newly resolvable types
- closes https://github.com/pytorch/pytorch/issues/38728

Fixes bug where SourceRange objects did not include the final closing ']' for a subscript expression.  E.g. range for 'List[int]' previously included only 'List[int'.
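
A hedged sketch (illustrative names, not from the PR's tests) of the kind of nested subscript annotation that now resolves through the python resolver:

```py
from typing import Dict, List

import torch

@torch.jit.script
def bucket(xs: List[int]) -> Dict[str, List[int]]:
    out: Dict[str, List[int]] = {"even": [], "odd": []}
    for x in xs:
        if x % 2 == 0:
            out["even"].append(x)
        else:
            out["odd"].append(x)
    return out
```
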
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39269

Differential Revision: D21956402

Pulled By: wconstab

fbshipit-source-id: 5d783260322eb1e04e20bc931a8e9d9179765f13
2020-06-10 13:30:15 -07:00
James Reed
56289ba31f [JIT] Improve error message when a Future type annotation lacks a contained type (#39751)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39751

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D21960368

Pulled By: jamesr66a

fbshipit-source-id: 8650d31ff8070b12672c8d4b0224d4e69f619938
2020-06-09 16:55:13 -07:00
Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
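
A hedged sketch (not the issue's exact repro; names are illustrative) of the pattern this fixes:

```py
import torch

class M(torch.nn.Module):
    @staticmethod
    def scale(x: torch.Tensor) -> torch.Tensor:
        return x * 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Calling the staticmethod through `self` now works when scripted.
        return self.scale(x)

scripted = torch.jit.script(M())
```
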
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
James Reed
3228939f23 [JIT] Fix fake_range() (#36083)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36083

Test Plan: Imported from OSS

Differential Revision: D20874745

Pulled By: jamesr66a

fbshipit-source-id: fc57defefbc8e9840b8d5bac89b4146179e00b06
2020-04-06 14:12:35 -07:00
Wanchao Liang
238903b7be [jit] Delete polyfill typing (#27510)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27510

We can delete the polyfill typing because requirements.txt requires the user to
install typing as a dependency in both py2 and py3, so the polyfill is not
actually used either way.

Test Plan: Imported from OSS

Differential Revision: D20673393

fbshipit-source-id: ea5276824c6e275c1f991f8c12329040b0058d2b
2020-03-27 18:20:53 -07:00
Wanchao Liang
4a84ac5f5d [jit] make Future type annotation available in Python (#27637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27637

Fixes https://github.com/pytorch/pytorch/issues/26578

Test Plan: Imported from OSS

Differential Revision: D20626866

fbshipit-source-id: 20d6a3a719fddcb33e0e17a56d7123535fa20d65
2020-03-24 14:36:05 -07:00
James Reed
2de4fa702b [JIT] Preserve qualified names on traced modules (#34395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34395

fixes: https://github.com/pytorch/pytorch/issues/33913

Test Plan: Imported from OSS

Differential Revision: D20347778

Pulled By: jamesr66a

fbshipit-source-id: 7b5a35b6f9678c34cb6127d531fa3bfe65703116
2020-03-09 19:23:53 -07:00
davidriazati
2f6ffe8c39 [jit] Resolve type annotation names to types (#29623)
Summary:
This adds some machinery so that we use Python to resolve type annotation names to values, with the corresponding resolution logic in `annotations.py`, instead of working with the raw string.

This PR also `slowTests` a random test since it was taking > 1 min whereas all the other tests take < 10 seconds.
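
A hedged sketch (illustrative, not from the PR) of an annotation that is now resolved through Python to the actual typing object rather than handled as a string:

```py
from typing import Optional

import torch

@torch.jit.script
def maybe_add(x: torch.Tensor, y: Optional[torch.Tensor]) -> torch.Tensor:
    # `Optional[torch.Tensor]` is resolved via Python to the typing object
    # rather than parsed from its string form.
    if y is not None:
        return x + y
    return x
```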

Fixes #31864
Fixes #31950
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29623

Pulled By: driazati

Differential Revision: D20144407

fbshipit-source-id: ef3699f6b86039d8b4646ffc42c21bd1132d1681
2020-02-28 18:35:10 -08:00
Wanchao Liang
d494986171 [jit] make RRef type annotation available in Python (#33526)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33526

Test Plan: Imported from OSS

Differential Revision: D19988848

Pulled By: wanchaol

fbshipit-source-id: aeebc946d08b38dac0b656617bf395e86bcea558
2020-02-26 18:44:35 -08:00
davidriazati
503a4e9019 Cleanup after moving language reference (#31146)
Summary:
Stacked PRs
 * **#31146 - [jit] Cleanup after moving language reference**
 * #31138 - [jit] Move TorchScript language reference to its own page

Preview: https://driazati.github.io/pytorch_doc_previews/jit.html#torchscript-language

Pull Request resolved: https://github.com/pytorch/pytorch/pull/31146

Pulled By: driazati

Differential Revision: D19167390

fbshipit-source-id: f28daed36754a553264fc8ac142ed22c3e26d63e
2019-12-18 15:09:35 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Elias Ellison
681b610f35 use new overload mechanism for rnns (#29614)
Summary:
Uses the new overload mechanism for RNNs, so that Python and TorchScript go through the same path, using an API in line with the one specified in https://docs.python.org/3/library/typing.html#typing.overload

This brings the TorchScriptable RNNs closer to the base implementation; unifying them should be done in a follow-up PR, but there are still a few limitations that make it difficult to do so.
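
For reference, a minimal sketch of the PEP 484 pattern referenced above, using plain `typing.overload` (the TorchScript-side decorator the RNNs actually use is a private torch.jit helper and is not shown here):

```py
from typing import List, Tuple, Union, overload

@overload
def head(x: int) -> int: ...
@overload
def head(x: List[int]) -> Tuple[int, int]: ...

def head(x: Union[int, List[int]]) -> Union[int, Tuple[int, int]]:
    # Single runtime implementation behind the two declared signatures.
    if isinstance(x, int):
        return x
    return (x[0], x[-1])
```
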
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614

Differential Revision: D18486982

Pulled By: eellison

fbshipit-source-id: aaaea66a4a7f12d2e46199ca254f9e8f7475500e
2019-11-13 15:44:25 -08:00
Edward Yang
715e951e3c Revert D18458751: use new overload mechanism for rnns
Test Plan: revert-hammer

Differential Revision:
D18458751

Original commit changeset: 07c71838f21c

fbshipit-source-id: 86acb02f3e022e93ea6c1ef23fe39c80ad43978f
2019-11-13 07:21:31 -08:00
Elias Ellison
8e7b406773 use new overload mechanism for rnns (#29614)
Summary:
Uses the new overload mechanism for RNNs, so that Python and TorchScript go through the same path, using an API in line with the one specified in https://docs.python.org/3/library/typing.html#typing.overload

This brings the TorchScriptable RNNs closer to the base implementation; unifying them should be done in a follow-up PR, but there are still a few limitations that make it difficult to do so.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614

Differential Revision: D18458751

Pulled By: eellison

fbshipit-source-id: 07c71838f21cb5425e8d6dbd4a512f774c8c2970
2019-11-12 16:12:04 -08:00
Elias Ellison
fbe90b65fa Cleanup special handling of Containers, allowing custom forwards (#28988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28988

Make ModuleList, Sequential, ModuleDict go through the same pathway as other modules, cleaning up a bunch of code and allowing them to define custom forwards and other methods.

EDIT: Previously, we would ignore an nn.Sequential attribute if it was not in `__constants__` ("did you forget to add it to Constants"). This PR scripts it even if it is not in `__constants__`. Is that what we want?
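
A hedged sketch (illustrative, not from the PR) of what this enables under recursive scripting: a Sequential subclass with its own forward that can now be scripted without special-casing:

```py
import torch
import torch.nn as nn

class Residual(nn.Sequential):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x
        for layer in self:
            out = layer(out)
        return out + x  # custom behavior: residual connection

scripted = torch.jit.script(Residual(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 4)))
```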

Test Plan: Imported from OSS

Differential Revision: D18402821

Pulled By: eellison

fbshipit-source-id: dd4f28fb0df0d1ba4ad1b3bc34ba141959a433f7
2019-11-12 14:10:38 -08:00
Michael Suo
2fb4059652 change drop_on_export warning category (#29610)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29610

`DeprecationWarning` is intended for developers (and so is ignored in
certain circumstances). `FutureWarning` is the user-facing deprecation
warning. This fixes fbcode failures.
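
For reference, a minimal sketch of the distinction (not the actual call site):

```py
import warnings

# DeprecationWarning targets developers and is filtered out by default in many
# contexts; FutureWarning is the deprecation warning that end users see.
warnings.warn("drop_on_export is deprecated", FutureWarning)
```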

Test Plan: Imported from OSS

Differential Revision: D18446393

Pulled By: suo

fbshipit-source-id: ded11a007f0a62132a9839b733157a97cf9006e9
2019-11-11 23:24:27 -08:00
Elias Ellison
a5aeb37493 Don't throw when type is used in TorchScript (#28053)
Summary:
Type objects in python have an attribute `__abstractmethods__` that throws when it is accessed, so we were failing with an AttributeError whenever a type was used in TorchScript.

This PR prevents that error from happening. We can't just throw when a type is used, because it could be used to access a static method: https://github.com/pytorch/pytorch/pull/27163
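
A hedged sketch (illustrative names; whether this exact snippet compiles depends on the static-method support in the linked PR) of why a bare type reference must not raise eagerly:

```py
import torch

class Helper(object):
    @staticmethod
    def double(x: torch.Tensor) -> torch.Tensor:
        return x * 2

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    # Mentioning the class must not blow up with an AttributeError;
    # here the type is only used to reach a static method.
    return Helper.double(x)
```
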
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28053

Differential Revision: D18332347

Pulled By: eellison

fbshipit-source-id: 9c7f2220f92674ad4d903621d9762cecc566ab0d
2019-11-05 15:15:12 -08:00
davidriazati
618cb40e30 Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17859654

fbshipit-source-id: f3a116cddb5393bdfbef670c56efb2ee62ccf252
2019-10-17 11:12:35 -07:00
Zachary DeVito
fb4517132f Allow 'Any' to appear as a type argument. (#26572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572

Combined with isinstance specialization, this allows a degree of polymorphism
in functions without needing to use our weirder overload hacks.

We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.
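
A hedged sketch (illustrative, not from the PR) of the allowed pattern: `Any` in argument position, made useful only through `isinstance` refinement:

```py
from typing import Any, List

import torch

@torch.jit.script
def count_ints(xs: List[Any]) -> int:
    n = 0
    for x in xs:
        # No operators are defined on Any; refine with isinstance first.
        if isinstance(x, int):
            n += 1
    return n
```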

Test Plan: Imported from OSS

Differential Revision: D17530643

Pulled By: zdevito

fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
2019-10-16 11:07:08 -07:00
Michael Suo
341262754f module dedupe (#26666)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666

Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
  cache, and as the source of truth for `ModuleValue::attr` queries. It needs
  to do both jobs because that's how we ensure correctness (if the types are
  different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
  pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
  `ScriptModule`) are now rewritten to go through `create_script_module`, so
  that we have only a single place where construction happens.

Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
  recursively scripted if possible, matching recursive scripting semantics.
  This makes it hard to keep something from being scripted (for example, a
  Python submodule). Possibly we'll need an `ignore()` type thing for
  attributes. In particular, this adds `self.training` to *every* ScriptModule, since
  it's present on every `nn.Module`.
- I believe this change is transparent to existing users of the inheritance API: if you had an unscriptable attribute that you never used, there is no error. In some cases we will create new attributes (even if they are unused), which will increase the serialized model size compared to before.

Test Plan: Imported from OSS

Differential Revision: D17551196

Pulled By: suo

fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
2019-10-12 09:51:57 -07:00
Michael Suo
ffa422a8b3 kill _parameter_list (#27399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27399

This was devised in a time when we didn't have module attributes. They
are essentially just tensor lists, so represent them that way. This has
the additional benefit of making the RNN forward pass faster because we
effectively cache the flattened weights.

The only complicated part is that someone may come along and do:
```
my_rnn_mod.w_ih_l0 = torch.nn.Parameter(...)
```

This means we need to override setattr to keep the flattened weights
cache up to date.
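
A hedged sketch of the general pattern described above (illustrative names, not the actual RNN implementation): override `__setattr__` so that reassigning a weight drops the derived cache:

```py
import torch
import torch.nn as nn

class CachedWeights(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(4, 4))
        self._flat = None  # lazily computed flattened view of self.w

    def __setattr__(self, name, value):
        if name == "w":
            # Reassigning the weight invalidates the flattened cache.
            self.__dict__["_flat"] = None
        super().__setattr__(name, value)

    def flat_weight(self) -> torch.Tensor:
        if self._flat is None:
            self.__dict__["_flat"] = self.w.reshape(-1)
        return self._flat
```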

Test Plan: Imported from OSS

Differential Revision: D17785658

Pulled By: suo

fbshipit-source-id: 7789cd1d0d4922bfd5eba1716976442fbf150766
2019-10-12 09:51:53 -07:00
Michael Suo
971f773886 Revert D17750005: [jit] Add doc copy-edits from review
Test Plan: revert-hammer

Differential Revision:
D17750005

Original commit changeset: 230d1d33efb0

fbshipit-source-id: 12d22567b99286a8c4f719c3a384cb3665f7ba54
2019-10-09 19:12:58 -07:00
davidriazati
e7c9c8098a Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17750005

fbshipit-source-id: 230d1d33efb015e40327373a05a1d3eced7c5c00
2019-10-09 14:16:48 -07:00
Zachary DeVito
eb9000be4e always use the closure to resolve variable names (#27515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27515

Resolving variable names using the local activation frames does not work
when using recursive scripting, but our current code tries to do it
(incorrectly) anyway. The reason it works today is only because the script
call is in the same local frame as the definition. This will not be
true in practice, and it makes it seem like the API works in more cases
than it really does. This PR forces us to always use closure-based annotations,
documents that behavior, and fixes the tests so that they still pass.
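
A hedged sketch (illustrative names) of what closure-based resolution means in practice: names used inside a scripted function are looked up in its defining scope, not in whatever local frame happened to call `torch.jit.script`:

```py
import torch

SCALE = 2.0  # resolved through f's defining scope at compile time

def helper(x: torch.Tensor) -> torch.Tensor:
    return x * SCALE

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    # `helper` and `SCALE` come from where `f` is defined, not from whatever
    # local frame torch.jit.script happened to be called in.
    return helper(x) + SCALE
```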

Test Plan: Imported from OSS

Differential Revision: D17803403

Pulled By: zdevito

fbshipit-source-id: e172559c655b05f0acf96c34f5bdc849f4e09ce2
2019-10-09 12:16:15 -07:00
albanD
5b5f398dd4 Make cpp-backed jit classes appear as being in torch.jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27220

Test Plan: Imported from OSS

Differential Revision: D17715305

Pulled By: albanD

fbshipit-source-id: 574704ad23ece6da7aa2780b78867307bef523cc
2019-10-03 08:28:36 -07:00
albanD
17b1faa2bf Rename jit Function to ScriptFunction
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27219

Test Plan: Imported from OSS

Differential Revision: D17715306

Pulled By: albanD

fbshipit-source-id: d11a7634dbee6a885c7177b240958e5aed2544f3
2019-10-03 08:28:32 -07:00
davidriazati
d0fff0ebc8 Make is_optional check more robust (#26312)
Summary:
If the `Union` contains a non-class type, `issubclass` would fail; this
adds a check for that case.
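
For reference, a sketch of the underlying Python gotcha (not the patched code itself): `issubclass` raises on subscripted generics, so a naive check over `Union` arguments can throw:

```py
from typing import List, Optional

ann = Optional[List[int]]   # i.e. Union[List[int], None]
args = ann.__args__         # (List[int], type(None))

try:
    issubclass(args[0], type(None))
except TypeError as err:
    # List[int] is not a class, so issubclass() raises instead of returning False.
    print("issubclass failed:", err)
```
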
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26312

Pulled By: driazati

Differential Revision: D17505206

fbshipit-source-id: 1331e412f938e2f08ecb079972147f11e3ec77cd
2019-09-24 10:44:40 -07:00
Edward Yang
b59e856517 Revert D17486465: [jit] Make is_optional check more robust
Test Plan: revert-hammer

Differential Revision:
D17486465

Original commit changeset: c513cef3bbc0

fbshipit-source-id: 567311c001d7dd0b7ab9ffe8bb894954bea583c9
2019-09-20 11:06:19 -07:00
davidriazati
9a5b784eac Make is_optional check more robust (#26312)
Summary:
If the `Union` contains a non-class type, `issubclass` would fail; this
adds a check for that case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26312

Pulled By: driazati

Differential Revision: D17486465

fbshipit-source-id: c513cef3bbc038f15c021eb0c1bf36be0df1eb90
2019-09-20 10:50:00 -07:00
Dmytro Dzhulgakov
df338f80a6 Add a wrapper for inspect in JIT to produce better error message (#25415)
Summary:
If source code is not available due to packaging (e.g. sources are compiled to .pyc), TorchScript produces a very obscure error message. This tries to make it nicer and allows customizing the message by overriding _utils_internal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25415

Test Plan: Really hard to unit test properly. Did one-off testing by compiling to .pyc and checking the message.

Differential Revision: D17118238

Pulled By: dzhulgakov

fbshipit-source-id: 3cbfee0abddc8613000680548bfe0b8ed52a36b0
2019-09-14 21:27:51 -07:00
Elias Ellison
7ab4ad7b6d add torch.jit.is_scripting() api (#25263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263

This adds an API that returns True when scripting and False in eager mode, which, together with ignore, allows guarding not-yet-supported JIT features. Bikeshedding requested, please.

cc zou3519

```
def foo():
   if not torch.jit.is_scripting():
      return torch.linear(...)
   else:
      return addmm(...)
```

Test Plan: Imported from OSS

Differential Revision: D17272443

Pulled By: eellison

fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
2019-09-09 20:24:36 -07:00
Michael Suo
11eb8ac2a9 Revert D17199043: [JIT] preserve ignored function return value type
Test Plan: revert-hammer

Differential Revision:
D17199043

Original commit changeset: 1196fd94c207

fbshipit-source-id: 49789ae1f128262bc40a9d5b0d2b7bfbbf0b7e1e
2019-09-05 15:51:06 -07:00
Elias Ellison
df043cd49d preserve ignored function return value type (#25262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25262

Preserve the type of ignore'd functions on serialization. Currently we compile an ignore'd function with its annotated type when first compiling, but do not preserve that type. This is important for being able to compile models with not-yet-supported JIT features.

```
@torch.jit.ignore
def unsupported(x):
    return x

def foo():
   if not torch.jit._is_scripting():
      return torch.linear(...)
   else:
      return unsupported(...)
```

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D17199043

Pulled By: eellison

fbshipit-source-id: 1196fd94c207b9fbee1087e4b2ef7d4656a6647f
2019-09-05 11:21:55 -07:00
davidriazati
1c4495d8ac Clean up after running doc tests (#25036)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25036

Pulled By: driazati

Differential Revision: D16965612

fbshipit-source-id: 494a734c27c1330ea0917397efbad6bc4f40be73
2019-08-23 12:52:48 -07:00
davidriazati
6dca147946 Misc doc updates #2 (#24445)
Summary:
Another pass over the docs; this covers most of the remaining stuff:

* content updates for new API
* adds links to functions instead of just names
* removes some useless indentations
* some more code examples + `testcode`s
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24445

Pulled By: driazati

Differential Revision: D16847964

fbshipit-source-id: cd0b403fe4a89802ce79289f7cf54ee0cea45073
2019-08-21 16:45:19 -07:00
Elias Ellison
8e3c0210a5 extend torch.jit._overload to module methods (#24259)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24259

Follow-up to https://github.com/pytorch/pytorch/pull/23886: add the same overload API specified in PEP 484 to module methods, to reduce the friction of adding method overloads that was brought up in #23266.

The usage is:
```
@torch.jit.overload
def add(self, y: int) -> int: ...
@torch.jit.overload
def add(self, y: float) -> float: ...
def add(self, y):
   ...
```

Test Plan: Imported from OSS

Differential Revision: D16921304

Pulled By: eellison

fbshipit-source-id: 784e2f26f7ca9a330a434a603c86b53725c3dc71
2019-08-20 16:47:35 -07:00
Michael Suo
755f91b400 serializing function calls (#23799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23799

Before, we inlined as part of the initial IR generation process, which
has a few disadvantages:

1. It loses information about what nodes came from which function/method
calls. Other parties who want to implement transformations on the
function/module level don't have a reliable way of doing so.
2. It duplicates a ton of code if we are inlining the same
function/method many times.

After this PR: inlining is deferred to the optimization stage, so
optimizations that rely on inlining will still work, but things get
serialized with the function/method calls left in.
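
A hedged sketch of the observable effect (the inline pass is an internal API whose name and availability may vary by version):

```py
import torch

def helper(x: torch.Tensor) -> torch.Tensor:
    return x + 1

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    return helper(x) * 2

# With inlining deferred, the freshly produced graph still contains a call to
# `helper` rather than its body.
print(f.graph)

# Optimization-time inlining (internal pass; subject to change) folds it in.
torch._C._jit_pass_inline(f.graph)
print(f.graph)
```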

Differential Revision: D16652819

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Pulled By: suo

fbshipit-source-id: a11af82aec796487586f81f5a9102fefb6c246db
2019-08-19 18:42:43 -07:00
davidriazati
aed306dcf7 Add @ignore for script classes (#23614)
Summary:
This lets you mark a class so that it won't be recursively compiled.

This also runs up against a weird thing on the UX side: to ignore a
module you have to `ignore` its `forward()` method, but to ignore a
class you use `ignore` on the class declaration itself. The `ignore` on the
class declaration matches the use of `script` for script classes, but is
confusing to those who don't know the difference between script classes
and modules.
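
A hedged sketch of the two spellings described above (per this PR's description; class-level ignore is the new part, and names here are illustrative):

```py
import torch

@torch.jit.ignore
class DebugState(object):          # a class is ignored at its declaration
    def __init__(self):
        self.calls = 0

class MyModule(torch.nn.Module):
    @torch.jit.ignore
    def forward(self, x):          # a module is ignored via its forward()
        return x
```
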
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23614

Pulled By: driazati

Differential Revision: D16770068

fbshipit-source-id: bee9a9e88b6c798ce779f622c4f929adae4eaf45
2019-08-16 16:34:22 -07:00
James Reed
0619b57c4c Add the ability to compile exports on traced modules (#24298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24298

This helps in situations where you have `__{g,s}etstate__` on an `nn.Module` and you'd like to trace the module, but still preserve the serialization methods to keep the module semantically correct.
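
A hedged sketch of the described situation (illustrative names, not from the PR's tests): a traced module whose exported `__getstate__`/`__setstate__` are still compiled so serialization stays correct:

```py
from typing import Tuple

import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = 2.0

    def forward(self, x):
        return x * self.scale

    @torch.jit.export
    def __getstate__(self) -> Tuple[float]:
        return (self.scale,)

    @torch.jit.export
    def __setstate__(self, state: Tuple[float]):
        self.scale = state[0]

# Tracing compiles forward from the example input, while the exported
# serialization methods are compiled so save/load stays semantically correct.
traced = torch.jit.trace(M(), torch.randn(3))
```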

Test Plan: Imported from OSS

Differential Revision: D16799800

Pulled By: jamesr66a

fbshipit-source-id: 91c2957c94c9ec97a486ea376b2a3e3a821270af
2019-08-14 13:51:22 -07:00
Michael Suo
5ec1c293eb Revert D16552212: [jit] fix py-compat fbcode lint warnings
Differential Revision:
D16552212

Original commit changeset: 7c7de5a096ad

fbshipit-source-id: b5ea5f626883e2b213b9d02875e83e64ed206e58
2019-08-10 21:58:25 -07:00
Michael Suo
f45ec71c4e fix py-compat fbcode lint warnings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23530

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D16552212

Pulled By: suo

fbshipit-source-id: 7c7de5a096ad9a125976e4710d3660294d3991c5
2019-08-09 12:06:21 -07:00
Elias Ellison
7d8dfd6f76 make _overloads importable in nn/functional (#24049)
Summary:
Move `_overload` to `_jit_internal.py` so that it can be imported in nn/functional.py for `conv`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24049

Differential Revision: D16723339

Pulled By: eellison

fbshipit-source-id: 527e6069dbfa81f8133c405be5350a8c76873a12
2019-08-08 16:57:50 -07:00
Horace He
f81db8afb8 Initial torchbind prototype (#21098)
Summary:
I have some test code in there as well, along with a script "test_libtorch" to run it. You'll need to modify `test_libtorch` to point to where you have `pytorch` built. I currently require that `pybind11` is included as a subdirectory of the test, but added it to the `.gitignore` to make this reviewable.

Currently, something like this works:
```cpp
struct Foo {
  int x, y;
  Foo(): x(2), y(5){}
  Foo(int x_, int y_) : x(x_), y(y_) {}
  void display() {
    cout<<"x: "<<x<<' '<<"y: "<<y<<endl;
  }
  int64_t add(int64_t z) {
    return (x+y)*z;
  }
};
static auto test = torch::jit::class_<Foo>("Foo")
                    .def(torch::jit::init<int64_t, int64_t>())
                    .def("display", &Foo::display)
                    .def("add", &Foo::add)
                    .def("combine", &Foo::combine);

```
with
```py
@torch.jit.script
def f(x):
    val = torch._C.Foo(5, 3)
    val.display()
    print(val.add(3))
```
results in
```
x: 5 y: 3
24
```

Current issues:
- [x] The python class created by torchscript doesn't interact properly with the surrounding code.
```
@torch.jit.script
def f(x):
    val = torch._C.Foo(5, 3)
    return val
```
- [x] Doesn't properly take in non-pointer classes. Can't define this function signature in cpp (we don't want to support this, I believe).
```cpp
  void combine(Foo x) {
```

- [x] Has some issues with memory for blobs when constructing multiple objects (fix constant propagation pass to not treat capsules as the same object).
```py
@torch.jit.script
def f(x):
    val = torch._C.Foo(5, 3)
    val2 = torch._C.Foo(100, 0)
    val.display()
    print(val.add(3))
```
- [ ] Can't define multiple constructors (need to define overload string. Currently not possible since we don't support overloaded methods).
- [x] `init` is a little bit different syntax than `pybind`. `.init<...>()` instead of `.def(py::init<>())`
- [x] I couldn't figure out how to add some files into the build so they'd be copied to the `include/` directories, so I symlinked them manually.
- [ ] Currently, the conversion from Python into Torchscript doesn't work.
- [ ] Torchbind also currently requires Python/Pybind dependency. Fixing this would probably involve some kind of macro to bind into Python when possible.
- [ ] We pass back into Python by value, currently. There's no way of passing by reference.
- [x] Currently can only register one method with the same type signature. This is because we create a `static auto opRegistry`, and the function is templated on the type signature.

Somewhat blocked on https://github.com/pytorch/pytorch/pull/21177. We currently use some structures that will be refactored by that PR (namely `return_type_to_ivalue` and `ivalue_to_arg_type`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21098

Differential Revision: D16634872

Pulled By: Chillee

fbshipit-source-id: 1408bb89ea649c27d560df59e2cf9920467fe1de
2019-08-02 18:45:15 -07:00