Commit Graph

631 Commits

Edward Yang
715e951e3c Revert D18458751: use new overload mechanism for rnns
Test Plan: revert-hammer

Differential Revision:
D18458751

Original commit changeset: 07c71838f21c

fbshipit-source-id: 86acb02f3e022e93ea6c1ef23fe39c80ad43978f
2019-11-13 07:21:31 -08:00
Elias Ellison
8e7b406773 use new overload mechanism for rnns (#29614)
Summary:
Uses the new overload mechanism for RNNs, making it so that Python & TorchScript go through the same path, using an API in line with the one specified
in https://docs.python.org/3/library/typing.html#typing.overload

This brings the TorchScriptable RNNs closer to the base implementation; unifying them should be done in a follow-up PR, but there are still a few limitations that make it difficult to do so.
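For context, a minimal sketch of the typing.overload pattern the new mechanism mirrors (plain typing-module usage, not the torch-internal machinery):

```python
from typing import Union, overload

@overload
def double(x: int) -> int: ...
@overload
def double(x: str) -> str: ...

def double(x: Union[int, str]) -> Union[int, str]:
    # One runtime implementation; the @overload stubs above only
    # declare the accepted signatures to type checkers.
    return x + x

print(double(2))     # 4
print(double("ab"))  # abab
```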
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614

Differential Revision: D18458751

Pulled By: eellison

fbshipit-source-id: 07c71838f21cb5425e8d6dbd4a512f774c8c2970
2019-11-12 16:12:04 -08:00
Elias Ellison
fbe90b65fa Cleanup special handling of Containers, allowing custom forwards (#28988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28988

Make ModuleList, Sequential, ModuleDict go through the same pathway as other modules, cleaning up a bunch of code and allowing them to define custom forwards and other methods.

EDIT: Previously, we would ignore an nn.Sequential attribute if it was not in `__constants__` ("did you forget to add it to Constants"). This PR scripts it even if it is not in `__constants__`. Is that what we want?
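A minimal sketch of what this enables (hypothetical `SkipBlock` module, not code from this PR):

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Sequential):
    # A custom forward on a Sequential subclass; previously containers
    # were special-cased and this method would silently not be compiled.
    def forward(self, x):
        y = x
        for layer in self:
            y = layer(y)
        return x + y

m = torch.jit.script(SkipBlock(nn.Linear(4, 4), nn.ReLU()))
print(m(torch.randn(2, 4)).shape)  # torch.Size([2, 4])
```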

Test Plan: Imported from OSS

Differential Revision: D18402821

Pulled By: eellison

fbshipit-source-id: dd4f28fb0df0d1ba4ad1b3bc34ba141959a433f7
2019-11-12 14:10:38 -08:00
Zachary DeVito
627f2823e0 remove _register_* bindings from python (#29499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29499

This changes how DataParallel and trace module creation works so that
we no longer need to mutate Module class after it has been created.

The only remaining usage of register_* functions are now inside C++
tests.

Test Plan: Imported from OSS

Differential Revision: D18413652

Pulled By: zdevito

fbshipit-source-id: f039e5400cd016632768be4547892f6a69645c20
2019-11-11 13:52:46 -08:00
Zachary DeVito
4e4e29a511 Simplify ScriptModule bindings. (#29432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29432

This removes a lot of the private methods on torch._C.ScriptModule,
and instead implements _parameters, _buffers, and _modules in nn.Module
in terms of slot_dict_impl views.

A followup PR should also remove the _register_attribute,
_register_module, and _register_parameter methods, but this requires
more refactoring of the way tracing creates modules and replication
for data parallel works.

Test Plan: Imported from OSS

Differential Revision: D18387963

Pulled By: zdevito

fbshipit-source-id: f10d47afeb30c1e05d704ae5ac4166830933125c
2019-11-11 13:52:36 -08:00
Michael Suo
5b1a1a17ed remove FunctionType as an allowed constant (#29405)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29405

We never actually used this (function attributes are a separate pathway
in ConcreteModuleType).

Test Plan: Imported from OSS

Differential Revision: D18378392

Pulled By: suo

fbshipit-source-id: b06c4b6d70f0b2534be78a215125cffd22ab44f0
2019-11-08 13:38:02 -08:00
Michael Suo
255b2340fc don't copy ignored/unused methods to ScriptModule (#29342)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29342

This is not necessary, as we use `lazy_bind` to retrieve those methods
from the class anyway.

Test Plan: Imported from OSS

Differential Revision: D18383381

Pulled By: suo

fbshipit-source-id: e8b7c9e696087cc1e707ac38f7ae85f569f08371
2019-11-07 22:54:29 -08:00
James Reed
782e80e6e7 Make jit.trace_module reentrant (#29411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29411

Fixes https://github.com/pytorch/pytorch/issues/29367

Test Plan: Imported from OSS

Differential Revision: D18380559

Pulled By: jamesr66a

fbshipit-source-id: 5caf606ccbc5dc79dac14e3c28cc02dec19ce695
2019-11-07 16:29:06 -08:00
Igor Fedan
75309b45f3 explicitly provide memory format when calling to clone() at Indexing.cpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28660

Test Plan: Imported from OSS

Differential Revision: D18333346

Pulled By: ifedan

fbshipit-source-id: 06590205d883a5096388a4ae318389244130972d
2019-11-07 05:38:32 -08:00
Michael Suo
58005382c8 fix @property (#28395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28395

Currently property methods are broken in TorchScript because we
basically treat it as an attribute in the existing path: we'll evaluate
the method once and store that as the value forever.

Since lack of property support is easily worked around (just make it
a method), I've opted to just explicitly error to avoid confusion. If
people want it, they can file an issue and we can look at their use
case.

This also helps us nicely clean up some parts of the ScriptModule conversion
path.
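A minimal sketch of the suggested workaround (hypothetical module; at the time of this commit a scripted @property raised an explicit error):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = 2.0

    # Instead of `@property def doubled(self): ...`, which scripting
    # rejected, expose the value as a plain method.
    def doubled(self) -> float:
        return self.w * 2

    def forward(self, x):
        return x * self.doubled()

m = torch.jit.script(M())
print(m(torch.ones(2)))  # tensor([4., 4.])
```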

Test Plan: Imported from OSS

Reviewed By: shannonzhu

Differential Revision: D18054946

Pulled By: suo

fbshipit-source-id: 7e927836ae687cd2f13a94b9f0af399437fae422
2019-11-06 23:51:07 -08:00
Zachary DeVito
796363147f Implement more of of the nn.Module API (#28828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28828

This updates torch::script::Module to more closely match the behavior
of nn.Module. In particular, it implements the (optionally recursive)
iterators that retrieve submodules, parameters, and buffers, and makes
their names match the Python versions.

This also removes the individual accessors for Parameter, Module, Buffer, etc.
and replaces them with a single `attr` function which is equivalent to
writing `a.foo` in Python (`setattr` emulates `a.foo = v`).
As we build out the user-facing API for TorchScript values, this will
end up matching how an attribute is accessed on general objects.

This PR preserves the Python bindings for script::Module by emulating the
old API at the binding level. A followup will clean up the usage to more
directly match the C++ API.

Test Plan: Imported from OSS

Differential Revision: D18197611

Pulled By: zdevito

fbshipit-source-id: 7ee4dcbb258605d1c988314b05d938423f1ccee5
2019-11-06 22:58:25 -08:00
James Reed
309b28ee3a Trace module calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29261

Test Plan: Imported from OSS

Differential Revision: D18343363

Pulled By: jamesr66a

fbshipit-source-id: 0c6394205e2c0ea8708028d20df83fe17b466ff4
2019-11-06 15:05:49 -08:00
James Reed
6e38c3b89e Make get_trace_graph private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29149

Test Plan: Imported from OSS

Differential Revision: D18307559

Pulled By: jamesr66a

fbshipit-source-id: 0b6aec2a1d10810d4e7f6b30b256cca79fc4e854
2019-11-05 17:04:36 -08:00
Wanchao Liang
9492994feb submodule swapping via module interface (#28409)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28409

This PR enables submodule swapping via module interfaces. A user can
declare a submodule as a module interface type in the ScriptModule;
during compilation we record the module interface type in the
ModuleInfo of the ConcreteModuleType, the associated JIT type will have
the correct module InterfaceType, and the CppModule will get the
correct module list.

Given that we still keep the module interface type in the type system,
the graph is not inlined when we call Module::attr; instead it uses
prim::CallMethod to call the method. This allows us to swap in another
ScriptModule that meets the same module interface type, and we only
allow module swapping through the module interface approach.
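A minimal sketch of the swapping this enables (hypothetical module and interface names, not the tests from this PR):

```python
import torch
import torch.nn as nn

@torch.jit.interface
class ActivationInterface(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pass

class Relu(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.relu()

class Tanh(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.tanh()

class Wrapper(nn.Module):
    act: ActivationInterface  # declare the submodule by interface type

    def __init__(self):
        super().__init__()
        self.act = Relu()

    def forward(self, x):
        return self.act.forward(x)

m = torch.jit.script(Wrapper())
m.act = torch.jit.script(Tanh())  # swap in any module meeting the interface
print(m(torch.randn(3)))
```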

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D18284309

fbshipit-source-id: 2cb843e4b75fa3fcd8c6020832a81014dbff4f03
2019-11-05 11:31:40 -08:00
Zak Hassan
a02681f804 Cleaned up func removed unused variable (#29179)
Summary:
I don't see `_frames_up` being used anywhere. Just to clean up the code, I thought it should be removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29179

Differential Revision: D18319876

Pulled By: suo

fbshipit-source-id: 5e612ff94ccc88fc85288ffc26213e1d11580c36
2019-11-05 08:48:45 -08:00
Zak Hassan
7434da2c3f value assigned but never used in _recursive.py (#29181)
Summary:
# Description
I'm new to this project and just wanted to start with small bug fixes. I found some unused local variables and I've removed them in this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29181

Differential Revision: D18319893

Pulled By: suo

fbshipit-source-id: e4f9f13b6db2ca213015569deb12d3fd9beb74a8
2019-11-05 08:48:41 -08:00
Wanchao Liang
e95dc9814e introduce module interface declaration (#28408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28408

This enables an interface to be defined on an nn.Module, and
InterfaceType now has an is_module_ field to distinguish whether it is
a module interface or a normal interface (similar to how ClassType
distinguishes modules from TorchScript classes).

A module interface can be assigned any ScriptModule that has
compatible signatures on its schemas. A normal object that is not a
ScriptModule cannot be assigned to a module interface and will error
out when the user explicitly tries to do so. Assigning a ScriptModule
to a class interface makes it available only in attribute_list, not
module_list. More details on the subtyping relationship are documented
in jit_type.h.

If you declare a module interface inside an nn.Module that is being
compiled to a ScriptModule, our internal compilation behaves as
follows:

1. ConcreteModuleType records it as a module attribute and adds it to
   the attributes_ list.
2. The JitType created from the ConcreteModuleType records it as an
   attribute and pre-generates the slot. The slot is still marked as
   EntityType::MODULE to make sure the JitType records it as a module
   slot.
3. cpp_module also registers it as a Module, since the Slot type is
   the source of truth.

Since the JitType records it as an attribute and stores its type, it
behaves the same way a class interface attribute behaves now. This
means the submodule assigned to this module interface is not inlined
into the graph the way the normal `Module::attr` path behaves; instead
we generate an interface callMethod, which allows us to later swap it
with another ScriptModule that implicitly implements this module
interface.

Test Plan: Imported from OSS

Differential Revision: D18284311

fbshipit-source-id: e0b8f6e8c34b2087fab337a969e5ea3fb37ec209
2019-11-02 16:39:00 -07:00
Wanchao Liang
1e904049ca guard against inheritance on torchscript classes (#28407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28407

Given that we do not have support for inheritance or any polymorphism
strategy yet, we should guard against users using it until we have
full support, so that users won't be confused by the weird behaviors.

Test Plan: Imported from OSS

Differential Revision: D18284310

fbshipit-source-id: f55a224f4190d57926d91ed98f6168d787387eb8
2019-11-02 16:38:56 -07:00
Jianyu Huang
9e64c54c01 Add the warning message for API with linear modules (#28766)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28766

Add a warning message explicitly asking users to upgrade from the deprecated `torch.jit.quantized` API to the new `torch.quantization.quantize_dynamic` API.
ghstack-source-id: 92711620

Test Plan: CI

Differential Revision: D18164903

fbshipit-source-id: e6aff2527f335c2d9f362e6856ce8597edb52aaa
2019-10-28 14:24:44 -07:00
James Reed
f782500ee0 Abstract tracer::enter and tracer::exit into a function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28473

Test Plan: Imported from OSS

Differential Revision: D18121007

Pulled By: jamesr66a

fbshipit-source-id: 4c4a4344ad9bcc4630b945d2a645a0b05928933c
2019-10-26 18:41:14 -07:00
Michael Suo
2181dd516e fix handling of function attributes. (#28569)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28569

Previously, the inclusion of function attributes would "poison" a
ConcreteModuleType, because we did not have a way of checking whether
they are actually the same function. This PR uses the Python function
object to perform that check. This improves our ability to reuse JIT
types between modules.

Also this PR fixes a bug where we weren't properly adding modules as
attributes when converting from ConcreteType -> JIT type (we were adding
them after the fact--another reason to switch from using `register_x` to
`set_x` during module construction, which is on my to-do list after
this).

Fixes https://github.com/pytorch/pytorch/issues/28559

Test Plan: Imported from OSS

Differential Revision: D18111331

Pulled By: suo

fbshipit-source-id: ec2cccf832d3ddd4cd4d28fe19cb265f1275325a
2019-10-24 22:23:37 -07:00
なるみ
d83389d327 Ignore F401 in all __init__.py without putting noqa (#25823)
Summary:
By adding `per-file-ignores = __init__.py: F401` into `.flake8` with `flake8>=3.7`, we can ignore F401 in all `__init__.py` without putting `# noqa: F401` line by line.

http://flake8.pycqa.org/en/latest/user/options.html?highlight=per-file-ignores#cmdoption-flake8-per-file-ignores
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25823

Differential Revision: D17252182

Pulled By: soumith

fbshipit-source-id: 87b174075b79e4078953a7521bd1a8f82405646b
2019-10-23 15:28:13 -07:00
davidriazati
618cb40e30 Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17859654

fbshipit-source-id: f3a116cddb5393bdfbef670c56efb2ee62ccf252
2019-10-17 11:12:35 -07:00
Zachary DeVito
fb4517132f Allow 'Any' to appear as a type argument. (#26572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572

Combined with isinstance specialization this allows a degree of polymorphic
functions to work without needing to use our weirder overload hacks.

We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.

Test Plan: Imported from OSS

Differential Revision: D17530643

Pulled By: zdevito

fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
2019-10-16 11:07:08 -07:00
Hiroshi Ogawa
97b39a296f Fix error report highlight for unmatched type annotation (#27195)
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/25801 (see there for my verbose analysis).

As an example, for the following code:

```
import torch

@torch.jit.script
def f1(x):
    # type: (int, int) -> None
    pass
```

this PR will change error message from this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
# type: (int, int) -> None
```

to this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
at __scratch__/example.py:4:0
@torch.jit.script
def f1(x):
~~~~~~~~ <--- HERE
    # type: (int, int) -> None
    pass
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27195

Differential Revision: D17910902

Pulled By: driazati

fbshipit-source-id: af5c6353069d005752d6c7f0bd6a0c6db8437e55
2019-10-16 10:39:36 -07:00
zou3519
e5d6b75319 Bag of documentation fixes; fix more sphinx warnings (#27850)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850

Many of these are real problems in the documentation (i.e., link or
bullet point doesn't display correctly).

Test Plan: - built and viewed the documentation for each change locally.

Differential Revision: D17908123

Pulled By: zou3519

fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
2019-10-15 07:31:14 -07:00
Michael Suo
a4a5b6fcaa Revert D17913708: [pytorch][PR] [JIT] throw on custom forward for module containers
Test Plan: revert-hammer

Differential Revision:
D17913708

Original commit changeset: 1cc2a8a4b573

fbshipit-source-id: 19ad68a1b0fd8e0f17e1b7ab92879106517e13d2
2019-10-14 17:48:31 -07:00
Elias Ellison
cd6b37afa7 throw on custom forward for module containers (#27763)
Summary:
Previously, custom forwards of containers would silently not be compiled. Now we throw an error instead.

Fix for https://github.com/pytorch/pytorch/issues/26671
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27763

Differential Revision: D17913708

Pulled By: eellison

fbshipit-source-id: 1cc2a8a4b57356ba7f007a95ede0a31e5d61aa82
2019-10-14 13:08:10 -07:00
Michael Suo
341262754f module dedupe (#26666)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666

Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
  cache, and as the source of truth for `ModuleValue::attr` queries. It needs
  to do both jobs because that's how we ensure correctness (if the types are
  different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
  pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
  `ScriptModule`) are now rewritten to go through `create_script_module`, so
  that we have only a single place where construction happens.

Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
  recursively scripted if possible, matching recursive scripting semantics.
  This makes it hard to keep something from being scripted (for example, a
  Python submodule). Possibly we'll need an `ignore()` type thing for
  attributes. In particular, this adds `self.training` to *every* ScriptModule, since
  it's present on every `nn.Module`.
- I believe this change to be transparent to existing users of the inheritance API, since if you had an unscriptable attribute that you never used, there is no error. In some cases, we will create new attributes (even if they are unused), which will increase serialized model size compared to before.

Test Plan: Imported from OSS

Differential Revision: D17551196

Pulled By: suo

fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
2019-10-12 09:51:57 -07:00
Michael Suo
ffa422a8b3 kill _parameter_list (#27399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27399

This was devised in a time when we didn't have module attributes. They
are essentially just tensor lists, so represent them that way. This has
the additional benefit of making the RNN forward pass faster because we
effectively cache the flattened weights.

The only complicated part is that someone may come along and do:
```
my_rnn_mod.w_ih_l0 = torch.nn.Parameter(...)
```

This means we need to override setattr to keep the flattened weights
cache up to date.
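A minimal sketch of the caching idea described above (hypothetical class, not the actual RNN code):

```python
import torch
import torch.nn as nn

class CachedWeights(nn.Module):
    def __init__(self):
        super().__init__()
        self.w_ih_l0 = nn.Parameter(torch.randn(4, 4))
        self._flat_cache = None

    def __setattr__(self, name, value):
        # Reassigning a weight must invalidate the flattened cache.
        if name.startswith("w_"):
            self.__dict__["_flat_cache"] = None
        super().__setattr__(name, value)

    def flat_weights(self) -> torch.Tensor:
        if self._flat_cache is None:
            self._flat_cache = torch.cat([p.reshape(-1) for p in self.parameters()])
        return self._flat_cache

m = CachedWeights()
first = m.flat_weights()
m.w_ih_l0 = nn.Parameter(torch.zeros(4, 4))  # goes through __setattr__, drops cache
assert not torch.equal(first, m.flat_weights())
```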

Test Plan: Imported from OSS

Differential Revision: D17785658

Pulled By: suo

fbshipit-source-id: 7789cd1d0d4922bfd5eba1716976442fbf150766
2019-10-12 09:51:53 -07:00
Elias Ellison
5d495a11cb add unused and is_scripting to docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27630

Differential Revision: D17868856

Pulled By: eellison

fbshipit-source-id: 7cf183d5c0d5436fbaa549a02e6b8fd47fa15b67
2019-10-10 17:02:17 -07:00
Michael Suo
971f773886 Revert D17750005: [jit] Add doc copy-edits from review
Test Plan: revert-hammer

Differential Revision:
D17750005

Original commit changeset: 230d1d33efb0

fbshipit-source-id: 12d22567b99286a8c4f719c3a384cb3665f7ba54
2019-10-09 19:12:58 -07:00
davidriazati
e7c9c8098a Add doc copy-edits from review (#26322)
Summary:
Add edits from doc review
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26322

Pulled By: driazati

Differential Revision: D17750005

fbshipit-source-id: 230d1d33efb015e40327373a05a1d3eced7c5c00
2019-10-09 14:16:48 -07:00
Zachary DeVito
eb9000be4e always use the closure to resolve variable names (#27515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27515

Resolving variable names using the local activation frames does not work
when using recursive scripting, but our current code tries to do it
(incorrectly) anyway. The reason it works today is only because the
script call is in the same local frame as the definition. This will not
be true in practice and makes it seem like the API works in more cases
than it really does. This change forces us to always use closure-based
annotations, documents that behavior, and fixes the tests so that they still pass.
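A minimal sketch of closure-based resolution (hypothetical helper `scale`): names used by a scripted function are looked up from where the function is defined, not from the caller's local frame:

```python
import torch

def scale(x: torch.Tensor) -> torch.Tensor:
    return x * 2

@torch.jit.script
def apply_scale(x: torch.Tensor) -> torch.Tensor:
    # `scale` is resolved through the closure/globals of apply_scale,
    # not through the local frame of whoever invokes the script call.
    return scale(x)

print(apply_scale(torch.ones(2)))  # tensor([2., 2.])
```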

Test Plan: Imported from OSS

Differential Revision: D17803403

Pulled By: zdevito

fbshipit-source-id: e172559c655b05f0acf96c34f5bdc849f4e09ce2
2019-10-09 12:16:15 -07:00
davidriazati
725810f42c Set existing attributes under recursive script (#27514)
Summary:
This is related to #27109: `training` was being skipped since modules
have it as an attribute by default, but it should be copied anyway.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27514

Pulled By: driazati

Differential Revision: D17802544

fbshipit-source-id: 9e8f068903b67073c509c2c598b27622fcada2d7
2019-10-08 10:12:04 -07:00
davidriazati
0046092178 Reduce special casing around 'training' (#27109)
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.

Fixes #26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109

Pulled By: driazati

Differential Revision: D17728129

fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
2019-10-07 13:52:59 -07:00
Wanchao Liang
827a00cf63 Support interface python assignment as an attribute (#26734)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26734

This PR adds Python assignment of an interface as an attribute in the
module; it enables any object that implicitly inherits from a specific
interface to be assigned to that interface type in Python.

Serialization support for interface/class assignment will be done in a
follow-up PR.

Test Plan: Imported from OSS

Differential Revision: D17742708

Pulled By: wanchaol

fbshipit-source-id: a0a2d8c74b60ed3fa6c05e1b0d49b7ad1abc670b
2019-10-03 17:18:37 -07:00
albanD
5b5f398dd4 Make cpp-backed jit classes appear as being in torch.jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27220

Test Plan: Imported from OSS

Differential Revision: D17715305

Pulled By: albanD

fbshipit-source-id: 574704ad23ece6da7aa2780b78867307bef523cc
2019-10-03 08:28:36 -07:00
albanD
17b1faa2bf Rename jit Function to ScriptFunction
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27219

Test Plan: Imported from OSS

Differential Revision: D17715306

Pulled By: albanD

fbshipit-source-id: d11a7634dbee6a885c7177b240958e5aed2544f3
2019-10-03 08:28:32 -07:00
Lara Haidar
614edfce81 Add Support to Dicts and Strings in ONNX for Inputs and Outputs (#25889)
Summary:
ONNX does not support dictionaries for inputs and outputs. The reason is that the arg flattening and unflattening does not handle dictionary types.
This PR adds flattening/unflattening support for dictionaries and strings.
However, this feature should be handled with caution for input dictionaries: users need to verify their dict inputs carefully, and keep in mind that dynamic lookups are not available.

This PR allows exporting cases where models have dictionary outputs (detection and segmentation models in torchvision), and where dictionary inputs are used for model configuration (MultiScaleRoiAlign in torchvision).
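A minimal sketch of a dictionary-output export this enables (hypothetical model; the dict is flattened to its values in the exported graph, typically with a tracer warning, and exporter defaults vary across PyTorch versions):

```python
import io
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        # Dictionary output, as in torchvision detection models.
        return {"doubled": x * 2, "halved": x / 2}

buf = io.BytesIO()
torch.onnx.export(M(), torch.randn(3), buf)
```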
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25889

Reviewed By: hl475

Differential Revision: D17613605

Pulled By: houseroad

fbshipit-source-id: c62da4f35e5dc2aa23a85dfd5e2e11f63e9174db
2019-09-26 22:31:09 -07:00
davidriazati
ef8d1c50c4 Fix builtin lookup for Python functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26688

Pulled By: driazati

Differential Revision: D17560634

fbshipit-source-id: e1c50d1ca24e0313c2b7d704c488a29ef6a47cad
2019-09-24 18:02:36 -07:00
davidriazati
4c40dbcb75 Resolve NamedTuple types in Python (#26443)
Summary:
When used as annotations on Python functions, `NamedTuple`s go through our Python annotation -> type mapping, which previously had no way of looking up `NamedTuple`s (they are created lazily by checking if the type has certain properties, so the lookup builds the `TupleType` from scratch). This PR threads through the necessary data to make them work.

Fixes #26437
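A minimal sketch of the newly supported annotation (hypothetical `Point` type):

```python
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.jit.script
def total(p: Point) -> torch.Tensor:
    return p.x + p.y

print(total(Point(torch.ones(2), torch.ones(2))))  # tensor([2., 2.])
```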
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26443

Pulled By: driazati

Differential Revision: D17486441

fbshipit-source-id: a6bbb543ff05a5abe61f1a7f68db9ecdb652b358
2019-09-20 10:53:25 -07:00
Elias Ellison
1f2fa8d4d8 Make jit dicts ordered (#26465)
Summary:
Makes c10::Dict ordered and binds the OrderedDict() and dict() constructors into TorchScript. For the no-argument constructor dict(), I typed it as Dict[str, Tensor] because:
• we're almost dropping support for Python 2, at which point all dicts are ordered
• it's more conventional to write x: Dict[int, int] = {}, which is already supported
• it is possible to construct an arbitrarily typed empty OrderedDict through
OrderedDict(torch.jit.annotate(List[Tuple[key, value]], []))

We could consider dropping the no-input aten::dict constructor, since then the types would be more explicit.

This replaces https://github.com/pytorch/pytorch/issues/26170 and https://github.com/pytorch/pytorch/pull/26372 because ghstack was poisoned and I had to resubmit.
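A minimal sketch of the annotated-empty-dict form mentioned above; insertion order is now preserved in scripted code:

```python
import torch
from typing import Dict

@torch.jit.script
def build() -> Dict[str, torch.Tensor]:
    d: Dict[str, torch.Tensor] = {}
    d["first"] = torch.ones(1)
    d["second"] = torch.zeros(1)
    return d  # keys come back in insertion order

print(list(build().keys()))  # ['first', 'second']
```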
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26465

Differential Revision: D17481604

Pulled By: eellison

fbshipit-source-id: d2d49795a518c3489881afac45d070e5262c5849
2019-09-19 15:09:02 -07:00
Halil Akin
31960e8872 Add missing argument for failing function call (#26311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26311

We are currently unable to deploy models: D16955662 changed the function signature of `quantized_lstm`, but the function call here (https://fburl.com/diffusion/e4wrmx83) does not pass the newly added `use_dynamic` param.

Here is the details of the error: P111215482

```
E0916 12:36:16.423516 1149877 ExceptionTracer.cpp:214] exception stack complete
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
  what():
Arguments for call are not valid.
The following operator variants are available:

  aten::quantized_lstm(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None) -> (Tensor, Tensor, Tensor):
  Keyword argument use_dynamic unknown.
```

This diff fixes that.

Test Plan:
Running quantization tests after.

```buck test mode/dev caffe2/test:jit -- 'test_quantization_modules \(test_jit\.TestScript\)'```

https://our.intern.facebook.com/intern/testinfra/testrun/5910974518872494

Also, currently building a package (language_technology.translation.jedi.scripts:35c3643) and testing this (f138747078).

f138771702

Reviewed By: jhcross

Differential Revision: D17404451

fbshipit-source-id: 390d2ce1ecbdd63a07a8f16c80e4c3ac25ab0a99
2019-09-16 17:04:14 -07:00
Dmytro Dzhulgakov
df338f80a6 Add a wrapper for inspect in JIT to produce better error message (#25415)
Summary:
If source code is not available due to packaging (e.g. sources are compiled to .pyc), TorchScript produces a very obscure error message. This tries to make it nicer and allows customizing the message by overriding _utils_internal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25415

Test Plan: Really hard to unittest properly. Did one off testing by compiling to .pyc and checking the message.

Differential Revision: D17118238

Pulled By: dzhulgakov

fbshipit-source-id: 3cbfee0abddc8613000680548bfe0b8ed52a36b0
2019-09-14 21:27:51 -07:00
davidriazati
d546c069a4 Preserve module names in recursive script (#24505)
Summary:
Turns
```
ScriptModule(
  (conv): ScriptModule()
  (lin): ScriptModule()
  (sub): ScriptModule()
)
```

into

```
ScriptModule(
  original=MyModule
  (conv): ScriptModule(original=Conv2d)
  (lin): ScriptModule(original=Linear)
  (sub): ScriptModule(original=Submodule)
)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24505

Pulled By: driazati

Differential Revision: D16862032

fbshipit-source-id: 76dc4e5252bbf746f5cc26450b577dab10477732
2019-09-11 14:07:04 -07:00
Elias Ellison
8f7020bbdb add support for ModuleDict (#25715)
Summary:
Add support for nn.ModuleDict in script. This is needed to support torchvision.
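A minimal sketch of scripting an nn.ModuleDict (hypothetical module, assuming current container-iteration support):

```python
import torch
import torch.nn as nn
from typing import List

class Branches(nn.Module):
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleDict({
            "a": nn.Linear(4, 4),
            "b": nn.Linear(4, 2),
        })

    def forward(self, x) -> List[torch.Tensor]:
        out: List[torch.Tensor] = []
        for name, head in self.heads.items():
            out.append(head(x))
        return out

m = torch.jit.script(Branches())
print([t.shape for t in m(torch.randn(2, 4))])
```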
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25715

Differential Revision: D17301826

Pulled By: eellison

fbshipit-source-id: 541b5477e980f519a8c3bbb1be91dac227f6d00f
2019-09-10 18:43:49 -07:00
Elias Ellison
1897440e02 add torch.jit.is_scripting api (#25955)
Summary:
The PR this was based on, https://github.com/pytorch/pytorch/pull/25263, got reverted and ghimport got confused. Relanding here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25955

Differential Revision: D17296727

Pulled By: eellison

fbshipit-source-id: 96200d3ef4c86f0d9907dc41b05619cb33bf2bab
2019-09-10 17:28:59 -07:00
Elias Ellison
7ab4ad7b6d add torch.jit.is_scripting() api (#25263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263

This adds an API that returns true in script and false in eager, which together with `ignore` allows guarding not-yet-supported JIT features. Bikeshedding requested, please.

cc zou3519

```
def foo():
   if not torch.jit.is_scripting():
      return torch.nn.functional.linear(...)
   else:
      return torch.addmm(...)
```

Test Plan: Imported from OSS

Differential Revision: D17272443

Pulled By: eellison

fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
2019-09-09 20:24:36 -07:00
davidriazati
0be29ee2ba Finish testing code examples in the docs (#25668)
Summary:
All of the code examples should now run as unit tests, save for those
that require interaction (i.e. show `pdb` usage) and those that use
CUDA.

`save` had to be moved before `load` in `jit/__init__.py` so `load`
could use the file generated by `save`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25668

Pulled By: driazati

Differential Revision: D17192417

fbshipit-source-id: 931b310ae0c3d2cc6affeabccae5296f53fe42bc
2019-09-05 16:13:37 -07:00
Michael Suo
11eb8ac2a9 Revert D17199043: [JIT] preserve ignored function return value type
Test Plan: revert-hammer

Differential Revision:
D17199043

Original commit changeset: 1196fd94c207

fbshipit-source-id: 49789ae1f128262bc40a9d5b0d2b7bfbbf0b7e1e
2019-09-05 15:51:06 -07:00
Elias Ellison
df043cd49d preserve ignored function return value type (#25262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25262

Preserve the type of ignore'd functions on serialization. Currently we compile an ignore'd function with its annotated type when first compiling, but do not preserve that type. This is important for being able to compile models that use not-yet-supported features in JIT.

```
@torch.jit.ignore
def unsupported(x):
    return x

def foo():
   if not torch.jit._is_scripting():
      return torch.nn.functional.linear(...)
   else:
      return unsupported(...)
```

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D17199043

Pulled By: eellison

fbshipit-source-id: 1196fd94c207b9fbee1087e4b2ef7d4656a6647f
2019-09-05 11:21:55 -07:00
Horace He
f3f83ccb23 Added invert bitwise operation to JIT (#22324)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25360
Fixes https://github.com/pytorch/pytorch/issues/22124
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22324

Differential Revision: D17140477

Pulled By: yf225

fbshipit-source-id: f42aec5e688fe079d9e79726b7a6c345da94ae2e
2019-09-03 11:16:30 -07:00
Michael Suo
60f6cc9d59 Emit script function calls during tracing. (#25089)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25089

Previously, when the tracer encountered a scripted function (or method), it
inlined the function into the graph. Now, we emit a CallFunction or
CallMethod node instead.

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D16987936

Pulled By: suo

fbshipit-source-id: a3e38a4621f3504909ec0542865dc7e381c243d6
2019-08-30 01:30:03 -07:00
davidriazati
07db41bb07 Remove spurious print
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25378

Pulled By: driazati

Differential Revision: D17109684

fbshipit-source-id: 0d437b81c5d765427d129eeb217ea2a951c426d3
2019-08-29 00:49:22 -07:00
davidriazati
8e189a327c Fix lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25371

Pulled By: driazati

Differential Revision: D17106672

fbshipit-source-id: eab87a22798da40dd10487dc2f4b1528bd1f703e
2019-08-28 18:25:19 -07:00
davidriazati
43c4b9f2a5 Add source location to class instantiation error (#24990)
Summary:
Fixes #24987
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24990

Pulled By: driazati

Differential Revision: D17099779

fbshipit-source-id: 296e2b4ccc3fddabd4998497d0753e99680ba92d
2019-08-28 17:14:00 -07:00
Zachary DeVito
61818b8986 Add interface declarations to JIT (#25258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25258

this is the first commit in a series to add interfaces to JIT.
Interfaces allow the specification through a blank python class of an
abstract interface that can be used in type annotations for Script functions.
If a TorchScript class implements all the methods in the interface with
the appropriate types, then it is implicitly considered to implement
that interface.
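A minimal sketch of the concept as described (hypothetical `Adder` interface and `PlusN` class):

```python
import torch

@torch.jit.interface
class Adder:
    def add(self, x: int) -> int:
        pass

@torch.jit.script
class PlusN:
    def __init__(self, n: int):
        self.n = n

    def add(self, x: int) -> int:
        return x + self.n

@torch.jit.script
def use(a: Adder) -> int:
    return a.add(41)

# PlusN implicitly implements Adder, so it can be passed in directly.
print(use(PlusN(1)))  # 42
```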

Follows required:
* implementation of serialization
* implementation in the parser frontend
* better error reporting for explaining why a class does not meet an
  interface specification.

Test Plan: Imported from OSS

Differential Revision: D17079963

Pulled By: zdevito

fbshipit-source-id: a9986eeba2d4fdedd0064ce7d459c0251480a5a0
2019-08-27 22:54:37 -07:00
Edward Yang
9340b155bc Revert D15901930: Add interface declarations to JIT
Test Plan: revert-hammer

Differential Revision:
D15901930

Original commit changeset: 22c82d12c9c2

fbshipit-source-id: 4009a3ce7af245d7e0f4924824ece59cdc774180
2019-08-27 06:41:32 -07:00
Zachary DeVito
4b22cf6bd5 Add interface declarations to JIT (#21972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21972
ghimport-source-id: 280f89ca678615f915be2139d1c05cb6bc39eefc

Test Plan: Imported from OSS

Differential Revision: D15901930

Pulled By: zdevito

fbshipit-source-id: 22c82d12c9c2600e569d7083e2771fd6ec3de2b1
2019-08-26 16:57:59 -07:00
Wanchao Liang
969c918f56 bind autograd.backward and tensor.backward in TorchScript (#23913)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23913

This PR binds torch.autograd.backward and tensor.backward in TorchScript,
and makes aliasing conservative for these two ops. This is mainly
because the backward op might write to every input tensor in the graph.
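A minimal sketch of what this allows (assuming current Optional refinement for `.grad` in script):

```python
import torch

@torch.jit.script
def square_grad(x: torch.Tensor) -> torch.Tensor:
    y = (x * x).sum()
    y.backward()
    g = x.grad
    if g is None:
        raise RuntimeError("expected a gradient on a leaf tensor")
    return g

x = torch.ones(3, requires_grad=True)
print(square_grad(x))  # tensor([2., 2., 2.])
```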

Test Plan: Imported from OSS

Differential Revision: D16923272

fbshipit-source-id: 8a4016c62e00d00e0dee3d8c599d3aca220202f7
2019-08-26 12:11:02 -07:00
Elias Ellison
ab38059bc7 fix annotated assignment (#25094)
Summary:
Fixing parsing for annotated assignments:
`a: List[int] = []`.

See https://github.com/pytorch/pytorch/pull/24989/files?file-filters%5B%5D=.py for changes to the test_jit_py3 & run_test files.

follow up to https://github.com/pytorch/pytorch/pull/24477 and fix for https://github.com/pytorch/pytorch/issues/25086
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25094

Differential Revision: D16985016

Pulled By: eellison

fbshipit-source-id: 6be1363f2503303b96bd2e6a9f188ad72441f4eb
2019-08-23 13:14:38 -07:00
Elias Ellison
e8ea44796e add support for multiple assignment statements (#24477)
Summary:
add support for: `a = b, c = (1, 2)`

partial fix for https://github.com/pytorch/pytorch/issues/24256
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24477

Differential Revision: D16963413

Pulled By: eellison

fbshipit-source-id: 0433a1e759b3aa719ef1b766bb5160f2ca814205
2019-08-22 10:17:14 -07:00
davidriazati
6dca147946 Misc doc updates #2 (#24445)
Summary:
Another pass over the docs, this covers most of the remaining stuff

* content updates for new API
* adds links to functions instead of just names
* removes some useless indentations
* some more code examples + `testcode`s
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24445

Pulled By: driazati

Differential Revision: D16847964

fbshipit-source-id: cd0b403fe4a89802ce79289f7cf54ee0cea45073
2019-08-21 16:45:19 -07:00
Wanchao Liang
f6daab5686 bind autograd.grad function into TorchScript (#24871)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24871

Bind the torch.autograd.grad function into TorchScript so that well-formed
inputs can directly call it from a TorchScript function.

This also changes serialization a bit: it fixes a small bug where a node
output type could never be a tensor type in prim::ListConstruct (only its element type can be), and adds a case where we need to annotate the ListType if the element type is an optional type, to preserve type information on re-import.
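A minimal sketch of calling torch.autograd.grad from a scripted function (assuming the scripted binding's list-based signature):

```python
import torch

@torch.jit.script
def grad_of_square(x: torch.Tensor) -> torch.Tensor:
    y = (x * x).sum()
    g = torch.autograd.grad([y], [x])[0]
    if g is None:
        raise RuntimeError("gradient was not computed")
    return g

x = torch.randn(3, requires_grad=True)
print(torch.allclose(grad_of_square(x), 2 * x))  # True
```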

Differential Revision: D16923273

fbshipit-source-id: 151cc13411c8c287def35b4e65122d9fc083ccfd
2019-08-21 11:22:23 -07:00
davidriazati
b99ab492ea Fix missing super call error
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24852

Pulled By: driazati

Differential Revision: D16902742

fbshipit-source-id: a72403dc37a406ee228d3b19afc22bd86812f962
2019-08-21 10:53:38 -07:00
davidriazati
1d53d07566 Add docs to CI (#24435)
Summary:
Stacked PRs
 * #24445 - [jit] Misc doc updates #2
 * **#24435 - [jit] Add docs to CI**

This integrates the [doctest](http://www.sphinx-doc.org/en/master/usage/extensions/doctest.html) module into `jit.rst` so that we can run our code examples as unit tests. They're added to `test_jit.py` under the `TestDocs` class (which takes about 30s to run). This should help prevent things like #24429 from happening in the future. They can be run manually by doing `cd docs && make doctest`.

* The test setup requires a hack since `doctest` defines everything in the `builtins` module which upsets `inspect`
* There are several places where the code wasn't testable (i.e. it threw an exception on purpose). This may be resolvable, but I'd prefer to leave that for a follow up. For now there are `TODO` comments littered around.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24435

Pulled By: driazati

Differential Revision: D16840882

fbshipit-source-id: c4b26e7c374cd224a5a4a2d523163d7b997280ed
2019-08-20 21:40:44 -07:00
Elias Ellison
8e3c0210a5 extend torch.jit._overload to module methods (#24259)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24259

Follow-up to https://github.com/pytorch/pytorch/pull/23886: add the same overload API specified in PEP 484 to module methods, to reduce the friction of adding method overloads that was brought up in #23266.

The usage is:
```
@torch.jit.overload
def add(self, y: int) -> int: ...
@torch.jit.overload
def add(self, y: float) -> float: ...

def add(self, y):
    ...
```

Test Plan: Imported from OSS

Differential Revision: D16921304

Pulled By: eellison

fbshipit-source-id: 784e2f26f7ca9a330a434a603c86b53725c3dc71
2019-08-20 16:47:35 -07:00
James Reed
896e4b6e09 Support QScheme in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24358

Test Plan: Imported from OSS

Differential Revision: D16811412

Pulled By: jamesr66a

fbshipit-source-id: 2b0c981f7e8793bf036e398e02aca3c62ddcb64b
2019-08-20 13:09:44 -07:00
Michael Suo
755f91b400 serializing function calls (#23799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23799

Before, we inlined as part of the initial IR generation process, which
has a few disadvantages:

1. It loses information about what nodes came from which function/method
calls. Other parties who want to implement transformations on the
function/module level don't have a reliable way of doing so.
2. It duplicates a ton of code if we are inlining the same
function/method a ton of times.

After this PR: inlining is deferred to the optimization stage, so
optimizations that rely on inlining will still work. But things get
serialized with the function/method calls left in.
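A minimal sketch of the difference (using the public graph accessor and the internal inlining pass, which may change):

```python
import torch

@torch.jit.script
def helper(x: torch.Tensor) -> torch.Tensor:
    return x * 2

@torch.jit.script
def caller(x: torch.Tensor) -> torch.Tensor:
    return helper(x) + 1

# After this change the call is kept as a prim::CallFunction node...
print(caller.graph)

# ...and inlining runs later as an explicit optimization pass.
torch._C._jit_pass_inline(caller.graph)
print(caller.graph)
```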

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23799

Differential Revision: D16652819

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Pulled By: suo

fbshipit-source-id: a11af82aec796487586f81f5a9102fefb6c246db
2019-08-19 18:42:43 -07:00
davidriazati
e0e5813b72 Fix unicode in comments (#24218)
Summary:
Fixes #24164
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24218

Pulled By: driazati

Differential Revision: D16901789

fbshipit-source-id: 8f1c7af437e66119bec616bc906c96d5d92cfb13
2019-08-19 16:33:21 -07:00
Michael Suo
ef14d88f27 Make torch.jit.Attribute work with PYTORCH_ENABLED=0
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23851

Test Plan: Imported from OSS

Differential Revision: D16840394

Pulled By: suo

fbshipit-source-id: b72e081513de73f565f3aeaa61125b7d3aa9c3e7
2019-08-19 15:23:21 -07:00
davidriazati
10c456417c Clear recursive error stack on each compilation (#23458)
Summary:
Previously we weren't clearing the stack, so any failures that didn't
stop the program stayed around and would show up if
something else accessed the stack.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23458

Pulled By: driazati

Differential Revision: D16866719

fbshipit-source-id: 29739b11f79de91c6468129da1bdcbf3c53b42d9
2019-08-16 16:10:19 -07:00
Elias Ellison
2e44630d35 fix double copying of constants (#24412)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/24369
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24412

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D16843311

Pulled By: eellison

fbshipit-source-id: b25552c49b963c031c98749bcda31f65cd82f19d
2019-08-16 13:29:22 -07:00
davidriazati
b59fa077b3 Misc doc updates / fixes (#24371)
Summary:
This is a bunch of doc changes for style, correctness, and updates to
the new script API / recent TorchScript changes (e.g. namedtuple).

For reviewers, ping me to see a link of the rendered output.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24371

Pulled By: driazati

Differential Revision: D16832417

fbshipit-source-id: a28e748cf1b590964ca0ae2dfb5d8259c766a203
2019-08-15 11:31:24 -07:00
James Reed
79710604cc fix lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24375

Test Plan: Imported from OSS

Differential Revision: D16819647

Pulled By: jamesr66a

fbshipit-source-id: 84eefe1ee27bd05ed9b8745d8011dddf6cb3ddbf
2019-08-14 17:37:39 -07:00
davidriazati
9fe4052b6c Add trace_module to docs (#24258)
Summary:
Stacked PRs
 * **#24258 - [jit] Add `trace_module` to docs**
 * #24208 - [jit] Cleanup documentation around `script` and `trace`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/24258

Pulled By: driazati

Differential Revision: D16811342

fbshipit-source-id: 893be85a711ab180319b790ed1c72b93022373c1
2019-08-14 14:04:14 -07:00
davidriazati
716abd8705 Cleanup documentation around script and trace (#24208)
Summary:
Stacked PRs
 * #24258 - [jit] Add `trace_module` to docs
 * **#24208 - [jit] Cleanup documentation around `script` and `trace`**

Examples / info was duplicated between `ScriptModule`, `script`, and
`trace`, so this PR consolidates it and moves some things around to make
the docs more clear.

For reviewers, if you want to see the rendered output, ping me for a
link
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24208

Pulled By: driazati

Differential Revision: D16746236

fbshipit-source-id: fac3c6e762a31c897b132b8421baa8d4d61f694c
2019-08-14 14:04:10 -07:00
James Reed
0619b57c4c Add the ability to compile exports on traced modules (#24298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24298

This helps in situations like when you have `__{g,s}etstate__` on an `nn.Module` and you'd like to trace the module, but still preserve the serialization methods to make the module semantically correct
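A minimal sketch (hypothetical module; assuming `@torch.jit.export` methods are compiled alongside the trace, which is what this PR enables):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export
    def describe(self) -> str:
        # Compiled (not traced) when the module is traced.
        return "adds one to its input"

m = torch.jit.trace(M(), torch.randn(3))
print(m(torch.ones(3)))  # tensor([2., 2., 2.])
print(m.describe())
```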

Test Plan: Imported from OSS

Differential Revision: D16799800

Pulled By: jamesr66a

fbshipit-source-id: 91c2957c94c9ec97a486ea376b2a3e3a821270af
2019-08-14 13:51:22 -07:00
Michael Suo
8a7e57c416 clean up import_source (#24282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24282

This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.

Test Plan: Imported from OSS

Differential Revision: D16800562

Pulled By: suo

fbshipit-source-id: ebc29bb81f4fb2538081fa309ead1739980f1093
2019-08-14 11:26:26 -07:00
Edward Yang
94aae71ba9 Revert D16684390: [jit] clean up import_source
Differential Revision:
D16684390

Original commit changeset: fca81ca14d1a

fbshipit-source-id: eb229097560ab1ead43756175e552764c8a14703
2019-08-13 13:26:59 -07:00
Pieter Noordhuis
14ab44f834 Fix flake8 issues in ./torch/jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24240

Differential Revision: D16783395

Pulled By: ezyang

fbshipit-source-id: 8427b7cd7d0552820cbbf20ebfca86898f3f53f7
2019-08-13 11:50:02 -07:00
Michael Suo
bb4f4e4d03 clean up import_source (#23846)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23846

This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.

Test Plan: Imported from OSS

Differential Revision: D16684390

Pulled By: suo

fbshipit-source-id: fca81ca14d1ac9e4d6b47ae5eecaa42b38d69147
2019-08-12 20:30:06 -07:00
davidriazati
90895c8f85 Fix trace docs (#24191)
Summary:
These were incorrect and didn't run before

Pull Request resolved: https://github.com/pytorch/pytorch/pull/24191

Pulled By: driazati

Differential Revision: D16770604

fbshipit-source-id: 0d8547185871f7f4b1e44c660e45699ed8240900
2019-08-12 14:48:42 -07:00
davidriazati
cb4a6327a3 Delete WeakScriptModuleProxy (#23398)
Summary:
This PR deletes `WeakScriptModuleProxy`, uses `ScriptModule` directly, and moves the recursive script stuff into `torch/jit/_recursive.py`. The first commit is just moving code; the latter two contain the actual changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23398

Pulled By: driazati

Reviewed By: eellison

Differential Revision: D16712340

fbshipit-source-id: f907efcec59bb2694c079ab655304324c125e9bb
2019-08-12 13:36:47 -07:00
Michael Suo
77c08aa46c serialize modules as classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23098

Test Plan: Imported from OSS

Differential Revision: D16383328

Pulled By: suo

fbshipit-source-id: 36389b8e45c3febb7f224cd9c630fe643fa90bef
2019-08-11 15:50:29 -07:00
Michael Suo
5ec1c293eb Revert D16552212: [jit] fix py-compat fbcode lint warnings
Differential Revision:
D16552212

Original commit changeset: 7c7de5a096ad

fbshipit-source-id: b5ea5f626883e2b213b9d02875e83e64ed206e58
2019-08-10 21:58:25 -07:00
Mikhail Zolotukhin
d3f6d5885d Replace Module::copy_into with Module::clone. (#24068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24068

The new method has a simpler interface (no arguments).

Resolves #23915.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/24068

Differential Revision: D16736379

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: 1c1f397ce9cdaa5467fd7da3025cf44d1436ae6b
2019-08-09 18:25:38 -07:00
James Reed
442b3512d4 Simplified nnq.Linear class (#24046)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24046

`nnq.Linear` was a confusing mess of buffers/attributes and Tensor/non-Tensor members. This PR reworks it to consistently have only Python attributes, with the conversions handled explicitly by state_dict or __{get,set}state__ methods (added in PRs further up the stack).

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D16728345

Pulled By: jamesr66a

fbshipit-source-id: 47468b776b428fca2409bb55c8b161afb68a3379
2019-08-09 17:14:48 -07:00
davidriazati
be5eb6782b Fix builtin function reference (#24056)
Summary:
This was previously buggy and not being displayed on master. This fixes
the issues with the script that generates the builtin function schemas
and moves it to its own page (it's 6000+ lines of schemas).

Sphinx looks like it will just keep going if it hits errors when importing modules; we should find out how to turn that off and put it in the CI.

This also includes some other small fixes:
* removing internal-only args from `script()` and `trace()` docs; this also requires manually keeping these argument lists up to date, but I think the cleanliness is worth it
* removes outdated note about early returns
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24056

Pulled By: driazati

Differential Revision: D16742406

fbshipit-source-id: 9102ba14215995ffef5aaafcb66a6441113fad59
2019-08-09 15:58:15 -07:00
Michael Suo
f45ec71c4e fix py-compat fbcode lint warnings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23530

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D16552212

Pulled By: suo

fbshipit-source-id: 7c7de5a096ad9a125976e4710d3660294d3991c5
2019-08-09 12:06:21 -07:00
Elias Ellison
7d8dfd6f76 make _overloads importable in nn/functional (#24049)
Summary:
Move `_overload` to `_jit_internal.py` so that it can be imported in nn/functional.py for `conv`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24049

Differential Revision: D16723339

Pulled By: eellison

fbshipit-source-id: 527e6069dbfa81f8133c405be5350a8c76873a12
2019-08-08 16:57:50 -07:00
Elias Ellison
451fc51d8d add support for overloading functions (#23886)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23886

This is a series of PRs that will allow us to support adding [padding to conv](https://github.com/pytorch/pytorch/pull/22484) and also reduce the friction of adding method overloads that was brought up in  https://github.com/pytorch/pytorch/pull/23266.

Support for overloaded functions following the specification in [PEP 484](https://www.python.org/dev/peps/pep-0484/#function-method-overloading).

The usage is:
```
@torch.jit.overload
def add(x: int, y: int) -> int: ...
@torch.jit.overload
def add(x: float, y: float) -> float: ...

def add(x, y):
    return x + y
```

Follow up PRs:

- Add same API for methods
- A couple of cleanups for functions:
     - don't require default params to be specified on the overload as well
     - potentially error if an invocation could be matched to multiple overloads; now it just chooses the first one (mypy does the same thing currently)

Test Plan: Imported from OSS

Differential Revision: D16694863

Pulled By: eellison

fbshipit-source-id: f94f2933bc1c97fa58f31846acfe962b0630068c
2019-08-07 19:18:19 -07:00
Wanchao Liang
c74216d396 add NotIn support in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23637

Test Plan: Imported from OSS

Differential Revision: D16683558

Pulled By: wanchaol

fbshipit-source-id: 27d79850d76506255ba954601fae751e07ad7cd1
2019-08-07 16:07:21 -07:00
Jianyu Huang
2635b6262e Remove K and N function arguments for fbgemm_pack_quantized_matrix (#22956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22956

As Title says: remove the extra function arguments for better engineering.

Differential Revision: D16297724

fbshipit-source-id: a31be17708d13508c4ce9a3ce7eb5238e8d17984
2019-08-07 08:50:13 -07:00
Jianyu Huang
78cc9b92a5 Change fbgemm_linear_{int8,fp16}_weight to fbgemm_linear_{int8,fp16}_weight_fp32_activation (#22955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22955

Following the comment in https://github.com/pytorch/pytorch/pull/22891, change the fbgemm wrapper function name to indicate whether it is dynamic quantization or static quantization.

Differential Revision: D16297512

fbshipit-source-id: 498678e2af27070628be11a6d724ce17c2a3cde5
2019-08-06 23:19:26 -07:00
davidriazati
b0a27278bd Recursive script migration guide
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23892

Pulled By: driazati

Differential Revision: D16677532

fbshipit-source-id: 40f506b1c770e60309c0628d4745047996a05295
2019-08-06 21:43:28 -07:00
Michael Suo
e2f5bc5c08 Properly mangle nn.Module.__construct (#23779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23779

Mangling is two underscores, not one :(. We want this method to be
private so that inheritors who define a `__construct` do not interfere
with Module initialization

Test Plan: Imported from OSS

Differential Revision: D16645156

Pulled By: suo

fbshipit-source-id: b9060cb35bfaa0391ff200b63fb78b1ac15fee39
2019-08-05 17:58:34 -07:00
Wanchao Liang
8fb0d198e9 make nn.LSTM accept PackedSequence instead of Tuples
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23643

Differential Revision: D16615531

fbshipit-source-id: af508838cac21d271d3470f0f16fd75473a6e68d
2019-08-05 17:16:18 -07:00
Michael Suo
cbf05305c0 don't try to set training after ScriptModule has been initialized. (#23680)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23680

Now when initializing a ScriptModule during the torch.jit.load()
process, there is already a cpp module backing the thing. That means
that setting training will overwrite whatever the initialized
ScriptModule had.

This PR splits apart the common "set up internal state" part of the
Module __init__ and calls that from ScriptModule.__init__ and
Module.__init__, leaving the "nn.Module-specific" part (setting
`self.training`) for the nn.Module __init__

Test Plan: Imported from OSS

Differential Revision: D16606959

Pulled By: suo

fbshipit-source-id: f7ea6b36551ff4e4472b7685f65731d5cfab87fd
2019-08-04 15:04:55 -07:00
Mingzhe Li
29881c7f02 Fix LSTM int8 quantization model size issue (#23577)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23577

This diff fixes a model size issue introduced in #23291. After that PR, the model size after int8 quantization was the same as that of the original unquantized model. The reason is that we save the original weight for int8 quantization even when it's no longer needed. This diff fixes that by only saving the original weight on the fp16 quantization path.

Reviewed By: llyfacebook

Differential Revision: D16557619

fbshipit-source-id: f924ae8d155a0d525b86a7440b3c7147d5bead0a
2019-08-02 13:38:30 -07:00
davidriazati
995920ae2c Fix frontend error message
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23576

Pulled By: driazati

Differential Revision: D16611640

fbshipit-source-id: 4a6937e779dc43b3f043aca33e66d2b84376501c
2019-08-02 11:37:21 -07:00
Nikolay Korovaiko
3d15ee1b34 Remove more uses of DimensionedTensorType
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23060

Differential Revision: D16460391

Pulled By: Krovatkin

fbshipit-source-id: b50ee87d22ad18b8cbfff719b199ea876ef172f1
2019-08-01 21:19:28 -07:00
Elias Ellison
029c8e7754 allow forward hooks in tracing (#23613)
Summary:
As far as I could tell, forward hooks work out of the box, so allow them in tracing. We don't have any way of supporting backward hooks, though.

Fixes https://github.com/pytorch/pytorch/issues/20862 and fixes https://github.com/pytorch/pytorch/issues/17571
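For example (illustrative, not from the PR):

```python
import torch
import torch.nn as nn

def hook(module, inputs, output):
    # runs as ordinary Python while tracing; side effects such as this
    # print are not recorded in the traced graph
    print("output shape:", output.shape)

m = nn.Linear(4, 4)
m.register_forward_hook(hook)
traced = torch.jit.trace(m, torch.randn(2, 4))
```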
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23613

Differential Revision: D16601437

Pulled By: eellison

fbshipit-source-id: ecf5dc6201ca08b3b9afdb9fcdb0fda8741133a9
2019-08-01 09:51:19 -07:00
davidriazati
756bdcbca4 Include recursive class compilations in error call stack (#23454)
Summary:
Previously these were left out which would lead to confusing messages,
now it looks something like:

```
torch.jit.frontend.UnsupportedNodeError: import statements aren't
supported
:
at ../test.py:13:9
    def bad_fn(self):
        import pdb
        ~~~~~~ <--- HERE
'__torch__.X' is being compiled since it was called from 'fn'
at ../test.py:16:12
def fn(x):
    return X(10)
           ~~~~ <--- HERE
```

Fixes #23453

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23454

Pulled By: driazati

Differential Revision: D16567930

fbshipit-source-id: 251b6f91f37a2816e06bb4c803f9bc172fa1d91b
2019-07-30 17:29:54 -07:00
Michael Suo
c8817f9436 fix default value for script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23542

Test Plan: Imported from OSS

Differential Revision: D16557122

Pulled By: suo

fbshipit-source-id: c86578aa2c55f44ed5d573d33874a82244df3d09
2019-07-29 19:51:26 -07:00
Michael Suo
6314af6e57 Revert D16526027: [jit] Include recursive class compilations in error call stack
Differential Revision:
D16526027

Original commit changeset: 109f2968430d

fbshipit-source-id: c27252540ec6b7da60739eb7dcc8b1650672c226
2019-07-29 19:02:39 -07:00
davidriazati
52b95fd4be Include recursive class compilations in error call stack (#23454)
Summary:
Previously these were left out which would lead to confusing messages,
now it looks something like:

```
torch.jit.frontend.UnsupportedNodeError: import statements aren't
supported
:
at ../test.py:13:9
    def bad_fn(self):
        import pdb
        ~~~~~~ <--- HERE
'__torch__.X' is being compiled since it was called from 'fn'
at ../test.py:16:12
def fn(x):
    return X(10)
           ~~~~ <--- HERE
```

Fixes #23453
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23454

Pulled By: driazati

Differential Revision: D16526027

fbshipit-source-id: 109f2968430dbf51ee91b1b3409badfd557d19a4
2019-07-29 18:00:05 -07:00
davidriazati
696642ae8d Change docs to use recursive script API (#21612)
Summary:
Use the recursive script API in the existing docs

TODO:
* Migration guide for 1.1 -> 1.2
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21612

Pulled By: driazati

Differential Revision: D16553734

fbshipit-source-id: fb6be81a950224390bd5d19b9b3de2d97b3dc515
2019-07-29 17:51:22 -07:00
Michael Suo
65a89472c4 Put all modules in the global Python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23154

Test Plan: Imported from OSS

Differential Revision: D16441913

Pulled By: suo

fbshipit-source-id: a79f2c3e06a33cbd79b2e3333f16c069f356f451
2019-07-29 16:38:20 -07:00
Wanchao Liang
c384fbf4c8 support torch._C._get_tracing_state in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23248

Test Plan: Imported from OSS

Differential Revision: D16466588

Pulled By: wanchaol

fbshipit-source-id: 3c3d5dec2cea2f9cb080eadaef457cc62ac3fbe0
2019-07-29 15:05:50 -07:00
Nikolay Korovaiko
d6ff78fd00 fix an over-indented return in trace_module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23358

Differential Revision: D16519010

Pulled By: Krovatkin

fbshipit-source-id: a7e4225b70e915d91c74874e3eca9bcb87baf84c
2019-07-29 11:15:55 -07:00
Michael Suo
711be82951 Make optimize a thread_local flag
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23170

Test Plan: Imported from OSS

Differential Revision: D16441912

Pulled By: suo

fbshipit-source-id: a33485178a329d54e41e364c4f14950f88481c55
2019-07-24 23:09:21 -07:00
Mingzhe Li
b3980f46a2 Replace uint8 with int8 in Linear and LSTM quantization path (#23347)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23347

This diff replaces uint8 with int8 to match the underlying kernel implementation. When we do int8 quantization, we compute with uint8 (input activation) * int8 (weight) -> uint8 (output activation). The weight is quantized into int8.

Reviewed By: jianyuh

Differential Revision: D16469435

fbshipit-source-id: a697655b0e97833fc601e5980970aec4dba53c39
2019-07-24 22:25:12 -07:00
davidriazati
48ca64dbf7 Better error for compiling a module type
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23312

Pulled By: driazati

Differential Revision: D16461299

fbshipit-source-id: 11e56c44d561c3fbf70a96c22c5fd494eea0cf19
2019-07-24 14:24:50 -07:00
Mingzhe Li
8fdbe1e10b Support LSTM with FP16 weight (#23291)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23291

This diff implements LSTM with FP16 weights based on FBGEMM.

At a high level, here are the steps:
1. Quantize and pack weight in every layer of LSTM
2. Pass weights from step 1 to the ATen `quantized_lstm` function, which does matrix multiplication with the FP16 weight. The dtypes of the variables used in the MM are:
Y = X * W + B
(Y: fp32, X: fp32, W: fp16, B: fp32)

Reviewed By: jianyuh

Differential Revision: D16389595

fbshipit-source-id: c26ae4e153c667a941f4af64e9d07fc251403cee
2019-07-24 12:40:11 -07:00
Michael Suo
017870a633 kill module_lookup
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23097

Test Plan: Imported from OSS

Differential Revision: D16383329

Pulled By: suo

fbshipit-source-id: 282f8bac2245d584b66139daf4e5ea7b2b317295
2019-07-23 12:21:23 -07:00
Michael Suo
3be0a2b4be Parse all stmts in class defs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23031

Test Plan: Imported from OSS

Differential Revision: D16383327

Pulled By: suo

fbshipit-source-id: 6485109a66e653b7f26d30b91a97af8d71594e22
2019-07-23 12:21:15 -07:00
davidriazati
2891784a72 Resolve with closed over variables instead of stack frame (#22270)
Summary:
Previously we looked at the stack frame of the function that called
`script` to resolve variables. This doesn't work if someone calls script
with a function defined somewhere else that references captured
variables. We already have a mechanism to look at the closed over
variables for a function, so this changes the `rcb` to use that.
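A sketch of the case this fixes (illustrative):

```python
import torch

def make_fn():
    def helper(x):
        return x + 1

    def fn(x):
        return helper(x) * 2   # `helper` is a closed-over variable
    return fn

# script() may be called from a frame where `helper` is not visible;
# it is now resolved from fn's closure rather than the caller's stack frame
scripted = torch.jit.script(make_fn())
print(scripted(torch.ones(2)))
```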

Pull Request resolved: https://github.com/pytorch/pytorch/pull/22270

Pulled By: driazati

Differential Revision: D16391346

fbshipit-source-id: ad9b314ae86c249251b106079e76a5d7cf6c04c2
2019-07-22 11:44:36 -07:00
Zachary DeVito
c09e92255c Add initial support for serializing classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22953

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D16340214

Pulled By: zdevito

fbshipit-source-id: 70fb1968eca34e14492e0d2be52e28b27813f821
2019-07-19 14:51:59 -07:00
davidriazati
9897ec4701 Recursively compile class types (#22475)
Summary:
Try to compile for class types encountered in recursive script
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22475

Pulled By: driazati

Differential Revision: D16340717

fbshipit-source-id: 5e1a46db517be2412f57156efbc4eb3347b01a8a
2019-07-18 15:43:16 -07:00
Michael Suo
5911cb8e5c Make load() create only one CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22727

Differential Revision: D16197603

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 3eaefe6f229032b109d63a151fe0a20268b5cf56
2019-07-16 20:08:10 -07:00
davidriazati
7a370dbb41 Enable recursive script mode as the default (#22887)
Summary:
This fixes up the test suite (mostly just adding `ignore` decorations
to tests that need to call Python functions) so that it all passes with
recursive script enabled.

The main user-facing result of this change is that Python functions are
compiled without any decorators, so non-TorchScriptable code must be
decorated with `torch.jit.ignore` (or
`torch.jit.ignore(drop_on_export=True)` to maintain the functionality of
the current `ignore`)

Details can be found in #20939
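A sketch based on the description above (not from the PR itself):

```python
import torch

@torch.jit.ignore
def not_scriptable(x: torch.Tensor) -> torch.Tensor:
    import pdb  # arbitrary Python; would not compile without the decorator
    return x

class M(torch.nn.Module):
    def forward(self, x):
        return not_scriptable(x) + 1  # the call falls back to the interpreter

scripted = torch.jit.script(M())
```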
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22887

Pulled By: driazati

Differential Revision: D16277608

fbshipit-source-id: 0abd0dc4291cf40651a1719bff813abb2b559640
2019-07-16 13:00:08 -07:00
Michael Suo
b6a88b3344 Make traced fns also go into the global python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22901

Test Plan: Imported from OSS

Differential Revision: D16278160

Pulled By: suo

fbshipit-source-id: f3e7d83b48d5f5b5cb1548ccc5b9bd382a3c411a
2019-07-16 12:04:16 -07:00
Michael Suo
c5afdd0b55 Revert D16197605: [jit] Make traced fns also go into the global python CU
Differential Revision:
D16197605

Original commit changeset: d32c975486b0

fbshipit-source-id: a00f0490cc23824792f3e745d7b5a003b1a33d20
2019-07-15 22:31:33 -07:00
davidriazati
6ffacd5f02 Use original module's class name for ScriptModules (#22873)
Summary:
Since recursive script creates a ScriptModule from an `nn.Module`,
there are no ties to the original module to pull a type name from, so we
have to explicitly pass it in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22873

Pulled By: driazati

Differential Revision: D16268547

fbshipit-source-id: 902a30e6e36427c6ba7033ded027a29d9dcbc1ee
2019-07-15 15:27:29 -07:00
Michael Suo
5fc1260e0a Make traced fns also go into the global python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22725

Differential Revision: D16197605

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: d32c975486b0cb4808687f0aa89325571f2817c4
2019-07-15 13:13:12 -07:00
Michael Suo
16aa235f43 _script_compile and _script_class_compile add to the python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22724

Differential Revision: D16197609

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: e12b31f8c8ce14b0968f4ac9445e7d225126b210
2019-07-15 13:13:08 -07:00
Lingyi Liu
1a93b96815 Revert da315a4
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22837

Differential Revision: D16239667

Pulled By: llyfacebook

fbshipit-source-id: 1a625d78d633927129dd2791e65b333b3902f94f
2019-07-13 01:54:20 -07:00
Karl Ostmo
da315a4e2a Revert D16037021: Support GRU module quantization in PyTorch
Differential Revision:
D16037021

Original commit changeset: 71145c67d869

fbshipit-source-id: 33cd2e57eba30ea33cc4f3116732a721c26f6efb
2019-07-12 21:05:34 -07:00
Lingyi Liu
d8c1b86135 Support GRU module quantization in PyTorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22498

Reviewed By: BIT-silence

Differential Revision: D16037021

fbshipit-source-id: 71145c67d8696e525b686cd3313033e5b6771718
2019-07-12 18:31:08 -07:00
Mingzhe Li
9eb039334f Use Linear Operator with fp16 weights in JIT (#22323)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22323

This diff adds an interface to use the quantized Linear op in JIT.

Reviewed By: jamesr66a

Differential Revision: D16040724

fbshipit-source-id: 90e90aff9973c96ea076ed6a21ae02c349ee2bcf
2019-07-12 15:59:17 -07:00
Elias Ellison
cf2889ad8f add support for breaks and continues (#21692)
Summary:
Add support for breaks and continues in the jit. We do this with a Graph transform pre-SSA.

A graph of the form
```
def test():
    while i < 5:
        if i == 3:
            break
        i += 1
        print(i)
```
has the body of the loop transformed to
```
if i == 3:
    did_break = True
else:
    did_break = False
if did_break:
    loop_exit = True
else:
    i += 1
    print(i)
    loop_exit = i < 5
```

I am going to add more tests but I think it is ready for review now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21692

Differential Revision: D16215807

Pulled By: eellison

fbshipit-source-id: 365102f42de4861d9323caaeb39a96de7619a667
2019-07-12 15:02:44 -07:00
Michael Suo
de819be93e refactor self to be a class again
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22722

Test Plan: Imported from OSS

Differential Revision: D16197607

Pulled By: suo

fbshipit-source-id: b4dd96b3f9cc46b48678aab0ff89afc3666e2185
2019-07-11 14:55:39 -07:00
Michael Suo
22d70e0d4b Give functions qualified names
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22721

Test Plan: Imported from OSS

Differential Revision: D16197606

Pulled By: suo

fbshipit-source-id: 94718fcdb0d3b651f16674af3cfd6249ed4533ae
2019-07-11 14:55:34 -07:00
Karl Ostmo
1ecc945ab2 Revert D15998762: [jit] Give functions qualified names
Differential Revision:
D15998762

Original commit changeset: bc2b734f626a

fbshipit-source-id: a118cc4e9a34233279e8380529a8d8120a25839d
2019-07-10 16:10:28 -07:00
Karl Ostmo
a1ca32409f Revert D15998758: [jit] refactor self to be a class again
Differential Revision:
D15998758

Original commit changeset: 14bad87bb6e4

fbshipit-source-id: f2c29974d4afc4d8f88a36e9c266e6d5a22a6191
2019-07-10 16:10:24 -07:00
Michael Suo
ee9c8a75f4 refactor self to be a class again (#22207)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22207
ghimport-source-id: 36ee8bd17411a2e220665ad2a27364653061070e

Test Plan: Imported from OSS

Differential Revision: D15998758

Pulled By: suo

fbshipit-source-id: 14bad87bb6e44bf1a43ae86339d8cc7b311c76dd
2019-07-10 15:19:07 -07:00
Michael Suo
c0674cebf1 Give functions qualified names (#22206)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22206
ghimport-source-id: d453219d907e048f24eb7f63c096b2c300307c83

Test Plan: Imported from OSS

Differential Revision: D15998762

Pulled By: suo

fbshipit-source-id: bc2b734f626ab07f97dc50ddf1b021e8b46de312
2019-07-10 15:19:03 -07:00
davidriazati
8a233b99cb Report errors through call stack (#22280)
Summary:
The error for `test_error_stack_module`:

```
Traceback (most recent call last):
  File "../test.py", line 35, in <module>
    scripted = torch.jit.script(M())
  File "/home/davidriazati/other/pytorch/torch/jit/__init__.py", line 1119, in script
    return _convert_to_script_module(obj)
  File "/home/davidriazati/other/pytorch/torch/jit/__init__.py", line 1825, in _convert_to_script_module
    raise e
RuntimeError:

d(int x) -> int:
Expected a value of type 'int' for argument 'x' but instead found type 'str'.
:
at ../test.py:11:12
def c(x):
    return d("hello") + d(x)
           ~ <--- HERE

'c' is being compiled since it was called from 'b'
at ../test.py:14:12
def b(x):
    return c(x)
           ~~~ <--- HERE

'b' is being compiled since it was called from 'forward'
at ../test.py:22:16
    def forward(self, x):
        return b(x)
               ~~~ <--- HERE

'forward' is being compiled since it was called from 'forward'
at ../test.py:31:20
    def forward(self, x):
        return x + self.submodule(x)
                   ~~~~~~~~~~~~~~~~ <--- HERE
```

This also unifies our error reporting in the front end with `ErrorReport`

TODO
* Include module names in message, #22207 should make this easy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/22280

Pulled By: driazati

Differential Revision: D16060781

fbshipit-source-id: c42968b53aaddb774ac69d5abbf7e60c23df8eed
2019-07-09 16:41:22 -07:00
Michael Suo
3b2844eeea Make CompilationUnit own Functions (#22202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22202
ghimport-source-id: de6c963af1df76d2d6357155e64a5913ab879f76

Test Plan: Imported from OSS

Differential Revision: D15998761

Pulled By: suo

fbshipit-source-id: 5414a6424953738d823b265d20dc67dde6e5b2d8
2019-07-04 17:12:00 -07:00
David Riazati
10c4b98ade Remove weak script (#22212)
Summary:
* Deletes all weak script decorators / associated data structures / methods
   * In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
   * Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing, using `ignore` and `export` to continue supporting the overloaded `forward` methods
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand

This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212

Differential Revision: D15988346

Pulled By: driazati

fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
2019-07-03 17:28:25 -07:00
Karl Ostmo
7235532df3 Revert D16088193: Refactored math tests to iterate over all math ops
Differential Revision:
D16088193

Original commit changeset: 81b6e536b450

fbshipit-source-id: 81ee8857c3d5353955d75e05203cfbf2d3b8dacd
2019-07-02 10:18:46 -07:00
Karl Ostmo
a845d02cd5 Revert D16088191: Added math.log2 and hypot
Differential Revision:
D16088191

Original commit changeset: 5d80c480243d

fbshipit-source-id: 12ea2617e3af5bf81b1f2a57f8633ca06a99db5b
2019-07-02 10:18:42 -07:00
Horace He
b76877728a Added math.log2 and hypot
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21512

Test Plan: Imported from OSS

Differential Revision: D16088191

Pulled By: Chillee

fbshipit-source-id: 5d80c480243d2644c96df26337cf65918d79443e
2019-07-02 06:28:34 -07:00
Horace He
3d3d07b7dd Refactored math tests to iterate over all math ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21511

Test Plan: Imported from OSS

Differential Revision: D16088193

Pulled By: Chillee

fbshipit-source-id: 81b6e536b4505178c829a9d925c30cd185b7a706
2019-07-02 06:28:30 -07:00
David Riazati
2c18bf21be Fix ScriptModule.__dir__() (#22426)
Summary:
`_method_names` is on `_c`, so `dir(script_module)` calls previously
didn't work
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22426

Differential Revision: D16085330

Pulled By: driazati

fbshipit-source-id: 6f9f1bef5da4306c0f26aa0be1bcec6dd3a6f0fb
2019-07-01 19:33:14 -07:00
Nick Korovaiko
21da33f0f9 Better trace comments
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22090

Differential Revision: D15968440

Pulled By: Krovatkin

fbshipit-source-id: e55e03a4303adbaa576c4384e7a42410bd99da6e
2019-06-24 12:51:27 -07:00
Nikolay Korovaiko
2347a4032b Fix tracing docs and add more comprehensive examples (#22082)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/21857
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22082

Differential Revision: D15968306

Pulled By: Krovatkin

fbshipit-source-id: a76e500b0b7192bd814931ec48bbe9c37b8b92e0
2019-06-24 12:10:19 -07:00
Wanchao Liang
e0f5ab2c2e Tree based Iterator infrastructure: for in range/list/tensor/zip/enumerate (#21801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21801
ghimport-source-id: b019d3e9a6f9bf152991a01b40e424dff176ffaa

Test Plan: Imported from OSS

Differential Revision: D15948545

Pulled By: wanchaol

fbshipit-source-id: 6110a0f3ab08cbbb398441e8330f56083ecd2d99
2019-06-22 01:00:42 -07:00
davidriazati
1c5fe2e8c4 Add support for Python 3.8 Constant node (#22007)
Summary:
We can't really test these until we get Python 3.8 in the CI, but these all work locally and won't be invoked at all for Python 3.7 and lower so this should be pretty safe.

Fixes #21710
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22007

Pulled By: driazati

Differential Revision: D15914735

fbshipit-source-id: 83833cebe7e38b162719a4f53cbe52c3fc638edd
2019-06-21 14:22:06 -07:00
Dmytro Dzhulgakov
82dd69326b Split nn.Module._save_to_state_dict to make it overridable (#21933)
Summary:
# Motivation

We allow overriding JIT module serialization with `__getstate__`/`__setstate__` in order to cover cases where parameters are not serializable. Use cases include: MKLDNN integration: a388c78350/torch/utils/mkldnn.py (L18-L26)
and also fbgemm prepacked format integration for quantized tensors.

However, many eager-mode scripts use the `torch.save(module.state_dict())` form of serialization. There are several ways to make it work:

* make packed_weight itself pickleable (e.g. by binding `__getstate__/__setstate__` on C++ UDT level)
    * change: we’d need to allow module buffers to be of arbitrary, non-Tensor types
    * pro: no change to state_dict behavior
    * cons: might not be directly inspectable by user calling .state_dict(), especially if packed weights represent several tensors fused together
* make packed_weight being proper Tensor layout
    * pro: no change to state_dict or buffers behavior
    * cons: adding new tensor layouts is pretty costly today
    * cons: doesn’t work if multiple tensors are packed in one interleaved representation
* *[this approach]* allow Modules to override state_dict and return regular tensors
    * pro: most flexible and hackable
    * pro: maintains semantic meaning of statedict as all data necessary to represent module’s state
    * cons: complicates state_dict logic
    * cons: potential code duplication between `__getstate__/__setstate__`

Based on discussions with zdevito and gchanan we decided to pick the latter approach. Rationale: this behavior is fully opt-in and will impact only modules that need it. For those modules the requirement listed above won't be true, but we do preserve the requirement that all elements of state_dict are tensors. (https://fburl.com/qgybrug4 for internal discussion)

In the future we might also implement one of the approaches above but those are more involved.
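A minimal sketch of the chosen approach; `pack`/`unpack` are hypothetical stand-ins for e.g. MKLDNN or fbgemm packing:

```python
import torch
import torch.nn as nn

def pack(w):    # hypothetical packing into an opaque format
    return w.t().contiguous()

def unpack(p):  # hypothetical inverse
    return p.t().contiguous()

class PackedLinear(nn.Module):
    def __init__(self, weight):
        super(PackedLinear, self).__init__()
        self.packed_weight = pack(weight)

    def _save_to_state_dict(self, destination, prefix, keep_vars):
        # emit a regular tensor so torch.save(m.state_dict()) keeps working
        destination[prefix + 'weight'] = unpack(self.packed_weight)

m = PackedLinear(torch.randn(3, 4))
print(m.state_dict().keys())  # odict_keys(['weight'])
```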
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21933

Differential Revision: D15937678

Pulled By: dzhulgakov

fbshipit-source-id: 3cb5d1a8304d04def7aabc0969d0a2e7be182367
2019-06-21 09:55:22 -07:00
James Reed
5a37f8c63f Refactor TupleType to take a NamedTupleSpec (#21836)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21836
ghimport-source-id: 91cab735765ff875046b42864188e86b8487b0ae

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D15860003

Pulled By: jamesr66a

fbshipit-source-id: 62a99a212ae6f9af83a90305e443f2dd05588292
2019-06-19 10:43:51 -07:00
David Riazati
52e1cea057 Fix recursive method compilation (#21862)
Summary:
The code in `python_sugared_value.cpp` to recursively compile methods
was not being tested, so this adds a test for it and fixes some errors
in it

It was necessary to disable any hooks that were set since (at least in
our tests) they would try to export a half-finished graph when called on
recursively compiled methods
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21862

Differential Revision: D15860314

Pulled By: driazati

fbshipit-source-id: e8afe9d4c75c345b6e1471072d67c5e335b61337
2019-06-18 14:08:56 -07:00
davidriazati
5eb25c3704 Support in membership checks (#21527)
Summary:
This PR adds support for `in` checks like `key in my_dict`

For now it leaves lists as a follow up due to the changes around `IValue` lists and it needing an `IValue` equality op.

For objects it uses the magic method `__contains__(self, key)`
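For example (illustrative):

```python
import torch
from typing import Dict

@torch.jit.script
def lookup(d: Dict[str, int], key: str) -> int:
    if key in d:   # dict membership check
        return d[key]
    return -1

print(lookup({"a": 1}, "a"))  # 1
```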
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21527

Pulled By: driazati

Differential Revision: D15811203

fbshipit-source-id: 95745060394f8a9450efaaf8ab09d9af83bea01e
2019-06-18 09:49:12 -07:00
David Riazati
afad3e4954 Add support for class annotations (#21379)
Summary:
This adds support for inferred attributes (everything except empty lists, dicts, and tuples) as well as using the PEP 526 style annotations on a class, so this eliminates the need for `torch.jit.Attribute`
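A sketch of the two cases (using the later recursive `torch.jit.script` entry point):

```python
import torch
from typing import List

class M(torch.nn.Module):
    names: List[str]      # PEP 526 annotation replaces torch.jit.Attribute

    def __init__(self):
        super(M, self).__init__()
        self.names = []   # empty containers still need the annotation above
        self.scale = 2.0  # non-empty values are inferred

    def forward(self, x):
        return x * self.scale

scripted = torch.jit.script(M())
```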
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21379

Differential Revision: D15718537

Pulled By: driazati

fbshipit-source-id: b7481ae3d7ee421613e931b7dc3427ef2a99757f
2019-06-18 09:49:09 -07:00
Michael Suo
cdae8b93a7 improve recursive scripting error message (#21841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21841
ghimport-source-id: fbca813d12ca4bfad7967e12c8dafe5eaba77cab

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D15857569

Pulled By: suo

fbshipit-source-id: 152eba10565cf7119508079e98512f116eb3a5a8
2019-06-17 15:23:17 -07:00
Zachary DeVito
972ec676b2 Remove lowered execution (#21674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21674
ghimport-source-id: b8e27f0ce9b8b362daf73556ee67457fb5355062

Reviewed By: eellison

Differential Revision: D15777726

Pulled By: zdevito

fbshipit-source-id: 718ac676c9a1bcf99b856862fd29631d825645da
2019-06-16 14:29:18 -07:00
davidriazati
220efdbdc4 Refactor pybind_utils.h (#21550)
Summary:
This refactors pybind_utils so we can have all our type-inferring stuff in
1 place (e.g. for #21379)

There is some follow up work to make the error messages better, but I think that's fine to save for another PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21550

Pulled By: driazati

Differential Revision: D15727002

fbshipit-source-id: a6974f2e1e5879f0503a18efc138da31cda7afa2
2019-06-14 17:27:45 -07:00
James Reed
4bcc72fe95 Support for NamedTuple (#21428)
Summary:
Resolves https://github.com/pytorch/lockdown/issues/18

This implements NamedTuple by taking advantage of the existing `names` field in `TupleType`.

TODO: This currently doesn't retain the NamedTuple-ness through serialization. Discussed with suo offline, we can probably make a way to define an anonymous NamedTuple in script (e.g. `NamedTuple('Foo', [('a', int), ('b', float), ('c', List[float])])` and serialize that
TODO: implement support for calling the constructor with kwargs
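A sketch using the anonymous form mentioned above; whether this exact spelling is accepted at this commit is an assumption:

```python
import torch
from typing import NamedTuple

Point = NamedTuple('Point', [('x', torch.Tensor), ('y', torch.Tensor)])

@torch.jit.script
def flip(p: Point) -> Point:
    return Point(p.y, p.x)  # positional only; kwargs are a listed TODO
```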
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21428

Differential Revision: D15741564

Pulled By: jamesr66a

fbshipit-source-id: c077cbcea1880675ca6deb340a9ec78f824a136c
2019-06-14 16:45:56 -07:00
Zachary DeVito
8c57ce87b0 make tests pass with enable_first_class_module() enabled. (#21565)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21565
ghimport-source-id: d1fe735fb7821eadc59116fb921d8fe39a49f818

Reviewed By: driazati

Differential Revision: D15729503

Pulled By: zdevito

fbshipit-source-id: fabb678f040d21fae7545e3b2be1d098e24c544e
2019-06-12 17:13:00 -07:00
davidriazati
0293cf5bb6 Add Final[T] annotated members to __constants__ (#21603)
Summary:
Class member annotations can be marked with `Final[T]` instead of adding them to `__constants__`. `Final` comes from the `typing_extensions` module (which will be used if it is present). If not, the polyfill from `_jit_internal` is exposed as `torch.jit.Final` for users that don't want to install `typing_extensions`.

This keeps around `__constants__` since a lot of code is still using it, but in documentation follow-ups we should change the examples to all use `Final`.

TODO: install typing_extensions on CI, move tests to a Python3 only file when #21489 lands
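For example (illustrative):

```python
import torch
from torch.jit import Final  # the polyfill described above

class M(torch.nn.Module):
    scale: Final[float]       # treated as a constant; no __constants__ entry

    def __init__(self):
        super(M, self).__init__()
        self.scale = 2.0

    def forward(self, x):
        return x * self.scale

scripted = torch.jit.script(M())
```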
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21603

Pulled By: driazati

Differential Revision: D15746274

fbshipit-source-id: d2c9b5643b4abba069b130c26fd42714c906ffac
2019-06-12 16:40:40 -07:00
David Riazati
0481a7710d Support for type annotations instead of torch.jit.annotate() (#21390)
Summary:
This adds support for PEP 526 style annotations on assignments in place of
`torch.jit.annotate()`, so

```python
a = torch.jit.annotate(List[int], [])
```

turns into

```python
a : List[int] = []
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21390

Differential Revision: D15790937

Pulled By: driazati

fbshipit-source-id: 0cc204f7209a79839d330663cc6ba8320d3a4120
2019-06-12 15:51:46 -07:00
Bram Wasti
cdbc20677c Add len to OrderedDict types (#21651)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21651
ghimport-source-id: 0bba5c1930865e2d18b18782ba8c8990b0761d4d

Differential Revision: D15767795

Pulled By: bwasti

fbshipit-source-id: 70e27176897b0f977c9034ffb3ad21091c91e12e
2019-06-11 14:53:40 -07:00
Will Feng
7a040f4b0b Revert D15706021: [jit] Support for type annotations instead of torch.jit.annotate()
Differential Revision:
D15706021

Original commit changeset: 8bf1459f229d

fbshipit-source-id: 7ae34578560e2dccd0f04af2220445b3999771fe
2019-06-11 14:33:28 -07:00
davidriazati
bbcd6cc782 Support for type annotations instead of torch.jit.annotate() (#21390)
Summary:
This adds support for PEP 526 style annotations on assignments in place of
`torch.jit.annotate()`, so

```python
a = torch.jit.annotate(List[int], [])
```

turns into

```python
a : List[int] = []
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21390

Pulled By: driazati

Differential Revision: D15706021

fbshipit-source-id: 8bf1459f229d5fd0e16e59953b9656e85a2207fb
2019-06-11 12:03:57 -07:00
Nikolay Korovaiko
5b4a188a95 add support for steps (strides) in tensor slices
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20929
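For example (illustrative):

```python
import torch

@torch.jit.script
def every_other(x):
    return x[::2]  # a slice with a step (stride) now compiles
```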

Differential Revision: D15632636

Pulled By: Krovatkin

fbshipit-source-id: 0e127bbd7b339784c4be2e0a57f28024727d5ad3
2019-06-07 15:55:26 -07:00
davidriazati
8a2985eb05 Support recursive ModuleList / Sequential (#21306)
Summary:
Adds support for recursively compiling `nn.Sequential` and
`nn.ModuleList`. When either is used, it is converted to a
`jit._ConstModuleList` or `jit._ConstSequential` as necessary. Due to
this, we don't need to add it to `__constants__` since it's made
constant on demand.

This PR also moves the recursive script tests out to their own class
`TestRecursiveScript` (the added test is called `test_iterable_modules`)
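A sketch of the supported pattern (using the later recursive `torch.jit.script` entry point):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        # no __constants__ entry required; made constant on demand
        self.layers = nn.ModuleList([nn.Linear(4, 4) for _ in range(3)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

scripted = torch.jit.script(M())
```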
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21306

Pulled By: driazati

Differential Revision: D15611738

fbshipit-source-id: fac52993990bd2dfad71d044c463a58a3759932a
2019-06-06 12:23:59 -07:00
davidriazati
61cc03fb8d Make ScriptModule.training an attribute instead of a parameter (#21078)
Summary:
Redo of #19587
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21078

Pulled By: driazati

Differential Revision: D15560540

fbshipit-source-id: f415775d87c163f93b3bbdd5f87c9ff73f58b049
2019-06-06 12:06:49 -07:00
Horace He
7e300fbb21 Added degrees, radians, ldexp (#21131)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21131
ghimport-source-id: 62b9cb71a17f9c9a7999a6e33c2d8b840ce097ff

Differential Revision: D15563184

Pulled By: Chillee

fbshipit-source-id: e2c47fb9f9c0fe9f039cfd001c5e6d5b455e034c
2019-06-05 19:17:02 -07:00
Horace He
f8202d85a0 Added frexp, isinf, isnan, isfinite (#21130)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21130
ghimport-source-id: fa771086da13deed232e142db6f940439bcc67bc

Differential Revision: D15563186

Pulled By: Chillee

fbshipit-source-id: fe33dbc454af2a9626ad810a5304300eb17d7530
2019-06-05 18:46:39 -07:00
Horace He
ba2bdf8d0e Added factorial (#21129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21129
ghimport-source-id: a676dd33c4d0b2b60c3e9ce725bda0abeb22375f

Differential Revision: D15563183

Pulled By: Chillee

fbshipit-source-id: 641cae34c181a16c772665f5f7ed01c96a67ea9c
2019-06-05 11:51:03 -07:00
Horace He
92b76df8f6 Finished trigonometric functions (#21128)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21128
ghimport-source-id: d566de103f2aefc59e6423181de325d8f42620f4

Differential Revision: D15563190

Pulled By: Chillee

fbshipit-source-id: ad2e09cac5c7dae9978a7bd61098c2828620cdc4
2019-06-04 17:59:09 -07:00
Horace He
7309cb60fd Finished the high-priority functions (#21127)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21127
ghimport-source-id: 609021958e76ea01299f62b9491038005e6b4f27

Differential Revision: D15563189

Pulled By: Chillee

fbshipit-source-id: 5c6155a69fff7447689ef012ea303dc358d50486
2019-06-04 17:59:05 -07:00
Horace He
622588d8fd Added remainder of high-priority trigonometric math ops (#21126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21126
ghimport-source-id: e310f3cfb28436b99ad038691887ca82068ca2c9

Differential Revision: D15563191

Pulled By: Chillee

fbshipit-source-id: 7135ddd5bc9eebc818694fa8b67eaade907fa8a1
2019-06-04 17:59:02 -07:00
Lucas Hendren
770089c2b8 math module support: isnan, asinh, atanh, cosh, sinh, and tanh (#19337)
Summary:
driazati and eellison, please review. This PR is for #19026; specifically: isnan, asinh, atanh, cosh, sinh, and tanh
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19337

Differential Revision: D15580932

Pulled By: driazati

fbshipit-source-id: 38513fa59088e038264f9f6f0d6374a13a165589
2019-06-03 10:54:42 -07:00
davidriazati
2f4824b2fb Add support for recursive compilation on Modules (#20708)
Summary:
Following on #19747, this implements most of the `torch.jit.script()` changes laid out in #20939.

Still to do:
* Accessing a method from Python does not add it as a `ScriptMethod` (so only `export`ed methods and `forward` are compiled)
* Calling a method other than `forward` on a submodule doesn't work

Pull Request resolved: https://github.com/pytorch/pytorch/pull/20708

Pulled By: driazati

Differential Revision: D15560490

fbshipit-source-id: cc7ef3a1c2772eff9beba5f3e66546d2b7d7198a
2019-05-31 14:27:16 -07:00
davidriazati
e13b483f58 Fix weak module cuda() _flat_weights bug (#21107)
Summary:
Dynamically creating a type at runtime was messing up the MRO and has been causing many other problems. I think it's best to delete it; this causes a regression, since
```python
self.linear = nn.Linear(10, 10)
isinstance(self.linear, nn.Linear)
```
will now be `False` again, but this will be fixed once recursive script mode is the default (#20939)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21107

Pulled By: driazati

Differential Revision: D15560549

fbshipit-source-id: 7bd6b958acb4f353d427d66196bb4ee577ecb1a6
2019-05-31 10:35:30 -07:00
James Reed
fe39602451 Support for rudimentary f-strings (#21037)
Summary:
Resolves https://github.com/pytorch/lockdown/issues/51

This adds support for converting simple f-string literals to calls to `str.format()`. It does not support conversion specifiers or format specs.

This also does not support the string parser frontend, since that implementation would be more involved and likely would require modifying our TorchScript AST
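For example (illustrative):

```python
import torch

@torch.jit.script
def greet(name: str) -> str:
    return f"hello, {name}"  # lowered to "hello, {}".format(name)
```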
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21037

Reviewed By: zdevito

Differential Revision: D15541183

Pulled By: jamesr66a

fbshipit-source-id: ae9df85e73f646d7219c1349f5b7683becbcef20
2019-05-30 15:50:45 -07:00
James Reed
76deb450c6 Record source/line info in SourceRange and report in highlight (#21157)
Summary:
Resubmission of https://github.com/pytorch/pytorch/pull/20898 with flake8 fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21157

Reviewed By: zdevito

Differential Revision: D15560324

Pulled By: jamesr66a

fbshipit-source-id: fc4e429eac03d2768f758b19c9d43e0bb614c2b8
2019-05-30 15:45:30 -07:00
Michael Suo
b6d1a72f48 improve error message on inferred type (#21058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21058
ghimport-source-id: 7fad3a0567022dd417f4bd079a50a22e3c1dc020

Differential Revision: D15547218

Pulled By: suo

fbshipit-source-id: 5dbd567c79e6d01e9af4b8552777f7f0043df5b2
2019-05-30 10:50:34 -07:00
Edward Yang
e9df9e7960 Revert D15552424: [pytorch][PR] [JIT] Record source/line info in SourceRange and report in highlight
Differential Revision:
D15552424

Original commit changeset: 78d0f0de03f7

fbshipit-source-id: cc24f62189b7bbcdc1406912cfb3d4ca52b8e67e
2019-05-30 05:17:15 -07:00
Dmytro Dzhulgakov
52ded63128 Revert D15546045: [jit] Add support for recursive compilation on Modules
Differential Revision:
D15546045

Original commit changeset: c2c8fe179088

fbshipit-source-id: c921fb92cf9f5c6c94c77fa5070f9c5775c91b77
2019-05-29 23:42:50 -07:00
James Reed
6875018793 Record source/line info in SourceRange and report in highlight (#20898)
Summary:
Resolves https://github.com/pytorch/lockdown/issues/29

Examples:

```
import torch

@torch.jit.script
def foobar(x):
    return torch.blargh(xyz)

==

RuntimeError:
object has no attribute blargh:
at compile.py:5:12
torch.jit.script
def foo(x):
    return torch.blargh(x)
           ~~~~~~~~~~~~ <--- HERE
```

It also gets the correct column number in the case where the original source file has common leading whitespace in front of the callable:

```
import torch

with torch.no_grad():
            @torch.jit.script
            def foo(x):
                return torch.blargh(x)

==
RuntimeError:
object has no attribute blargh:
at compile_leading.py:6:24
torch.jit.script
def foo(x):
    return torch.blargh(x)
           ~~~~~~~~~~~~ <--- HERE
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20898

Differential Revision: D15552424

Pulled By: jamesr66a

fbshipit-source-id: 78d0f0de03f7ccbf3e7ea193a1b4eced57ea5d69
2019-05-29 21:32:33 -07:00
davidriazati
8d3388aef2 Add support for recursive compilation on Modules (#20708)
Summary:
Following on #19747, this implements most of the `torch.jit.script()` changes laid out in #20939.

Still to do:
* Accessing a method from Python does not add it as a `ScriptMethod` (so only `export`ed methods and `forward` are compiled)
* Calling a method other than `forward` on a submodule doesn't work
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20708

Pulled By: driazati

Differential Revision: D15546045

fbshipit-source-id: c2c8fe179088ffbdad47198e799a456560655b86
2019-05-29 18:52:36 -07:00
Michael Suo
154029a6ff Revert D15534670: [jit] improve error message on inferred type
Differential Revision:
D15534670

Original commit changeset: 8bbfd6e9c1af

fbshipit-source-id: fe62cf954292e8ef1d00a3cc569206f73cedcd31
2019-05-29 14:56:08 -07:00
Michael Suo
5dacf6b048 improve error message on inferred type (#21058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21058
ghimport-source-id: e7d6e082b0faf4f3d3e683f2c98863ee269439f0

Differential Revision: D15534670

Pulled By: suo

fbshipit-source-id: 8bbfd6e9c1afbc3006d7d55ed633e18618e05021
2019-05-29 14:47:00 -07:00
David Riazati
fa8c132e24 Revert D15502768: [pytorch][PR] [jit] Make ScriptModule.training an attribute instead of a parameter
Differential Revision:
D15502768

Original commit changeset: 3022f2d57ec6

fbshipit-source-id: 5cd08d3c3a75e38e3aa9b75a0c0059a2c6c85a1e
2019-05-29 12:18:18 -07:00
Nikolay Korovaiko
cbf2a4f5c4 print a warning if a type annotation prefix is invalid according to mypy (#20884)
Summary:
This PR adds a check that prints a warning if a type annotation prefix isn't what mypy expects.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20884

Differential Revision: D15511043

Pulled By: Krovatkin

fbshipit-source-id: 9038e074807832931faaa5f4e69628f94f51fd72
2019-05-29 11:56:55 -07:00
David Riazati
28079c3906 Make ScriptModule.training an attribute instead of a parameter (#19587)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19587 [jit] Make ScriptModule.training an attribute instead of a parameter**

Remove the hack we had previously where `training` was a buffer
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19587

Differential Revision: D15502768

Pulled By: driazati

fbshipit-source-id: 3022f2d57ec6849868f9225d9bc2bfb7828cb318
2019-05-28 16:06:46 -07:00
Nikolay Korovaiko
18809f7b0b Better error message in __getstate__ to let a user know that ScriptModules can't be deep-copied atm (#20885)
Summary:
Before we look into supporting `deepcopy`, we could at least improve the error message.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20885

Differential Revision: D15511023

Pulled By: Krovatkin

fbshipit-source-id: 93b8730a2cc663eee0147f14d3341d0606748eaf
2019-05-28 15:09:07 -07:00
Wanchao Liang
c7e0722814 allow passing ordered dict for nn sequential (#20796)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20796
ghimport-source-id: 9f895a2a6ebc71984196b868dc3ea6a12286bc81
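The eager-mode construction this enables in script (illustrative):

```python
import torch.nn as nn
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('conv', nn.Conv2d(1, 8, 3)),
    ('relu', nn.ReLU()),
]))
print(model.conv)  # submodules are addressable by the given names
```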

Differential Revision: D15505330

Pulled By: wanchaol

fbshipit-source-id: 2922c56606b477a34f4e6433fa790d5b2de9d77a
2019-05-24 20:31:05 -07:00
Nikolay Korovaiko
31e2d20c5e Dictionarize check_inputs coming from trace
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20813

Differential Revision: D15466836

Pulled By: Krovatkin

fbshipit-source-id: ffdb418592b76dc67c65c59f4dc7303f08734f97
2019-05-23 11:17:55 -07:00
Lingyi Liu
2c556a9489 fix the input/output type mismatch (#20829)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20829

as title

Reviewed By: jamesr66a

Differential Revision: D15461937

fbshipit-source-id: 02c7150c0e8d020030ae8898008f718c74850dca
2019-05-23 11:08:21 -07:00
Edward Yang
fdb923996d Revert D15445092: Some minor fix to unblock the Bert model quantization
Differential Revision:
D15445092

Original commit changeset: 22da41a56ecb

fbshipit-source-id: eca9a85900bf48fe6a9da5cfff61606a10f0c3de
2019-05-22 14:25:14 -07:00
Lingyi Liu
70ecddfd76 Some minor fix to unblock the Bert model quantization (#20787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20787

Set requires_grad=False for bias: leaving it set would block the JIT tracing.
The as_type fix: the input tensor shape and output tensor shape will be different, which will trigger the assertion failure at https://fburl.com/0m8xy7tc.

Reviewed By: jamesr66a

Differential Revision: D15445092

fbshipit-source-id: 22da41a56ecb9ac092585d0cc1ff0658fb9d631b
2019-05-21 23:13:08 -07:00
davidriazati
796e359601 Refactor builtin ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20666

Pulled By: driazati

Differential Revision: D15404419

fbshipit-source-id: c52bfa408c3caabb0a14779308686cf27fed349b
2019-05-20 11:21:18 -07:00
davidriazati
a543586bff Add _enable_recursive_script to try to script all Python functions (#19578)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19578 [jit] Try to script all Python functions**

This adds the `torch.jit._enable_recursive_script` context manager, which will try to compile any Python functions it sees. It's hidden behind an internal context manager for now since it's incomplete (doesn't work for script_methods/Python submodules). If it can't compile the Python function it outputs an error.
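My guess at the usage, based only on the description above; this API is internal and incomplete at this point:

```python
import torch

def helper(x):  # a plain, undecorated Python function
    return x + 1

with torch.jit._enable_recursive_script():
    @torch.jit.script
    def fn(x):
        return helper(x)  # helper is compiled on the fly instead of erroring
```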
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19578

Pulled By: driazati

Differential Revision: D15386727

fbshipit-source-id: 4e308f67677b8e9fccfc525a91bb2f4585062048
2019-05-17 14:50:45 -07:00
Zachary DeVito
5243fe0350 Allow static inheritance for ScriptModules (#20503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20503
ghimport-source-id: 35684175f0806485b074ea136548823ad1bc1c30

Differential Revision: D15341555

Pulled By: zdevito

fbshipit-source-id: ad19da3306914196dcbbcee829dcb0a9f22e3722
2019-05-15 12:41:55 -07:00
Yauheni Koran
5f7ef09f57 math module support: gcd, copysign, erf, erfc, expm1, fabs, gamma, lgamma (#19707)
Summary:
eellison, driazati: refer to issue #19026.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19707

Differential Revision: D15302632

Pulled By: eellison

fbshipit-source-id: 68ff13b478b93cc33703ef3276b5fa727c8ff31a
2019-05-13 08:55:23 -07:00
Nikolay Korovaiko
3f3ee5600a make trace's errors more helpful in terms of what it can and can't do when tracing module's methods
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20368

Differential Revision: D15305340

Pulled By: Krovatkin

fbshipit-source-id: bafb13002df5c9741160e96205e0846243dde3ec
2019-05-10 17:40:21 -07:00
davidriazati
7d3d5b73f4 Add multiline type annotation support for Python frontend (#14922)
Summary:
This allows multiline type comments in accordance with [PEP 484](https://www.python.org/dev/peps/pep-0484/#suggested-syntax-for-python-2-7-and-straddling-code)

```python
@torch.jit.script
def foo(x,  # type: Tensor
        y   # type: Tuple[Tensor, Tensor]
        ):
    # type: (...) -> Tuple[Tensor, Tensor]
    return x, x
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14922

Pulled By: driazati

Differential Revision: D15268432

fbshipit-source-id: e9add8d8025e42390a14a643835d15cc67a2f33e
2019-05-10 17:27:41 -07:00
davidriazati
3a39ce0f41 Fix reflection on weak modules, copy attributes (#20190)
Summary:
* Constructs a new type at runtime so that `isinstance` checks work for
weak modules assigned to `ScriptModule`s
* Fix some extraneous names in `__constants__`
* Add `in_features` and `out_features` to `nn.Linear` `__constants__`

Fixes #19363
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20190

Pulled By: driazati

Differential Revision: D15302350

fbshipit-source-id: 1d4d21ed44ab9578a4bc2a72396a82e9bbcd387c
2019-05-10 17:14:49 -07:00
davidriazati
00d0ddb140 Add all list specializations to pickler (#20191)
Summary:
TensorList, DoubleList, and BoolList were missing from the pickler, so
this adds them.

As a follow-up, a lot of the code for these could be templated and cut down.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/20191

Pulled By: driazati

Differential Revision: D15299106

fbshipit-source-id: f10c0c9af9d60a6b7fb8d93cea9f550b1a7e2415
2019-05-10 17:14:42 -07:00
James Cross
a4ae689636 quantize_rnn_modules in ensemble_export (#20365)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20365

Enable quantization of `torch.nn.LSTM` module in the decoder of PyTorch natively exported beam search models.

Reviewed By: jmp84

Differential Revision: D15260631

fbshipit-source-id: bbdd3a30c2c2110986eb7aa7ff11ce1c9090ddf4
2019-05-10 15:33:41 -07:00
Zachary DeVito
3afd99680c Remove SourceLocation (respin) (#20333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20333
ghimport-source-id: e64075bb82067224463e9955d10bd13967d1975d

Differential Revision: D15284081

Pulled By: zdevito

fbshipit-source-id: ac26ae48392b9daff08f460529c06af8f4e4722a
2019-05-09 16:17:33 -07:00
Wanchao Liang
e870b11ae6 Revert D15275731: Remove SourceLocation
Differential Revision:
D15275731

Original commit changeset: f4da178c3137

fbshipit-source-id: 830b79735eb2dadc4795b5aae407826bf20ef121
2019-05-09 13:07:11 -07:00
Zachary DeVito
eca91de5d2 Remove SourceLocation (#20300)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20300
ghimport-source-id: 06f606c4db3b70b1d2ed9f6ed4542c3f703c4e17

Differential Revision: D15275731

Pulled By: zdevito

fbshipit-source-id: f4da178c31372c2264feb9f99476b9c9aa66c1f2
2019-05-09 11:48:29 -07:00
Zachary DeVito
1102f4c56e Bump op_version_set (#19812)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19812
ghimport-source-id: 7cdb24c6a7501c6ec5f0eae07325512746f5abb9

Differential Revision: D15102803

Pulled By: zdevito

fbshipit-source-id: bedf0bd6e1170fa294c65c87df75b82d8694f89c
2019-05-08 23:21:22 -07:00
James Reed
2179d5b32b Dynamic quantized full LSTM module (#20249)
Summary:
Previously we only had a Python wrapper for `torch.quantized_lstm_cell`. We had the op `torch.quantized_lstm`, but it didn't have a wrapper. This adds the wrapper.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20249

Reviewed By: driazati

Differential Revision: D15250023

Pulled By: jamesr66a

fbshipit-source-id: f05ad784d903e0ef3a62633c8bf80bad79de48ae
2019-05-08 19:46:59 -07:00
davidriazati
877b7c1b8d Fix NameError with PYTORCH_JIT=0 (#20120)
Summary:
Right now using `PYTORCH_JIT=0` gives this error:

`NameError: name '_CachedForward' is not defined`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20120

Pulled By: driazati

Differential Revision: D15210046

fbshipit-source-id: 493716d38e0078bfe96fab3dc624ec029988cf1c
2019-05-06 17:41:09 -07:00
Michael Suo
47f5be164a allow classes to be used in their own methods (#20106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20106
ghimport-source-id: e85bebfae59ad7720cfa64c5f88d5d5fa742c221

Reviewed By: shannonzhu

Differential Revision: D15202261

Pulled By: suo

fbshipit-source-id: ae01b868c0939cecf650bd2b5ad8bb94312eaef7
2019-05-06 15:25:52 -07:00
Nikolay Korovaiko
7ddd5d06ed trace multiple methods (#19905)
Summary:
This PR adds a new trace API `trace_module` that will allow us to trace multiple methods as a part of a single `ScriptModule`

See the example below.

```python
        class Net(nn.Module):
            def __init__(self):
                super(Net, self).__init__()
                self.conv = nn.Conv2d(1, 1, 3)

            def forward(self, x):
                return self.conv(x)

            def weighted_kernel_sum(self, weight):
                return weight * self.conv.weight

        example_weight = torch.rand(1, 1, 3, 3)
        example_forward_input = torch.rand(1, 1, 3, 3)
        n = Net()
        inputs = {'forward' : example_forward_input, 'weighted_kernel_sum' : example_weight}
        module = torch.jit.trace_module(n, inputs)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19905

Differential Revision: D15200007

Pulled By: Krovatkin

fbshipit-source-id: 0354d973fe40cb6e58b395bd866df14e0fc29d5b
2019-05-03 16:21:58 -07:00
davidriazati
947fd9c3f5 More doc edits (#19929)
Summary:
* document `torch.jit.Attribute`
* add JIT one-liner to `README.md`
* misc clarity edits
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19929

Pulled By: driazati

Differential Revision: D15152418

fbshipit-source-id: dfee03f0a17300aaf453fcf17f418463288f66c2
2019-04-30 13:52:07 -07:00
davidriazati
4294dba981 Misc pickler improvements (#19638)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19638 [jit] Serialize attribute module as torch.jit._pickle**

* use `torch.jit._pickle` as the module for globals in the pickle program. Pickle will try to resolve these to the actual functions in `torch.jit._pickle.py` automatically (I believe this can also be overridden to point to whatever functions you want). This means that `pickle.load("my_model/attributes.pkl")` will work instead of having to use a custom `pickle.Unpickler`
* use `REDUCE` opcodes instead of `BUILD` to make use of the last bullet
* use a union in the unpickler to support globals better (+ any future metadata we might need that can't be stored in an `IValue`), this makes some of the code around `IntList`s clearer and lets us get rid of any lookbehind for opcodes
* pickle things as a tuple instead of a list (an immutable result is more semantically correct)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19638

Pulled By: driazati

Differential Revision: D15111203

fbshipit-source-id: 526c6c2b63a48eb1cba1c658045a7809730070dd
2019-04-29 13:45:10 -07:00
Michael Suo
a25b79531c use fully qualified name for ScriptClasses (#19239)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19239
ghimport-source-id: 830aad6dc11d2a7247760a9c7c9fc8556f70a706

Differential Revision: D14928293

Reviewed By: eellison

Pulled By: suo

fbshipit-source-id: d2efa5d7f7397526083278d6650b9cee8d967b1a
2019-04-26 19:17:21 -07:00
davidriazati
dc67d9f3b9 Cleanup documentation (#19584)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19584 [jit] Cleanup documentation**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19584

Pulled By: driazati

Differential Revision: D15104801

fbshipit-source-id: 87391fd62ee92b615e680469f8bd9a1ac654be7e
2019-04-26 15:43:07 -07:00
Zachary DeVito
330990d878 Serialize first-class version of functions (#19723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19723
ghimport-source-id: 7f7ec6200c3b42d19046a3e228a3d82212697f14

Reviewed By: jamesr66a

Differential Revision: D15078533

Pulled By: zdevito

fbshipit-source-id: fe421afab9607ee942f6d200f04bb6335fc0aa97
2019-04-25 15:53:07 -07:00
Zachary DeVito
6cb1b994d8 Trace directly into first-class module form. (#19722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19722
ghimport-source-id: b024666feccb324f5ba9aae4a6301723e04d9846

Reviewed By: jamesr66a

Differential Revision: D15078535

Pulled By: zdevito

fbshipit-source-id: b866b31c1864a090c545560cbecee81e34ad2d16
2019-04-25 15:53:03 -07:00
Zachary DeVito
31524bda1f @torch.jit.script(fn) now is a torch.jit.Function (#19721)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19721
ghimport-source-id: b4f5024adc845a82dc5197d19aab1496bf85089f

Reviewed By: jamesr66a

Differential Revision: D15078534

Pulled By: zdevito

fbshipit-source-id: 408d3a871302c5ac5d6426dc5de567f2188ebf4c
2019-04-25 15:53:00 -07:00
Zachary DeVito
12f7c2dea3 pybind CompilationUnit and Function directly (#19720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19720
ghimport-source-id: c5829234dbbe8f7fe719ffce3fa92ce5198ffd21

Reviewed By: jamesr66a

Differential Revision: D15078536

Pulled By: zdevito

fbshipit-source-id: e617de31fc907a408fb50e18d9358dfd64de1f9e
2019-04-25 15:52:57 -07:00
davidriazati
c08f3d06c3 Add some of nn.init to weak script (#19640)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19640 [jit] Add some of nn.init to weak script**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19640

Pulled By: driazati

Differential Revision: D15065332

fbshipit-source-id: 30df9f02e527cd5e5ebe34b7e003444eae96c66d
2019-04-24 17:00:48 -07:00
Zachary DeVito
87a6974193 Make it possible for self.forward to return a ScriptMethod (#19217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19217
ghimport-source-id: 6fdd7f5ac041dae950b47ca316f30682ede0b083

Reviewed By: suo

Differential Revision: D14922120

Pulled By: zdevito

fbshipit-source-id: 5e82e5d7ee72df6f401146d2519c80ea336ff40e
2019-04-24 11:14:34 -07:00
Eric Faust
593bb145ce Allow passing dicts as trace inputs. (#18092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18092

Previously, tracing required all inputs to be either tensors
or tuples of tensors. Now, we allow users to pass dicts as well.

Differential Revision: D14491795

fbshipit-source-id: 7a2df218e5d00f898d01fa5b9669f9d674280be3
2019-04-18 23:52:00 -07:00
Michael Suo
1b1d1c9837 allow bools to be used as attributes (#19440)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19440
ghimport-source-id: 9c962054d760526bf7da324b114455fcb1038521

Differential Revision: D15005723

Pulled By: suo

fbshipit-source-id: 75fc87ae33894fc34d3b913881defb7e6b8d7af0
2019-04-18 18:13:21 -07:00
Alexandr Morev
da4ff17eee math module support (#19115)
Summary:
This PR refers to issue [#19026](https://github.com/pytorch/pytorch/issues/19026)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19115

Differential Revision: D14936053

Pulled By: driazati

fbshipit-source-id: 68d5f33ced085fcb8c10ff953bc7e99df055eccc
2019-04-16 10:44:07 -07:00
Zachary DeVito
b9c20d5224 graph_for based on last_optimized_executed_graph (#19142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19142
ghimport-source-id: 822013fb7e93032c74867fc77c6774c680aef6d1

Differential Revision: D14888703

Pulled By: zdevito

fbshipit-source-id: a2ad65a042d08b1adef965c2cceef37bb5d26ba9
2019-04-16 09:17:53 -07:00
Nikolay Korovaiko
ada10ad416 Ellipsis in subscript
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17763
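
A minimal sketch of the new syntax (shapes are illustrative):
```python
import torch

@torch.jit.script
def first_channel(x):
    # Ellipsis is now accepted in subscript expressions
    return x[..., 0]

print(first_channel(torch.rand(2, 3, 4)).shape)  # torch.Size([2, 3])
```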

Differential Revision: D14893533

Pulled By: Krovatkin

fbshipit-source-id: c46b4e386d3aa30e6dc03e3052d2e5ff097fa74b
2019-04-15 22:10:44 -07:00
Zachary DeVito
ef406ee925 First class modules in the compiler, round 2 (#19167)
Summary:
This PR propagates the use of first-class module objects into the compiler. This creates a transitional state where:

* compiler.cpp creates Graphs where `self` is a Module class and attributes/parameters/buffers/submodules are looked up with `prim::GetAttr`
* GraphExecutor still runs "lowered graphs" where the self object has been removed by a compiler pass `lower_first_class_method`.
* Tracing still creates "lowered graphs", and a pass "lift_lowered_method" creates a first-class method graph for them.

* This PR separates out Method and Function. A script::Function is a pure Graph with no `self` bound.  Similar to Python, a script::Method is just a bound `self` and its underlying `script::Function`.
* This PR also separates CompilationUnit from Module. A CompilationUnit is just a list of named script::Functions.  Classes have a CompilationUnit holding the class methods, and Modules also have a CompilationUnit holding their Methods. This avoids the weird circular case Module --has a-> Class -> has a -> Module ...

Details:
* In this transitional state, we maintain two copies of a Graph, first-class module and lowered. The first-class one has a `self` argument that is the module's class type. The lowered one is the lowered graph that uses the initial_ivalues inputs.
* When defining lowered methods using `_defined_lowered` we immediately create the first-class equivalent. The reverse is done lazily, creating lowered_methods on demand from the class.
* The two-way conversions will be deleted in a future PR when the executor itself runs first-class objects. However, this requires more changes to (1) the traces, (2) the python bindings, and (3) the onnx export pass, and would make this PR way too large.
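
A rough Python-level sketch of the Method/Function split described above, using this era's `ScriptModule` API (the module body is illustrative):
```python
import torch

@torch.jit.script
def plus_one(x):
    # a free Function: a Graph with no bound `self`
    return x + 1

class M(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        # a Method: `self` bound to its underlying Function
        return x + 1

print(plus_one(torch.zeros(1)), M()(torch.zeros(1)))
```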
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19167

Differential Revision: D14891966

Pulled By: zdevito

fbshipit-source-id: 0b5f03118aa65448a15c7a7818e64089ec93d7ea
2019-04-11 13:55:48 -07:00
Zachary DeVito
f5165ade5b Revert D14842057: Compiler uses first-class modules**
Differential Revision:
D14842057

Original commit changeset: ca6e7b5a4380

fbshipit-source-id: e8f1862a59bf20d5f78648b2fdc53a8b3750ead3
2019-04-11 06:17:01 -07:00
Zachary DeVito
5e1f0b2a07 Compiler uses first-class modules** (#19043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19043
ghimport-source-id: 0c9e80d5f35654af6d472abd5643bff3e9eb9ddf

Differential Revision: D14842057

Pulled By: zdevito

fbshipit-source-id: ca6e7b5a43805240f40b84d30e54495061067dc0
2019-04-11 00:00:48 -07:00
Lu Fang
0c237f1383 Fix the duplication problem in _unique_state_dict (#18139)
Summary:
Since `parameter.data` will create a new `torch.Tensor` each time, we get duplicate tensors when calling `_unique_state_dict`. Deduplicate before creating the new tensor.
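
The underlying behavior, for reference (a minimal sketch):
```python
import torch

p = torch.nn.Parameter(torch.rand(2))
# every .data access constructs a fresh Python Tensor object...
print(p.data is p.data)                        # False
# ...even though they all share the same storage
print(p.data.data_ptr() == p.data.data_ptr())  # True
```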
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18139

Reviewed By: dzhulgakov

Differential Revision: D14511262

Pulled By: houseroad

fbshipit-source-id: cb69795d0b6509721220650bbb19edeb3459a503
2019-04-03 23:16:44 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.
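
An illustration of the kind of site in question (hypothetical example, not from the diff):
```python
from typing import Optional  # noqa: F401 (used only in the `# type:` comment below)

x = None  # type: Optional[int]
```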

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
David Riazati
24db1667da Attribute serialization improvements (#18188)
Summary:
* adds attributes to `ScriptModule.__getattr__` so they can be accessed in Python after re-importing
* full support for all the possible values for an `int64_t`
    * this necessitated a bunch more `pushWhatever` functions, so re-introduced a templated version to cut down on duplicate code
* tests to validate references / value sharing works
* adds `torch.jit.Unpickler`, which people can use to deserialize the pickle files into Python, and which serves as a quick reference on how to do this without PyTorch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18188

Differential Revision: D14527490

Pulled By: driazati

fbshipit-source-id: efd15579cc04aa2e28c4b2c9490d82d849dee559
2019-03-29 19:10:12 -07:00
James Reed
85f36014e2 Experimental logging/counters API (#18235)
Summary:
This defines a generic counters API that users can utilize to provide monitoring functionality in e.g. a production service. We expose both counters for runtime internals as well as a TorchScript API to create user-defined counters. Synopsis of the API:

- `torch/csrc/jit/script/logging.h` specifies the externally-facing API in C++
- `torch/jit/_logging.py` specifies the Python API

We use an interface, `LoggerBase`, to define the interactions between users and a logging backend. Implementing a subclass of `LoggerBase` allows the user to handle these events in a custom way, such as logging into a DB or calling into an infra-specific counters API.

From the frontend perspective, we can create log events in two ways:
1. We provide an `add_stat_value(name, val)` function. This calls into the Logger backend with a key/value pair. For example, we might call `add_stat_value('foo', 1)` to bump an event counter.
2. We provide a `time_point()` function to record a timestamp in nanoseconds. This can be used in conjunction with `add_stat_value` to record runtime wall clock durations.

Examples of frontend usage can be found in `test_jit.py TestLogging`.

We provide a trivial `LockingLogger` implementation as an example and for testing purposes. It is likely not ready for production usage. It demonstrates that a backend implementing the API can do things like specify aggregation types and report these aggregate stats via the `get_counters()` API.
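
A hedged usage sketch based on the synopsis above (the module binding follows the `torch/jit/_logging.py` path listed; the import style and counter names are assumptions):
```python
import torch
from torch.jit import _logging  # assumed binding for torch/jit/_logging.py

@torch.jit.script
def work(x):
    return x * 2

start = _logging.time_point()             # timestamp in nanoseconds
work(torch.rand(3))
elapsed = _logging.time_point() - start

_logging.add_stat_value('work.calls', 1)  # bump an event counter
_logging.add_stat_value('work.duration_ns', elapsed)
```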
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18235

Differential Revision: D14545060

Pulled By: jamesr66a

fbshipit-source-id: 04099543a1898cfdd411511e46e03d5dce9b4881
2019-03-29 17:14:03 -07:00
Elias Ellison
ff4b6d1a49 Delete batch tensor (#18575)
Summary:
Deleting batch tensor since we are no longer maintaining the project, and keeping it functional is blocking other improvements.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18575

Differential Revision: D14671126

Pulled By: eellison

fbshipit-source-id: b42d5b699c4d12171ed95e6d3a977532167f0d2c
2019-03-28 23:13:27 -07:00
Michael Suo
10751d5fb4 python interop for script classes (#18148)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18148
ghimport-source-id: 40a9d745dc9aeba53d098743323fcbd50ca65137

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18148 py interop**

Support for converting classes across the Python–TorchScript boundary. Like other TorchScript values, ScriptClasses are native Python values when used in Python and IValues when used in TorchScript.

Notably, there is a copy across this boundary, which will be surprising to users who will expect standard Python reference semantics. I have some ideas for fixing that, but it's a more involved process.
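
A rough sketch of the interop (the class and function are illustrative):
```python
import torch

@torch.jit.script
class Pair(object):
    def __init__(self, a: torch.Tensor, b: torch.Tensor):
        self.a = a
        self.b = b

@torch.jit.script
def swap(p: Pair) -> Pair:
    return Pair(p.b, p.a)

p = Pair(torch.zeros(1), torch.ones(1))  # a native Python value here
q = swap(p)                              # crosses the boundary as an IValue (copied)
print(q.a, q.b)
```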

Reviewed By: jamesr66a

Differential Revision: D14526259

fbshipit-source-id: 5916e3032488a42dc7da756c1826d7c040a21ebd
2019-03-22 16:30:04 -07:00
Nikolay Korovaiko
2ad2b2c7b1 Support for basic list comprehensions (#17267)
Summary:
Supports the following syntax:
```
        @torch.jit.script
        def comp(l):
            # type: (List[float]) -> List[float]

            n = [x * 3 for x in l]
            return n
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17267

Differential Revision: D14581119

Pulled By: Krovatkin

fbshipit-source-id: 6fd091a8a9ab607386ac58fda6ad88bf8aea380e
2019-03-22 15:25:13 -07:00
Edward Yang
d1497debf2 Fix B903 lint: save memory for data classes with slots/namedtuple (#18184)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18184
ghimport-source-id: 2ce860b07c58d06dc10cd7e5b97d4ef7c709a50d

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18184 Fix B903 lint: save memory for data classes with slots/namedtuple**
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint
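
For reference, the pattern B903 asks for (a generic illustration, not code from this stack):
```python
import collections

class PointSlots(object):
    # __slots__ avoids allocating a per-instance __dict__, saving memory
    # for classes that only store data
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x = x
        self.y = y

# a namedtuple achieves the same thing for immutable data
PointTuple = collections.namedtuple('PointTuple', ['x', 'y'])
```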

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14530872

fbshipit-source-id: e26cecab3a8545e7638454c28e654e7b82a3c08a
2019-03-21 09:10:30 -07:00
Edward Yang
ba81074c40 Fix B902 lint error: invalid first argument. (#18181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18181
ghimport-source-id: 9c23551584a1a1b0b7ac246367f3a7ae1c50b315

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* **#18181 Fix B902 lint error: invalid first argument.**
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint

A variety of sins were committed:
- Some code was dead
- Some code was actually a staticmethod
- Some code just named it the wrong way
- Some code was purposely testing the omitted case

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14530876

fbshipit-source-id: 292a371d9a76ddc7bfcfd38b6f0da9165290a58e
2019-03-21 09:10:28 -07:00
David Riazati
3d44305e9d Attribute serialization (#17423)
Summary:
Allows serialization/loading of attributes (`IValue`s of any type).
* metadata (attribute name, type) is stored in the `model.json`
* The binary format is a subset of the `pickle` module that supports the operations necessary for `IValue`s
    * Attributes are serialized, in the order they are defined on a module, into a list in a single `attributes` file, with submodule attributes coming first. This order directly matches the order in which attributes are listed in `model.json`
    * This can be inspected in Python with `pickle.load()` or with `pickletools` (PyTorch need not be installed for this to work)
        * A class is used to store a tensor's index into the tensor table of the model, so to unpickle the file you have to use a custom Unpickler:
        ```python
        import pickle

        class TensorID(object):
            def __setstate__(self, id):
                self.id = id

        class JitUnpickler(pickle.Unpickler):
            def find_class(self, module, name):
                if module == '__main__' and name == 'TensorID':
                    return TensorID

        JitUnpickler(open("my_model/attributes.pkl", "rb")).load()
        ```
    * pickle format: https://svn.python.org/projects/python/trunk/Lib/pickletools.py
* It currently does not support or guarantee that anything saved out with `pickle` directly (i.e. if you edit `attributes` with `pickle` yourself) rather than with our tools will be imported correctly

Also fixes #17683 and fixes #16367

Followup Work:
* document format / choice of pickle: #17951
* create an example
* list specializations
* int size specializations, large binputs
* do a first pass over attributes to output only necessary `BINPUT` ops
* attribute reassignment (e.g. `self.my_attribute = new_value`)
* `tensor.save("some_checkpoint.pkl")` support with tensors embedded in Pickle file
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17423

Differential Revision: D14470965

Pulled By: driazati

fbshipit-source-id: 6a21a9939efdbe59b4bc57fd31d6d630bab5297e
2019-03-18 18:18:22 -07:00
Lu Fang
4dcb4b1601 Add more hint in the JIT tracer (#17957)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17957

So developers know what action to take when a model contains a nondeterministic node.

Reviewed By: dzhulgakov

Differential Revision: D14435923

fbshipit-source-id: 12d930185852f78c54efc8e90c51aa7c7c7faab5
2019-03-13 00:56:59 -07:00
James Reed
81e025d9ac Clarify JIT docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17846

Differential Revision: D14400363

Pulled By: jamesr66a

fbshipit-source-id: 862316b5fd95526b6edebeca19d2cc522779df11
2019-03-09 23:13:31 -08:00
Pritam Damania
24e7b824e0 Add metadata for torch jit TracedModules. (#17640)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17640

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17311

I've extended our model metadata framework in this diff to support
traced modules as well. Re-used a lot of components from the previous
implementation of ScriptModule metadata.

Tracing is a little different from Scripting since you can't just create a
subclass of TopLevelTracedModule (the type returned by torch.jit.trace) and attach
metadata the way we did for ScriptModule. As a result, I've introduced a
separate API torch.fb.jit_trace which returns an instance of
TracedModuleWithMetadata which is a subclass of TopLevelTracedModule. As a
result, we can now attach metadata to this instance.

Reviewed By: dzhulgakov

Differential Revision: D14117966

fbshipit-source-id: 3eee5eef733cb8d6a219c02e2f41d08698eca326
2019-03-09 21:37:15 -08:00
David Riazati
a2381fa346 Add module attributes (#17309)
Summary:
Similar to `nn.Parameter`s, this PR lets you store any `IValue` as an attribute on a `ScriptModule` (only from the Python front-end currently). To mark something as an attribute, it should be wrapped in `jit.Attribute(value, type)` (e.g. `self.table = torch.jit.Attribute(table, Dict[str, torch.Tensor])`)
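
A hedged sketch of the usage (the table contents and method are illustrative):
```python
import torch
from typing import Dict

class M(torch.jit.ScriptModule):
    def __init__(self):
        super(M, self).__init__()
        # store an arbitrary IValue as a typed attribute
        self.table = torch.jit.Attribute({'w': torch.rand(2)},
                                         Dict[str, torch.Tensor])

    @torch.jit.script_method
    def forward(self, key):
        # type: (str) -> torch.Tensor
        return self.table[key]

print(M()('w'))
```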

Followup Work:
* (de)serializing for use in C++
* change `self.training` to be a `bool` attribute instead of a buffer
* mutable attributes
* string frontend support
* documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17309

Differential Revision: D14354316

Pulled By: driazati

fbshipit-source-id: 67e08ab5229366b67fbc837e67b58831a4fb3318
2019-03-07 10:44:10 -08:00
Elias Ellison
561037aef8 use flake8-mypy (#17721)
Summary:
Use flake8 installed with mypy checks so that our linter matches fbcode. Mypy type errors also provide a valuable signal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17721

Differential Revision: D14357778

Pulled By: eellison

fbshipit-source-id: d8c9ea3fe3b5f550c3b70fe259e0eabf95e4c92d
2019-03-07 09:15:54 -08:00
Elias Ellison
221edddd18 disallow shape analysis with resize ops (#17518)
Summary:
`resize_` and `resize_as_` resize the input tensor in place. Because our shape analysis
is flow-invariant, we don't do shape analysis on any op that relies on a Tensor that can alias a resized Tensor.

E.g. in the following graph, by the time `x += 10` runs, `x` may have been resized:
```
@torch.jit.script
def test(x, y):
    for i in range(10):
        x += 10
        x.resize_as_(y)  # resizes x in place, so inferred shapes for x are unreliable
    return x
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17518

Differential Revision: D14249835

Pulled By: eellison

fbshipit-source-id: f281b468ccb8c29eeb0f68ca5458cc7246a166d9
2019-02-27 19:02:09 -08:00
Elias Ellison
e5b4baab40 new batch of expect file removals
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17486

Differential Revision: D14218963

Pulled By: eellison

fbshipit-source-id: dadc8bb71e756f47cdb04525d47f66c13ed56d16
2019-02-26 08:20:43 -08:00
Michael Suo
2cdbb140e6 user defined types (#17314)
Summary:
First pass at user defined types. The following is contained in this PR:
- `UserType` type, which contains a reference to a module with all methods for the type, and a separate namespace for data attributes (map of name -> TypePtr).
- `UserTypeRegistry`, similar to the operator registry
- `UserObject` which is the runtime representation of the user type (just a map of names -> IValues)
- `UserTypeValue` SugaredValue, to manage getattr and setattr while generating IR, plus compiler.cpp changes to make that work.
- Frontend changes to get `torch.jit.script` to work as a class decorator
- `ClassDef` node in our AST.
- primitive ops for object creation, setattr, and getattr, plus alias analysis changes to make mutation safe.

Things that definitely need to get done:
- Import/export, python_print support
- String frontend doesn't understand class definitions yet
- Python interop (using a user-defined type outside TorchScript) is completely broken
- Static methods (without `self`) don't work

Things that are nice but not essential:
- Method definition order shouldn't matter (right now you can only reference a method that's already been defined)
- Class definitions can only contain defs; no other expressions are supported.

Things I definitely won't do initially:
- Polymorphism/inheritance
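
A hedged end-to-end sketch of the decorator frontend described above (method bodies are illustrative; note the definition-order limitation):
```python
import torch

@torch.jit.script
class Accumulator(object):
    def __init__(self, start: torch.Tensor):
        self.total = start                  # data attribute (name -> type)

    def add(self, x: torch.Tensor) -> torch.Tensor:
        self.total = self.total + x         # setattr via the primitive ops above
        return self.total

    def add_twice(self, x: torch.Tensor) -> torch.Tensor:
        # `add` must already be defined above, per the limitation noted
        self.add(x)
        return self.add(x)

acc = Accumulator(torch.zeros(1))
print(acc.add_twice(torch.ones(1)))  # tensor([2.])
```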
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17314

Differential Revision: D14194065

Pulled By: suo

fbshipit-source-id: c5434afdb9b39f84b7c85a9fdc2891f8250b5025
2019-02-26 01:34:07 -08:00
David Riazati
2370c989d8 Add LSTM to standard library (#15744)
Summary:
**WIP**

Attempt 2 at #14831

This adds `nn.LSTM` to the jit standard library. Necessary changes to the module itself are detailed in comments. The main limitation is the lack of a true `PackedSequence`; instead, this PR uses an ordinary `tuple` to stand in for it.

Most of the new code in `rnn.py` is copied from `nn.RNNBase` into `nn.LSTM` to specialize it, since for LSTM `hx` is a `Tuple[Tensor, Tensor]` rather than just a `Tensor` as in the other RNN modules.

As a hack it adds an internal annotation `@_parameter_list` to mark that a function returns all the parameters of a module. The weights for `RNN` modules are passed to the corresponding op as a `List[Tensor]`. In Python this has to be gathered dynamically since Parameters could be moved from CPU to GPU or be deleted and replaced (i.e. if someone calls `weight_norm` on their module, #15766), but in the JIT parameter lists are immutable, hence a builtin to handle this differently in Python/JIT.
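
A hedged sketch of what this enables (the sizes and wrapper module are illustrative):
```python
import torch

class Tagger(torch.jit.ScriptModule):
    def __init__(self):
        super(Tagger, self).__init__()
        # nn.LSTM is now scriptable as part of the standard library
        self.lstm = torch.nn.LSTM(10, 20)

    @torch.jit.script_method
    def forward(self, x):
        # returns (output, (h_n, c_n)); hx defaults to zeros
        out, _ = self.lstm(x)
        return out

print(Tagger()(torch.rand(5, 3, 10)).shape)  # torch.Size([5, 3, 20])
```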
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15744

Differential Revision: D14173198

Pulled By: driazati

fbshipit-source-id: 4ee8113159b3a8f29a9f56fe661cfbb6b30dffcd
2019-02-21 16:24:19 -08:00