Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986
Follows the approach of the https://github.com/pytorch/pytorch/pull/33783 stack to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
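For context, a minimal sketch of the boolean dispatch idea, using the `boolean_dispatch` helper from `torch._jit_internal`; the branch bodies below are illustrative, not the PR's actual implementation:
```
import torch
from torch._jit_internal import boolean_dispatch

def _unique_with_counts(input: torch.Tensor, return_counts: bool = True):
    # branch with a fixed (Tensor, Tensor) return type
    out, counts = torch.unique(input, return_counts=True)
    return out, counts

def _unique_plain(input: torch.Tensor, return_counts: bool = False):
    # branch with a fixed Tensor return type
    return torch.unique(input)

# Dispatch on the compile-time value of `return_counts`, so the compiler
# sees a single fixed return type in each branch:
unique = boolean_dispatch(
    arg_name="return_counts",
    arg_index=1,
    default=False,
    if_true=_unique_with_counts,
    if_false=_unique_plain,
    module_name=__name__,
    func_name="unique",
)
```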
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156
Differential Revision: D21504449
Pulled By: eellison
fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
Summary:
Make it so that non-`nn.Module` classes do not need to be annotated with `torch.jit.script`.
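As a rough illustration of the new behavior (not the PR's test code), a plain class used from scripted code should now be compiled recursively without a decorator:
```
import torch

class Pair(object):  # no @torch.jit.script annotation needed anymore
    def __init__(self, a: int, b: int):
        self.a = a
        self.b = b

    def total(self) -> int:
        return self.a + self.b

@torch.jit.script
def use_pair() -> int:
    # Pair is compiled on first use from scripted code
    return Pair(1, 2).total()
```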
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38050
Differential Revision: D21482654
Pulled By: eellison
fbshipit-source-id: 22689e4d7a33f6e1574b9495cff29a1fe6abb910
Summary:
**Summary**
This commit detects and prohibits the case in which `typing.List` is
used as an annotation without a type argument (i.e. bare `typing.List`
instead of `typing.List[T]`).
At present, `typing.List` is always assumed to have one argument, and
when it is used without one, `typing.List.__args__` is nonempty, with
`__args__[0]` set to some `typing.TypeVar` instance, which has no JIT type equivalent.
Consequently, trying to convert `typing.List` to a JIT type results in
a `c10::ListType` with `nullptr` for its element type, which can cause
a segmentation fault.
This is fixed by returning a `ListType` from
`jit.annotations.try_ann_to_type` only if the element type is converted
successfully to a JIT type and returning `None` otherwise.
**Test Plan**
I ran the code from the issue (https://github.com/pytorch/pytorch/issues/37530) that reported this problem and also ran some unit tests.
*Before*
```
$ python3 segfault.py
Segmentation fault (core dumped)
```
*After*
```
$ python3 segfault.py
Traceback (most recent call last):
...
RuntimeError:
Unknown type name 'List':
File "segfault.py", line 9
classmethod
def cat(cls, box_lists: List):
~~~~ <--- HERE
return cls(torch.cat([x for x in box_lists]))
'Boxes.cat' is being compiled since it was called from 'Boxes'
File "segfault.py", line 13
def f(t: torch.Tensor):
b = Boxes(t)
~~~~~ <--- HERE
c = Boxes(torch.tensor([3, 4]))
return Boxes.cat([b, c])
'Boxes' is being compiled since it was called from 'f'
File "segfault.py", line 13
def f(t: torch.Tensor):
b = Boxes(t)
~~~~~~~~~~~ <--- HERE
c = Boxes(torch.tensor([3, 4]))
return Boxes.cat([b, c])
```
**Fixes**
This pull request fixes https://github.com/pytorch/pytorch/issues/37530.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38130
Differential Revision: D21485284
Pulled By: SplitInfinity
fbshipit-source-id: 9b51ef6340485a24c8b7cfb85832d4668b8ac51a
Summary:
`del` in Python supports multiple operands, but the PyTorch C++ frontend doesn't support that. To be consistent across frontends, we decided to throw an exception when `del` is found with multiple operands inside TorchScript.
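A minimal sketch of the newly-prohibited pattern (valid Python, rejected by TorchScript):
```
import torch

@torch.jit.script  # scripting this now raises: multi-operand del is unsupported
def f() -> int:
    a = 1
    b = 2
    del a, b  # fine in Python, an error in TorchScript
    return 0
```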
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38089
Test Plan: Unit tests in test/jit/test_builtins.py
Differential Revision: D21478900
Pulled By: SplitInfinity
fbshipit-source-id: 1cbd61301680c5d6652ef104996178cefcdd3716
Summary:
Also move the ignores for imports to the bottom in `mypy.ini`; those are much less interesting, so start with the stuff people want to work on.
Second commit tests the instructions: remove an ignore, fix the issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37594
Differential Revision: D21434858
Pulled By: ezyang
fbshipit-source-id: 4f1a6868cdb4cb59d072bcf105f48c3a5ba3ff98
Summary:
We can implement this as a builtin instead of as a registered op.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37886
Differential Revision: D21414329
Pulled By: eellison
fbshipit-source-id: 6e130fa83fbf7ba4d4601f509cb169a2fa804108
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`
This PR adds support for indexing through list which is equivalent to tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`
Note: for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616). Now that it's fixed, we probably want to mark it as BC-breaking.
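As an illustration of the corrected semantics (not the PR's test code):
```
import torch

@torch.jit.script
def pick(x: torch.Tensor) -> torch.Tensor:
    return x[[0, 2]]  # now equivalent to x[torch.tensor([0, 2])]

x = torch.arange(12).reshape(3, 4)
assert torch.equal(pick(x), x[torch.tensor([0, 2])])
```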
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848
Reviewed By: suo
Differential Revision: D21409840
Pulled By: ailzhang
fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842
Fixes https://github.com/pytorch/pytorch/issues/23993.
Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.
Test Plan: Imported from OSS
Differential Revision: D21406478
Pulled By: suo
fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
Summary:
xref gh-32838, gh-34032
This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which will build out `autofunction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents pointing to the actual single-class or single-function documentation pages.
Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is to add names to `__all__` when adding them to `globals()` in `torch.__init__.py`.
I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419
Differential Revision: D21337640
Pulled By: ezyang
fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464
Fixes https://github.com/pytorch/pytorch/issues/23993.
There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.
2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.
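A small sketch of the memoization idea (illustrative, not the exact `_clone_inputs` code):
```
import copy
import torch

sample = torch.ones(1)
inputs = (sample, sample, sample)

# Sharing one memo dict across the copies keeps repeated objects aliased:
memo = {}
cloned = tuple(copy.deepcopy(t, memo) for t in inputs)
assert cloned[0] is cloned[1] is cloned[2]  # aliasing preserved
assert cloned[0] is not sample              # but still a genuine copy
```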
Test Plan: Imported from OSS
Differential Revision: D21297549
Pulled By: suo
fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615
Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).
Test Plan: CI
Differential Revision: D20842886
Pulled By: dreiss
fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
Summary:
This test was failing because caching resulted in a function with multiple execution plans, rather than multiple functions with a single execution plan each as the test writer intended.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35847
Differential Revision: D20839674
Pulled By: Krovatkin
fbshipit-source-id: 68f41610a823d94c1e744c85ac72652c741d73ae
Summary:
Previously we were copying the bound method of the original class to the
new script module class, which caused `self` to be wrong. This PR
changes it so we fetch the unbound function, bind it to the new
script module, and then attach it to the module.
Fixes #28280
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36546
Pulled By: driazati
Differential Revision: D21023329
fbshipit-source-id: 6b3f8404700860151792f669a9c02fbd13365272
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277
This PR introduces a flag to the tracer that guards risky behaviors
like adding a list/dict as the output of the tracer. Currently, to avoid
breaking BC for users, we throw a warning if the tracer output is a list, and
will throw an error when the tracer output is a dict to enforce use of this
flag (next PR).
Test Plan: Imported from OSS
Differential Revision: D20998157
Pulled By: wanchaol
fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35042
Removing python2 tests and some compat code in torch.jit. Check if dependent projects and external tests have any issues after these changes.
Test Plan: waitforsandcastle
Reviewed By: suo, seemethere
Differential Revision: D18942633
fbshipit-source-id: d76cc41ff20bee147dd8d44d70563c10d8a95a35
Summary:
Stacked PRs
* #34938 - [jit] Remove stray `script`
* **#34935 - [jit] Add lazy script decorator**
Some users maintain libraries of code that is largely traceable but not
scriptable. However, some functions may need to be `torch.jit.script`ed if
they contain control flow, so that the tracer will use the compiled version.
This, however, impacts library startup time as in #33418, so this PR adds
a workaround in the form of `torch.jit._lazy_script_while_tracing`
that will only initialize the compiler if the function is called while
actually tracing.
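A rough usage sketch based on the description above (assuming the decorator behaves as stated; compilation is deferred until the call happens under tracing):
```
import torch

@torch.jit._lazy_script_while_tracing
def pick_branch(x):
    # control flow that plain tracing would bake into a single branch
    if bool(x.sum() > 0):
        return x
    return -x

def forward(x):
    return pick_branch(x) * 2

# the compiler is only initialized here, when pick_branch is hit while tracing:
traced = torch.jit.trace(forward, torch.ones(2))
```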
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34935
Pulled By: driazati
Differential Revision: D20569778
fbshipit-source-id: d87c88c02b1abc86b283729ab8db94285d7d4853
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33981
Okay, it turns out that https://github.com/pytorch/pytorch/pull/29342
deletes actually useful things from the resulting Python module. In
particular, people like having `ignore`'d methods attached so that they
can invoke them from Python.
Test Plan: Imported from OSS
Differential Revision: D20171650
Pulled By: suo
fbshipit-source-id: 71862e932c6a56cd055d0cff6657887ee0ceb9a8
Summary:
With this PR, we can now support left and right shift operators in the JIT engine for <int, int> and <Tensor, int>.
Updated tests pass as expected:
```
> python test/test_jit.py
...
Ran 2427 tests in 84.861s
OK (skipped=139, expected failures=1)
```
Running the following code with Python results in the output below:
```
> cat ~/expressions.py
import torch

@torch.jit.script
def fn(a, b):
    # type: (int, int)
    return (
        a << b,  # supported
        b >> a,  # supported
        a & b,
        a | b,
        a ^ b
    )

print(fn.graph)
```
```
> python ~/expressions.py
graph(%a.1 : int,
      %b.1 : int):
  %4 : int = aten::leftshift(%a.1, %b.1) # /home/ince/expressions.py:7:8
  %7 : int = aten::rightshift(%b.1, %a.1) # /home/ince/expressions.py:8:8
  %10 : int = aten::__and__(%a.1, %b.1) # /home/ince/expressions.py:9:8
  %13 : int = aten::__or__(%a.1, %b.1) # /home/ince/expressions.py:10:8
  %16 : int = aten::__xor__(%a.1, %b.1) # /home/ince/expressions.py:11:8
  %17 : (int, int, int, int, int) = prim::TupleConstruct(%4, %7, %10, %13, %16)
  return (%17)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34563
Differential Revision: D20434209
Pulled By: tugrulince
fbshipit-source-id: 886386c59755106e17b84778b8e495b80a6269cd
Summary:
`torch.nn.functional.interpolate` was written as a builtin op when we scripted the standard library, because it has four possible overloads. As a result, whenever we make a change to `interpolate`, we need to make changes in two places, and it also makes it impossible to optimize the interpolate op. The builtin is tech debt.
I talked with ailzhang, and the symbolic script changes are good to remove (I guess that makes a third place we needed to re-implement interpolate).
I'm trying to get rid of unnecessary builtin operators because we're standardizing mobile bytecode soon, so we should try to get this landed as soon as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34514
Differential Revision: D20391089
Pulled By: eellison
fbshipit-source-id: abc84cdecfac67332bcba6b308fca4db44303121
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515
Once upon a time we thought this was necessary. In reality it is not, so
removing it.
For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.
There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.
Test Plan: Imported from OSS
Differential Revision: D20353503
Pulled By: suo
fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
Summary:
Stacked PRs
* #33474 - [jit] Remove list specializations from pickler
* **#33255 - [jit] Add type tags to lists/dicts in pickle**
This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255
Pulled By: driazati
Differential Revision: D20346780
fbshipit-source-id: c8534954ef4adb2e3c880401acbee30cd284f3db
Summary:
I think this was added when we couldn't compile the function itself. Now we can.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34171
Differential Revision: D20269960
Pulled By: eellison
fbshipit-source-id: 0a60458d639995d9448789c249d405343881b304
Summary:
Currently, putting `outputs: List[Tensor]` instead of `outputs: List[Tensor] = []` in your JITed code results in:
```
Traceback (most recent call last):
File "custom_lstms.py", line 453, in <module>
test_script_stacked_bidir_rnn(5, 2, 3, 7, 4)
File "custom_lstms.py", line 404, in test_script_stacked_bidir_rnn
rnn = script_lstm(input_size, hidden_size, num_layers, bidirectional=True)
File "custom_lstms.py", line 62, in script_lstm
other_layer_args=[LSTMCell, hidden_size * dirs, hidden_size]))
File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1267, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 305, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
init_fn(script_module)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
init_fn(script_module)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
init_fn(script_module)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 348, in create_script_module_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "/home/apaszke/pytorch/torch/jit/__init__.py", line 1612, in _construct
init_fn(script_module)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 340, in init_fn
scripted = create_script_module_impl(orig_value, sub_concrete_type, infer_methods_to_compile)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 317, in create_script_module_impl
stubs = stubs_fn(nn_module)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 511, in infer_methods_to_compile
stubs.append(make_stub_from_method(nn_module, method))
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 41, in make_stub_from_method
return make_stub(func)
File "/home/apaszke/pytorch/torch/jit/_recursive.py", line 34, in make_stub
ast = torch.jit.get_jit_def(func, self_name="RecursiveScriptModule")
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 173, in get_jit_def
return build_def(ctx, py_ast.body[0], type_line, self_name)
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 206, in build_def
build_stmts(ctx, body))
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 129, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 129, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 181, in __call__
return method(ctx, node)
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 294, in build_AnnAssign
rhs = build_expr(ctx, stmt.value)
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 180, in __call__
raise UnsupportedNodeError(ctx, node)
File "/home/apaszke/pytorch/torch/jit/frontend.py", line 116, in __init__
source_range = ctx.make_range(offending_node.lineno,
AttributeError: 'NoneType' object has no attribute 'lineno'
```
This patch makes the error message more reasonable:
```
torch.jit.frontend.UnsupportedNodeError: annotated assignments without assigned value aren't supported:
File "custom_lstms.py", line 221
# type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
inputs = reverse(input.unbind(0))
outputs: List[Tensor]
~ <--- HERE
for i in range(len(inputs)):
out, state = self.cell(inputs[i], state)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34133
Differential Revision: D20249076
Pulled By: ezyang
fbshipit-source-id: 40ec34ad38859f9fe56f379d3f8d08644b00fab9
Summary:
Stacked PRs
* #33474 - [jit] Remove list specializations from pickler
* **#33255 - [jit] Add type tags to lists/dicts in pickle**
This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255
Pulled By: driazati
Reviewed By: xman1979, Tianshu-Bao
Differential Revision: D19868637
fbshipit-source-id: 2f1826e6679a786ca209198690269f399a542c04
Summary:
This adds some machinery so that we use Python to resolve types to a value and the corresponding resolution logic in `annotations.py` instead of using the string.
This PR also marks a random test as `slowTest`, since it was taking > 1 min whereas all the other tests take < 10 seconds.
Fixes #31864, fixes #31950
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29623
Pulled By: driazati
Differential Revision: D20144407
fbshipit-source-id: ef3699f6b86039d8b4646ffc42c21bd1132d1681
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33504
Fix resolution of functions that are bound onto torch in torch/functional.py. This does not fix compilation of all of those functions; those will be done in follow-ups. Does torch.stft as a start.
Fixes #21478
Test Plan: Imported from OSS
Differential Revision: D20014591
Pulled By: eellison
fbshipit-source-id: bb362f1b5479adbb890e72a54111ef716679d127
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33508
Ever since we switched to not inlining by default, some users have
complained since they relied on inlining occurring to, e.g., process the
graph with some other tool. Add an `inlined_graph` for convenience in
those cases.
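A short sketch of the convenience this adds (assuming the `inlined_graph` accessor described above):
```
import torch

@torch.jit.script
def helper(x):
    return x + 1

@torch.jit.script
def f(x):
    return helper(x)

print(f.graph)          # un-inlined: shows a call to helper
print(f.inlined_graph)  # shows helper's body folded into f's graph
```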
Test Plan: Imported from OSS
Differential Revision: D19977638
Pulled By: suo
fbshipit-source-id: fe1fa92ff888959203d5d1995930d488b5f9e24c
Summary:
This was old code that isn't tested and is broken; it should have been
deleted in #24874.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33453
Pulled By: driazati
Differential Revision: D19961403
fbshipit-source-id: 94c52360460194d279dad5b0ea756ee366f525e1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32771
It's a patch to #32621, making the API private.
Test Plan: Imported from OSS
Differential Revision: D19657307
Pulled By: iseeyuan
fbshipit-source-id: e604a0cbed6a1e61413daaafc65bea92b90f1f5d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32745
Some parameters (like `bias` in conv) are optional. To achieve this
previously, you had to add `bias` as a constant, which would invoke some
pretty weird behavior in the frontend, summarized as:
```
if bias is not None:
    add it as a parameter normally
else:  # bias is None
    add it as a constant with the value None
```
There are several things bad about this:
1. Bias is not a constant. Marking it `__constants__` is confusing.
2. It basically relies on an implementation detail (the frontend
processes parameters before constants) to work.
Okay, whatever. I don't even know why we did this originally, but
getting rid of it doesn't break anything, so I assume improved NoneType
refinement has made this a non-issue.
Note on perf: this will make no difference; if bias was `None` it's still
folded out today, and if bias is a Tensor it would be added as a parameter
both before and after this change.
Test Plan: Imported from OSS
Differential Revision: D19628634
Pulled By: suo
fbshipit-source-id: d9128a09c5d096b938fcf567b8c23b09ac9ab37f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32621
Export the "_save_for_mobile" method to Python so that the bytecode format for lite interpreter can be added or updated to the original script model.
It's the first step of python binding for lite interpreter, as discussed in this [internal post](https://fb.workplace.com/groups/1144215345733672/permalink/1478900738931796/) and offline.
Next step is to export the load_for_mobile and run method of mobile module, so that users could verify the mobile model from Python.
Test: use the following python script to display the bytecode part of the updated model file.
```
#!/usr/bin/env python3
import sys
import pickle
import pprint
import zipfile

class FakeObject(object):
    def __init__(self, module, name, args):
        self.module = module
        self.name = name
        self.args = args
        self.state = None

    def __repr__(self):
        state_str = "" if self.state is None else f"(state={self.state!r})"
        return f"{self.module}.{self.name}{self.args!r}{state_str}"

    def __setstate__(self, state):
        self.state = state

class FakeClass(object):
    def __init__(self, module, name):
        self.module = module
        self.name = name
        self.__new__ = self.fake_new

    def __repr__(self):
        return f"{self.module}.{self.name}"

    def __call__(self, *args):
        return FakeObject(self.module, self.name, args)

    def fake_new(self, *args):
        return FakeObject(self.module, self.name, args)

class DumpUnpickler(pickle._Unpickler):
    def find_class(self, module, name):
        return FakeClass(module, name)

    def persistent_load(self, pid):
        return FakeObject("pers", "obj", (pid,))

def main(argv):
    zfile = zipfile.ZipFile(argv[1])
    names = [i for i in zfile.namelist() if "bytecode.pkl" in i]
    if not names:
        print("bytecode.pkl not found.")
        return
    with zfile.open(names[0], "r") as handle:
        value = DumpUnpickler(handle).load()
        pprint.pprint(value)

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```
Test Plan: Imported from OSS
Differential Revision: D19596359
Pulled By: iseeyuan
fbshipit-source-id: 19a4a771320f95217f5b0f031c2c04db7b4079a8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32539
Before: if something in `_modules` was `None`, we would barf. This is
incorrect because it's allowed for users to put `None` there, in case a
module is optional.
This case ought to be handled correctly during scripting. Fixes https://github.com/pytorch/pytorch/issues/32469
Test Plan: Imported from OSS
Differential Revision: D19552346
Pulled By: suo
fbshipit-source-id: aba7fdc19fd84d195c81cdaca8a75013a8626a8b
Summary:
This should be covered under recursive script now
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32235
Pulled By: driazati
Differential Revision: D19414889
fbshipit-source-id: 85f8132401dbe44c9dbaef7c0350110f90eb9843
Summary:
This was not tested before; fixes #32139 (which was actually a false positive: functions with kwargs but without defaults on those kwargs are supported). This PR adds testing for both cases and cleans up the error reporting.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32146
Pulled By: driazati
Differential Revision: D19385828
fbshipit-source-id: 5eab74df6d02f8e1d7ec054cafb44f909f9d637e
Summary:
This hooks up `inspect` so that Python functions get their parameters
names attached instead of naming them `0, 1, 2, ...`. This also fixes
issue #28537 where `ignore` functions were improperly typing `self`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29300
Pulled By: driazati
Differential Revision: D19256434
fbshipit-source-id: 6a1fe7bd0afab708b8439517798955d0abfeb44c
Summary:
Fixes a bad merge that is breaking distributed tests on master
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31492
Pulled By: driazati
Differential Revision: D19180978
fbshipit-source-id: f69f525e2c7f61194686f07cf75db00eb642882f
Summary:
Fixes #27495
This adds builtins as another piece of a concrete type. They're separate from normal functions since they represent the `BuiltinFunction` sugared value (which is a direct call to a builtin op). It also moves the builtins-related logic from `jit/__init__.py` to `jit/_builtins.py` so it can be used from `jit/_recursive.py` to look up functions in the builtins table.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31269
Pulled By: driazati
Differential Revision: D19149779
fbshipit-source-id: d4e5e5d7d7d528b75a2f503e6004394251a4e82d
Summary:
Add a section for unsupported ops and modules. Automatically generate the list of properties and attributes that aren't bound, and for ops that have semantic mismatches, set up tests so the docs stay up to date.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31329
Differential Revision: D19164472
Pulled By: eellison
fbshipit-source-id: 46290bb8a64d9de928cfb1eda5ff4558c3799c88
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31020
Before, the recursive scripting process re-did the concrete type
inference process for every submodule call. This changes things so that
the concrete type inference process only occurs once (at the top level),
and we re-use all the inferred concrete types while recursively
compiling submodules.
This is both more efficient (we don't do n^2 work inferring concrete
types) and less bug-prone (since we infer the concrete type only once,
there is no possibility of a mismatch).
Test Plan: Imported from OSS
Differential Revision: D18904110
Pulled By: suo
fbshipit-source-id: 6560b85ae29fe5e9db1ee982dbf8bc222614b8d8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31019
No more `recursive_script`, just direct calls to `create_script_module`.
This reduces the number of pathways through the frontend, and the
uniformity is useful for a future PR.
Test Plan: Imported from OSS
Differential Revision: D18904113
Pulled By: suo
fbshipit-source-id: 7de061dfef0cbdfc9376408fc6c1167b81803f01
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31018
Properties are now disallowed so this hack is no longer necessary
Test Plan: Imported from OSS
Differential Revision: D18904112
Pulled By: suo
fbshipit-source-id: 83448da677082d59355729bb72d9f9f4c31ea756
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31017
This arg is now derivable from another one, so we don't need to pass
both.
Test Plan: Imported from OSS
Differential Revision: D18904111
Pulled By: suo
fbshipit-source-id: ea74ea9c2ae83d9e0e6977b0eb6629f53545e2e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31214
This sets up the basic infrastructure for distributed autograd and rpc to
bind their operators to TorchScript. Since the whole distributed package
is built behind the `USE_DISTRIBUTED` flag, we separate the
registration and build it only when the flag is on.
Test Plan: Imported from OSS
Differential Revision: D19137160
fbshipit-source-id: ff47dc4c380ebe273fe0eea9e5e3fccfbd6466d7
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/30356 and https://github.com/pytorch/pytorch/pull/31014 :'(
The last commit contains the fix. There was an internal fbcode error: it was not able to compile the previous `impl_default->second.equal(default_val.second))` line. I tried various fixes in C++ internally but couldn't figure anything out. This is a good example of the programming costs of going from Python -> C++ for different types of objects, because the conceptual overhead has expanded in scope from (Python) -> (Python, C++, pybind).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31123
Differential Revision: D18936128
Pulled By: eellison
fbshipit-source-id: 7d8fd66a6dd4a3e9838f3a0b68c219b6565a9462
Summary:
This makes `nn.Transformer` usable from TorchScript. It preserves backwards compatibility via `__setstate__` on the encoder/decoder.
Fixes https://github.com/pytorch/pytorch/issues/24173
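As a quick check of what this enables (not from the PR's tests):
```
import torch
import torch.nn as nn

transformer = nn.Transformer(d_model=16, nhead=2,
                             num_encoder_layers=1, num_decoder_layers=1)
scripted = torch.jit.script(transformer)  # previously failed to compile
```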
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28561
Differential Revision: D18124753
Pulled By: driazati
fbshipit-source-id: 7314843e5aa9c9bf974c4672e4edb24ed8ef4a6f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30356
This finishes up the `torch.jit.overload` api for free-functions.
- defaults now required on the implementation function itself
- fully follows [overload spec](https://mypy.readthedocs.io/en/latest/more_types.html#function-overloading) such that the following is supported
```
@overload
def mouse_event(x1: int, y1: int) -> ClickEvent: ...
def mouse_event(x1: int,
                y1: int,
                x2: Optional[int] = None,
                y2: Optional[int] = None): ...
```
Note: `jit.overload` isn't supported yet for UDTs, but is supported for modules. This PR doesn't make the same changes for modules; if reviewers think I should include them, I could do so in a follow-up PR or wait to land this. Since that's still an internal API I think it's fine, and the changes here would allow us to expose `torch.jit.overload` on free functions.
Test Plan: Imported from OSS
Differential Revision: D18864774
Pulled By: eellison
fbshipit-source-id: 6c566738bd6f0551a000a9ea8d56e403636b7856
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892
Fixes all outstanding lints and actually installs a properly configured
flake8
Test Plan: Imported from OSS
Differential Revision: D18862825
Pulled By: suo
fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30467
Introduce the function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use is for a mobile custom build to link only the operators in the returned list, to save mobile binary size.
Example:
```
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))
```
The output is in alphabetical order:
```
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
```
Test Plan: Imported from OSS
Differential Revision: D18801619
Pulled By: iseeyuan
fbshipit-source-id: f9b198d3e82b095daf704ee595d8026ad889bb13
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30168
The previous implementation of `clone` in `script::Module` copies both the module instance and the
class type. After we enabled type sharing in https://github.com/pytorch/pytorch/pull/26666, we also
need a function to clone the instance only and share the underlying class type.
Test Plan:
tbd
Imported from OSS
Differential Revision: D18631324
fbshipit-source-id: dbadcf19695faee0f755f45093b24618c047b9d1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29826
After save/load, we lose concrete type information. So if you tried to
script something that contained a loaded ScriptModule as a submodule,
the following sequence happened:
1. During ConcreteType inference, the loaded submodule got a new
inferred type.
2. But it already has a type! So there was a type mismatch.
To fix this, we should generate a ConcreteType directly from the loaded
submodule type (similar to what we do for interfaces). This makes sense
too--the ConcreteModuleType should be empty, since all the "sugaredness"
was stripped out during the save/load process.
Test Plan: Imported from OSS
Differential Revision: D18575009
Pulled By: suo
fbshipit-source-id: 4d329b7e9b7e7624f459e50092e35ab0ab813791
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29825
We made `ModuleInfo` a union initially to represent the idea that a
submodule could either be a regular module or a module interface.
This PR represents module interfaces as a ConcreteModuleType with no
info (e.g. no "sugaredness"), and with the interface type as the
underlying `jitType_`. This has the effect of reducing the special
casing around adding/maintaining module info.
Test Plan: Imported from OSS
Differential Revision: D18575011
Pulled By: suo
fbshipit-source-id: 53e297b39aa1a03bcdadd795ff225aa68fec9d70
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29824
We have two distinct phases/uses for ConcreteModuleType:
1. We are building it up and using it to check whether we can
reuse JIT types. (RawConcreteModuleType)
2. We are using it to satisfy ModuleValue::attr queries.
(ConcreteModuleType)
These types share an underlying `ConcreteModuleTypeData` which
actually stores the relevant info.
Previously they were the same type because I was lazy, but it's been the
source of a bug. So split them to formalize the differing invariants for
the two phases.
Test Plan: Imported from OSS
Differential Revision: D18575010
Pulled By: suo
fbshipit-source-id: 3e4ebcd36e78b947150d8f0dbb74ecccad23e7c4
Summary:
Uses new overload mechanism for rnns, making it so that python & torchscript go through the same path and using an API that is in line with the one specified
in https://docs.python.org/3/library/typing.html#typing.overload
This brings the TorchScriptable rnns closer to the base implementation; unifying them should be done in a follow up PR but there are still a few limitations that make it difficult to do so.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614
Differential Revision: D18486982
Pulled By: eellison
fbshipit-source-id: aaaea66a4a7f12d2e46199ca254f9e8f7475500e
Summary:
Uses new overload mechanism for rnns, making it so that python & torchscript go through the same path and using an API that is in line with the one specified
in https://docs.python.org/3/library/typing.html#typing.overload
This brings the TorchScriptable rnns closer to the base implementation; unifying them should be done in a follow up PR but there are still a few limitations that make it difficult to do so.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29614
Differential Revision: D18458751
Pulled By: eellison
fbshipit-source-id: 07c71838f21cb5425e8d6dbd4a512f774c8c2970
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28988
Make ModuleList, Sequential, ModuleDict go through the same pathway as other modules, cleaning up a bunch of code and allowing them to define custom forwards and other methods.
EDIT: Previously, we would ignore an nn.Sequential attribute if it was not in `__constants__` ("did you forget to add it to Constants"). This PR scripts it even if it is not in `__constants__`. Is that what we want?
Test Plan: Imported from OSS
Differential Revision: D18402821
Pulled By: eellison
fbshipit-source-id: dd4f28fb0df0d1ba4ad1b3bc34ba141959a433f7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29499
This changes how DataParallel and trace module creation work so that
we no longer need to mutate the Module class after it has been created.
The only remaining usage of register_* functions are now inside C++
tests.
Test Plan: Imported from OSS
Differential Revision: D18413652
Pulled By: zdevito
fbshipit-source-id: f039e5400cd016632768be4547892f6a69645c20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29432
This removes a lot of the private methods on torch._C.ScriptModule,
and instead implements functionality in terms of slot_dict_impl views
to implement `_parameters`, `_buffers`, and `_modules` in nn.Module.
A followup PR should also remove the _register_attribute,
_register_module, and _register_parameter methods, but this requires
more refactoring of the way tracing creates modules and replication
for data parallel works.
Test Plan: Imported from OSS
Differential Revision: D18387963
Pulled By: zdevito
fbshipit-source-id: f10d47afeb30c1e05d704ae5ac4166830933125c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29405
We never actually used this (function attributes are a separate pathway
in ConcreteModuleType).
Test Plan: Imported from OSS
Differential Revision: D18378392
Pulled By: suo
fbshipit-source-id: b06c4b6d70f0b2534be78a215125cffd22ab44f0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29342
This is not necessary, as we use `lazy_bind` to retrieve those methods
from the class anyway.
Test Plan: Imported from OSS
Differential Revision: D18383381
Pulled By: suo
fbshipit-source-id: e8b7c9e696087cc1e707ac38f7ae85f569f08371
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28395
Currently property methods are broken in TorchScript because we
basically treat it as an attribute in the existing path: we'll evaluate
the method once and store that as the value forever.
Since lack of property support is easily worked around (just make it
a method), I've opted to just explicitly error to avoid confusion. If
people want it, they can file an issue and we can look at their use
case.
This also helps us nicely clean up some parts of the ScriptModule conversion
path.
Test Plan: Imported from OSS
Reviewed By: shannonzhu
Differential Revision: D18054946
Pulled By: suo
fbshipit-source-id: 7e927836ae687cd2f13a94b9f0af399437fae422
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28828
This updates torch::script::Module to more closely match the behavior
of nn.Module. In particular, it implements the (optionally recursive)
iterators that retrieve submodules, parameters, and buffers and makes
their names match the python versions.
This also removes the individual accessors for Parameter, Module, Buffer, etc.
and replaces them with a single `attr` function which is equivalent to
writing `a.foo` in Python (`setattr` emulates `a.foo = v`).
As we build out the user-facing API for TorchScript values this will end
up matching how an attribute is accessed on general objects.
This PR preserves the Python bindings for script::Module by emulating the
old API at the binding level. A followup will clean up the usage to more
directly match the C++ API.
Test Plan: Imported from OSS
Differential Revision: D18197611
Pulled By: zdevito
fbshipit-source-id: 7ee4dcbb258605d1c988314b05d938423f1ccee5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28409
This PR enables submodule swapping via module interfaces. Users can
declare a submodule as a module interface type in the ScriptModule;
during compilation we record the module interface type in the
ModuleInfo of ConcreteModuleType, the associated JIT type will have the
correct ModuleInterfaceType, and the CppModule will get the correct module list.
Given that we still keep the module interface type in the type system,
the graph is not inlined when we call Module::attr; it uses
prim::CallMethod to call the method instead. This allows us to do module swapping
for any ScriptModule that also meets the same module interface type, and
we only allow module swapping through the module interface
approach.
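A minimal sketch of the swapping pattern this enables (illustrative module names, assuming the `torch.jit.interface` decorator):
```
import torch
import torch.nn as nn

@torch.jit.interface
class ModuleInterface(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pass

class AddOne(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

class TimesTwo(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

class Wrapper(nn.Module):
    sub: ModuleInterface  # interface-typed submodule, called via prim::CallMethod

    def __init__(self):
        super().__init__()
        self.sub = AddOne()

    def forward(self, x):
        return self.sub.forward(x)

scripted = torch.jit.script(Wrapper())
# swap in another ScriptModule that satisfies the same interface:
scripted.sub = torch.jit.script(TimesTwo())
```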
Test Plan: Imported from OSS
Reviewed By: driazati
Differential Revision: D18284309
fbshipit-source-id: 2cb843e4b75fa3fcd8c6020832a81014dbff4f03
Summary:
I don't see `_frames_up` being used anywhere. Just to clean up the code thought it should be removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29179
Differential Revision: D18319876
Pulled By: suo
fbshipit-source-id: 5e612ff94ccc88fc85288ffc26213e1d11580c36
Summary:
# Description
I'm new to this project just wanted to start with small bug fixes. I found some unused local variables and I've removed them in this pr.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29181
Differential Revision: D18319893
Pulled By: suo
fbshipit-source-id: e4f9f13b6db2ca213015569deb12d3fd9beb74a8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28408
This enable interface to defined on a nn.Module, and the InterfaceType
now have a field of is_module_ to distinguish if it's a module interface
or a normal interface (This is similar to what ClassType distinguish on
module and torchscript classes).
The module interface can be assigned with any ScriptModule that has the
compatible signatures on schemas. A normal object that is not a
ScriptModule will not be able to assigned to an module interface and
will error out when user explicitly doing so. Assigning a ScriptModule
to class interface will make it only available in attribute_list, not
module_list. More details on subtyping relationship documented in the
jit_type.h
If you declare an module interface inside an nn.Module that is being
compiled to a ScriptModule, behavior to our internal compilation will
be:
1. ConcreteModuleType will record it as an module attribute and add to
the attributes_ list.
2. JitType that is created from the ConcreteModuleType will record it as
an attribute and pre-genenerate the slot. The slot will be marked as
EntityType::MODULE still to make sure JitType record it as a Module
slot
3. cpp_module will also register it as a Module as the Slot type is the
source of truth
Since JitType will record it as attribute as store its type, it will
behave normally as the class interface attribute behave now. This means
the submodule assigned to this module interface is not getting inlined
into the graph as the normal `Module::attr` behave, it will generate
interface callMethod and allow us to later swap this with another
ScriptModule that implicitly implements this module interface.
Test Plan: Imported from OSS
Differential Revision: D18284311
fbshipit-source-id: e0b8f6e8c34b2087fab337a969e5ea3fb37ec209
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28407
Given that we do not have support for inheritance or any polymorphism
strategy yet, we should guard against users using it until we get
full support, so that users won't be confused by the weird behaviors.
Test Plan: Imported from OSS
Differential Revision: D18284310
fbshipit-source-id: f55a224f4190d57926d91ed98f6168d787387eb8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28766
Add a warning message to explicitly ask users to upgrade from the deprecated `torch.jit.quantized` API to the new `torch.quantization.quantize_dynamic` API.
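For reference, a sketch of the suggested migration path (illustrative model; `quantize_dynamic` is the documented replacement):
```
import torch

model = torch.nn.LSTM(input_size=4, hidden_size=4)

# instead of the deprecated torch.jit.quantized helpers, use:
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.LSTM}, dtype=torch.qint8
)
```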
ghstack-source-id: 92711620
Test Plan: CI
Differential Revision: D18164903
fbshipit-source-id: e6aff2527f335c2d9f362e6856ce8597edb52aaa
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28569
Previously, the inclusion of function attributes would "poison" a
ConcreteModuleType, because we did not have a way of checking whether
they are actually the same function. This PR uses the Python function
object to perform that check. This improves our ability to reuse JIT
types between modules.
Also this PR fixes a bug where we weren't properly adding modules as
attributes when converting from ConcreteType -> JIT type (we were adding
them after the fact--another reason to switch from using `register_x` to
`set_x` during module construction, which is on my to-do list after
this).
Fixes https://github.com/pytorch/pytorch/issues/28559
Test Plan: Imported from OSS
Differential Revision: D18111331
Pulled By: suo
fbshipit-source-id: ec2cccf832d3ddd4cd4d28fe19cb265f1275325a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572
Combined with isinstance specialization this allows a degree of polymorphic
functions to work without needing to use our weirder overload hacks.
We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.
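A minimal sketch of the intended pattern, refining an `Any` argument with isinstance checks:
```
import torch
from typing import Any

@torch.jit.script
def describe(x: Any) -> int:
    # no operators are defined on Any; type-refine it first
    if isinstance(x, int):
        return x + 1
    if isinstance(x, torch.Tensor):
        return x.numel()
    return 0
```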
Test Plan: Imported from OSS
Differential Revision: D17530643
Pulled By: zdevito
fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/25801 (see there for my verbose analysis).
As an example, for the following code:
```
import torch

@torch.jit.script
def f1(x):
    # type: (int, int) -> None
    pass
```
this PR will change error message from this:
```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
# type: (int, int) -> None
```
to this:
```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
at __scratch__/example.py:4:0
@torch.jit.script
def f1(x):
~~~~~~~~ <--- HERE
    # type: (int, int) -> None
    pass
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27195
Differential Revision: D17910902
Pulled By: driazati
fbshipit-source-id: af5c6353069d005752d6c7f0bd6a0c6db8437e55
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850
Many of these are real problems in the documentation (i.e., link or
bullet point doesn't display correctly).
Test Plan: - built and viewed the documentation for each change locally.
Differential Revision: D17908123
Pulled By: zou3519
fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666
Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
cache, and as the source of truth for `ModuleValue::attr` queries. It needs
to do both jobs because that's how we ensure correctness (if the types are
different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
`ScriptModule`) are now rewritten to go through `create_script_module`, so
that we have only a single place where construction happens.
Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
recursively scripted if possible, matching recursive scripting semantics.
This makes it hard to keep something from being scripted (for example, a
Python submodule). Possibly we'll need an `ignore()` type thing for
attributes. In particular, this adds `self.training` to *every* ScriptModule, since
it's present on every `nn.Module`.
- I believe this change to be transparent to existing users of the inheritance API, since if you had an attribute that is unscriptable that you never used, there is no error. In some cases, we will create new attributes (even if they are unused), which will increase serialized model size from before.
Test Plan: Imported from OSS
Differential Revision: D17551196
Pulled By: suo
fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27399
This was devised in a time when we didn't have module attributes. They
are essentially just tensor lists, so represent them that way. This has
the additional benefit of making the RNN forward pass faster because we
effectively cache the flattened weights.
The only complication part is that someone may come along and do:
```
my_rnn_mod.w_ih_l0 = torch.nn.Parameter(...)
```
This means we need to override setattr to keep the flattened weights
cache up to date.
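A small sketch of that setattr override idea (the attribute names here are hypothetical, standing in for the real flattened-weights cache):
```
import torch

class RNNWithFlatWeights(torch.nn.Module):
    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        # reassigning a weight invalidates the flattened-weight cache
        if name.startswith("w_"):
            self._flat_weights = [p for n, p in self.named_parameters()
                                  if n.startswith("w_")]
```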
Test Plan: Imported from OSS
Differential Revision: D17785658
Pulled By: suo
fbshipit-source-id: 7789cd1d0d4922bfd5eba1716976442fbf150766
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27515
Resolving variable names using the local activation frames does not work
when using recursive scripting, but our current code tries to do it
(incorrectly) anyway. The reason it works today is only because the script
call is in the same local frame as the definition. This will not be
true in practice, and it makes it seem like the API works in more cases
than it really does. This change forces us to always use closure-based annotations,
documents that, and fixes the tests so that they still pass.
Test Plan: Imported from OSS
Differential Revision: D17803403
Pulled By: zdevito
fbshipit-source-id: e172559c655b05f0acf96c34f5bdc849f4e09ce2
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.
Fixes#26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109
Pulled By: driazati
Differential Revision: D17728129
fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26734
This PR adds Python assignment of an interface as an attribute in the
module; it enables any object that implicitly inherits from the specific
interface to be assigned to the interface type in Python.
Serialization support for interface/class assignment will be done in a
follow-up PR.
Test Plan: Imported from OSS
Differential Revision: D17742708
Pulled By: wanchaol
fbshipit-source-id: a0a2d8c74b60ed3fa6c05e1b0d49b7ad1abc670b
Summary:
ONNX does not support dictionaries for inputs and outputs. The reason is that the arg flattening and unflattening does not handle dictionary types.
This PR adds flattening/unflattening support for dictionaries and strings.
However, this feature should be handled with caution for input dictionaries: users need to verify their dict inputs carefully, and keep in mind that dynamic lookups are not available.
This PR will allow exporting cases where models have dictionary outputs (detection and segmentation models in torchvision), and where dictionary inputs are used for model configurations (MultiScaleRoiAlign in torchvision).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25889
Reviewed By: hl475
Differential Revision: D17613605
Pulled By: houseroad
fbshipit-source-id: c62da4f35e5dc2aa23a85dfd5e2e11f63e9174db
Summary:
When used as annotations on Python functions, `NamedTuple`s go through our Python annotation -> type mapping, which previously had no way of looking up `NamedTuple`s (they are created lazily by checking whether the type has certain properties, so the lookup creates the `TupleType` from scratch). This PR threads through the necessary data to make them work.
Fixes #26437
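A minimal sketch of the now-working pattern (illustrative names):
```
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.jit.script
def total(p: Point) -> torch.Tensor:
    # the NamedTuple annotation now maps to a proper JIT tuple type
    return p.x + p.y

print(total(Point(torch.ones(1), torch.ones(1))))
```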
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26443
Pulled By: driazati
Differential Revision: D17486441
fbshipit-source-id: a6bbb543ff05a5abe61f1a7f68db9ecdb652b358
Summary:
Makes c10::Dict ordered and binds the OrderedDict() and dict() constructors into TorchScript. For the case of the empty constructor dict(), I typed it as [str, Tensor] because:
• we're almost dropping support for Python 2, at which point all dicts are ordered
• it's more conventional to write x : Dict[int, int] = {}, which is already supported
• it is possible to construct an arbitrarily typed empty OrderedDict through
OrderedDict(torch.jit.annotate(List[Tuple[key, value]], []))
We could consider dropping the no-inputs aten::dict constructor, since then the types would be more explicit.
This replaces https://github.com/pytorch/pytorch/issues/26170 and https://github.com/pytorch/pytorch/pull/26372 because ghstack was poisoned and I had to resubmit.
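A rough sketch of the constructors described above (based only on this summary; the concrete key/value types are illustrative and not verified against the final API):
```
import torch
from collections import OrderedDict
from typing import Dict, List, Tuple

@torch.jit.script
def build() -> Dict[str, torch.Tensor]:
    d = dict()  # typed as Dict[str, Tensor] per the note above
    # an arbitrarily typed empty OrderedDict via an annotated empty pair list:
    od = OrderedDict(torch.jit.annotate(List[Tuple[int, float]], []))
    return d
```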
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26465
Differential Revision: D17481604
Pulled By: eellison
fbshipit-source-id: d2d49795a518c3489881afac45d070e5262c5849
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26311
We are currently unable to deploy models due to D16955662 changing the function signature of `quantized_lstm(`, while the function call here (https://fburl.com/diffusion/e4wrmx83) does not pass the newly added `use_dynamic` param.
Here is the details of the error: P111215482
```
E0916 12:36:16.423516 1149877 ExceptionTracer.cpp:214] exception stack complete
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
what():
Arguments for call are not valid.
The following operator variants are available:
aten::quantized_lstm(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first, *, int? dtype=None) -> (Tensor, Tensor, Tensor):
Keyword argument use_dynamic unknown.
```
This diff fixes that.
Test Plan:
Running quantization tests after.
```buck test mode/dev caffe2/test:jit -- 'test_quantization_modules \(test_jit\.TestScript\)'```
https://our.intern.facebook.com/intern/testinfra/testrun/5910974518872494
Also, currently building a package (language_technology.translation.jedi.scripts:35c3643) and testing this (f138747078).
f138771702
Reviewed By: jhcross
Differential Revision: D17404451
fbshipit-source-id: 390d2ce1ecbdd63a07a8f16c80e4c3ac25ab0a99
Summary:
If source code is not available due to packaging (e.g. sources are compiled to .pyc), TorchScript produces a very obscure error message. This tries to make it nicer and allows customizing the message by overriding _utils_internal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25415
Test Plan: Really hard to unit-test properly. Did one-off testing by compiling to .pyc and checking the message.
Differential Revision: D17118238
Pulled By: dzhulgakov
fbshipit-source-id: 3cbfee0abddc8613000680548bfe0b8ed52a36b0
Summary:
Add support for nn.ModuleDict in script. This is needed to support torchvision.
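For instance (a minimal sketch, not the PR's tests):
```
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.choices = nn.ModuleDict({"conv": nn.Conv2d(1, 1, 3),
                                      "act": nn.ReLU()})

    def forward(self, x):
        # constant-string indexing into an nn.ModuleDict is scriptable
        return self.choices["act"](self.choices["conv"](x))

scripted = torch.jit.script(Net())
```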
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25715
Differential Revision: D17301826
Pulled By: eellison
fbshipit-source-id: 541b5477e980f519a8c3bbb1be91dac227f6d00f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263
This adds an API that returns true in script and false in eager, which together with ignore allows guarding of not-yet-supported JIT features. Bikeshedding requested, please.
cc zou3519
```
def foo():
    if not torch.jit.is_scripting():
        return torch.linear(...)
    else:
        return addmm(...)
```
Test Plan: Imported from OSS
Differential Revision: D17272443
Pulled By: eellison
fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
Summary:
All of the code examples should now run as unit tests, save for those
that require interaction (i.e. show `pdb` usage) and those that use
CUDA.
`save` had to be moved before `load` in `jit/__init__.py` so `load`
could use the file generated by `save`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25668
Pulled By: driazati
Differential Revision: D17192417
fbshipit-source-id: 931b310ae0c3d2cc6affeabccae5296f53fe42bc