Commit Graph

1913 Commits

Author SHA1 Message Date
Lillian Johnson
f83cf2dab3 [JIT] adding torch.jit.isinstance support (#46062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46062

Adds support for torch.jit.isinstance in both eager and script mode

Example use:

```
import torch
from typing import Any, List

class TestModule(torch.nn.Module):
    def forward(self, input: Any) -> None:
        # works both eagerly and when scripted
        if torch.jit.isinstance(input, List[str]):
            for el in input:
                print(el)

TestModule().forward(["1","2"])
scripted_module = torch.jit.script(TestModule())
scripted_module(["1", "2"])
```

Test Plan: Imported from OSS

Reviewed By: bertmaher, zou3519

Differential Revision: D24264415

Pulled By: Lilyjjo

fbshipit-source-id: 039c95bddd854c414027ac8332832e6bc830b5b9
2020-10-20 16:47:49 -07:00
Ansley Ussery
fdc5261a20 Support %-based string formatting (#45976)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45976
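
A minimal sketch of the now-supported pattern (the function below is illustrative, assuming standard %s/%d specifiers):

```
import torch

@torch.jit.script
def describe(name: str, count: int) -> str:
    # %-based formatting now works under scripting
    return "%s has %d items" % (name, count)

print(describe("cart", 3))  # cart has 3 items
```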

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24374215

Pulled By: ansley

fbshipit-source-id: 2005fe7f09dc8d3c44c4bfdccab6b4dc46a5e517
2020-10-20 16:13:36 -07:00
Alexander Grund
5b0f400488 Replace list(map(...)) constructs by list comprehensions (#46461)
Summary:
As discussed in https://github.com/pytorch/pytorch/issues/46392 this makes the code more readable and possibly more performant.
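
A generic illustration of the rewrite (not a line from the PR):

```
xs = ["1", "2", "3"]
ints = list(map(int, xs))    # before: builds the list via map
ints = [int(x) for x in xs]  # after: equivalent, clearer, and immune to map's argument-order pitfall
```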

It also fixes a bug detected by this where the argument order of `map` was confused: 030a24906e (diff-5bb26bd3a23ee3bb540aeadcc0385df2a4e48de39f87ed9ea76b21990738fe98L1537-R1537)

Fixes https://github.com/pytorch/pytorch/issues/46392

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46461

Reviewed By: ailzhang

Differential Revision: D24367015

Pulled By: ezyang

fbshipit-source-id: d55a67933cc22346b00544c9671f09982ad920e7
2020-10-19 18:42:49 -07:00
Yanan Cao
6a2f40dc66 Expose script_if_tracing as public API (#46494)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45921

`torch.jit._script_if_tracing` is still kept for BC
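
A small sketch of the public API (the helper names are illustrative):

```
import torch

@torch.jit.script_if_tracing
def helper(x: torch.Tensor) -> torch.Tensor:
    # compiled lazily, and only when called from a function being traced
    return x + 1

def f(x: torch.Tensor) -> torch.Tensor:
    return helper(x) * 2

traced = torch.jit.trace(f, torch.randn(3))
```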

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46494

Reviewed By: ZolotukhinM

Differential Revision: D24381621

Pulled By: gmagogsfm

fbshipit-source-id: 35d9f2da38c591039ba95cd95ef186e6c7e47586
2020-10-17 17:31:57 -07:00
Kurt Mohler
ef4817fe5a Add tensor_split function, based on numpy.array_split (#45168)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/9382
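
A quick sketch of the behavior (mirroring numpy.array_split, the sections need not divide the dimension evenly):

```
import torch

t = torch.arange(8)
print(torch.tensor_split(t, 3))
# (tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7]))
```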

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45168

Reviewed By: ngimel

Differential Revision: D24166164

Pulled By: mruberry

fbshipit-source-id: 795459821e52885bc99623a01a2abec060995ce6
2020-10-07 23:14:48 -07:00
Elias Ellison
c86655a815 [JIT] Fix Dict bug in constant hashing (#45929)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45929

We were checking `and` when we should have been checking `or`.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D24148804

Pulled By: eellison

fbshipit-source-id: 9c394ea10ac91a588169d934b1e3208512c71b9d
2020-10-07 17:40:17 -07:00
Ansley Ussery
5072728d88 Fix stride printing/parsing formatting (#45156)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45156

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078695

Pulled By: ansley

fbshipit-source-id: dab993277d43b31105c38d12098c37653747b42a
2020-10-06 15:06:46 -07:00
Ansley Ussery
f18cc9c57d Change type inferred from empty annotation (#45360)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45360

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078645

Pulled By: ansley

fbshipit-source-id: 5d37d07df75bd7a2111d44638befe53c1021ee82
2020-10-05 15:16:56 -07:00
Edward Yang
546aab66c1 Revert D24027761: Update backward definition for more operators and reenable tests in test_ops.py
Test Plan: revert-hammer

Differential Revision:
D24027761 (7d809f5d8e)

Original commit changeset: c1f707c2a039

fbshipit-source-id: 30750d2f08886036fb8b2cd0ae51c7732d3b7b19
2020-10-02 18:52:57 -07:00
Yanan Cao
d150d3e276 Make sure each warnings.warn only executes once inside TorchScript. (#45382)
Summary:
* Add a pass at end of runCleanupPasses to annotate `aten::warn` so that each has its unique id
* Enhanced interpreter so that it tracks which `aten::warn` has been executed before and skip them
* Improved insertInstruction so that it correctly checks for overflow

Fixes https://github.com/pytorch/pytorch/issues/45108
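
A minimal sketch of the new behavior (illustrative function, warning in a loop):

```
import warnings
import torch

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    for _ in range(10):
        warnings.warn("emitted once per warn site, not once per iteration")
    return x

f(torch.ones(2))
```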

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45382

Reviewed By: mrshenli

Differential Revision: D24060677

Pulled By: gmagogsfm

fbshipit-source-id: 9221bc55b9ce36b374bdf614da3fe47496b481c1
2020-10-02 14:55:10 -07:00
anjali411
7d809f5d8e Update backward definition for more operators and reenable tests in test_ops.py (#44444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44444

This PR:
1. Fixes https://github.com/pytorch/pytorch/issues/41510. Updates backward formula for the following functions: `asin`, `acos`, `asinh`, `acosh`, `atan`, `atanh`, `div`, `log`, `log10`, `log2`, `log1p`, `pow`, `reciprocal`, `angle`.
2. Re-enables the tests in `test_ops.py`.
3. Adds dispatch for complex dtypes for `tanh_backward`.
4. Re-enables commented tests in `common_methods_invocation.py`.

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24027761

Pulled By: anjali411

fbshipit-source-id: c1f707c2a039149a6e04bbde53ee120d9119d99a
2020-10-02 13:37:10 -07:00
Malgi Nikitha Vivekananda
85a70ce71f Add multiline string dedent support (#45580)
Summary:
Fixes #44842
Summary
========
This PR adds support for multiline string dedents.

Test
=====
pytest -k test_multiline_string_dedents test/test_jit.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45580

Reviewed By: wconstab

Differential Revision: D24025866

Pulled By: nikithamalgifb

fbshipit-source-id: 0f49739fb93f70f73a8f367caca2887f558a3937
2020-09-30 16:08:26 -07:00
Nikolay Korovaiko
6ab1c0b1ca Disable a few tests in preparation to enabling PE+TE (#44815)
Summary:
Disable a few tests in preparation to enabling PE+TE
Next PR: https://github.com/pytorch/pytorch/pull/45396

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44815

Reviewed By: ZolotukhinM

Differential Revision: D23948445

Pulled By: Krovatkin

fbshipit-source-id: 93e641b7b8a3f13bd3fd3840116076553408f224
2020-09-28 12:55:12 -07:00
anjali411
9f67176b82 Complex gradcheck logic (#43208)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43208

This PR adds gradcheck for complex. The logic used for complex gradcheck is described in Section 3.5.3 here: https://arxiv.org/pdf/1701.00392.pdf

More concretely, this PR introduces the following changes:
1. Updates get_numerical_jacobian to take as input a scalar value for the vector (v). Adds gradcheck logic for C -> C, C -> R, and R -> C. For R -> C functions, only the real value of the gradient is propagated.
2. Adds backward definition for `torch.complex` and also adds a test to verify the definition added.
3. Updates backward for `mul`, `sin`, `cos`, `sinh`, `cosh`.
4. Adds tests for all `torch.real`, `torch.imag`, `torch.view_as_real`, `torch.view_as_complex`, `torch.conj`.

Follow up tasks:
1. Add more thorough tests for R -> C cases. Specifically, add R -> C test variants for functions, e.g., `torch.mul(complex_tensor, real_tensor)`.
2. Add back the commented-out test in `common_methods_invocation.py`.
3. Add more special-case checking for complex gradcheck to make debugging easier.
4. Update the complex autograd note.
5. Disable complex autograd for operators not tested for complex.
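
A minimal sketch of what the new gradcheck enables (a C -> C case; gradcheck needs double precision, hence complex128):

```
import torch
from torch.autograd import gradcheck

x = torch.randn(3, dtype=torch.complex128, requires_grad=True)
assert gradcheck(torch.sin, (x,))
```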

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23655088

Pulled By: anjali411

fbshipit-source-id: caa75e09864b5f6ead0f988f6368dce64cf15deb
2020-09-20 22:05:04 -07:00
Michael Suo
374e9373b5 [jit] Pull (most) tests out of libtorch_python (#44795)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44795

Today, we build our cpp tests twice, once as a standalone gtest binary,
and once linked in `libtorch_python` so we can call them from
`test_jit.py`.

This is convenient (it means that `test_jit.py` is a single entry point
for all our tests), but has a few drawbacks:
1. We can't actually use the gtest APIs, since we don't link gtest into
`libtorch_python`. We're stuck with the subset that we want to write
polyfills for, and an awkward registration scheme where you have to
write a test and then include it in `tests.h`.
2. More seriously, we register custom operators and classes in these
tests. In a world where we may be linking many `libtorch_python`s, this
has a tendency to cause errors with `libtorch`.

So now, only tests that explicitly require cooperation with Python are
built into `libtorch_python`. The rest are built into
`build/bin/test_jit`.

There are tests which require that we define custom classes and
operators. In these cases, I've built them into separate `.so`s that we
call `torch.ops.load_library()` on.
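
For example (the library name below is hypothetical; loading it registers the custom ops and classes the tests use):

```
import torch

torch.ops.load_library("build/lib/libtorchbind_test.so")
```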

Test Plan: Imported from OSS

Reviewed By: SplitInfinity, ZolotukhinM

Differential Revision: D23735520

Pulled By: suo

fbshipit-source-id: d146bf4e7eb908afa6f96b394e4d395d63ad72ff
2020-09-18 14:04:40 -07:00
Yanan Cao
174cbff00a Improve sugared value's error message (#42889)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **https://github.com/pytorch/pytorch/issues/42889 Improve sugared value's error message**

I think most (if not all) cases where this code path is reached can be attributed to closing over a global variable.
Improving the error message to make this clearer to users.

close https://github.com/pytorch/pytorch/issues/41288
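
A sketch of the pattern that reaches this path (names are illustrative):

```
import torch

scale = 2.0  # module-level global

def f(x: torch.Tensor) -> torch.Tensor:
    return x * scale  # TorchScript cannot capture the closed-over global

# torch.jit.script(f)  # raises; the improved message points at `scale`
```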

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42889

Reviewed By: SplitInfinity

Differential Revision: D23779347

Pulled By: gmagogsfm

fbshipit-source-id: ced702a96234040f79eb16ad998d202e360d6654
2020-09-18 11:01:40 -07:00
Yuxin Wu
9a007ba4cb [jit] stop parsing the block after seeing exit statements (#44870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44870

fix https://github.com/pytorch/pytorch/issues/44864

Test Plan: buck test mode/dev-nosan //caffe2/test:jit -- 'test_assert_is_script'

Reviewed By: eellison

Differential Revision: D23755094

fbshipit-source-id: ca3f8b27dc6f9dc9364a22a1bce0e2f588ed4308
2020-09-17 18:09:16 -07:00
Yanan Cao
2558e5769d Implement sort for list of tuples (#43448)
Summary:
* Implement tuple sort by traversing the contained IValue types and generating a lambda function as the comparator for sort.
* Tuples and class objects can now nest arbitrarily within each other and still be sortable.

Fixes https://github.com/pytorch/pytorch/issues/43219
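
A minimal sketch of the new capability (illustrative function name):

```
import torch
from typing import List, Tuple

@torch.jit.script
def sort_pairs(pairs: List[Tuple[int, str]]) -> List[Tuple[int, str]]:
    pairs.sort()
    return pairs

print(sort_pairs([(2, "b"), (1, "a")]))  # [(1, 'a'), (2, 'b')]
```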

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43448

Reviewed By: eellison

Differential Revision: D23352273

Pulled By: gmagogsfm

fbshipit-source-id: b6efa8d00e112178de8256da3deebdba7d06c0e1
2020-09-17 11:20:56 -07:00
Yanan Cao
99093277c0 Support Python Slice class in TorchScript (#44335)
Summary:
Implements support for the [Python Slice class](https://docs.python.org/3/c-api/slice.html) (not the slice expression, which is already supported).

A Slice object can be used in any place that supports a slice expression, including multi-dim tensor slicing.

Fixes https://github.com/pytorch/pytorch/issues/43511
Fixes https://github.com/pytorch/pytorch/issues/43125
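
A small sketch (illustrative function name):

```
import torch

@torch.jit.script
def take(x: torch.Tensor) -> torch.Tensor:
    s = slice(0, 2)  # a Slice object rather than the x[0:2] expression form
    return x[s]

print(take(torch.arange(4)))  # tensor([0, 1])
```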

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44335

Reviewed By: suo, jamesr66a

Differential Revision: D23682213

Pulled By: gmagogsfm

fbshipit-source-id: f74fe25370e89fbfd2b3727d95ce4e1c4ba8dec4
2020-09-17 00:41:53 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Yanan Cao
07d07e3c6c Remove EXPERIMENTAL_ENUM_SUPPORT feature guard (#44243)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41095

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44243

Reviewed By: ZolotukhinM

Differential Revision: D23605979

Pulled By: gmagogsfm

fbshipit-source-id: 098ae69049c4664ad5d1521c45b8a7dd22e72f6c
2020-09-16 11:45:59 -07:00
Elias Ellison
551494b01d [JIT] Fix torch.tensor for empty multidimensional-typed lists (#44652)
Summary:
We were hitting an assert error when you passed in an empty `List[List[int]]` - this fixes that error by not recursing into 0-element tensors.
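
A minimal repro of the fixed case (illustrative function name):

```
import torch
from typing import List

@torch.jit.script
def to_tensor(xs: List[List[int]]) -> torch.Tensor:
    return torch.tensor(xs)

print(to_tensor([]))  # previously hit an internal assert; now returns an empty tensor
```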

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44652

Reviewed By: ZolotukhinM

Differential Revision: D23688247

Pulled By: eellison

fbshipit-source-id: d48ea24893044fae96bc39f76c0f1f9726eaf4c7
2020-09-14 17:28:23 -07:00
Mike Ruberry
686e281bcf Updates div to perform true division (#42907)
Summary:
This PR:

- updates div to perform true division
- makes torch.true_divide an alias of torch.div

This follows on work in previous PyTorch releases that first deprecated div performing "integer" or "floor" division, then prevented it by throwing a runtime error.
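
A quick illustration of the new semantics:

```
import torch

a = torch.tensor([5])
b = torch.tensor([2])
print(torch.div(a, b))          # tensor([2.5000]) -- true division, even for integer inputs
print(torch.true_divide(a, b))  # same result; true_divide is now an alias of div
```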

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42907

Reviewed By: ngimel

Differential Revision: D23622114

Pulled By: mruberry

fbshipit-source-id: 414c7e3c1a662a6c3c731ad99cc942507d843927
2020-09-14 15:50:38 -07:00
Akihiro Nitta
84949672bf Fix exception chaining in test/ (#44193)
Summary:
## Motivation
This PR fixes https://github.com/pytorch/pytorch/issues/43770 and is the continuation of https://github.com/pytorch/pytorch/issues/43836.

## Description of the change
This PR fixes exception chaining only in files under `test/` where appropriate.
To fix exception chaining, I used either:
1. `raise new_exception from old_exception` where `new_exception` alone doesn't seem descriptive enough to debug, or where `old_exception` carries valuable information.
2. `raise new_exception from None` where raising both `new_exception` and `old_exception` seems noisy and redundant.
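
Both patterns in miniature (placeholder names, not lines from the diff):

```
def parse_keep_context(s: str) -> int:
    try:
        return int(s)
    except ValueError as e:
        # pattern 1: the original ValueError is shown as the explicit cause
        raise RuntimeError("could not parse %r" % s) from e

def parse_drop_context(s: str) -> int:
    try:
        return int(s)
    except ValueError:
        # pattern 2: suppress the noisy, redundant original traceback
        raise RuntimeError("could not parse %r" % s) from None
```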

## List of lines containing `raise` in `except` clause:
I wrote [this simple script](https://gist.github.com/akihironitta/4223c1b32404b36c1b349d70c4c93b4d) using [ast](https://docs.python.org/3.8/library/ast.html#module-ast) to list lines where `raise`ing in `except` clause.

- [x] f8f35fddd4/test/test_cpp_extensions_aot.py (L16)
- [x] f8f35fddd4/test/test_jit.py (L2503)
- [x] f8f35fddd4/test/onnx/model_defs/word_language_model.py (L22)
- [x] f8f35fddd4/test/onnx/verify.py (L73)
- [x] f8f35fddd4/test/onnx/verify.py (L110)
- [x] f8f35fddd4/test/onnx/test_verify.py (L31)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L255)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L2992)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L3025)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L3712)
- [x] f8f35fddd4/test/distributed/test_distributed.py (L3180)
- [x] f8f35fddd4/test/distributed/test_distributed.py (L3198)
- [x] f8f35fddd4/test/distributed/test_data_parallel.py (L752)
- [x] f8f35fddd4/test/distributed/test_data_parallel.py (L776)
- [x] f8f35fddd4/test/test_type_hints.py (L151)
- [x] f8f35fddd4/test/test_jit_fuser.py (L771)
- [x] f8f35fddd4/test/test_jit_fuser.py (L773)
- [x] f8f35fddd4/test/test_dispatch.py (L105)
- [x] f8f35fddd4/test/test_distributions.py (L4738)
- [x] f8f35fddd4/test/test_nn.py (L9824)
- [x] f8f35fddd4/test/test_namedtensor.py (L843)
- [x] f8f35fddd4/test/test_jit_fuser_te.py (L875)
- [x] f8f35fddd4/test/test_jit_fuser_te.py (L877)
- [x] f8f35fddd4/test/test_dataloader.py (L31)
- [x] f8f35fddd4/test/test_dataloader.py (L43)
- [x] f8f35fddd4/test/test_dataloader.py (L365)
- [x] f8f35fddd4/test/test_dataloader.py (L391)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44193

Reviewed By: albanD

Differential Revision: D23681529

Pulled By: malfet

fbshipit-source-id: 7c2256ff17334625081137b35baeb816c1e53e0b
2020-09-14 14:20:16 -07:00
Nikolay Korovaiko
fe26102a0e Enable TE in test_jit.py (#44200)
Summary:
Enable TE in test_jit.py and adjust/fix tests accordingly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44200

Reviewed By: SplitInfinity

Differential Revision: D23673624

Pulled By: Krovatkin

fbshipit-source-id: 5999725c7aacc6ee77885eb855a41ddfb4d9a8d8
2020-09-13 15:58:20 -07:00
Mikhail Zolotukhin
c6febc6480 [JIT] Add a python hook for a function to interpret JIT graphs. (#44493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44493

This function allows to execute a graph exactly as it is, without going
through a graph executor which would run passes on the graph before
interpreting it. I found this feature extremely helpful when I worked on
a stress-testing script to shake out bugs from the TE fuser: I needed to
execute a very specific set of passes on a graph and nothing else, and
then execute exactly it.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23632505

Pulled By: ZolotukhinM

fbshipit-source-id: ea81fc838933743e2057312d3156b77284d832ef
2020-09-11 02:55:26 -07:00
Gregory Chanan
c8914afdfa Merge criterion_tests and new_criterion_tests. (#44398)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44398

These end up executing the same tests, so no reason to have them separate.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23600855

Pulled By: gchanan

fbshipit-source-id: 0952492771498bf813f1bf8e1d7c8dce574ec965
2020-09-10 08:29:59 -07:00
Gregory Chanan
fa158c4ca6 Combine criterion and new criterion tests in test_jit. (#43958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43958

There is not any difference between these tests (I'm merging them), so let's merge them in the JIT as well.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23452337

Pulled By: gchanan

fbshipit-source-id: e6d13cdb164205eec3dbb7cdcd0052b02c961778
2020-09-10 08:28:14 -07:00
Elias Ellison
b69c28d02c Improving ModuleList indexing error msg (#43361)
Summary:
Follow-up to https://github.com/pytorch/pytorch/pull/41946, suggesting enumeration of the module as an alternative when a user tries to index into a ModuleList/Sequential with a non-integer literal.
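
The suggested alternative, in miniature:

```
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = torch.nn.ModuleList([torch.nn.Linear(4, 4) for _ in range(3)])

    def forward(self, x):
        # enumerate instead of indexing with a non-integer literal
        for i, layer in enumerate(self.layers):
            x = layer(x)
        return x

torch.jit.script(Net())
```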

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43361

Reviewed By: mrshenli

Differential Revision: D23602388

Pulled By: eellison

fbshipit-source-id: 51fa28d5bc45720529b3d45e92d367ee6c9e3316
2020-09-09 16:22:57 -07:00
Sujoy Saraswati
54931ebb7b Release saved variable from DifferentiableGraphBackward (#42994)
Summary:
When backward ops execute via the autograd engine's evaluate_function(), fn.release_variables() is called to release the SavedVariables. For eager-mode ops, this releases the saved inputs that were required by the backward grad function. With TorchScript, however, we get a DifferentiableGraph, and DifferentiableGraphBackward() didn't implement release_variables(), so the SavedVariables stayed alive longer than necessary. This change implements release_variables() for DifferentiableGraphBackward to release these SavedVariables early.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42994

Reviewed By: izdeby

Differential Revision: D23503172

Pulled By: albanD

fbshipit-source-id: d87127498cfa72883ae6bb31d0e6c7056c4c36d4
2020-09-08 14:36:52 -07:00
Michael Suo
9dd8670d7d [jit] Better match behavior of loaded ScriptModules vs. freshly created ones (#43298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43298

IR emitter uses `ModuleValue` to represent ScriptModules and emit IR for
attribute access, submodule access, etc.

`ModuleValue` relies on two pieces of information, the JIT type of the
module, and the `ConcreteModuleType`, which encapsulates Python-only
information about the module.

ScriptModules loaded from a package used to create a dummy
ConcreteModuleType without any info in it. This led to divergences in
behavior during compilation.

This PR makes the two ways of constructing a ConcreteModuleType equivalent,
modulo any py-only information (which, by definition, is never present in
packaged files anyway).

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23228738

Pulled By: suo

fbshipit-source-id: f6a660f42272640ca1a1bb8c4ee7edfa2d1b07cc
2020-09-03 15:03:39 -07:00
Michael Suo
74f18476a2 [jit] fix segfault in attribute lookup on loaded ScriptModules (#43284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43284

The IR emitter looks for attributes on modules like:
1. Check the JIT type for the attribute
2. Check the originating Python class, in order to fulfill requests for, e.g. static methods or ignored methods.

In the case where you do:
```
inner_module = torch.jit.load("inner.pt")
wrapped = Wrapper(inner_module)  # wrap the loaded ScriptModule in an nn.Module
torch.jit.script(wrapped)
```

The IR emitter may check for attributes on `inner_module`. There is no
originating Python class for `inner_module`, since it was directly
compiled from the serialized format.

Due to a bug in the code, we didn't guard against this case, and a segfault
resulted if the wrapper asked for an undefined attribute. The lookup in
this case looks like:
1. Check the JIT type for the attribute (not there!)
2. Check the originating Python class (this is a nullptr! segfault!)

This PR guards this case and properly just raises an attribute missing
compiler error instead of segfaulting.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23224337

Pulled By: suo

fbshipit-source-id: 0cf3060c427f2253286f76f646765ec37b9c4c49
2020-09-03 15:01:59 -07:00
Nikolay Korovaiko
f91bdbeabd Enable function calls in TEFuser and SpecializeAutogradZero (#43866)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43866

Reviewed By: ezyang

Differential Revision: D23452798

Pulled By: Krovatkin

fbshipit-source-id: 2cff4c905bf1b5d9de56e7869458ffa6fce1f1b5
2020-09-03 14:42:52 -07:00
Lillian Johnson
e3cb582e05 Error printing extension support for multiline errors (#43807)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43807

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D23407457

Pulled By: Lilyjjo

fbshipit-source-id: 05a6a50dc39c00474d9087ef56028a2c183aa53a
2020-09-01 10:02:43 -07:00
Mikhail Zolotukhin
98b846cd1d [JIT] Remove loop peeling from the profiling executor pipeline. (#43847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43847

It seems to slow down two fastRNN benchmarks and does not speed up
others.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23416197

Pulled By: ZolotukhinM

fbshipit-source-id: 598144561979e84bcf6bccf9b0ca786f5af18383
2020-08-31 17:26:55 -07:00
Meghan Lele
87d7c362b1 [JIT] Add JIT support for torch.no_grad (#41371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41371

**Summary**
This commit enables the use of `torch.no_grad()` in a with item of a
with statement within JIT. Note that the use of this context manager as
a decorator is not supported.
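
A minimal sketch of the supported form:

```
import torch

@torch.jit.script
def infer(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():  # supported as a with item; the decorator form is not
        return x * 2
```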

**Test Plan**
This commit adds a test case to the existing with statements tests for
`torch.no_grad()`.

**Fixes**
This commit fixes #40259.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D22649519

Pulled By: SplitInfinity

fbshipit-source-id: 7fa675d04835377666dfd0ca4e6bc393dc541ab9
2020-08-27 15:32:57 -07:00
Elias Ellison
a4cf4c2437 refactor tests (#43631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43631

I added a new test for just profiler stuff - I don't think the test should go in test_jit.py. Maybe this should just go in test_tensorexpr_fuser, but I'm not really testing tensorexpr stuff either... LMK

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23358810

Pulled By: eellison

fbshipit-source-id: 074238e1b60e4c4a919a052b7a5312b790ad5d82
2020-08-27 14:35:33 -07:00
aizjForever
cdc3e232e9 Add __str__ and __repr__ bindings to SourceRange (#43601)
Summary:
Added the bindings for `__str__` and `__repr__` methods for SourceRange

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43601

Test Plan:
`python test/test_jit.py`

cc gmagogsfm

Reviewed By: agolynski

Differential Revision: D23366500

Pulled By: gmagogsfm

fbshipit-source-id: ab4be6e8f9ad5f67a323554437878198483f4320
2020-08-27 12:30:47 -07:00
Yuxin Wu
825ec18eed [jit] better error message (#43093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43093

without this it's hard to tell which module is going wrong

Test Plan:
```
> TypeError:
> 'numpy.int64' object in attribute 'Linear.in_features' is not a valid constant.
> Valid constants are:
> 1. a nn.ModuleList
> 2. a value of type {bool, float, int, str, NoneType, torch.device, torch.layout, torch.dtype}
> 3. a list or tuple of (2)
```

Reviewed By: eellison

Differential Revision: D23148516

fbshipit-source-id: b86296cdeb7b47c9fd69b5cfa479914c58ef02e6
2020-08-17 14:57:56 -07:00
taivu
02c8ad70f2 Reconstruct scopes (#41615)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41615

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D22611331

Pulled By: taivu1998

fbshipit-source-id: d4ed4cf6360bc1f72ac9fa24bb4fcf6b7d9e7576
2020-08-13 22:38:16 -07:00
Mike Ruberry
bee174dc3f Adds linalg.det alias, fixes outer alias, updates alias testing (#42802)
Summary:
This PR:

- updates test_op_normalization.py, which verifies that aliases are correctly translated in the JIT
- adds torch.linalg.det as an alias for torch.det
- moves the torch.linalg.outer alias to torch.outer (to be consistent with NumPy)

The torch.linalg.outer alias was erroneously put in the linalg namespace as a placeholder: it's a "linear algebra op" according to NumPy, but it actually still lives in the main NumPy namespace.

The updates to test_op_normalization are necessary. Previously it was using method_tests to generate tests, and method_tests assumes test suites using it also use the device generic framework, which test_op_normalization did not. For example, some ops require decorators like `skipCPUIfNoLapack`, which only works in device generic test classes. Moving test_op_normalization to the device generic framework also lets these tests run on CPU and CUDA.

Continued reliance on method_tests() is excessive since the test suite is only interested in testing aliasing, and a simpler and more readable `AliasInfo` class is used for the required information. One impedance mismatch between method_tests and the new tests, for example, was how to handle ops in namespaces like torch.linalg.det. In the future this information will likely be folded into a common 'OpInfo' registry in the test suite.

The actual tests performed are similar to what they were previously: a scripted and traced version of the op is run and the test verifies that both graphs do not contain the alias name and do contain the aliased name.

The guidance for adding an alias has been updated accordingly.

cc mattip

Note:

ngimel suggests:
- deprecating and then removing the `torch.ger` name
- reviewing the implementation of `torch.outer`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42802

Reviewed By: zou3519

Differential Revision: D23059883

Pulled By: mruberry

fbshipit-source-id: 11321c2a7fb283a6e7c0d8899849ad7476be42d1
2020-08-11 21:48:31 -07:00
Yanan Cao
43613b4236 Fix incorrect aten::sorted.str return type (#42853)
Summary:
aten::sorted.str output type was incorrectly set to bool[] due to a copy-paste error. This PR fixes it.

Fixes https://fburl.com/0rv8amz7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42853

Reviewed By: yf225

Differential Revision: D23054907

Pulled By: gmagogsfm

fbshipit-source-id: a62968c90f0301d4a5546e6262cb9315401a9729
2020-08-11 14:01:23 -07:00
Yanan Cao
317b9d3bfc Implement sort for string in aten (#42398)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42398

Reviewed By: ailzhang

Differential Revision: D22884849

Pulled By: gmagogsfm

fbshipit-source-id: e53386949f0a5e166f3d1c2aa695294340bd1440
2020-08-04 15:25:35 -07:00
Yanan Cao
bdcf320bed Support custom exception message (#41907)
Summary:
Raise and assert used to have a hard-coded error message, "Exception"; the user-provided error message was ignored. This PR adds support for representing the user's error message in TorchScript.

This breaks backward compatibility because now we actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can break when scripting, but saved models can still continue to work.

Increased an op count in test_mobile_optimizer.py because now we need aten::format to form the actual exception message.

This is built upon a WIP PR: https://github.com/pytorch/pytorch/pull/34112 by driazati
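
A minimal sketch of a now-preserved user message (illustrative function):

```
import torch

@torch.jit.script
def checked_sqrt(x: torch.Tensor) -> torch.Tensor:
    if bool((x < 0).any()):
        # the custom message survives scripting (formatted via aten::format)
        raise ValueError("expected non-negative input, got min {}".format(x.min()))
    return x.sqrt()
```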

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907

Reviewed By: ngimel

Differential Revision: D22778301

Pulled By: gmagogsfm

fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
2020-08-01 13:03:45 -07:00
Elias Ellison
2285a2fc11 refactor canonical ordering to also be able to do isAfter checks (#42140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42140

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D22798378

Pulled By: eellison

fbshipit-source-id: d1a549f43b28fe927729597818a46674c58fe81d
2020-07-31 15:11:40 -07:00
Elias Ellison
0a64f99162 [JIT] Dont include view ops in autodiff graphs (#42027)
Summary:
View ops as outputs of differentiable subgraphs can cause incorrect differentiation. For now, do not include them in the subgraph. This was observed with our autograd tests for MultiheadAttention and nn.Transformer, which currently fail with the legacy executor. This commit fixes those test failures.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42027

Reviewed By: pbelevich

Differential Revision: D22798133

Pulled By: eellison

fbshipit-source-id: 2f6c08953317bbe013933c6faaad20100376c039
2020-07-29 10:17:33 -07:00
Yanan Cao
890b52e09f Reduce instability in runCleanUpPasses by reordering passes. (#41891)
Summary:
Currently constant pooling runs before constant propagation, which can create more constants that need pooling. This gets in the way of serialization/deserialization stability because runCleanUpPasses is called on a module each time a user serializes and deserializes it; doing so multiple times would produce a different saved module.

This PR moves constant pooling after constant propagation, which may slow down constant propagation a little, but side-steps the aforementioned problem.

test_constant_insertion in test_jit.py is also updated because, after fixing the pass ordering, the number of constants is no longer constant, and it is extremely difficult to get the exact number with the current convoluted test structure. So for now, I changed the test to check only that CSE doesn't change the number of "prim::Constant" nodes rather than comparing against a known number. Also left a TODO to improve this test.

The ConstantPropagation pass is replaced by ConstantPropagationImmutableTypes because the latter is used in runCleanUpPasses. If not replaced, the former would create new CSE opportunities by folding more constants, defeating the purpose of the test case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41891

Reviewed By: colesbury

Differential Revision: D22701540

Pulled By: gmagogsfm

fbshipit-source-id: 8e60dbdcc54a93dac111d81b8d88fb39387224f5
2020-07-24 11:39:20 -07:00
Elias Ellison
da3ff5e473 [JIT] dont count constants in subgraph size (#41436)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41436

Constants are not executed as instructions, we should ignore them when counting subgraph size, as we ignore them in counting block size for loop unrolling.

Test Plan: Imported from OSS

Reviewed By: Krovatkin, ZolotukhinM

Differential Revision: D22600608

Pulled By: eellison

fbshipit-source-id: 9770b21c936144a3d6a1df89cf3be5911095187e
2020-07-23 14:48:25 -07:00
Elias Ellison
6161730174 [JIT] move remove mutation to its own test file (#41502)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41502

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D22629270

Pulled By: eellison

fbshipit-source-id: fcec6ae4ff8f108164539d67427ef3d72fa07494
2020-07-20 12:03:28 -07:00
Yanan Cao
4a3aad354a [1/N] Implement Enum JIT support (#41390)
Summary:
* Add EnumType and AnyEnumType as first-class jit type
* Add Enum-typed IValue
* Enhanced aten::eq to support Enum

Supported:
* Enum-typed function arguments
* Using Enum types and comparing them

TODO:
* Add Python sugared value for Enum
* Support getting name/value attrs of enums
* Support Enum-typed return values
* Support enum values of different types in same Enum class
* Support serialization and deserialization
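
A sketch of where this stack is headed (names are illustrative; the scripted form may depend on the later PRs in the stack):

```
import torch
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

@torch.jit.script
def is_red(c: Color) -> bool:
    return c == Color.RED  # Enum-typed arguments and aten::eq comparison
```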

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41390

Reviewed By: eellison

Differential Revision: D22524388

Pulled By: gmagogsfm

fbshipit-source-id: 1627154a64e752d8457cd53270f3d14aea4b1150
2020-07-18 22:15:06 -07:00
Ilia Cherniavskii
e7a09b4d17 RecordFunction in Dispatcher (#37587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37587

Lifting RecordFunction up into the dispatcher code

Test Plan: Imported from OSS

Differential Revision: D21374246

fbshipit-source-id: 19f9c1719e6fd3990e451c5bbd771121e91128f7
2020-07-17 22:20:05 -07:00
Meghan Lele
f85a27e100 [JIT] Replace "blacklist" in test_jit.py (#41453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41453

**Test Plan**
`python test/test_jit.py`

**Fixes**
This commit partially addresses #41443.

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D22544268

Pulled By: SplitInfinity

fbshipit-source-id: 8b6b94211a626209c3960fda6c860593148dcbf2
2020-07-17 11:30:27 -07:00
Mikhail Zolotukhin
5d7046522b [JIT] Teach IRPrinter and IRParser to handle 'requires_grad' and 'device' as a part of type info. (#41507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41507

These fields have always been a part of tensor types, this change just
makes them serializable through IR dumps.

Test Plan: Imported from OSS

Reviewed By: Krovatkin, ngimel

Differential Revision: D22563661

Pulled By: ZolotukhinM

fbshipit-source-id: f01aaa130b7e0005bf1ff21f65827fc24755b360
2020-07-17 10:27:04 -07:00
Yuxin Wu
488ee3790e Support @torch.jit.unused on a @torch.no_grad decorated function (#41496)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41496

use the wrapped function (instead of the wrapper) to obtain argument names

Test Plan:
```
buck test mode/dev-nosan //caffe2/test:jit -- 'test_unused_decorator \(test_jit\.TestScript\)'
```

Before:
```
> Traceback (most recent call last):
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py", line 3014, in test_unused_decorator
>     torch.jit.script(MyMod())
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_script.py", line 888, in script
>     obj, torch.jit._recursive.infer_methods_to_compile
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 317, in create_script_module
>     return create_script_module_impl(nn_module, concrete_type, stubs_fn)
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 376, in create_script_module_impl
>     create_methods_from_stubs(concrete_type, stubs)
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 292, in create_methods_from_stubs
>     concrete_type._create_methods(defs, rcbs, defaults)
> RuntimeError:
> Non-static method does not have a self argument:
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py", line 3012
>             def forward(self, x):
>                 return self.fn(x)
>                        ~~~~~~~ <--- HERE
>
```

Reviewed By: eellison

Differential Revision: D22554479

fbshipit-source-id: 03e432ea92ed973cc57ff044da80ae7a36f6af4c
2020-07-15 16:54:43 -07:00
Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
Kimish Patel
c5dcf056ee JIT pass for add relu fusion. (#39343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39343

Building on top of the previous PR that adds a fused add_relu op, this PR adds
a JIT pass that transforms the input graph, finding all fusable instances of add
+ relu and fusing them.

Test Plan:
python test/test_jit.py TestJit.test_add_relu_fusion

Imported from OSS

Differential Revision: D21822396

fbshipit-source-id: 12c7e8db54c6d70a2402b32cc06c7e305ffbb1be
2020-07-09 16:25:13 -07:00
Zino Benaissa
690946c49d Generalize constant_table from tensor only to ivalue (#40718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40718

Currently, all constants except tensors must be inlined during serialization;
tensors are stored in the constant table. This patch generalizes that capability
to any IValue. This is particularly useful for non-ASCII string literals that
cannot be inlined.

Test Plan: Imported from OSS

Differential Revision: D22298169

Pulled By: bzinodev

fbshipit-source-id: 88cc59af9cc45e426ca8002175593b9e431f4bac
2020-07-09 09:09:40 -07:00
Dmytro Dzhulgakov
8e2841781e [easy] Use torch.typename in JIT error messages (#41024)
Summary:
Noticed while trying to script a model that happened to have numpy values as constants. The missing `numpy` prefix in the error message was quite confusing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41024

Differential Revision: D22426399

Pulled By: dzhulgakov

fbshipit-source-id: 06158b75355fac6871e4861f82fc637c2420e370
2020-07-08 21:49:37 -07:00
Michael Suo
c93e96fbd9 [jit] move script-related implementation out of torch/jit/__init__.py (#40902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40902

See the bottom of this stack for context.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22360210

Pulled By: suo

fbshipit-source-id: 4275127173a36982ce9ad357aa344435b98e1faf
2020-07-08 11:38:34 -07:00
Elias Ellison
37a572f33e fix grad thrashing of shape analysis (#40939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40939

Previously, when we did shape analysis by running the op with representative inputs, we always set the grad property to false. This led to wrong static analysis when we created differentiable subgraphs, propagated shapes without also propagating requires_grad, and then uninlined them.

Test Plan: Imported from OSS

Differential Revision: D22394676

Pulled By: eellison

fbshipit-source-id: 254e6e9f964b40d160befe0e125abe1b7aa2bd5e
2020-07-06 17:12:13 -07:00
Elias Ellison
4af8424377 shape analysis fix for default dtype' (#40938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40938

already accepted in https://github.com/pytorch/pytorch/pull/40645

Test Plan: Imported from OSS

Reviewed By: jamesr66a, Krovatkin

Differential Revision: D22394675

Pulled By: eellison

fbshipit-source-id: 1e9dbb24a4cb564d9a68280d2166329ca9fb0425
2020-07-06 17:10:01 -07:00
Ailing Zhang
e75f12ac15 Check statstical diff rather than exact match for test_dropout_cuda. (#40883)
Summary:
There's is a TODO tracked in https://github.com/pytorch/pytorch/issues/40882

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40883

Reviewed By: pbelevich

Differential Revision: D22346087

Pulled By: ailzhang

fbshipit-source-id: b4789ca3a10f6a72c6e77276bde45633eb6cf545
2020-07-06 13:11:48 -07:00
Michael Suo
300a3aaaad [jit] move private implementation out of jit/__init__.py (#40807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40807

We pack a lot of logic into `jit/__init__.py`, making it unclear to
developers and users which parts of our API are public vs. internal. This
is one in a series of PRs intended to pull implementation out into
separate files, and leave `__init__.py` as a place to register the
public API.

This PR moves all the tracing-related stuff out, and fixes other spots up
as necessary. Followups will move other core APIs out.

The desired end-state is that we conform to the relevant rules in [PEP 8](https://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces). In particular:
- Internal implementation goes in modules prefixed by `_`.
- `__init__.py` exposes a public API from these private modules, and nothing more.
- We set `__all__` appropriately to declare our public API.
- All use of JIT-internal functionality outside the JIT are removed (in particular, ONNX is relying on a number internal APIs). Since they will need to be imported explicitly, it will be easier to catch new uses of internal APIs in review.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22320645

Pulled By: suo

fbshipit-source-id: 0720ea9976240e09837d76695207e89afcc58270
2020-07-05 22:01:11 -07:00
Will Constable
8ecd4f36aa fix __len__, __contains__, getitem inherited from interface class derived from nn container (closes #40603) (#40789)
Summary:
Define static script implementations of `__len__` and `__contains__` on any subclass derived from a type such as ModuleList, Sequential, or ModuleDict. Implement `__getitem__` for classes derived from ModuleDict.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40789

Reviewed By: eellison

Differential Revision: D22325159

Pulled By: wconstab

fbshipit-source-id: fc1562c29640fe800e13b5a1dd48e595c2c7239b
2020-07-04 15:45:18 -07:00
Nikolay Korovaiko
8223858cc1 shape inference of undefined for prim::grad (#40866)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40866

Reviewed By: pbelevich

Differential Revision: D22358988

Pulled By: Krovatkin

fbshipit-source-id: 7118d7f8d4eaf056cfb71dc0d588d38b1dfb0fc7
2020-07-04 14:10:22 -07:00
Nikolay Korovaiko
88c0d886e3 update requires_gard on loop inputs correctly (master) (#40926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40926

Reviewed By: eellison

Differential Revision: D22359471

Pulled By: Krovatkin

fbshipit-source-id: 823e87674e2d2917f075255ec926e0485972f4e2
2020-07-04 13:58:29 -07:00
Elias Ellison
e1428cf41b [JIT] fix unfold shape analysis (#40749)
Summary:
unfold on a 0-dimensioned tensor returns a 1-dim tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40749

Differential Revision: D22361481

Pulled By: eellison

fbshipit-source-id: 621597e5f97f6e39953eb86f8b85bb4142527a9f
2020-07-02 13:32:37 -07:00
Mikhail Zolotukhin
871bfaaba1 [JIT] Fix shape analysis for aten::masked_select. (#40753)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40753

The reference says that this op always returns a 1-D tensor, even if
the input and the mask are 0-D.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22300354

Pulled By: ZolotukhinM

fbshipit-source-id: f6952989c8facf87d73d00505bf6d41573eff2d6
2020-06-30 11:04:50 -07:00
Mikhail Zolotukhin
50d55b9f2b [JIT] Update type of the unsqueeze's output in shape analysis. (#40733)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40733

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22298537

Pulled By: ZolotukhinM

fbshipit-source-id: a5d4597ed10bcf14d1b28e914bf898d0cae5b4c0
2020-06-30 11:01:45 -07:00
Jeff Daily
ac8c8b028d [ROCm] restore jit tests (#40447)
Summary:
Remove `skipIfRocm` from most jit tests and enable `RUN_CUDA_HALF` tests for ROCm.

These changes passed more than three rounds of CI testing against the ROCm CI.

CC ezyang xw285cornell sunway513
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40447

Differential Revision: D22190711

Pulled By: xw285cornell

fbshipit-source-id: bac44825a2675d247b3abe2ec2f80420a95348a3
2020-06-27 01:03:59 -07:00
Will Constable
d855528186 wconstab/38034-sliced-sequential (#40445)
Summary:
Partial support for slicing of Sequential containers.

- works around missing Sequential slice functionality by converting to a tuple
- only supports iteration of the resulting tuple values, not a direct call() on the sliced Sequential
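
A sketch of the supported iteration pattern:

```
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = torch.nn.Sequential(
            torch.nn.Linear(4, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))

    def forward(self, x):
        # iterate the slice; calling the sliced Sequential directly is unsupported
        for layer in self.seq[1:]:
            x = layer(x)
        return x

torch.jit.script(Net())
```
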
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40445

Differential Revision: D22192469

Pulled By: wconstab

fbshipit-source-id: 61c85deda2d58f6e3bea2f1fa1d5d5dde568b9b5
2020-06-24 09:05:51 -07:00
Elias Ellison
6468bc4637 [JIT] script if tracing fix (#40468)
Summary:
Currently, torchvision annotates `batched_nms` with `torch.jit.script` so the function gets compiled when it is traced and ONNX will work. Unfortunately, this means we are eagerly compiling batched_nms, which fails if torchvision isn't built with `torchvision.ops.nms`. As a result, torchvision doesn't work on torch hub right now.

`_script_if_tracing` could solve our problem here, but right now it does not correctly interact with recursive compilation. This PR fixes that bug.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40468

Reviewed By: jamesr66a

Differential Revision: D22195771

Pulled By: eellison

fbshipit-source-id: 83022ca0bab6d389a48a478aec03052c9282d2b7
2020-06-23 17:14:28 -07:00
Jerry Zhang
cbd53bfee8 [jit] Remove unnecessary clone APIs for script::Module and RecursiveScriptModule (#40297)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40297

Test Plan: Imported from OSS

Differential Revision: D22191660

fbshipit-source-id: 4b338ca82caaca04784bffe01fdae3d180c192f4
2020-06-23 16:03:22 -07:00
Jerry Zhang
f652abc1dd [jit] Enable copy.deepcopy and copy.copy for RecursiveScriptModule (#32685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32685

att

Test Plan:
.

Imported from OSS

Differential Revision: D21220755

fbshipit-source-id: 5c71e9bb9f43032cf60563a9e67579118a8d7e33
2020-06-23 09:21:12 -07:00
Wanchao Liang
4b028a8e07 [jit] support pad_sequence/pack_sequence (#39844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39844

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D22026720

Pulled By: wanchaol

fbshipit-source-id: cc51ea77eff3689e319ec7e89a54c788646b5940
2020-06-19 19:03:14 -07:00
Mike Ruberry
4f761f325c Back out "[pytorch][PR] Removes dunder div"
Summary: NVIDIA's Apex is updating to no longer rely on this behavior, but we're reverting this Python2->Python3 update to unblock internal apex users.

Test Plan: Sandcastle + OSS CI.

Reviewed By: ngimel

Differential Revision: D22146782

fbshipit-source-id: f9483d2cbf9dc3a469ad48a6c863edea3ae51070
2020-06-19 18:31:20 -07:00
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00
Wanchao Liang
442ec1dd4e [test] split remaining quantization tests out of test_jit (#40144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40144

as titled, this splits the remaining quantization tests out of test_jit to reduce
the size of test_jit

Test Plan: Imported from OSS

Differential Revision: D22085034

Pulled By: wanchaol

fbshipit-source-id: 0c8639da01ffc3e6a72e6f470837786c73a6b3f0
2020-06-18 13:39:13 -07:00
Wanchao Liang
693ab77c00 [test] split onnx export test out of test_jit (#40143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40143

as titled, to reduce the size of test_jit

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22085036

Pulled By: wanchaol

fbshipit-source-id: 424f189fd3849c111d06ebe2e341da50d98fe0ec
2020-06-17 17:27:50 -07:00
Wanchao Liang
27d789500b [test] split tracer related tests out of test_jit (#40142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40142

test_jit is becoming huge again, which makes it hard for editors to load and
for new tests to be written; this splits out the tracer-related tests.

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22085035

Pulled By: wanchaol

fbshipit-source-id: 696bee84985ecfbfeac8e2ee5c27f1bdda8de394
2020-06-17 17:26:33 -07:00
Mike Ruberry
9d588f7ce2 Removes dunder div (#39151)
Summary:
BC-breaking note:

If a user is using one of these dunders directly, they will no longer be available. Users should update to Python 3-compatible dunders.

Original PR note:

`__div__` (and `__idiv__` and `__rdiv__`) are no longer special dunders in Python 3. This PR replaces them with the `__truediv__` (`__itruediv__`, `__rtruediv__`) dunders, since we no longer support Python 2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39151

Differential Revision: D22075713

Pulled By: mruberry

fbshipit-source-id: d318b47b51f7cc4c3728b1606a34d81e49ba0fa1
2020-06-16 23:02:20 -07:00
Shihao Xu
00651b8c93 [distribtued.nn] Implement TorchScript-compatible RemoteModule API (#37139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139

See design doc in https://github.com/pytorch/pytorch/issues/37136

ghstack-source-id: 105926270

Test Plan:
TODO:

- Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978
- Avoid generating the same template instances for Modules that are not scriptable.
- Remove "infer_module_interface_cls".
- Use Python format instead of a CodeTemplate.
- Use Python tempfile to track and delete the file. Does it work if there is a crash?

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork
```

buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes'

Differential Revision: D20499658

fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a
2020-06-15 19:07:35 -07:00
Nikita Shulga
c6b69a4e4d Delete Python <= 3.5 specific checks from the code (#39879)
Summary:
Remove PY3 and PY34 checks from `torch/testing/_internal/common_utils.py`
Remove PY35 global var from `torch.jit.annotations`
Always call `try_get_real_signature` in `torch/jit/annotations.py`
Use `map` instead of `imap`, since Python 2 is no longer supported, so `map` is always lazy.
Remove all pre-Python-3.6 checks from `torch/_six.py` and `torch/_appdirs.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39879

Differential Revision: D22037811

Pulled By: malfet

fbshipit-source-id: af0c79f976569c2059d39ecb49c6b8285161734f
2020-06-15 08:16:06 -07:00
Nikolay Korovaiko
7f55197a57 Peel Loop (#39434)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39434

Differential Revision: D21857037

Pulled By: Krovatkin

fbshipit-source-id: 6583da167fe93d96e93f1c3d71f46f94e7f4e982
2020-06-10 13:48:18 -07:00
Yanan Cao
c22bbb2124 [JIT] Add Type::repr_str to return human-readable str (#39544)
Summary:
Clearly expressing that a type is inferred by PyTorch, rather than explicitly annotated by the user, makes many error messages more user-friendly.

Currently, Type has two string conversion methods: str() for IR printing and python_str() for serialization and error message generation. If we want to include more information in type printing while maintaining serialization/deserialization correctness, we need to split python_str() into annotation_str() and repr_str().

annotation_str() is solely responsible for serialization; it strictly matches the format of a Python type annotation. repr_str() is responsible for generating a human-readable error message that includes information like "this type is inferred, not explicitly annotated".

Closes https://github.com/pytorch/pytorch/issues/39449
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39544

Differential Revision: D21978759

Pulled By: gmagogsfm

fbshipit-source-id: 733566f5a62e748b5ca4bb3c5943ebb6d5b664d0
2020-06-10 12:01:24 -07:00
Elias Ellison
428bc90978 [JIT] add dtype as type annotation (#39741)
Summary:
Make torch.dtype resolve as a type annotation.
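
A minimal sketch:

```
import torch

@torch.jit.script
def make(d: torch.dtype) -> torch.Tensor:
    return torch.zeros(3, dtype=d)

print(make(torch.float16).dtype)  # torch.float16
```
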
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39741

Reviewed By: jamesr66a

Differential Revision: D21956469

Pulled By: eellison

fbshipit-source-id: 492acd7403fa827a2e2c87fd08d31450fcb3a45e
2020-06-09 15:01:00 -07:00
James Reed
f1c60c04b8 [JIT] Fix module interface test (#39592)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39592

Test Plan: Imported from OSS

Differential Revision: D21909659

Pulled By: jamesr66a

fbshipit-source-id: 831ae6b158041d4241209cee50f7a4d09cd2fcb2
2020-06-09 12:13:58 -07:00
Nikolay Korovaiko
97a2918a07 reduce number of bailout nodes (#38281)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38281

Differential Revision: D21665509

Pulled By: Krovatkin

fbshipit-source-id: c2c34b759aec30d0a161e582030ba994192ee4ec
2020-06-05 13:45:37 -07:00
Yanan Cao
0031108b60 Support torch.Tensor subclass (like Parameter) input. (#39487)
Summary:
Currently, torch.Tensor subclasses (like torch.nn.Parameter) aren't supported type annotations for TorchScript inputs. This PR allows them to be treated like torch.Tensor for compilation.
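A minimal sketch of the now-accepted annotation (names here are illustrative):

```
import torch

@torch.jit.script
def scale(p: torch.nn.Parameter, k: float) -> torch.Tensor:
    # the Parameter annotation is treated like torch.Tensor during compilation
    return p * k

print(scale(torch.nn.Parameter(torch.ones(2)), 3.0))
```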

Closes https://github.com/pytorch/pytorch/issues/38235
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39487

Differential Revision: D21885827

Pulled By: gmagogsfm

fbshipit-source-id: 1ec51829b132b7b0293a6c526d73497b23dae113
2020-06-05 11:58:20 -07:00
Edward Yang
da2004e132 Upgrade lint. (#39483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39483

I fixed all of the new errors that occurred because of the upgrade.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21884575

Pulled By: ezyang

fbshipit-source-id: 45c8e1f1ecb410c8d7c46dd3922ad70e982a0685
2020-06-04 12:56:43 -07:00
Elias Ellison
49b69b2ade [JIT] fix broadcasting lists of ints (#39481)
Summary:
Previously, on conversion from Python -> C++ it was cast to a double list through bad copy-pasta. It's pretty unusual for someone to script a broadcasting list function directly, since it's an internal API, so it was unlikely to affect anyone.

Fix for https://github.com/pytorch/pytorch/issues/39450
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39481

Reviewed By: jamesr66a

Differential Revision: D21870557

Pulled By: eellison

fbshipit-source-id: e704e5e87d2702a270b7d65c4df444246a134480
2020-06-04 12:16:41 -07:00
Xiang Gao
ebd4125e7e [JIT] Make torch.unique_consecutive compatible (#39339)
Summary:
A `unique_consecutive` version of https://github.com/pytorch/pytorch/pull/38156
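A minimal sketch of the scripted call (illustrative only):

```
import torch

@torch.jit.script
def runs(x: torch.Tensor):
    # constant keyword flags fix the output arity at compile time
    return torch.unique_consecutive(x, return_counts=True)

values, counts = runs(torch.tensor([1, 1, 2, 2, 2, 3]))
print(values, counts)  # tensor([1, 2, 3]) tensor([2, 3, 1])
```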
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39339

Differential Revision: D21823997

Pulled By: eellison

fbshipit-source-id: d14596a36ba36497e296da5a344e0376cef56f1b
2020-06-02 14:54:29 -07:00
Meghan Lele
f4365cf5ba [JIT] Add support for saving/loading of lowered modules (#38893)
Summary:
**Summary**
This commit adds support for seralization and deserialization of
`ScriptModules` that have been lowered to a specific backend. Nothing
special was required to accomplish this, other than removing some code
in `unpickler.cpp` that guarded against the deserialization of `Any`
type objects. Now that lists and dicts are tagged with their types
during serialization, this check is no longer necessary.

**Test Plan**
This commit adds a unit test for testing that a lowered module still
produces the same results as Python and regular JIT after saving and
loading.

**Fixes**
This pull request fixes part of https://github.com/pytorch/pytorch/issues/37841.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38893

Differential Revision: D21825813

Pulled By: SplitInfinity

fbshipit-source-id: 77a7b84504e0dddf14c89b3ed5dd6b438c086f66
2020-06-01 23:50:52 -07:00
xuewenc
7836eaceee [JIT] JIT should let people know we inferred an argument as a tensor (#38527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38527

This PR solves issue #37200.
The error is encountered during IR generation while trying to resolve the call to `sum`.
We should let the user know it inferred the value for argument 'dim' to be of type 'Tensor'
because it was not annotated with an explicit type.

Test Plan:
Add code to reproduce the issue (#37200)
`python test/test_jit.py TestJit.test_inferred_as_tensor`

Differential Revision: D21743876

Pulled By: superwizard2019

fbshipit-source-id: 370ca32afea4d53b44d454f650f7d3006f86bcc6
2020-05-29 10:41:50 -07:00
Mike Ruberry
13120bf677 Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to either require both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
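A hypothetical test illustrating the new convention (not taken from the PR):

```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestTolerances(TestCase):
    def test_close(self):
        a = torch.tensor([1.0])
        b = torch.tensor([1.0 + 1e-6])
        # atol and rtol must be given together (or both omitted); msg is kwarg-only
        self.assertEqual(a, b, atol=1e-5, rtol=1e-5, msg="values diverged")

if __name__ == "__main__":
    run_tests()
```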
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21740237

Pulled By: mruberry

fbshipit-source-id: acbc027aa1d7877a49664d94db9a5fff91a07042
2020-05-27 06:31:07 -07:00
Nikolay Korovaiko
9b95f757af move num_profiled_runs to common_utils (#38687)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38687

Differential Revision: D21634080

Pulled By: Krovatkin

fbshipit-source-id: 55513124caf3885e475ffecd9d9f3dbc4729a573
2020-05-27 01:14:01 -07:00
Rohan Varma
63e545e0fe Revert D21717199: [pytorch][PR] Updates assertEqual to require atol and rtol, removes positional atol
Test Plan: revert-hammer

Differential Revision:
D21717199

Original commit changeset: 9feb856f94ee

fbshipit-source-id: bfde9c39a5ce99f0ca6183a7dde703c65b7c8259
2020-05-26 18:23:59 -07:00
Mike Ruberry
6ddca30b2d Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to either require both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21717199

Pulled By: mruberry

fbshipit-source-id: 9feb856f94eee911b44f6c7140a1d07c1b026d3a
2020-05-26 08:30:23 -07:00
Elias Ellison
cd5d7a34b8 [JIT] Factor out aliases to separate test (#38746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38746

Factors out testing of op alias normalization so that there is a registry used for tests.

Test Plan: Imported from OSS

Differential Revision: D21673107

Pulled By: eellison

fbshipit-source-id: e06653cdf24f14a4253dd054e4d402d171d16a11
2020-05-21 21:47:24 -07:00
Elias Ellison
f90dc741eb [JIT] Normalize op aliases (#38735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38735

Follow up to my comment https://github.com/pytorch/pytorch/pull/36597/#issuecomment-613674329

This adds a pass to convert op aliases into a normalized form. Having two ops generated in our IR that do the same thing makes the IR harder for downstream consumers of the IR, such as TorchScript passes but also ONNX, glow, etc.

Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as glaringlee of the C++ API thinks we should.

Bike shedding: maybe this should be `CanonicalizeOps` instead

Test Plan: Imported from OSS

Differential Revision: D21673108

Pulled By: eellison

fbshipit-source-id: c328618907de1af22e07f57fd27fa619978c2817
2020-05-21 21:47:17 -07:00
Mike Ruberry
64584573f9 Updates tests for integer division deprecation (#38621)
Summary:
Updates our tests in preparation for integer division using torch.div and torch.addcdiv throwing a runtime error, by avoiding integer division using torch.div. This creates a brief period where integer division using torch.div is untested, but that should be OK (since it will soon throw a runtime error).

These callsites were identified using https://github.com/pytorch/pytorch/issues/36897.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38621

Differential Revision: D21612823

Pulled By: mruberry

fbshipit-source-id: 749c03a69feae02590b4395335163d9bf047e162
2020-05-19 19:28:00 -07:00
Ilia Cherniavskii
235f62417d Fixes for profiling JIT code (#38453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38453

Two fixes:
 - RecordFunction in JIT interpreter should exist during the execution
   of the frame, and not just when we enter the frame
 - When creating a JIT continuation in wait instruction, we'd want to
   preserve the original thread local context, right now when we resume
   execution in continuation we preserve the thread local state of the
   thread that set future value (i.e. executed a forked task)

Test Plan: unittest, CI

Reviewed By: ngimel

Differential Revision: D21565959

Pulled By: ilia-cher

fbshipit-source-id: 206b98e3bfb0052fc8e4031da778e372cc71afc1
2020-05-19 15:50:42 -07:00
Michael Voznesensky
f6f1384811 [JIT] Refactor attributes to support buffers and parameters as first class citizens, add support for iterating over named_buffers() (#37905)
Summary:
First part of https://github.com/pytorch/pytorch/issues/36211 - still a WIP, but asking for commentary to ensure this is the direction we want to go in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37905

Differential Revision: D21633735

Pulled By: voznesenskym

fbshipit-source-id: f4e4302e40114513776c9e48867a90d72049e2e9
2020-05-18 23:23:43 -07:00
Elias Ellison
daa85cfe2e [JIT] Exit Transform Rewrite (#38282)
Summary:
After an early return, we conditionalize all further execution. This means that currently the pattern of
`if return elif return elif return` generates better code than `if return if return if return`. It's obviously not good to have semantically equivalent code generate worse IR, so we should rewrite the graph to handle this case. This came up in https://github.com/pytorch/pytorch/pull/37171

```
torch.jit.script
def test_foo(x: bool, y: bool):
    if x:
        return 1
    return 2
print(test_foo.code)
```
generates:
```
def test_foo(x: bool,
    y: bool) -> int:
  _0 = uninitialized(int)
  if x:
    _1, _2 = True, 1
  else:
    _1, _2 = False, _0
  if _1:
    _3 = _2
  else:
    _3 = 2
  return _3
```
while
```
torch.jit.script
def test_foo(x: bool, y: bool):
    if x:
        return 1
    else:
        return 2
print(test_foo.code)
```
generates:
```
def test_foo(x: bool,
    y: bool) -> int:
  if x:
    _0 = 1
  else:
    _0 = 2
  return _0
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38282

Differential Revision: D21576733

Pulled By: eellison

fbshipit-source-id: 80cf1ad7fbda6d8d58557abbfb21c90eafae7488
2020-05-15 12:22:28 -07:00
Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
Will Feng (FAIAR)
38d141ede5 Support having a different forward method when we are not in scripting mode (#38158)
Summary:
TorchScript currently doesn’t support `*args, **kwargs` in method signatures, which are extensively used in DPER3 low-level modules’ forward methods. In order to make DPER3 low-level modules scriptable, I was thinking about a solution of having a forward method *only* for TorchScript, and replacing the forward method when we are not in scripting mode.

This solution works today, and I would like to add a test to make sure it will always work in the future.
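One way this pattern can look (a sketch under the description above, not the DPER3 code):

```
import torch

class MyModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

    def _eager_forward(self, *args, **kwargs):
        # free-form signature: fine in eager mode, not scriptable
        return args[0] + 1

scripted = torch.jit.script(MyModule())  # scripting sees the annotated forward

m = MyModule()
m.forward = m._eager_forward             # eager instances use the flexible method
print(m(torch.ones(2), extra="ignored"))
```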
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158

Differential Revision: D21485657

Pulled By: yf225

fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
2020-05-14 12:13:06 -07:00
David Reiss
7f7fdb1013 Remove a use of checkScript(str) (#35623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.

Test Plan: CI

Differential Revision: D20842874

Pulled By: dreiss

fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
2020-05-14 10:07:58 -07:00
Hong Xu
336e1ec592 Clean up error handling in is_nonzero and where in TensorCompare.cpp (#38150)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38150

Differential Revision: D21539736

Pulled By: ezyang

fbshipit-source-id: e390c12f5948192a552d66dcd1bb89b2cb45f170
2020-05-13 20:19:40 -07:00
Elias Ellison
8d883f5c7c [JIT] [Easy] Add location to implicit conversions (#38442)
Summary:
Previously, we weren't adding the location to implicit conversions, so the error message wouldn't show the location when these ops failed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38442

Differential Revision: D21563500

Pulled By: eellison

fbshipit-source-id: 19dd786ab8580f11ed919aac669efeed0ef52dcb
2020-05-13 18:02:41 -07:00
Michael Suo
2efa7e04c2 [jit] move torchbind tests to separate file (#37473)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37473

Test Plan: Imported from OSS

Differential Revision: D21297541

Pulled By: suo

fbshipit-source-id: 65c48094b1f26fbbf251021957257ce04279922b
2020-05-13 17:37:00 -07:00
anjali411
1676c7d618 Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only (#38399)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38399

Test Plan: Imported from OSS

Differential Revision: D21555941

Pulled By: anjali411

fbshipit-source-id: ea9f5a76590c5bab3df6a540617b074238bfb535
2020-05-13 16:41:09 -07:00
Michael Suo
167a978a03 Fix method stub creation for function attributes (#37994)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37994

Before, reassigning a method in a module (like `forward = _forward`)
didn't work, because we look at the function object's name for our def
name when building the AST. Make that overridable to handle cases like
reassignment.

Test Plan: Imported from OSS

Differential Revision: D21444535

Pulled By: suo

fbshipit-source-id: 4f045f18b5a146edc8005689af525d7d7ed8dd5f
2020-05-12 23:20:35 -07:00
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
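A minimal sketch of the scripted call (illustrative only):

```
import torch

@torch.jit.script
def uniq(x: torch.Tensor):
    # constant flags pick the overload via boolean_dispatch
    return torch.unique(x, sorted=True, return_counts=True)

print(uniq(torch.tensor([3, 1, 3, 2, 3])))  # (tensor([1, 2, 3]), tensor([1, 1, 3]))
```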
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Kimish Patel
f954dd7823 Add dropout removal pass. (#38253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38253

This pass removes dropout and dropout_ nodes when training is false. It
requires the freeze_module pass (which does both inlining and constant
propagation) to have been run; without it, the training variable remains
an attribute instead of a constant.
ghstack-source-id: 103939141

Test Plan: python test/test_jit.py TestScript.test_remove_dropout

Reviewed By: dreiss

Differential Revision: D21505863

fbshipit-source-id: 42ea45804e4653b625b6a254c8d8480757264aa8
2020-05-12 14:38:34 -07:00
James Reed
a553935e3c [JIT] Expose magic methods on script::Object (#38167)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38167

Test Plan: Imported from OSS

Differential Revision: D21486709

Pulled By: jamesr66a

fbshipit-source-id: 17b44d979fc658768b0d64f7d8af6fb684043ea3
2020-05-11 15:01:15 -07:00
Vitaly Fedyunin
57d01be92b Replacing assertEqual with assertEqualIgnoreType wherever types mismatch (#38102)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38102

Test Plan: Imported from OSS

Differential Revision: D21477060

Pulled By: VitalyFedyunin

fbshipit-source-id: 25e0fd837ca9bfccf0ce994c80f7790c894096d4
2020-05-09 14:48:55 -07:00
Ailing Zhang
e84aa0211d [JIT]Support List variable in adv indexing. (#37966)
Summary:
Followup of https://github.com/pytorch/pytorch/issues/37848. I realized that it's better to condition on the `Value` type instead of the token type, so now it also supports indexing through list variables (it used to be list literals only).
Also, apparently our eager frontend accepts indexing with a float list as well, so this matches that edge-case behavior too.
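A minimal sketch with a list variable (illustrative only):

```
import torch

@torch.jit.script
def pick(x: torch.Tensor) -> torch.Tensor:
    idx = [0, 2]   # a list variable, not just a literal
    return x[idx]  # behaves like x[torch.tensor([0, 2])]

print(pick(torch.arange(5)))  # tensor([0, 2])
```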
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37966

Reviewed By: suo

Differential Revision: D21439642

Pulled By: ailzhang

fbshipit-source-id: cedb8431ef38747d4aa9909a6bbf8e954dbe0e25
2020-05-08 15:40:11 -07:00
James Reed
c1e7758b5e Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38101

Original commit changeset: 29e8a4d3b8bf
ghstack-source-id: 103730417

Test Plan: waitforsadcastle

Differential Revision: D21471381

fbshipit-source-id: a922cdf31ba32021e7264ae1454c646c0bfd7ef4
2020-05-08 10:53:06 -07:00
Ailing Zhang
9232356e5f remove uses of type() and type_as() part 1. (#38029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38029

Differential Revision: D21468523

Pulled By: ailzhang

fbshipit-source-id: 14b7185d43eb03f630cfaa2d70e02d637ff8551b
2020-05-08 08:16:24 -07:00
Nikita Shulga
4bc0a7f86a Revert D20229168: [quantization] Use torchbind for Linear PackedParams
Test Plan: revert-hammer

Differential Revision:
D20229168

Original commit changeset: 3607cac9aa5b

fbshipit-source-id: 29e8a4d3b8bffd95ff6a58b46c4f1c1e23770304
2020-05-07 19:47:45 -07:00
James Reed
eaf9b28c55 [quantization] Use torchbind for Linear PackedParams (#34140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34140

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D20229168

Pulled By: jamesr66a

fbshipit-source-id: 3607cac9aa5b4b044572329742baed03350491c6
2020-05-07 19:03:44 -07:00
eellison
d5df055bbb [WIP][JIT] Add JIT backend registration API (#35833)
Summary:
**Summary**
This commit adds `torch::jit::RegisterBackend`, an API that allows
external backends to be registered for the execution of JIT subgraphs
outside the JIT interpreter. In order to register an external backend,
one must extend the provided abstract class `PyTorchBackendInterface` and provide
two additional functions: one that creates an instance of the aforementioned subclass
of `PyTorchBackendInterface`, and another that preprocesses a `ScriptModule` so that
it can run on the backend. Then, a `ScriptModule` that can compile and execute a given
JIT subgraph using the functions provided at registration time is generated
for each registered backend.

**Testing**
This commit adds a unit test that uses a minimal test backend
to make sure that the registration endpoint and generated
`ScriptModule` work.

```
$ python test/test_jit.py TestBackends
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.183s

OK

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35833

Differential Revision: D21231955

Pulled By: SplitInfinity

fbshipit-source-id: 452db1123d0e5d83f97fe5da8a00fdfdb50dbef9
2020-05-07 18:15:26 -07:00
Elias Ellison
f5b3125af7 [JIT] Peephole optimize list ops (#37612)
Summary:
Peephole optimize  `len(li)` and `li[index]` patterns.

This changes the Profiled Graph IR for the following tests:
```
(Test Name, Num ifs loops, Num non-tensor nodes)
Before:
('test_nn_Conv1d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv2d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv1d_circular_stride2_pad2', 5, 31)
('test_nn_Conv2d_circular_stride2_pad2', 5, 31)
('test_nn_Conv3d_circular_stride2_pad2', 5, 31)
('test_nn_Conv1d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv2d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv3d_replicate_stride2_pad2', 3, 14)
After
('test_nn_Conv1d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv2d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv1d_circular_stride2_pad2', 0, 4)
('test_nn_Conv2d_circular_stride2_pad2', 0, 7)
('test_nn_Conv3d_circular_stride2_pad2', 0, 10)
('test_nn_Conv1d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv2d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv3d_replicate_stride2_pad2', 0, 2)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37612

Differential Revision: D21352676

Pulled By: eellison

fbshipit-source-id: f8a0e7653b7a6a4c769f075de9b3044242ca9336
2020-05-06 15:55:18 -07:00
Elias Ellison
28ac5cdc91 fix profiling test (#37961)
Summary:
this is failing in the profiling_executor job
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37961

Differential Revision: D21434341

Pulled By: eellison

fbshipit-source-id: b34f94b1595ef6f6edee76cd200f951a2ef21f22
2020-05-06 15:04:44 -07:00
Elias Ellison
0e3a05ec00 [JIT] rename enable_profiling_mode to enable_profiling_mode_for_profiling_tests (#37825)
Summary:
The existing context manager only conditionally enabled profiling mode, which was counterintuitive. When we changed the default executor, it broke internal benchmarking as a result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37825

Differential Revision: D21404611

Pulled By: eellison

fbshipit-source-id: 306b3c333ef4eb44ab6a6e5ab4e0682e5ce312ce
2020-05-06 11:30:02 -07:00
Ailing Zhang
dd618216c5 [JIT]Support adv indexing using list. (#37848)
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`

This PR adds support for indexing through list which is equivalent to tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`

Note: for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616). Now that it's fixed, we probably want to mark this as BC-breaking.
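A minimal sketch of the scripted behavior (illustrative only):

```
import torch

@torch.jit.script
def gather(y: torch.Tensor) -> torch.Tensor:
    # list indices now compile like tensor indices
    return y[[0, 1], [0, 1]]

print(gather(torch.arange(9).reshape(3, 3)))  # tensor([0, 4])
```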
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848

Reviewed By: suo

Differential Revision: D21409840

Pulled By: ailzhang

fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
2020-05-06 10:44:48 -07:00
Jerry Zhang
70f375becf [quant] ConvPackedParams with TorchBind (#35923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923

(Note: this ignores all push blocking failures!)

Test Plan:
tbd

Imported from OSS

Differential Revision: D20957089

fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0
2020-05-05 20:18:36 -07:00
Michael Suo
bd220b336b [jit] fix trace checking reporting divergent names (#37842)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842

Fixes https://github.com/pytorch/pytorch/issues/23993.

Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```

This is not great if you are, e.g. trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

Test Plan: Imported from OSS

Differential Revision: D21406478

Pulled By: suo

fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
2020-05-05 13:39:41 -07:00
Elias Ellison
23d0441da7 [JIT] Fix GetAttr inconsistency (#37424)
Summary:
We were previously only looking at class attributes, so that didn't include methods etc, and would silently give wrong semantics. This makes hasAttr go through the same resolution as our other attribute lookups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37424

Differential Revision: D21282633

Pulled By: eellison

fbshipit-source-id: 8e970f365c2740d137a02331739c2ed93747b918
2020-05-05 09:06:51 -07:00
Michael Suo
804e32a467 split out docs tests into separate job (#37793)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37793

Test Plan: Imported from OSS

Differential Revision: D21392798

Pulled By: suo

fbshipit-source-id: 172fb0522d0b168ca19a382e5fb1eb87b6390acc
2020-05-04 17:58:04 -07:00
Michael Suo
b7f258bbd3 add fmt to libtorch_python.so (#37560)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37560

Test Plan: Imported from OSS

Differential Revision: D21320059

Pulled By: suo

fbshipit-source-id: 95cfe7cf26c515fdfcb4621cc58266d838a38a3e
2020-05-04 10:14:37 -07:00
Nikolay Korovaiko
831c8f362f fix the incorrect merge of profiling information of two tensor types for the same value (#36806)
Summary:
As part of moving to dynamic shapes, we are now passing `frame_id` to each profiling callback. The implementation of that requires copying profiling callbacks into the Interpreter, so the `first`s are actually different for every run. The dynamic-shapes merging algorithm won't be using `first`, but in the meantime, while we get there, this should be a good-enough fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36806

Differential Revision: D21307173

Pulled By: Krovatkin

fbshipit-source-id: 7dade56ebcc72ebd40bb7f3d636c7b83c99b628f
2020-05-01 12:53:25 -07:00
Michael Voznesensky
91e74fd843 [JIT] Adds a code_with_constants method to module printing (#37586)
Summary:
Closes https://github.com/pytorch/pytorch/issues/36625
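A minimal usage sketch (illustrative only):

```
import torch

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 2

sm = torch.jit.script(M())
code, consts = sm.code_with_constants  # source text plus its constant table
print(code)
print(consts)
```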
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37586

Differential Revision: D21331385

Pulled By: suo

fbshipit-source-id: 752e63eac8bdd06c6719efb972cdc832ad7c1535
2020-04-30 20:44:01 -07:00
James Reed
d3d10cc14a Add tests for lower_graph and fix unpack() ops dispatch (#37540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37540

ghstack-source-id: 103169129

Test Plan:
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph_conv \(test_jit\.TestScript\)'
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph \(test_jit\.TestScript\)'

Differential Revision: D21313433

fbshipit-source-id: bb9942272784e517b07537ee4c149b9dc4df4c2a
2020-04-30 10:55:05 -07:00
Michael Suo
896f8130a6 Revert D21297549: [jit] fix trace checking reporting divergent names
Test Plan: revert-hammer

Differential Revision:
D21297549

Original commit changeset: 981d5879a4a2

fbshipit-source-id: 9be6e88007c644914973a305f9e7a961ef11a815
2020-04-29 16:16:44 -07:00
Michael Suo
4bfa51d405 [jit] fix trace checking reporting divergent names (#37464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464

Fixes https://github.com/pytorch/pytorch/issues/23993.

There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g. trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.

Test Plan: Imported from OSS

Differential Revision: D21297549

Pulled By: suo

fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
2020-04-28 23:52:57 -07:00
Elias Ellison
a55d80e1c5 [JIT] remove dominated guards of functional values (#37105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37105

If a value isn't mutated anywhere and is guarded by a node, then we can remove all other guards that are dominated by the first guard.

This reduces the number of (test name, Ifs/Loops, non-tensor nodes excluding getAttr and Bailouts) from the previous PR for the following tests:
```
Before:  ('upsample', 0, 13)
After:  ('upsample', 0, 5)
Before:  ('upsample', 0, 2)
After:  ('upsample', 0, 1)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 18)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 20)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 13)
After:  ('interpolate', 1, 11)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 15)
After:  ('interpolate', 1, 13)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 25)
After:  ('interpolate', 1, 21)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 27)
After:  ('interpolate', 1, 23)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('test_nn_BatchNorm1d_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input', 1, 2)
Before:  ('test_nn_BatchNorm1d_affine_simple_average', 2, 5)
After:  ('test_nn_BatchNorm1d_affine_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm1d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm1d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm2d', 2, 3)
After:  ('test_nn_BatchNorm2d', 1, 2)
Before:  ('test_nn_BatchNorm2d_2d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm2d_2d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm2d_momentum', 2, 3)
After:  ('test_nn_BatchNorm2d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm2d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm2d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm2d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm2d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm3d', 2, 3)
After:  ('test_nn_BatchNorm3d', 1, 2)
Before:  ('test_nn_BatchNorm3d_3d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm3d_3d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm3d_momentum', 2, 3)
After:  ('test_nn_BatchNorm3d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm3d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm3d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm3d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm3d_zero_batch', 1, 2)
Before:  ('test_nn_Transformer', 127, 467)
After:  ('test_nn_Transformer', 122, 450)
```

Test Plan: Imported from OSS

Differential Revision: D21215652

Pulled By: eellison

fbshipit-source-id: 0365fc2e351caca7e1ccaa25428908a26e3f5343
2020-04-28 23:28:18 -07:00
Elias Ellison
45e8451b33 optimize is_float_point calls (#37012)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37012

Removes an if statement in `torch.nn.functional.affine_grid`

Test Plan: Imported from OSS

Differential Revision: D21160755

Pulled By: eellison

fbshipit-source-id: 8b030936c9fbdb05b44abc9f254805d102f2acc2
2020-04-28 23:28:12 -07:00
Elias Ellison
cde1350a5d Add support for generic list constants (#36953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36953

Add support for generic lists as a constant. Generic dicts & tuples are already implemented. This is a pretty common pattern and cuts down on the number of non-tensor nodes executed in interpolate tests.

Test Plan: Imported from OSS

Differential Revision: D21160761

Pulled By: eellison

fbshipit-source-id: 1e6b7b25b7580f09067794772d44e615601c60c4
2020-04-28 23:28:07 -07:00
Elias Ellison
92129956cf Add size peephole optimization (#36758)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36758

Test Plan: Imported from OSS

Differential Revision: D21160760

Pulled By: eellison

fbshipit-source-id: 9cdb8eeffa71fb4670a811347ae4fad2a82ae1d8
2020-04-28 23:27:52 -07:00
Michael Suo
92b9089fd9 [jit] Fix pretty printing of functions (#37432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37432

Fixes https://github.com/pytorch/pytorch/issues/36803.

Test Plan: Imported from OSS

Differential Revision: D21284735

Pulled By: suo

fbshipit-source-id: 8c673099b3171070bff80fd1defc91487f66d4b3
2020-04-28 21:30:49 -07:00
mattip
ec8006cc16 [ONNX] fix provider_version and add consistency test (#36797)
Summary:
Forward-port the test from PR gh-36795; xref issue gh-32561.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36797

Differential Revision: D21257034

Pulled By: ezyang

fbshipit-source-id: d217da0e74f00a433c904defc0bf3eb5f594fd5e
2020-04-27 11:00:23 -07:00
Nikita Shulga
47c4dca1ab Remove python-2 or python<3.5 checks from unit tests (#37252)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37252

Test Plan: CI

Differential Revision: D21241083

Pulled By: malfet

fbshipit-source-id: 44164b822f7905288abb2beda0175d2162d86143
2020-04-24 17:42:04 -07:00
Zachary DeVito
b6bb644e41 Fix long line splitting issue in python_print (#37088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37088

For an inlined expression tree like `(e_0, (e_1, e_long))` the previous
algorithm only scanned the same statement as `e_long`, splitting the
inlined expressions across lines. Because it did not scan `e_0`, `e_0`
would still get emitted inline, causing it to reverse order with `e_1` and
`e_long`. The new algorithm scans starting at `e_long` and going all
the way back up the expression until it reaches the end of the inlined
statement. Caching of what has already been scanned has been added so that
if there were a second long expression `e_long2` after `e_long`, it would not
rescan and re-inline the statements that were already split.

Test Plan: Imported from OSS

Differential Revision: D21180394

Pulled By: zdevito

fbshipit-source-id: 4d142c83a04c89a47d04282f67a513f82cf153c0
2020-04-24 15:14:39 -07:00
moto
5a27ec09b8 Add Inverse Short Time Fourier Transform in ATen native (#35569)
Summary:
Ported `torchaudio`'s implementation (test, and documentation as well) to ATen.

Note
 - Batch packing/unpacking is performed in Python. ATen implementation expects 4D input tensor.
 - `hop_length` is initialized in the same way as in the `stft` implementation. [Torchaudio's version tried to mimic the same behavior but was slightly different](7da61a4bee/torchaudio/functional.py (L152-L157)).

Closes https://github.com/pytorch/pytorch/issues/34827
Relates https://github.com/pytorch/pytorch/issues/3775
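A minimal round-trip sketch (note: `return_complex` is from torch releases newer than this commit):

```
import torch

x = torch.randn(4000)
n_fft = 400
window = torch.hann_window(n_fft)
spec = torch.stft(x, n_fft=n_fft, window=window, return_complex=True)
recon = torch.istft(spec, n_fft=n_fft, window=window, length=x.numel())
print(torch.allclose(x, recon, atol=1e-4))  # near-perfect reconstruction
```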
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35569

Differential Revision: D21178090

Pulled By: mthrok

fbshipit-source-id: 2701a8b241a36a6fb1b740c2fb2b07cb938185d4
2020-04-24 12:14:55 -07:00
Vishwak Srinivasan
fd5b5cd604 Allowing casting str to int in JIT (#36016)
Summary:
Changelog:
- Allow `int(str)` in TorchScript (see the sketch below)
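A minimal sketch (illustrative only):

```
import torch

@torch.jit.script
def parse(s: str) -> int:
    return int(s)  # int(str) now compiles in TorchScript

print(parse("42") + 1)  # 43
```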
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36016

Test Plan:
- Added tests in test_jit.py

Closes https://github.com/pytorch/pytorch/issues/35948

Differential Revision: D21076438

Pulled By: driazati

fbshipit-source-id: d0753dc0e1c79f4f943c303b58b2d228856ba793
2020-04-23 14:26:24 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Mikhail Zolotukhin
359e7f4bba Teach IRParser to parse strides along with sizes in a tensor type. (#36951)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36951

Test Plan: Imported from OSS

Differential Revision: D21139940

Pulled By: ZolotukhinM

fbshipit-source-id: b56a1fddfc9de4684da3ba9a462e344d0985e8b6
2020-04-21 17:27:15 -07:00
Mike Ruberry
bcdb0727c2 Revert D20907254: Fix long line splitting issue in python_print
Test Plan: revert-hammer

Differential Revision:
D20907254

Original commit changeset: ebfc1a4eefc2

fbshipit-source-id: 76440a8649a17728c50e2f3eeb3744a2245f6daf
2020-04-21 16:24:32 -07:00
Zachary DeVito
bf676682e7 Fix long line splitting issue in python_print (#36188)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36188

* Need to remove the n^2 behavior for scanning whether to split or not;
  otherwise long inline chains will take a long time re-scanning.

Test Plan: Imported from OSS

Differential Revision: D20907254

Pulled By: zdevito

fbshipit-source-id: ebfc1a4eefc26d5806381e7afd75b7a9cd4cde97
2020-04-21 15:46:42 -07:00
Mike Ruberry
71ec8b2002 Switches test_jit to use float32 as its default scalar type (#36982)
Summary:
Our test suite used to set double as its default scalar type, and when it was switched to not do so (to be more consistent with how users experience PyTorch), a few tests still had to set the default scalar type to double to function properly. Now that the jit no longer creates double tensors so frequently, it appears that test_jit no longer needs to set double as its default scalar type either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36982

Differential Revision: D21152120

Pulled By: mruberry

fbshipit-source-id: ea6d3c1ad55552dc5affa1fe1bd0e5189849e6d7
2020-04-21 11:23:28 -07:00
Brian Vaughan
54ed6fd3ee Use both absolute and relative tolerance in testing (#34258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34258

This PR allows both atol and rtol to be specified, uses defaults based on the prior analysis (spreadsheet attached to https://github.com/pytorch/pytorch/pull/32538), but retains the absolute tolerance behavior in cases where precision was previously specified explicitly.

Test Plan: Imported from OSS

Differential Revision: D21110255

Pulled By: nairbv

fbshipit-source-id: 57b3a004c7d5ac1be80ee765f03668b1b13f4a7e
2020-04-19 06:16:49 -07:00
Wanchao Liang
24aac32171 [jit] Add dictionary as output of tracer (#36696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36696

This PR adds dictionary as a supported output of the tracer under the strict
flag.

Test Plan: Imported from OSS

Reviewed By: houseroad

Differential Revision: D21056962

Pulled By: wanchaol

fbshipit-source-id: ace498182d636de853cf8a1efb3dc77f5d53db29
2020-04-16 18:12:38 -07:00
David Reiss
63e5058c88 Fix naming of "strides" method in TensorType (#36727)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36727

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.
Lint.

Differential Revision: D21076697

Pulled By: dreiss

fbshipit-source-id: dbd18cb41c7b26479984a7a7b12ad41a4c5b7658
2020-04-16 17:07:27 -07:00
Elias Ellison
54a575c9bd [JIT] fix torch.tensor jit dtype (#36587)
Summary:
Previously we were always creating a double tensor from `torch.tensor(1.)`, whereas python eager uses the current default dtype. Fix for https://github.com/pytorch/pytorch/issues/36369
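A minimal sketch of the fixed behavior (illustrative only):

```
import torch

@torch.jit.script
def make() -> torch.Tensor:
    return torch.tensor(1.)

# matches eager: follows the current default dtype instead of always double
print(make().dtype)  # torch.float32 under the default settings
```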
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36587

Differential Revision: D21043617

Pulled By: eellison

fbshipit-source-id: 38da303594f52e06941d86b6e57c4a06e7d36938
2020-04-16 10:55:49 -07:00
Elias Ellison
9cbeb0faed [JIT] Don't optimize shape peepholes on inline (#36404)
Summary:
With https://github.com/pytorch/pytorch/pull/35562, we are running peephole optimization on inlining to reduce the number of nodes that are copied.

The tracer encodes the sizes in the graph like:
```
graph(%0 : Double(7)):
  %1 : Function = prim::Constant[name="tensor_size"]()
  %2 : Tensor = prim::CallFunction(%1, %0)
  return (%2)
```

However, people would like to reuse the graph with different shapes, so running the size peephole optimizations would invalidate that. Long term it might be better for the tracer to not include shape information, but there are downstream users of that.

Separates out FuseAddMM from peephole so that now there is a single `disable_size_optimizations` parameter, and onnx explicitly invokes fuseaddmm.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36404

Differential Revision: D20968974

Pulled By: eellison

fbshipit-source-id: 56f8f1699e3b0adeeccdfd5a67bb975fd41a2913
2020-04-15 17:49:48 -07:00
davidriazati
8d66f88eb1 [jit] Fix bound method copying (#36546)
Summary:
Previously we were copying the bound method of the original class to the
new script module class, which caused `self` to be wrong. This PR
changes it so we fetch the unbound function, then bind it to the new
script module, then attach it to the module.

Fixes #28280
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36546

Pulled By: driazati

Differential Revision: D21023329

fbshipit-source-id: 6b3f8404700860151792f669a9c02fbd13365272
2020-04-15 17:38:20 -07:00
Lu Fang
67e0bf14b7 Add support of Dict as output when connecting script and tracing (#36265)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36265

Reviewed By: hl475

Differential Revision: D20927160

Pulled By: houseroad

fbshipit-source-id: 5a63022e92d234b97b57d60ef7f7aa3bc41c2d22
2020-04-14 16:06:53 -07:00
Wanchao Liang
999d7f6ab2 [jit] tracer flag to guard risky behaviors (#36277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277

This PR introduces a flag to the tracer that guards risky behaviors
like adding a list/dict as the output of the tracer. Currently, to avoid
breaking backward compatibility, we throw a warning if the tracer output
is a list, and will throw an error when the tracer output is a dict to
enforce using this flag (next PR).
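A minimal sketch of the flag (illustrative only):

```
import torch

def f(x):
    return {"doubled": x * 2}  # dict output

# strict=False opts in to container outputs the tracer treats as risky
traced = torch.jit.trace(f, (torch.ones(2),), strict=False)
print(traced(torch.ones(2)))
```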

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
2020-04-13 22:35:03 -07:00
Nikita Shulga
fd008bd170 Make patterns in test_unmatched_annotations more flexible (#36422)
Summary:
To make them compatible with python3.7 and python3.8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36422

Test Plan: CI

Differential Revision: D21006399

Pulled By: malfet

fbshipit-source-id: 725df277ff3e4479fc2c39d16a30fbf301fde9e5
2020-04-13 17:53:37 -07:00
Wanchao Liang
3526627f46 Use unittest assertWarns instead (#36411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36411

This PR removes the PyTorch-specific assertWarns and uses the unittest
one; it also formats some tests.

Test Plan: Imported from OSS

Differential Revision: D20998159

Pulled By: wanchaol

fbshipit-source-id: 1280ecff2dd293b95a639d13cc7417fc819c2201
2020-04-13 15:56:42 -07:00
Elias Ellison
8cb1950805 [JIT] fix alias assertion (#36178)
Summary:
AnyType wasn't listed as a mutable type, so the assertion triggered (yay!). Also update the `isMutableTypeInternal(from) != isMutableTypeInternal` logic to be more encompassing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36178

Differential Revision: D20922356

Pulled By: eellison

fbshipit-source-id: 7060a62b18e98dc24b6004a66225c196aadb566e
2020-04-09 18:25:18 -07:00
Jerry Zhang
358466f1da [quant] Move graph mode quantization tests to test_quantize_script.py (#36324)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36324

Test Plan:
.

Imported from OSS

Differential Revision: D20948046

fbshipit-source-id: 2dd8f0c6fbe8fd84293420b97592dc586d25def9
2020-04-09 16:10:18 -07:00
Mike Ruberry
62f9312abd Revert D20783298: Fix naming of "strides" method in TensorType
Test Plan: revert-hammer

Differential Revision:
D20783298

Original commit changeset: 8fcc146284af

fbshipit-source-id: 30e3cb6d7a30d82048534d4d2e794b7e08ae01bb
2020-04-09 04:24:43 -07:00
David Reiss
16980e455f Fix naming of "strides" method in TensorType (#35170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35170

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.

Imported from OSS

Differential Revision: D20783298

fbshipit-source-id: 8fcc146284af022ec1afe8d651baf6721b190ad3
2020-04-08 15:59:28 -07:00
Edward Yang
83907ded1d Revert D20895316: [pytorch][PR] [JIT] List reland
Test Plan: revert-hammer

Differential Revision:
D20895316

Original commit changeset: 9a2bc0e6bdcb

fbshipit-source-id: d135f0038cf240a0973ecfcd540121cbd4ecb5a7
2020-04-08 14:40:10 -07:00
Elias Ellison
9ada7abc18 [JIT] fix comprehension scope writes (#36105)
Summary:
In a comprehension like:
```
    def f()->int:
        i = 1
        x = [i for i in range(7)]
        return i
```
the variables inside the comprehension do not write to the function environment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36105

Differential Revision: D20880699

Pulled By: eellison

fbshipit-source-id: 40af0f7470e0baeff7ef158cb461bf85c816d169
2020-04-08 10:00:45 -07:00
Elias Ellison
2afe171538 [JIT] List reland (#36146)
Summary:
Relanding https://github.com/pytorch/pytorch/pull/33783
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36146

Differential Revision: D20895316

Pulled By: eellison

fbshipit-source-id: 9a2bc0e6bdcbd43f9abe51eadaa28f90bccafcc9
2020-04-07 16:18:30 -07:00
Elias Ellison
6bc8ffe824 [JIT] Optimize before inlining (#35562)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/35424, only this time I run optimizations in the right order so the PR description is actually true.

This speeds up the inlining pass of FairSeq model from 180s -> 13s, and MaskRCNN model from 5s -> 1.5s.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35562

Differential Revision: D20738922

Pulled By: eellison

fbshipit-source-id: 1439cf9d1f0bc780e2d64a744694f8b3b7ba4b70
2020-04-07 09:42:26 -07:00
James Reed
3228939f23 [JIT] Fix fake_range() (#36083)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36083

Test Plan: Imported from OSS

Differential Revision: D20874745

Pulled By: jamesr66a

fbshipit-source-id: fc57defefbc8e9840b8d5bac89b4146179e00b06
2020-04-06 14:12:35 -07:00
davidriazati
71669f0249 Fix flake8 (#35968)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35968

Pulled By: driazati

Differential Revision: D20845617

fbshipit-source-id: 1b1cedb9c5c721f7f7edf94b91fbbb97d249bc2a
2020-04-03 14:02:37 -07:00
davidriazati
6e13a7787b [jit] Fix type comparisons segfault (#35929)
Summary:
Pybind will convert `None`s to `nullptr`s, so this adds a check to make
sure those don't get into the actual type comparison logic. Fixes #35778
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35929

Pulled By: driazati

Differential Revision: D20831278

fbshipit-source-id: 5800050e5eec280072afde58141ad00c1e8db8e2
2020-04-03 11:33:48 -07:00
Zachary DeVito
9097b55479 Propagate static_if more completely. (#35834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35834

This handles the cases we did not handle before in AND and OR statements:

    static_true || <unknown> -> static_true
    static_false && <unknown> -> static_false

Test Plan: Imported from OSS

Differential Revision: D20801125

Pulled By: zdevito

fbshipit-source-id: 0ef94c3a14c7af91580fc5248a4ccfd9e8d6d481
2020-04-02 11:44:34 -07:00
Michael Suo
866d9d4e6a [jit] Fix name collision on load (#35720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35720

When modules are saved, all relevant types are serialized according to
their qualified name with a compilation unit. Since qualified names are
guaranteed to be unique within a compilation unit, this normally works
fine.

On load, all types are registered in a compilation unit owned by the
script::Module. Type names are not unique across compilation units, so
if you load two modules with colliding type names, make them submodules
of yet another module, and save that module, there is the potential of a
name collision. See the added tests for examples if that description is
confusing.

The solution is to unique type names when serializing code by mangling
them if we detect a name collision.

Test Plan: Imported from OSS

Differential Revision: D20749423

Pulled By: suo

fbshipit-source-id: a8827ff1d4a89f3e7964dbbb49b4381863da3e6a
2020-04-01 00:02:38 -07:00
Elias Ellison
1ec0676a33 [JIT] register list prim ops cleanup (#35768)
Summary:
This is a follow up from https://github.com/pytorch/pytorch/pull/34520, which removed specialized list ops. This removes templating from list ops.

It also has one minor other change: moving `aten::len(t[]) -> int` to `aten::len(Any[]) -> int`, so that `len()` can be called on heterogeneous tuples.
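A minimal sketch of the `len()` change (illustrative only):

```
import torch
from typing import Tuple

@torch.jit.script
def tup_len(t: Tuple[int, str, float]) -> int:
    return len(t)  # len() over Any[] accepts heterogeneous tuples

print(tup_len((1, "a", 2.0)))  # 3
```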
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35768

Differential Revision: D20772943

Pulled By: eellison

fbshipit-source-id: bc36a00920bc94ca8c5aa9eb7d5d7a640388ffbb
2020-03-31 19:24:59 -07:00
Jerry Zhang
9650f465ce [quant][graphmode] Quantization support for at::sort (#35571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35571

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20769874

fbshipit-source-id: 7d6805754416fd9c4a3d84d42af756e1926111c2
2020-03-31 14:54:16 -07:00
Jerry Zhang
4e19e02976 [quant][graphmode] Quantization support for quantized::add_scalar_relu and quantized::add_scalar_relu_out (#35509)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35509

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20742138

fbshipit-source-id: f6216d0af5da2bd5629aa4909f05dcde7853c8b8
2020-03-30 14:44:38 -07:00
Jerry Zhang
340048b67c [quant][graphmode] Remove unused patterns (#35385)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35385

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655298

fbshipit-source-id: bc5eda2640a809adb55d3d645c65fb02a6f2f444
2020-03-29 23:48:15 -07:00
Jerry Zhang
86be6443d8 [quant][graphmode] Quantization support for aten::conv3d (#35347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35347

Test Plan:
python test/test_jit.py TestJit.test_quantized_conv3d

Imported from OSS

Differential Revision: D20655304

fbshipit-source-id: 2ab6a977eda9064fbb8051669738f37b90f13b6f
2020-03-29 17:39:06 -07:00
Jerry Zhang
efec027653 [quant][graphmode] prepare_script takes original qconfig_dict (#35335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35335

We'll script the qconfig_dict in `prepare_script`

Test Plan:
regression tests in `python test/test_jit.py`

Imported from OSS

Differential Revision: D20655311

fbshipit-source-id: 002bfd905ff9a9b298a8073d42e12cfffcd1eb71
2020-03-28 18:36:46 -07:00
Jerry Zhang
444332710c [quant][graphmode] Quantization support for quantized::add_scalar (#35334)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35334

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655299

fbshipit-source-id: 66e1fa215a4a40f40dc7abe442c05bb5b6b20cfe
2020-03-28 14:00:44 -07:00
Nick Korovaiko
76d5102587 add a cuda/fuser job for legacy graph executor (#35419)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35419

Differential Revision: D20719013

Pulled By: Krovatkin

fbshipit-source-id: 745d9523a5a9b7b4b556a075351ea58a82501dff
2020-03-28 12:11:18 -07:00
Jerry Zhang
f1d69cb2f8 [quant][graphmode] Quantization support for permute and repeat_interleave (#35332)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35332

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655306

fbshipit-source-id: 43dce62ce178d5c7e68b27fd88ed5d2958014c7b
2020-03-27 22:40:25 -07:00
Jerry Zhang
df27b32014 [quant][graphmode] Make interpolate/upsample work again (#35130)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35130

Test Plan:
python test/test_jit.py TestJit.test_swap_dequantize_all_ops

Imported from OSS

Differential Revision: D20655303

fbshipit-source-id: 5ad8c6de28bcabffdfab4c9bc6a61f19f1d061cc
2020-03-27 22:38:57 -07:00
Jerry Zhang
76a8d30693 [quant][graphmode] Fold quantized prepacking ops (#35077)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35077

Fold the prepack ops: `quantized::linear_prepack` and `quantized::conv2d_prepack` after
`freeze`

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655301

fbshipit-source-id: fbb4223323f788c88db7b55cfafda46fad106d49
2020-03-27 17:51:51 -07:00
Nikolay Korovaiko
9e22d15f14 Enable tensorexpr cpp tests in CI. try #2 (#35454)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35454

Differential Revision: D20665160

Pulled By: Krovatkin

fbshipit-source-id: e04cbe92b2ee5a3288f3c4e5c83533bfea85bf85
2020-03-27 12:09:55 -07:00
Martin Yuan
da4e68faed Make operator names consistent between export_opnames and the lite interpreter (#34674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34674

Two changes to make sure the op_names dumped in export_opnames() are consistent with what is actually used in bytecode.
* Inline graph before dumping the operator names.
* Use code of the graph (which is used in bytecode) instead of the nodes of graph.

Test Plan: Imported from OSS

Differential Revision: D20610715

Pulled By: iseeyuan

fbshipit-source-id: 53fa9c3b36f4f242b7f2b99b421f4adf20d4b1f6
2020-03-26 22:50:59 -07:00
Ailing Zhang
77bbbf042d [JIT]Support converting str to float. (#35352)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35352

Differential Revision: D20649286

Pulled By: ailzhang

fbshipit-source-id: e9b09bddd0fe3c962a7514d45fd069cd0b4e6df1
2020-03-26 20:24:59 -07:00
Edward Yang
e0c227d376 Revert D20655246: [jit] add module interface tests to test_jit
Test Plan: revert-hammer

Differential Revision:
D20655246

Original commit changeset: 9e1f865b3f2d

fbshipit-source-id: 241f10738df714efb662f1c53551617dd1558b13
2020-03-26 06:53:19 -07:00
Suraj Menon
aa01a95c6d Revert D20630760: [pytorch][PR] Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP]
Test Plan: revert-hammer

Differential Revision:
D20630760

Original commit changeset: 7d2f27aca6b1

fbshipit-source-id: 28ac92b3390651a4a67061d6ebf208515b9b9463
2020-03-25 20:34:46 -07:00
Nikolay Korovaiko
f3a5081bd4 Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP] (#34897)
Summary:
This PR adds tensorexpr cpp tests to test_jit.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34897

Differential Revision: D20630760

Pulled By: Krovatkin

fbshipit-source-id: 7d2f27aca6b1e23e3ffed1c765d8f590688118e3
2020-03-25 17:23:48 -07:00
Jerry Zhang
ccc0e35275 [quant][graphmode] quantization support for prim::CallFunction (#34855)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34855

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655305

fbshipit-source-id: 44cc3525967048fb9d9c145b342ac7d76b22e4db
2020-03-25 17:17:19 -07:00
Wanchao Liang
d7c255d2fc [jit] add module interface tests to test_jit (#35417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35417

surprised it's not getting run by test_jit; added it

Test Plan: Imported from OSS

Differential Revision: D20655246

Pulled By: wanchaol

fbshipit-source-id: 9e1f865b3f2d23b63d4d605aaf2dc3a483a4f0e1
2020-03-25 15:25:28 -07:00
Jerry Zhang
15e5453977 [reland][quant][graphmode] Add quantization support for aten::cat (#34346) (#35337)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35337

Test Plan: python test/test_jit.py

Differential Revision: D20648201

Pulled By: jerryzh168

fbshipit-source-id: f6570c3ee2f48a9bc6373d2af873824ac2c8ef62
2020-03-25 12:45:21 -07:00
Elias Ellison
5b2f8cef08 [JIT] Functional Graph Pass (#33020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33020

This is a pass to create functional blocks. The other PRs in the stack help avoid some of the limitations that are often found in graphs. It's possible that this would work well with a graph that is frozen. Follow-up work items that will help this pass:

- We don't currently have any capacity in alias analysis to tell whether a Value that came from the wildcard set "re-escapes" back into the wildcard set.
- More comments on the semantics of the graph and correctness conditions
- We could consider using dynamic dag if the perf of this is a limitation.
- potentially make Functional Graphs Functional Blocks instead, so that we do not repeatedly copy constants and to make the IR easier to read.

Test Plan: Imported from OSS

Differential Revision: D20603188

Pulled By: eellison

fbshipit-source-id: 6822a6e65f4cc2676f8f6445fe8aa1cb858ebeeb
2020-03-24 23:44:18 -07:00
Alban Desmaison
ee7cd84fac Revert D20589145: [quant][graphmode] Add quantization support for aten::cat
Test Plan: revert-hammer

Differential Revision:
D20589145

Original commit changeset: c9159fffa88c

fbshipit-source-id: c6b8db13ed1ed19f4437b2fa70d88ce139d445e1
2020-03-24 16:24:22 -07:00
Jerry Zhang
6b5740c5f6 [quant][graphmode] Add quantization support for aten::cat (#34346)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34346

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20589145

fbshipit-source-id: c9159fffa88cf25fcdccfcc4eef2622cf4b250b5
2020-03-24 13:56:43 -07:00
davidriazati
44622bbda9 [jit] Add lazy script decorator (#34935)
Summary:
Stacked PRs
 * #34938 - [jit] Remove stray `script`
 * **#34935 - [jit] Add lazy script decorator**

Some users maintain libraries of code that is largely trace-able but not
script-able. However, some functions may need to be `torch.jit.script`ed if
they contain control flow, so the tracer will use the compiled version.
This, however, impacts library startup time as in #33418, so this PR adds
a workaround in the form of a `torch.jit._lazy_script_while_tracing`
that will only initialize the compiler if the function is called while
actually tracing.
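
A minimal sketch of the intended use, assuming the decorator is applied to a trace-time helper (the function body here is illustrative):

```
import torch

@torch.jit._lazy_script_while_tracing
def pad_if_odd(x):
    # data-dependent control flow the tracer cannot record; the compiler
    # is only initialized if this is actually reached during tracing
    if x.shape[-1] % 2 == 1:
        return torch.nn.functional.pad(x, [0, 1])
    return x
```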

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34935

Pulled By: driazati

Differential Revision: D20569778

fbshipit-source-id: d87c88c02b1abc86b283729ab8db94285d7d4853
2020-03-24 13:43:18 -07:00
James Reed
618c6214aa [reapply][JIT] Namespaces for TorchBind (#35254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35254

Reapply D20541090 with some BC fixes
ghstack-source-id: 100733987

Test Plan: buck test mode/dev-nosan //caffe2/torch/fb/predictor/model_repo/tests:ai_infra_representative_model_shard_6_test -- 'RepresentativeModelTest\/ShardedRepresentativeModelTest\.RunModel\/0'

Reviewed By: zdevito

Differential Revision: D20607111

fbshipit-source-id: 80f148d860571208c93e9308128cd480ff089f74
2020-03-24 00:39:48 -07:00
Jerry Zhang
537fdd77d5 [quant][graphmode] quantization support for view, transpose, contiguous (#34854)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34854

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524456

fbshipit-source-id: e6e8fc3db6cccbd32c210d04f921274d81996fe2
2020-03-23 22:33:39 -07:00
Jerry Zhang
4a96911629 [quant][graphmode] quantization support for aten::chunk (#34806)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34806

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524454

fbshipit-source-id: 92ac9bc251581e963258cb90dc3de73f8508c822
2020-03-23 22:33:34 -07:00
Jerry Zhang
ac4a0224f3 [quant][graphmode] Replicate quantize node for prim::If (#34804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34804

We want to replicate the quantize node for return values in blocks of prim::If
in order to create the quantization patterns.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524453

fbshipit-source-id: 2268ac555f646158f4e1ffc98ccc8101d7504194
2020-03-23 21:20:45 -07:00
Jerry Zhang
eff68bc872 [quant][graphmode] quantization support for aten::add (#34572)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34572

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519607

fbshipit-source-id: c57e062cffc24a47a76b73b58aff7f9ef80183fa
2020-03-23 17:52:28 -07:00
Elias Ellison
7ab25b2e6b [JIT] add id function (#34975)
Summary:
add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value that can be zero. Python also only guarantees that `id()` gives semantically meaningful results for pointer types.

EDIT: now only allowing id on class types
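
A minimal sketch of the intended pattern (the `Node` class and the helper are hypothetical; `id` is only allowed on class types):

```
import torch
from typing import List

@torch.jit.script
class Node(object):
    def __init__(self, val: int):
        self.val = val

@torch.jit.script
def count_unique(nodes: List[Node]) -> int:
    seen: List[int] = []
    for n in nodes:
        if id(n) not in seen:  # identity, not equality
            seen.append(id(n))
    return len(seen)
```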
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Reviewed By: driazati

Differential Revision: D20599564

Pulled By: eellison

fbshipit-source-id: 3c6666a9b9b0258198adc70969dd6332e3375e4f
2020-03-23 17:10:13 -07:00
Jerry Zhang
a00e12e755 [quant][graphmode] weight/bias of linear/conv can be reused for multiple ops (#35221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35221

When weight is reused, we only need to insert one observer for the weight

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20602492

fbshipit-source-id: e003e6316f6615f3526f0d00fb7b722148b4749b
2020-03-23 14:21:59 -07:00
Elias Ellison
4fae5a6721 Move module graph creation to testing utils (#34917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34917

Test Plan: Imported from OSS

Differential Revision: D20539338

Pulled By: eellison

fbshipit-source-id: 5c46c0ce50e5bcccf5abee264f432ded7d36d040
2020-03-23 11:59:02 -07:00
Elias Ellison
77ccb5c14d Move functional graph creation to testing utils (#34916)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34916

Test Plan: Imported from OSS

Differential Revision: D20539337

Pulled By: eellison

fbshipit-source-id: 9b777e369facebbe68fe198ca3eec055cf9c5257
2020-03-23 11:57:25 -07:00
Jerry Zhang
3e4076aa9c [quant][graphmode] quantization work for prim::If (#34518)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34518

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519606

fbshipit-source-id: 94d49e18d97df642cbcb446df12376f6d2a397bc
2020-03-23 09:54:24 -07:00
albanD
0e0386b434 Revert "[JIT] add id function (#34975)" (#35209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35209

This reverts commit 62f11f0a35.

Test Plan: Imported from OSS

Differential Revision: D20596847

Pulled By: albanD

fbshipit-source-id: e6777e42356aac772e59f0466a92cc13258218c1
2020-03-23 08:42:09 -07:00
Jerry Zhang
28bf0038e5 [quant][graphmode][fix] Insert dequantize before use node (#34411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34411

To make sure dequantize and the node that uses the dequantized value reside in the same
block, so that we can do quant fusion

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519603

fbshipit-source-id: 3e4c68d0a73142716e19ea6a64ae3a5d6d51fa41
2020-03-23 08:07:33 -07:00
Lu Fang
a100cf5146 Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
Test Plan: revert-hammer

Differential Revision:
D20541090

Original commit changeset: ce3d9391dd3c

fbshipit-source-id: acc1d660fbda611941381315507dfe594c385db1
2020-03-21 12:20:44 -07:00
James Reed
e0496a70fc [JIT][torchbind] Namespaces for torchbind classes (#35054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35054

Test Plan: Imported from OSS

Differential Revision: D20541090

Pulled By: jamesr66a

fbshipit-source-id: ce3d9391dd3cdf619042b8f6ba2645f4c1fc875c
2020-03-20 20:07:02 -07:00
Kimish Patel
3e58cba3c5 Fixes the Conv2d batch_norm folding for various cases. (#34932)
Summary:
This PR adds a preprocessing step to Conv2dBatchNorm folding.
It traverses the module to check if the bias of the Conv2d module is set to
None. If it is, it assumes that this is a traced module and inserts an
Optional[Tensor]-typed bias.
Furthermore, it inserts a getAttr for bias in the forward graph and fixes
the _convolution op to take values from that getAttr.
It also fixes parameter extraction from the BN module, which may not
have weight and bias attributes if affine was set to False. In scripted
mode such a BN module will get weight and bias attributes set to None.
The case of eps, which gets const-propagated in tracing, is also
fixed.
A few test cases are added.
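
For reference, a sketch of the standard Conv-BN folding arithmetic that such a pass replays (names are illustrative; this is not the pass itself):

```
import torch

def fold_conv_bn(conv_w, conv_b, bn_mean, bn_var, bn_w, bn_b, eps):
    # y = bn_w * (conv(x) - bn_mean) / sqrt(bn_var + eps) + bn_b
    scale = bn_w / torch.sqrt(bn_var + eps)
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)
    if conv_b is None:  # the traced bias=None case handled by this PR
        conv_b = torch.zeros_like(bn_mean)
    fused_b = (conv_b - bn_mean) * scale + bn_b
    return fused_w, fused_b
```

When affine=False, bn_w and bn_b are effectively ones and zeros, which is the other case fixed here.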
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34932

Test Plan:
python test/test_jit.py TestJit.test_foldbn_trivial
python test/test_jit.py TestJit.test_foldbn_trivial_nobias
python test/test_jit.py TestJit.test_foldbn_in_submodule
python test/test_jit.py TestJit.test_foldbn_shared_classtype
python test/test_jit.py TestJit.test_foldbn_complex_cases
python test/test_jit.py TestJit.test_nofoldbn_complex_cases

Differential Revision: D20536478

Pulled By: kimishpatel

fbshipit-source-id: 4e842976a380d0575a71001bb4481390c08c259e
2020-03-20 20:06:44 -07:00
Elias Ellison
62f11f0a35 [JIT] add id function (#34975)
Summary:
add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value that can be zero. Python also only guarantees that `id()` gives semantically meaningful results for pointer types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Differential Revision: D20549677

Pulled By: eellison

fbshipit-source-id: cca5ed4ef013f7540f93abf49f91f9830dfdca14
2020-03-20 20:03:10 -07:00
Elias Ellison
bcbde490e4 Fix flake (#34974)
Summary:
fix flake, add overload names
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34974

Differential Revision: D20519191

Pulled By: eellison

fbshipit-source-id: d08d36b64397287cad484690074e694d8a0e472e
2020-03-18 16:45:33 -07:00
Jerry Zhang
b2e5e0cad6 [quant][graphmode] quantization support for aten::reshape (#34803)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34803

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504457

fbshipit-source-id: 5ca691ef4880c72d30d62390e63e3288b2f06dce
2020-03-18 15:40:43 -07:00
Jerry Zhang
d77d907f0e [quant][graphmode] Add quantization support for aten::dropout (#34347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34347

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504453

fbshipit-source-id: 1bab29e21d0564ed88cdeb4894addfe00ebbd390
2020-03-18 14:35:27 -07:00
Michael
f3b8a470e1 Added functionality for all to take Lists as input (#34582)
Summary:
New pull request after rebase error in pull request https://github.com/pytorch/pytorch/issues/33923
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34582

Differential Revision: D20447689

Pulled By: eellison

fbshipit-source-id: 4296b64185eccb136b1b614b532deb3af20c7544
2020-03-18 12:01:30 -07:00
Jerry Zhang
841f7600bb [quant][graphmode] Quantization pattern for aten::linear (#33854)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33854

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20493031

fbshipit-source-id: bafd0a3ba5d07327d451b3915f043db33b012b53
2020-03-17 16:36:30 -07:00
Owen Anderson
a4224886f3 Eliminate guards through max_pool ops. (#34512)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34512

Differential Revision: D20478962

Pulled By: resistor

fbshipit-source-id: 86fc926305f95cae8b334ed344d8e0cdd1ef7b2b
2020-03-17 14:00:00 -07:00
James Reed
699a4ed8f5 [testing][do not land] (#34605)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34605

Test Plan: Imported from OSS

Differential Revision: D20393219

Pulled By: jamesr66a

fbshipit-source-id: c74d886f5f01061294203a002b72b75a3c446f09
2020-03-16 23:56:00 -07:00
peter
24c9e61e79 Enable JIT tests on Windows (#27029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27029

Reviewed By: eellison

Differential Revision: D20458664

Pulled By: jamesr66a

fbshipit-source-id: 22be918543703869f471e89b3478423198351bf3
2020-03-16 11:26:21 -07:00
Jerry Zhang
cec9758afa [quant][graphmode] Add quantization pattern for quantized::add_relu (#33532)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33532

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354880

fbshipit-source-id: ea608a5ace395a909851f9e577ffdcb51512a3af
2020-03-16 10:20:57 -07:00
Tugrul Ince
08bc3c6cbf Remove unnecessary import (#34778)
Summary:
https://github.com/pytorch/pytorch/issues/34563 accidentally introduced a lint error due to an unused import. This PR removes this import.

Jit tests run as expected after this change:
```
> python test/test_jit.py
.....
Ran 2435 tests in 100.077s

OK (skipped=140, expected failures=1)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34778

Differential Revision: D20459708

Pulled By: tugrulince

fbshipit-source-id: bb742085fafc849ff3d9507d1557556e01fbeb4b
2020-03-15 09:56:55 -07:00
Jerry Zhang
5710374e4e [reland][quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279) (#34744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34744

att

Test Plan: python test/test_jit.py

Differential Revision: D20449667

Pulled By: jerryzh168

fbshipit-source-id: 01bbc26604fac421dcaacaf4fa1b57731f1f08b7
2020-03-14 01:03:18 -07:00
Zachary DeVito
52005b551c invokeOperatorFromPython: support overloaded operator calling (#34671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34671

Like the python arg parser, this tries to convert to the schema in order.
It introduces schema_match_exception which gets thrown when the schema doesn't match,
allowing the overload handler to try the next option.

Behavior will not 100% match the schema argument parser but should work for
simple cases using custom binding.

Test Plan: Imported from OSS

Differential Revision: D20432206

Pulled By: zdevito

fbshipit-source-id: 280839a2205ea3497db3a9b5741fccc1e2bff9a8
2020-03-13 18:46:03 -07:00
Jerry Zhang
e7910aa9e5 [fix] use non-inplace for insert observer pass (#34190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34190

inplace modification of ClassType might affect other tests, so we want to do non-inplace modifications.
Actually the inplace argument will be removed soon.

Test Plan:
ci

Imported from OSS

Differential Revision: D20451765

fbshipit-source-id: e87ad528c4e7f84f5774b94a8e3e85568269682d
2020-03-13 17:25:07 -07:00
Tugrul Ince
c9023e3b12 Support left and right shift operators in JIT (#34563)
Summary:
With this PR, we can now support left and right shift operators in the JIT engine for <int, int> and <Tensor, int>.

Updated tests pass as expected:
```
> python test/test_jit.py
...
Ran 2427 tests in 84.861s

OK (skipped=139, expected failures=1)
```

Running the following code with Python results in the output below:
```
> cat ~/expressions.py
import torch

torch.jit.script
def fn(a, b):
    # type: (int, int)
    return (
        a << b,  # supported
        b >> a,  # supported
        a & b,
        a | b,
        a ^ b
    )
print(fn.graph)
```

```
> python ~/expressions.py
graph(%a.1 : int,
      %b.1 : int):
  %4 : int = aten::leftshift(%a.1, %b.1) # /home/ince/expressions.py:7:8
  %7 : int = aten::rightshift(%b.1, %a.1) # /home/ince/expressions.py:8:8
  %10 : int = aten::__and__(%a.1, %b.1) # /home/ince/expressions.py:9:8
  %13 : int = aten::__or__(%a.1, %b.1) # /home/ince/expressions.py:10:8
  %16 : int = aten::__xor__(%a.1, %b.1) # /home/ince/expressions.py:11:8
  %17 : (int, int, int, int, int) = prim::TupleConstruct(%4, %7, %10, %13, %16)
  return (%17)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34563

Differential Revision: D20434209

Pulled By: tugrulince

fbshipit-source-id: 886386c59755106e17b84778b8e495b80a6269cd
2020-03-13 13:00:33 -07:00
Jerry Zhang
e9a660a160 Revert D20354878: [quant][graphmode] Add quantized conv2d-relu fusion pattern
Test Plan: revert-hammer

Differential Revision:
D20354878

Original commit changeset: 2b19797d4b3f

fbshipit-source-id: 18f447074794af0d579e145df02af47d01746921
2020-03-12 21:29:08 -07:00
Jerry Zhang
0ff4d37933 [quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33279

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354878

fbshipit-source-id: 2b19797d4b3fd96918164a58bfbd768211ad6c6d
2020-03-12 19:49:57 -07:00
Jerry Zhang
90ca7a1feb [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33927

Test Plan:
test will be added in later PRs

Imported from OSS

Differential Revision: D20354879

fbshipit-source-id: 03976f4b86c46dbdc4e45764a1e72f1a3855a404
2020-03-12 14:52:58 -07:00
Elias Ellison
514cba0661 [JIT] remove builtin interpolate functions (#34514)
Summary:
`torch.nn.functional.interpolate` was written as a builtin op when we scripted the standard library, because it has four possible overloads. As a result, whenever we make a change to `interpolate`, we need to make changes in two places, and it also makes it impossible to optimize the interpolate op. The builtin is tech debt.

I talked with ailzhang, and the symbolic script changes are good to remove (I guess that makes a third place we needed to re-implement interpolate).

I'm trying to get rid of unnecessary builtin operators because we're standardizing mobile bytecode soon, so we should try to get this landed as soon as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34514

Differential Revision: D20391089

Pulled By: eellison

fbshipit-source-id: abc84cdecfac67332bcba6b308fca4db44303121
2020-03-12 09:21:33 -07:00
James Reed
1f834b5c2a [JIT] Torchbind error if Python instantiates a class that doesn't exist (#34568)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34568

Test Plan: Imported from OSS

Differential Revision: D20378106

Pulled By: jamesr66a

fbshipit-source-id: 395a3b05d23727b9cfd074440b2d0e8ef002ec09
2020-03-11 13:13:08 -07:00
ettiee
2cf576e9ea small typos (#34589)
Summary:
Spotted a couple of small typos 🙏
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34589

Differential Revision: D20387653

Pulled By: ngimel

fbshipit-source-id: 3089fe606ccb8c8ee57cf7a900aba714fd0ce567
2020-03-11 11:01:31 -07:00
Nikolay Korovaiko
e16908cb1f profile block outputs; helps guard elimination (#33889)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33889

Reviewed By: zdevito

Differential Revision: D20294979

Pulled By: Krovatkin

fbshipit-source-id: 2a68710ec8f8f854c99dfe173f49da442a39e498
2020-03-09 17:12:58 -07:00
Nikolay Korovaiko
0a4a558c2c Dictionary Constants (#32869)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32869

Differential Revision: D19909339

Pulled By: Krovatkin

fbshipit-source-id: 6fe2a9b470768f84b957c69cdf9af3a1bd9b1ca9
2020-03-09 16:12:36 -07:00
Jerry Zhang
2e7eef41ac [quant][graphmode] Swap quantized functional linear with aten::linear (#33853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33853

Quant fusion relies on inlining, but inlining will break the CallFunction("linear", ...) into an if block;
it will be hard to recognize this block and swap it with quantized::linear. In order to
preserve the op, we will swap all quantized functional linear calls with aten::linear.
They might produce a different backward graph, but this is called in the step before we get the quantized
model, so it shouldn't affect anything.
We'll integrate this with convert_script later in the new "finalize_quant" API.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20343873

fbshipit-source-id: 423e03bf893b79267d2dc97bc997ee1bfe54ec0f
2020-03-09 15:45:20 -07:00
davidriazati
2c0f3536b6 [jit] Make ModuleLists a sugared value (#34320)
Summary:
Previously, when emitting subscripts, we only emitted actual values, but
now they may sometimes emit a `ModuleValue`, so the result should stay a
`SugaredValue`. This allows the result of the subscript to be
treated as a real module (i.e. you can just do `self.modlist[1](inputs)`
instead of `self.modlist[1].forward(inputs)`)
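
A minimal sketch of what this enables (layer sizes are illustrative):

```
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.modlist = torch.nn.ModuleList(
            [torch.nn.Linear(4, 4) for _ in range(2)])

    def forward(self, x):
        # indexing now yields a sugared module value, so it is callable
        return self.modlist[1](x)

scripted = torch.jit.script(Net())
print(scripted(torch.randn(2, 4)).shape)
```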
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34320

Pulled By: driazati

Differential Revision: D20345642

fbshipit-source-id: 2bedf9a454af747b704422f6bbb8370cbdf4bf61
2020-03-09 15:36:46 -07:00
Jerry Zhang
776d2a1e8f [quant][graphmode] Handling ops doesn't require observation in insertObservers (#33481)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33481

We have to propagate the observed property of values through ops like max_pool2d and flatten, and
avoid inserting duplicate observers.
For example:
```
x1 = self.conv(x)
x2 = maxpool(x1)
x3 = self.conv(x2)
```
If x1 is observed, we should propagate this information through maxpool and
we should consider x2 as observed as well.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20261897

fbshipit-source-id: 7de354a3ccb2b6e1708f5c743d4d9f7272691a93
2020-03-09 13:15:54 -07:00
Adam Paszke
e3d50c4dda Retain the order of parameters while generating ConcreteModuleTypes (#34131)
Summary:
`ConcreteModuleTypeBuilder` used to keep parameters together with all other attributes in an `unordered_map`, often leading to reordering them while building up the type. Parameter order is semantically meaningful, so we need to preserve it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34131

Differential Revision: D20331542

Pulled By: suo

fbshipit-source-id: 5b860025f7902654d6099751d3fb14b12f6f5a67
2020-03-09 10:25:45 -07:00
Shen Li
30680196e4 Revert D20121915: [JIT] Add support for list()
Test Plan: revert-hammer

Differential Revision:
D20121915

Original commit changeset: c6c4ef444dbf

fbshipit-source-id: 829adb58780f4d0f41acebb3e7640a9c68bdbc1b
2020-03-06 07:16:40 -08:00
Elias Ellison
38857734f0 [JIT] fix py35 test (#34350)
Summary:
test_module_interfaces was using syntax only supported in Python >= 3.6
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34350

Reviewed By: mrshenli

Differential Revision: D20298869

Pulled By: eellison

fbshipit-source-id: 22319ca403113cff2eedf57767bb34d9580e6db3
2020-03-05 21:31:19 -08:00
Elias Ellison
78aebbcb88 [JIT] add other module apis (#34106)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34106

Test Plan: Imported from OSS

Differential Revision: D20283996

Pulled By: eellison

fbshipit-source-id: 88e7bc4547e96717d6c8efe0b25ede0d198d9e68
2020-03-05 16:12:29 -08:00
Elias Ellison
f218842f2e [JIT] Add support for list() (#33818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33818

Test Plan: Imported from OSS

Differential Revision: D20121915

Pulled By: eellison

fbshipit-source-id: c6c4ef444dbf1d4134dccb28c13315e225945b64
2020-03-05 14:48:20 -08:00
Elias Ellison
479c3b0aa5 [JIT] add support for torch.norm (#33783)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33783

Fix for https://github.com/pytorch/pytorch/issues/20113

Test Plan: Imported from OSS

Differential Revision: D20121917

Pulled By: eellison

fbshipit-source-id: ffedcc40678cd80f5529ff9323088eed544e5158
2020-03-05 14:46:24 -08:00
Jerry Zhang
6f52562e75 [quant][graphmode] Add add_relu pattern in skip values (#32816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32816

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208786

fbshipit-source-id: ef84b77f46f88b192a75c123aabaa203836a7dfb
2020-03-04 09:36:02 -08:00
Jerry Zhang
e5bbd23ca7 [quant][graphmode] Skip quantizing input and output in matched module (#32814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32814

We skip quantization for the intermediate values for patterns like `Conv - ReLU`,
but currently we don't skip quantizing the input/output of the graphs of matched modules;
since we changed the way we add observers, this also needs to be updated.

Test Plan:
python test/test_jit.py -- 'TestJit.test_insert_observers_skip_values'

Imported from OSS

Differential Revision: D20208785

fbshipit-source-id: ce30f2c4c8ce737500d0b41357c80ec8b33aecf9
2020-03-03 18:38:36 -08:00
Elias Ellison
04378eb618 [JIT] Add modulelist indexing for integer literal (#29236)
Summary:
Allow indexing into modulelists for integer literals.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29236

Differential Revision: D19583935

Pulled By: eellison

fbshipit-source-id: 24d54051422a69769dac5e82f3bf622ded2bd8a6
2020-03-03 14:47:31 -08:00
Jerry Zhang
f26bbb5f86 [fix] flake8 lint error (#34146)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34146

Test Plan:
.

Imported from OSS

Differential Revision: D20228830

fbshipit-source-id: 41de3c27c10256939ae6309d25b0499f708a3dca
2020-03-03 13:15:27 -08:00
Zachary DeVito
358450e02b improved TorchScript traceback (#33834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33834

This changes how we report Tracebacks to make them more clear when
there are both serialized and non-serialized ranges. It now looks like:

```
Traceback (most recent call last):
  File "foo.py", line 25, in <module>
    s2(a, b)
  File "/scratch/zdevito/pytorch/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__.py", line 7, in forward
    x: Tensor,
    y: Tensor) -> Tensor:
    return (self).bar(x, y, )
            ~~~~~~~~~ <--- HERE
  def bar(self: __torch__.Moo,
    x: Tensor,
  File "code/__torch__.py", line 11, in bar
    x: Tensor,
    y: Tensor) -> Tensor:
    _0 = (self).baz(x, y, )
          ~~~~~~~~~ <--- HERE
    _1 = torch.ones([3], dtype=None, layout=None, device=None, pin_memory=None)
    return torch.add(_0, _1, alpha=1)
  File "code/__torch__.py", line 17, in baz
    x: Tensor,
    y: Tensor) -> Tensor:
    return torch.add(x, y, alpha=1)
           ~~~~~~~~~ <--- HERE

Traceback of TorchScript, original code (most recent call last):
  File "foo.py", line 11, in forward
    def forward(self, x, y):
        return self.bar(x, y)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 9, in bar
    def bar(self, x, y):
        return self.baz(x, y) + torch.ones(3)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 7, in baz
    def baz(self, x, y):
        return x + y
               ~~~~~ <--- HERE
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1
```

It follows the Python convention of having the most important information last,
reading from the bottom up.

Changes:
* Moved the error message to the end, to copy Python
* Report original traceback separate from serialized traceback
* Make sure root functions have names in the interpreter trace.

Test Plan: Imported from OSS

Differential Revision: D20126136

Pulled By: zdevito

fbshipit-source-id: fd01f9985e5d74e04c4d064c02e8bc320f4fac13
2020-03-03 12:27:38 -08:00
Jerry Zhang
5b9f1ada30 [quant][graphmode] Observing input/output values in call site (#33277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33277

Currently we insert observers in the called graph, which is incorrect since graphs can be shared,
and the decision of whether to insert an observer or not might depend on where the graph is called.
For example, for a call sequence `self.conv1(self.conv2(x))`, we can't insert observers correctly
if `self.conv1` and `self.conv2` share the same type in the current implementation, because we insert
observers in the graph of the forward method of Conv2d right now, and this call sequence requires us to insert
only one observer for the output of self.conv1/input of self.conv2.
We'll need to insert observers for the input/output values of the graph at the call site instead.
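
A minimal sketch of the shared-type situation described above (module shapes are illustrative):

```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        # conv1 and conv2 can share the same ClassType, and hence one graph
        self.conv1 = torch.nn.Conv2d(3, 3, 1)
        self.conv2 = torch.nn.Conv2d(3, 3, 1)

    def forward(self, x):
        # exactly one observer should cover conv2's output / conv1's input,
        # which is only expressible at the call site, not in the shared graph
        return self.conv1(self.conv2(x))
```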

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208787

fbshipit-source-id: 739e1d877639c0d0ed24e573bbd36211defa6836
2020-03-03 10:53:24 -08:00
davidriazati
11843049d5 [jit] Fix flipped PackedSequence outputs in script (#32955)
Summary:
Stacked PRs
 * **#32955 - [jit] Fix flipped PackedSequence outputs in script**
 * #32953 - [jit] Support properties on `Device`

Fixes #32605

Pull Request resolved: https://github.com/pytorch/pytorch/pull/32955

Pulled By: driazati

Differential Revision: D20165514

fbshipit-source-id: a130c438b40e51ec27d36f021b0dc7869570aa6a
2020-03-02 13:50:36 -08:00
Zino Benaissa
cab8772c6c Freezing Torchscript modules (#32178)
Summary:
This patch enables folding GetAttr nodes with their corresponding
values. The _jit_pass_freeze_module API returns a new TorchScript module
where all function calls and attribute accesses are inlined.
Usage:

frozen_model = torch._C._freeze_module(scripted_model._c)
frozen_model.forward(...)

This API currently optimizes the forward method. We will follow up
to preserve and optimize methods and attributes that are annotated as
torch.jit.interface.

Several future improvements to JIT optimizations are required to maximally
clean up/de-sugar the graph and eliminate redundancies.
Ideally, we want to produce a graph that can easily be lowered to
GLOW and other low-level backends.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32178

Differential Revision: D19419640

Pulled By: bzinodev

fbshipit-source-id: 52baffaba9bca2cd60a8e747baa68d57711ad42b
2020-03-02 11:38:36 -08:00
Basil Hosmer
ad769d74d9 Collapse _like overloads into a single overload. (#33705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33705

The fact that there were two overloads appears to be a historical
artifact that dates back to when goldsborough originally added these
bindings in the first place.  If TensorOptions is made optional,
then you only need one overload, not two, as they are exactly redundant
with each other.  When MemoryFormat was added, it was made a little
harder to do this, as the C++ syntax at::empty_like(t, memory_format) would
not work if you collapsed the overload; but now it works because TensorOptions
supports MemoryFormat.

The upshot is, I can get rid of all the overloads and just have one overload.
Amazingly, this change is backwards compatible, as the test attests.  While
I was at it, I also deleted the overload name from the functions entirely.
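
A few call sites the single overload now serves (a sketch of usage, not of the binding code):

```
import torch

t = torch.randn(2, 3)
a = torch.empty_like(t)                                         # inherit t's options
b = torch.empty_like(t, dtype=torch.float64)                    # override one option
c = torch.empty_like(t, memory_format=torch.contiguous_format)  # via TensorOptions
```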

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D20073355

Pulled By: bhosmer

fbshipit-source-id: c6a8908213b32ccf6737ea864d135e2cce34f56b
2020-03-01 19:40:22 -08:00
Elias Ellison
85b1c45a45 [JIT] fix alias assertion (#33952)
Summary:
This bug has been hit a couple times recently. We need to handle all bivariant types, not just optional, when asserting mutability/immutability of pointed-to elements in alias analysis.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33952

Differential Revision: D20166025

Pulled By: eellison

fbshipit-source-id: cf3df9897a639641ef8303a08ba2b13523d01ef1
2020-02-28 19:54:29 -08:00
davidriazati
2111c4ff0c [jit] Add missing tensor properties (#33906)
Summary:
Fixes #30775

This adds TorchScript implementations (copied from `python_variable.cpp`) for the remaining `Tensor` properties that were missing from the jit, in addition to a test that ensures new properties will trigger a failure so we can decide whether we want to add them as well.

For `some_tensor`, this adds (see the sketch after the list):

* `some_tensor.T`
* `some_tensor.ndim`
* `some_tensor.is_leaf`
* `some_tensor.name`
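
A minimal sketch exercising the new properties from TorchScript:

```
import torch

@torch.jit.script
def props(t: torch.Tensor):
    return t.T, t.ndim, t.is_leaf

x = torch.randn(2, 3, requires_grad=True)
transposed, ndim, is_leaf = props(x)
print(transposed.shape, ndim, is_leaf)  # torch.Size([3, 2]) 2 True
```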
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33906

Pulled By: driazati

Differential Revision: D20153288

fbshipit-source-id: 2ddc48a14267077bc176065267e5ce52181b3d6b
2020-02-28 19:06:11 -08:00
Michael Suo
bd7e9c490a [jit] stop printing crap in test_jit (#33917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33917

Test Plan: Imported from OSS

Differential Revision: D20150750

Pulled By: suo

fbshipit-source-id: 9a35298a8856d423fb6b9043174853cccf968706
2020-02-27 19:06:43 -08:00
Brian Vaughan
910acafc79 Revert D20124224: [jit] stop printing crap in test_jit
Test Plan: revert-hammer

Differential Revision:
D20124224

Original commit changeset: 9241d21fdf94

fbshipit-source-id: 0680f9db922f9a33a4e859eedd142b87a51bbede
2020-02-27 13:40:34 -08:00
Brian Vaughan
243af17d65 Revert D20103905: [jit] Fix flipped PackedSequence outputs in script
Test Plan: revert-hammer

Differential Revision:
D20103905

Original commit changeset: 84081213ed21

fbshipit-source-id: 2b260654fac87e52fbaf8035018e4ea484928af1
2020-02-27 13:29:35 -08:00
Jerry Zhang
afbd04449e [quant][graphmode] Swap dequantize after inline for ops that doesn't require observation (#33173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33173

How do we deal with ops that are defined for both floating point and quantized Tensors?

Category of ops: the ones that don't require observers, which means the quantization parameters (scale/zero_point) of the output of the op can be inferred from the quantization parameters of the inputs.
For example:
avg_pool, max_pool, flatten, transpose, upsample

A related topic is how we deal with ops like adaptive_avg_pool2d that do not require observation and also work with quantized tensors. If we insert quant/dequant for them, even quant fusion becomes a numerically changing operation, because the scale/zero_point for input and output are different.

Proposal

We can swap the operator with dequantize whenever we see it. For example, consider the following
pattern, where aten::general_op is defined for both floating point and quantized tensors:

%r = aten::conv(...)
%q = quantize(%r)
%dq = dequantize(%q)
%f = aten::general_op(%dq)
...

When we detect that all inputs of aten::general_op are produced by dequantize, we first delete all the dequantize nodes for the inputs and then insert a dequantize for each use of the output of the aten::general_op; note that this should work generally for all the cases we might encounter.

After transformation we’ll have:

%r = aten::conv(...)
%q = quantize(%r)
%x = aten::general_op(%q)
%f = dequantize(%x)
...

1. Multiple inputs
    1. We need to make sure all inputs of the aten::general_op are produced by dequantize before we do this transformation
2. Input used by multiple operators
    1. We already did this by inserting dequantize for each use of the value
3. Output used by multiple operators
    1. We’ll reuse the code that inserts dequantize(might need some refactor)

Note that concat does not currently belong to this category, since it does not inherit quantization parameters from its inputs.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20123590

fbshipit-source-id: de2febe1f37e4079457a23acaeccbc6d9c9e1f8a
2020-02-27 12:42:29 -08:00
Shihao Xu
9733711394 [JIT] Support calling Tensor.element_size() in TorchScript (#33808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33808

# Problem

https://github.com/pytorch/pytorch/issues/33620
ghstack-source-id: 99073701

Test Plan:
```
buck test mode/dev-nosan //caffe2/test:jit -- test_numel

buck test mode/dev-nosan //caffe2/test:jit -- test_element_size

buck build mode/dev-nosan //caffe2/test:jit \
&& buck-out/gen/caffe2/test/jit\#binary.par -r test_numel

buck build mode/dev-nosan //caffe2/test:jit \
&& buck-out/gen/caffe2/test/jit\#binary.par -r test_element_size
```

Compile error

P126667043

Generated code,
```
buck-out/dev/gen/caffe2/generate-code=register_aten_ops_0.cpp/register_aten_ops_0.cpp

buck-out/dev/gen/caffe2/generate-code=register_aten_ops_2.cpp/register_aten_ops_2.cpp
```
P126667064

Differential Revision: D7050644

fbshipit-source-id: 20dbdb9c500b6d7683c23e3049d43ed0ca06d831
2020-02-26 22:30:44 -08:00
davidriazati
cea0cc8ca8 [jit] Unify augmented assign handling (#33578)
Summary:
Stacked PRs
 * **#33578 - [jit] Unify augmented assign handling**
 * #32993 - [jit] Fix aug assign for non-tensor attributes

We handle augmented assignments to `Select` and `Var` statements differently, but the actual in-place update is the same for both, so this PR factors it out into a method so we don't have two code paths doing the same thing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/33578

Pulled By: driazati

Differential Revision: D20127647

fbshipit-source-id: 94f37acbd2551498de9d2ca09a514508266f7d31
2020-02-26 19:13:15 -08:00
Jerry Zhang
4c33222c51 [quant][graphmode] Replicate dequantize nodes (#33531)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33531

We already insert a dequantize for each use of the value, but there might still be cases where we only
see that a value is used multiple times after inlining. This pass adds support for replicating dequantize
after inlining to ensure the output of each dequantize is used by only one node, which is necessary to preserve
quantization patterns like `dequant - conv - quant`.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20123591

fbshipit-source-id: 6edb10a4566538bcf9379d332233f870372b7a63
2020-02-26 18:59:16 -08:00
davidriazati
2b9fa4a756 [jit] Fix flipped PackedSequence outputs in script (#32955)
Summary:
Stacked PRs
 * **#32955 - [jit] Fix flipped PackedSequence outputs in script**
 * #32953 - [jit] Support properties on `Device`

Fixes #32605
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32955

Pulled By: driazati

Differential Revision: D20103905

fbshipit-source-id: 84081213ed214846e563b9f05bcab0210bb1a71b
2020-02-26 18:53:27 -08:00
Michael Suo
150e025be8 [jit] stop printing crap in test_jit (#33779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33779

This should eliminate random warnings and print spew from test_jit.

It also fixes a bug where we weren't properly comparing captured outputs
(!)

Test Plan: Imported from OSS

Differential Revision: D20124224

Pulled By: suo

fbshipit-source-id: 9241d21fdf9470531b0437427b28e325cdf08d3a
2020-02-26 18:46:03 -08:00
Elias Ellison
857eb4145e [JIT] add support for torch.cdist (#33737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33737

Test Plan: Imported from OSS

Differential Revision: D20121916

Pulled By: eellison

fbshipit-source-id: b0427bbfd3ade1f3129c4a95a542fbc32c3abd76
2020-02-26 18:37:37 -08:00
Elias Ellison
f31b1d3453 [JIT] add support for lu_unpack (#33736)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353
2020-02-26 18:37:33 -08:00
Elias Ellison
4543cf4eb1 [JIT] add support for torch.lu to torchscript (#33724)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33724

Fix for https://github.com/pytorch/pytorch/issues/33381, partial fix of https://github.com/pytorch/pytorch/issues/30786

Test Plan: Imported from OSS

Differential Revision: D20077321

Pulled By: eellison

fbshipit-source-id: a1e6a0370712b36c9f66979098ac2f9d500ca5f6
2020-02-26 18:37:28 -08:00
Elias Ellison
fddf73250d [JIT] fix resolving of functions in torch/functional. fix compilation of torch.stft (#33504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33504

Fix resolution of functions that are bound onto torch in torch/functional.py. This does not fix compilation of all of those functions; those will be done in follow-ups. Does torch.stft as a start.

Fixes #21478

Test Plan: Imported from OSS

Differential Revision: D20014591

Pulled By: eellison

fbshipit-source-id: bb362f1b5479adbb890e72a54111ef716679d127
2020-02-26 18:35:43 -08:00
Elias Ellison
057fd5e10d add support for _modules, reducing special casing of nn.Sequential (#29495)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29495

This PR adds support for `_modules`, making it so we no longer need to special-case support for `nn.Sequential`. I was getting internal errors around the previous approach using `self.define()`, so I am adding this PR as part of the stack.

Fix for https://github.com/pytorch/pytorch/issues/28998

Test Plan: Imported from OSS

Differential Revision: D18412561

Pulled By: eellison

fbshipit-source-id: a8b24ebee39638fccf63b2701f65f8bb0de84faa
2020-02-26 18:07:19 -08:00
David Riazati
51e405743f Revert D20010383: [jit] Unify augmented assign handling
Test Plan: revert-hammer

Differential Revision:
D20010383

Original commit changeset: 52e559ce907e

fbshipit-source-id: 7ca938070d5e98c91e7a7b8485a3c1e790c3ceb2
2020-02-26 14:22:14 -08:00
davidriazati
867990dc17 [jit] Unify augmented assign handling (#33578)
Summary:
Stacked PRs
 * **#33578 - [jit] Unify augmented assign handling**
 * #32993 - [jit] Fix aug assign for non-tensor attributes

We handle augmented assignments to `Select` and `Var` statements differently, but the actual in-place update is the same for both, so this PR factors it out into a method so we don't have two code paths doing the same thing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33578

Pulled By: driazati

Differential Revision: D20010383

fbshipit-source-id: 52e559ce907e95e5c169ab9d9690d0d235db36f3
2020-02-26 14:09:40 -08:00
Jerry Zhang
479e474a37 [quant][graphmode] FoldConvBatchNorm2d support shared ClassTypes (#32379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32379

Folding Conv2d - BatchNorm2d modules means recalculating the weight and bias of the Conv2d module by incorporating the parameters
of BatchNorm2d, and also changing the method calls to call only the forward of the Conv2d module. This involves changing both the module
types and the graph, because the bias of Conv2d is a parameter when it has a value and is an attribute when it is
None (since JIT code assumes parameters are Tensors in multiple places); therefore
we'll need to remove the bias attribute when it is None and add a bias attribute later. Since a ClassType might be shared, we perform the
remove and add in separate steps and also keep track of the processed graphs to avoid modifying a graph and type multiple times.
However, we'll have to record the slot index of bias as well so we can replay the slot removal on other instances of the Conv2d module.

Test Plan:
tbd

Imported from OSS

Differential Revision: D20078719

fbshipit-source-id: cee5cf3764f3e0c0a4a2a167b78dbada2e3835cc
2020-02-24 17:29:13 -08:00
Thomas Viehmann
481e7f2e78 catch and propagate warnings for JIT ScriptMethods (#33010)
Summary:
We align it with ScriptFunctions by using the HANDLE_TH_ERRORS/END_HANDLE_TH_ERRORS_PYBIND macros.

Fixes https://github.com/pytorch/pytorch/issues/24155  or https://github.com/pytorch/pytorch/issues/24828 ?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33010

Differential Revision: D20053585

Pulled By: suo

fbshipit-source-id: c8876b54069285ba9638bb2328fd8738b59c396d
2020-02-24 10:28:17 -08:00
Nikolay Korovaiko
a7e22b4c6a add bailout checks to checkScript (#32802)
Summary:
This adds enough infrastructure to run bailout checks in `checkScript`. I'll need to figure out the best way to enable it for nightly builds now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32802

Differential Revision: D19974718

Pulled By: Krovatkin

fbshipit-source-id: 40485503f6d3ae14edcce98e1eec1f0559f3ad08
2020-02-21 21:18:54 -08:00
davidriazati
ee28831341 [jit] Fix aug assign for non-tensor attributes (#32993)
Summary:
Instead of erroring out, this de-sugars augmented assignments to class
members from `self.a += 1` to `self.a = self.a + 1`.
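
A minimal sketch of the behavior (the attribute and module are illustrative):

```
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super(Counter, self).__init__()
        self.a = 0

    def forward(self, x):
        self.a += 1  # compiled as: self.a = self.a + 1
        return x

scripted = torch.jit.script(Counter())
```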

Fixes #32973
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32993

Pulled By: driazati

Differential Revision: D19737636

fbshipit-source-id: 07307cde88d8c348a7affdafe26db21c74e28ec0
2020-02-21 08:42:35 -08:00
Hong Xu
a6a72ac68f Fix all occurrences of C416. (#33429)
Summary:
C416: Unnecessary (list/set) comprehension - rewrite using list/set().

See https://pypi.org/project/flake8-comprehensions/
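
The shape of the fix, for reference:

```
xs = (1, 2, 3)
ys = [x for x in xs]  # flagged by C416
ys = list(xs)         # equivalent and clearer
```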
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33429

Differential Revision: D19972858

Pulled By: ezyang

fbshipit-source-id: faac042a94c59d737bd5ae983121a0a029346e23
2020-02-21 08:32:22 -08:00
Elias Ellison
faa800eb5b [JIT] remove inline everything jitter skip (#33468)
Summary:
The `not inline_everything` check was causing the jitter check to be skipped whenever we emitted a function. Thanks to SplitInfinity for pointing this out.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33468

Differential Revision: D19975934

Pulled By: eellison

fbshipit-source-id: 03faf8d2fd93f148100d8cf49cb67b8e15cf1f04
2020-02-20 15:58:25 -08:00
Michael Suo
416413dec4 [jit] add inlined_graph method to ScriptFunctions (#33508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33508

Ever since we switched to not inlining by default, some users have
complained since they relied on inlining occurring to, e.g., process the
graph with some other tool. Add an inlined_graph for convenience in
those cases.
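
A minimal sketch of the difference:

```
import torch

@torch.jit.script
def helper(x):
    return x + 1

@torch.jit.script
def f(x):
    return helper(x) * 2

print(f.graph)          # shows the call to helper
print(f.inlined_graph)  # helper's body is inlined into f
```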

Test Plan: Imported from OSS

Differential Revision: D19977638

Pulled By: suo

fbshipit-source-id: fe1fa92ff888959203d5d1995930d488b5f9e24c
2020-02-19 15:41:25 -08:00
Zachary DeVito
83c347ff4a Remove prim::Constant op (#32804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32804

Constants are interpreter primitives so the op was not actually used.
This cleans up some of the logic around it.

This also fixes constant prop such that failures to look up an op
do not silently stop constant propagation. Instead, only errors
inside the op implementation itself will do this.

Test Plan: Imported from OSS

Differential Revision: D19673156

Pulled By: zdevito

fbshipit-source-id: 7beee59a6a67a6c2f8261d86bd505280fefa999e
2020-02-18 15:06:56 -08:00
Owen Anderson
1d743e3154 Add guard elimination support for aten::unsqueeze. (#33371)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33371

Differential Revision: D19920041

Pulled By: resistor

fbshipit-source-id: 906af47676dba014c31eef069a4753207f2efc60
2020-02-18 13:22:58 -08:00
Owen Anderson
d35a4c202e Add support for aten::slice to guard elimination. (#33311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33311

Differential Revision: D19911105

Pulled By: resistor

fbshipit-source-id: 402cfe5f2e03a62b78ed13157e1462cefd9eeafb
2020-02-14 22:54:37 -08:00
Elias Ellison
bf16688538 [JIT] peephole optimize values with NoneType (#33264)
Summary:
If a value has the type None, we can always replace it with a None constant.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33264

Differential Revision: D19878695

Pulled By: eellison

fbshipit-source-id: 5d0e7ffb37c5747997df093fec3183039d8dff4d
2020-02-13 12:03:49 -08:00
davidriazati
f61b45fc89 [jit] Support properties on Device (#32953)
Summary:
Stacked PRs
 * #32955 - [jit] Fix flipped PackedSequence outputs in script
 * **#32953 - [jit] Support properties on `Device`**

PyTorch devices have `index` and `type` properties. This PR adds support for both in TorchScript (see the sketch below).
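
A minimal sketch of the two properties in TorchScript:

```
import torch
from typing import Optional, Tuple

@torch.jit.script
def describe(t: torch.Tensor) -> Tuple[str, Optional[int]]:
    d = t.device
    return d.type, d.index  # e.g. ("cpu", None) or ("cuda", 0)

print(describe(torch.randn(1)))
```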
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32953

Pulled By: driazati

Differential Revision: D19849320

fbshipit-source-id: ce845258c6110058dd9ea1f759ef74b7ed2e786e
2020-02-12 18:59:10 -08:00
Zachary DeVito
99349defc1 remove unnecessary Node* ops (#32760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32760

Minor changes to the way ops are implemented to remove incidental use of Node*
in the operator implementation.

Current state for operators that previously took Node:

```
TBD:

USES NODE: prim::DifferentiableGraph(...) -> (...)
USES NODE: prim::profile(...) -> (...)
USES NODE: prim::FusionGroup(...) -> (...)
USES NODE: prim::PythonOp(...) -> (...)
USES NODE: prim::ImplicitTensorToNum(Tensor a) -> Scalar # next PR

Should be made interpreter primitives:

USES NODE: prim::TupleUnpack(...) -> (...)
USES NODE: prim::TupleSlice(...) -> (...)
USES NODE: prim::TupleConstruct(...) -> (...)
USES NODE: prim::ListUnpack(...) -> (...)
USES NODE: prim::ListConstruct(...) -> (...)
USES NODE: prim::DictConstruct(...) -> (...)
USES NODE: prim::Constant() -> (...)
USES NODE: prim::isinstance(...) -> (...)
USES NODE: prim::CreateObject(...) -> (...)
USES NODE: prim::fork(...) -> (...)
USES NODE: aten::warn(str message, *, int stacklevel=2) -> () # need stack level information, so ideally in interpreter so it can look at the stack

Should be made into vararg operators, i.e. the operators last argument should be an IValue
that contains the number of arguments.

USES NODE: prim::FusedConcat(...) -> (...)
USES NODE: prim::MMTreeReduce(...) -> (...)
USES NODE: prim::MMBatchSide(...) -> (...)
USES NODE: prim::ConstantChunk(...) -> (...)
USES NODE: prim::AutogradAnyNonZero(...) -> bool
USES NODE: prim::BroadcastSizes(...) -> (...)
USES NODE: prim::ChunkSizes(...) -> (...)
USES NODE: aten::format(str self, ...) -> str
USES NODE: prim::Print(...) -> (...)

fixed:

USES NODE: aten::extend(Tensor[](a!) self, Tensor [] other) -> ()
USES NODE: aten::copy(Tensor[](a) self) -> Tensor[]
USES NODE: aten::extend(int[](a!) self, int [] other) -> ()
USES NODE: aten::copy(int[](a) self) -> int[]
USES NODE: aten::extend(float[](a!) self, float [] other) -> ()
USES NODE: aten::copy(float[](a) self) -> float[]
USES NODE: aten::extend(bool[](a!) self, bool [] other) -> ()
USES NODE: aten::copy(bool[](a) self) -> bool[]
USES NODE: aten::extend(t[](a!) self, t [] other) -> ()
USES NODE: aten::copy(t[](a) self) -> t[]
USES NODE: aten::keys(Dict(str, t) self) -> str[](*)
USES NODE: aten::values(Dict(str, t) self) -> t[](*)
USES NODE: aten::dict((str, tVal)[] inputs) -> Dict(str, tVal)
USES NODE: aten::keys(Dict(int, t) self) -> int[](*)
USES NODE: aten::values(Dict(int, t) self) -> t[](*)
USES NODE: aten::dict((int, tVal)[] inputs) -> Dict(int, tVal)
USES NODE: aten::keys(Dict(float, t) self) -> float[](*)
USES NODE: aten::values(Dict(float, t) self) -> t[](*)
USES NODE: aten::dict((float, tVal)[] inputs) -> Dict(float, tVal)
USES NODE: aten::keys(Dict(Tensor, t) self) -> Tensor[](*)
USES NODE: aten::values(Dict(Tensor, t) self) -> t[](*)
USES NODE: aten::dict((Tensor, tVal)[] inputs) -> Dict(Tensor, tVal)
USES NODE: aten::test_vartype2(t a, t[] b) -> (t[])
USES NODE: aten::_ncf_unsqueeze(Tensor self, int ndim) -> Tensor
USES NODE: aten::_ncf_view(Tensor self, int[] input_shape, int normalized_ndim) -> Tensor
USES NODE: prim::is_none(int? a) -> bool
USES NODE: aten::__interpolate(Tensor input, int? size = None, float[]? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int[]? size = None, float[]? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int? size = None, float? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int[]? size = None, float? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::sorted(t[](a) self) -> (t[])
USES NODE: aten::sort(t[](a!) self, bool reverse=False) -> ()
USES NODE: aten::test_vartype(t[] a, t b) -> (t)
USES NODE: prim::unchecked_unwrap_optional(t(a)? optional) -> t(a)
USES NODE: prim::unchecked_cast(...) -> (...)
USES NODE: aten::dict() -> Dict(str, Tensor)
USES NODE: prim::Load(...) -> (...)
USES NODE: prim::Store(...) -> (...)
USES NODE: prim::Drop(...) -> (...)
USES NODE: aten::tensor(t[] data, *, ScalarType? dtype=None, Device? device=None, bool requires_grad=False) -> Tensor
USES NODE: aten::as_tensor(t[] data, *, ScalarType? dtype=None, Device? device=None) -> Tensor
```

Test Plan: Imported from OSS

Differential Revision: D19615387

Pulled By: zdevito

fbshipit-source-id: 95298c3c4249b9f812c332d13f0fb79daeecb662
2020-02-12 14:49:02 -08:00
Hongyu Cai
de27f4261d [jit] remove redundant variables from JIT TestCase
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29091

Differential Revision: D19746083

Pulled By: suo

fbshipit-source-id: 76fd71740fe7a3f52da361d96a7b694ec208de24
2020-02-07 10:42:33 -08:00
Michael Suo
df1d68d52e [jit] fix parser for one-line functions (#32941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32941

The Python grammar allows single-statement one-line functions, so we
should allow them in the string parser.

Test Plan: Imported from OSS

Differential Revision: D19704153

Pulled By: suo

fbshipit-source-id: 8c06cc9c600aa2a9567b484a1ecc0360aad443e3
2020-02-05 13:11:47 -08:00
James Reed
f393adc0ed [JIT] Fix python pickle serialization for torchbind (#32878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32878
ghstack-source-id: 97736045

Test Plan: Imported from OSS

Differential Revision: D19669879

fbshipit-source-id: 23ea91cffe7344d1eed014e2509983c281dd18d3
2020-02-04 19:29:55 -08:00
James Reed
bc4790b3aa [JIT] Trace uses of torchbind classes as module attributes (#32833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32833
ghstack-source-id: 97736046

Test Plan: Imported from OSS

Differential Revision: D19645714

fbshipit-source-id: 10a7271f13c3588aea666b44b916e90ba7b3c666
2020-02-04 19:28:37 -08:00
Elias Ellison
040bc1d0e1 [JIT] make is_scripting a condvalue (#32871)
Summary:
Add `torch.jit.is_scripting` to the list of CondValues, i.e. values that, when used as the condition of an `if` statement, cause only one side of the `if` to be compiled. I'm not sure if we actually want this PR.

Pros:
- Makes it easier to add features that are not yet supported in TorchScript (like has_torch_function)
- The current idiom of checking `torch.jit.is_scripting` and factoring the block out into a function annotated with `torch.jit.ignore` is functionally equivalent but much more cumbersome (see the sketch below)

Cons:
- Makes it easier to add features that are not yet supported in TorchScript
- Perhaps it is confusing to a reader what is being compiled. We could potentially give it an all-caps name or otherwise rename it to make it stand out visually.
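
A minimal sketch of the pattern this enables (the function and names are illustrative, not from this PR):

```
import torch

def describe(x):
    if torch.jit.is_scripting():
        # when scripted, only this branch is compiled
        return "tensor"
    else:
        # eager-only path; numpy() is not supported in TorchScript
        return type(x.numpy()).__name__

scripted = torch.jit.script(describe)
print(scripted(torch.ones(2)))  # "tensor"
```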
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32871

Differential Revision: D19670383

Pulled By: eellison

fbshipit-source-id: 5257b0bd23c66f199d59a7f2c911e948301e5588
2020-01-31 18:23:42 -08:00
Elias Ellison
10bd21d550 [JIT] fix nested select assign (#32877)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/31902

```
self.sub.a = 1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32877

Differential Revision: D19670322

Pulled By: eellison

fbshipit-source-id: 6d8f350b4d1169be1d2a56050fccd7c246ad9212
2020-01-31 16:58:26 -08:00
Hong Xu
b16dab8a41 Coding header is better specified in lowercase letters (#32850)
Summary:
The Python document <https://www.python.org/dev/peps/pep-0263/> gives
all examples using lowercase letters. Although it doesn't say so
explicitly, the following paragraph seems to indicate that uppercase
letters aren't legitimate:

> If a source file uses both the UTF-8 BOM mark signature and a magic encoding comment, the only allowed encoding for the comment is 'utf-8'.  Any other encoding will cause an error.

My Emacs also complains about the uppercase letters every time I save
the file.
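
For reference, the conventional lowercase form per PEP 263:

```
# -*- coding: utf-8 -*-
```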
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32850

Differential Revision: D19663281

Pulled By: ezyang

fbshipit-source-id: 48127d3c2fd6e22dd732a2766913735136ec2ebc
2020-01-31 10:02:30 -08:00
Michael Suo
3552be1090 [jit] fix the NoneType param/buffer hack (#32745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32745

Some parameters (like `bias` in conv) are optional. To achieve this
previously, you had to add `bias` as a constant, which would invoke some
pretty weird behavior in the frontend, summarized as:
```
if bias is not None:
  add it as a parameter normally
else: # bias is None
  add it as a constant with the value None
```

There are several things bad about this:
1. Bias is not a constant. Marking it `__constants__` is confusing.
2. It basically relies on an implementation detail (the frontend
processes parameters before constants) to work.

Okay, whatever. I don't even know why we did this originally, but
getting rid of it doesn't break anything, so I assume improved NoneType
refinement has made this a non-issue.

Note on perf: this will make no difference; if bias is `None` it's still
folded out today, and if bias is a Tensor it is added as a parameter
both before and after this change
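
A sketch of the optional-parameter pattern this cleans up (the module and shapes are illustrative):

```
import torch

class MyLinear(torch.nn.Module):
    def __init__(self, bias: bool = True):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))
        if bias:
            self.bias = torch.nn.Parameter(torch.zeros(4))
        else:
            # a plain optional parameter; no __constants__ entry needed
            self.register_parameter("bias", None)

    def forward(self, x):
        y = x @ self.weight
        if self.bias is not None:  # NoneType refinement handles this
            y = y + self.bias
        return y

m = torch.jit.script(MyLinear(bias=False))
print(m(torch.randn(2, 4)).shape)
```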

Test Plan: Imported from OSS

Differential Revision: D19628634

Pulled By: suo

fbshipit-source-id: d9128a09c5d096b938fcf567b8c23b09ac9ab37f
2020-01-29 17:04:39 -08:00
James Reed
465ebd58ba [JIT] pickle serialization for custom bound classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32604

Test Plan: Imported from OSS

Differential Revision: D19566633

fbshipit-source-id: 9387d3ff45cbd6ccde49ce190a52859481cc301c
2020-01-28 11:02:59 -08:00
James Reed
1719da13f9 [JIT] Support for registering C++ lambdas as methods on custom C++ class
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32553

Test Plan: Imported from OSS

Differential Revision: D19543269

Pulled By: jamesr66a

fbshipit-source-id: 7e566650295e9d1c4f2f716470e061308a6210a0
2020-01-28 11:01:07 -08:00
Michael Suo
63170431f9 [jit] fix segfault on missing getstate (#32642)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32642

Previously, if we defined `__setstate__` but not `__getstate__`, we
would segfault. This PR turns that into a comprehensible error message
(and improves another error message as well).

Fixes https://github.com/pytorch/pytorch/issues/25886
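
A hypothetical repro sketch (the module and state type are illustrative):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x

    @torch.jit.export
    def __setstate__(self, state: int):
        # no matching __getstate__ is defined on this module
        pass

m = torch.jit.script(M())
torch.jit.save(m, "m.pt")  # previously segfaulted; now a clear error
```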

Test Plan: Imported from OSS

Differential Revision: D19596463

Pulled By: suo

fbshipit-source-id: dbe76bc36bc747d65fb0223184c009e0e9ba072c
2020-01-28 01:25:37 -08:00
James Reed
d68592a440 [JIT] Fix classes as attributes in recursive scripting
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32594

Test Plan: Imported from OSS

Differential Revision: D19562951

Pulled By: jamesr66a

fbshipit-source-id: 3d5491c1c23456f107390a78be16da687de951e6
2020-01-27 20:37:48 -08:00
Jerry Zhang
91f10a1de1 [quant][graphmode][refactor] Better API for fold_convbn (#32380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32380

We'll clone the module first, then fold conv-bn, and return a new
module

Test Plan:
.

Imported from OSS

Differential Revision: D19508033

fbshipit-source-id: 328e91a2c9420761c904a7f2b62dab4cfaaa31ac
2020-01-24 15:46:47 -08:00
Nikolay Korovaiko
7d0f0b62de API for testing bailouts (#32518)
Summary:
This API seems quite useful for making sure all bailouts in a graph are triggered. I used it for testing torchvision models, and I was wondering if this might be something we actually want to have? zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32518

Differential Revision: D19553147

Pulled By: Krovatkin

fbshipit-source-id: 7542c99051588b622091aec6d041c70731ca5d26
2020-01-24 11:19:41 -08:00
James Reed
6745bfc31c Revert "Remove __torch__ from custom class qualname" (#32514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32514

This reverts commit c7fdf5b251.

Test Plan: Imported from OSS

Differential Revision: D19525532

Pulled By: jamesr66a

fbshipit-source-id: 126f4e87250a2ac739bd7aa161a0f7b39f143d38
2020-01-23 14:56:25 -08:00
James Reed
69f9bf8893 [JIT] Support returning tuple from custom bound C++ method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32477

Test Plan: Imported from OSS

Differential Revision: D19509927

Pulled By: jamesr66a

fbshipit-source-id: 7d407150402cc19344c3ec3b4a27b3d7c464e8ac
2020-01-23 14:56:15 -08:00
James Reed
7e14c420ae [JIT] Test __getstate__ and __setstate__ for custom bound C++ classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32470

Test Plan: Imported from OSS

Differential Revision: D19508250

Pulled By: jamesr66a

fbshipit-source-id: 481299fb3c18fa874c2a1d2993984bb6b3193bac
2020-01-23 14:56:06 -08:00
James Reed
dbd29e5668 [JIT] Passing custom class as arg (#32260)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32260

This makes it possible to actually pass a custom class instance as an argument to a ScriptFunction

Test Plan: Imported from OSS

Differential Revision: D19424252

Pulled By: jamesr66a

fbshipit-source-id: c3530186619655781dedbea03c2ad321aaff1cb8
2020-01-23 14:54:59 -08:00
Elias Ellison
ef94496b36 [JIT] throw if no self arg on ignored methods (#32503)
Summary:
There was a user who did this (declared an ignored method without a `self` argument) and it would segfault.
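
A hypothetical repro sketch:

```
import torch

class M(torch.nn.Module):
    @torch.jit.ignore
    def helper():  # note: no `self` argument
        return 1

    def forward(self, x):
        return x + self.helper()

torch.jit.script(M())  # previously a segfault; now throws a clear error
```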
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32503

Differential Revision: D19538481

Pulled By: eellison

fbshipit-source-id: dc3752028b9eff6ac88c025e8a2b5f8fd44ce32f
2020-01-23 14:27:00 -08:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Yanli Zhao
193ac31441 [jit] Enable IValue to hold a PyObject (#32491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32491

This PR enables IValue to hold a pure PyObject by adding a new enum tag
and a new jit_type to denote the existence of PyObject in IValue and the
JIT type system. We don't plan to expose this to users.

This is the basic piece that enables IValue to be adopted more broadly,
e.g. making RRef always hold an IValue; it might also simplify some
compiler logic
ghstack-source-id: 97039980

Test Plan: Imported from OSS

Differential Revision: D19502234

fbshipit-source-id: 90be001706d707d376cfbea25980fd82980df84a
2020-01-22 15:48:32 -08:00
Elias Ellison
38d122eca9 implement tuple constants (#31841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31841

Add tuple constants to JIT. The constraint here is that all elements of a tuple must themselves be insertable as constants. Previously tuples were special-cased in constant propagation, but now that more passes insert constants, such as freezing, we should just have tuples be representable as constants.
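
A small sketch of what now works (the values are illustrative):

```
import torch

@torch.jit.script
def f():
    # every element is itself constant-insertable, so after constant
    # propagation/pooling the whole tuple can live in one prim::Constant
    return (1, 2.0, "three")

print(f.graph)
```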

Test Plan: Imported from OSS

Differential Revision: D19439514

Pulled By: eellison

fbshipit-source-id: 3810ba08ee349fa5598f4b53ea64525996637b1a
2020-01-22 12:13:31 -08:00
Elias Ellison
adf0916606 Add str[] float[] constants resubmit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31791

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D19439513

Pulled By: eellison

fbshipit-source-id: a04c7401687b051f0d4fb4794963931ebe004194
2020-01-22 12:11:58 -08:00
peter
b77c25dec0 Fix dll load logic for Python 3.8 on Windows (#32215)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31181 and https://github.com/pytorch/pytorch/pull/31162#discussion_r362495611.
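
A minimal sketch of the Python 3.8 pattern involved (assumption: the fix registers DLL directories explicitly, since Python 3.8 on Windows no longer searches `PATH` for extension-module dependencies; the path below is hypothetical):

```
import os
import sys

if sys.platform == "win32" and sys.version_info >= (3, 8):
    os.add_dll_directory(r"C:\path\to\torch\lib")  # hypothetical path
```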
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32215

Differential Revision: D19501869

Pulled By: ezyang

fbshipit-source-id: 363824e52d2592ad968ecf1df345aa4c0daff915
2020-01-22 08:33:34 -08:00
Jerry Zhang
44b270d892 insert_quant_dequant pass support shared class types (#31408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31408

We'll error out when a graph is quantized with different QSchemes.
This only occurs when we have two modules of the same type (e.g. two Conv2d modules initialized with
the same arguments) that are quantized with two configs that would produce different quantized graphs, for example
per-tensor affine and per-channel affine. This is a rare case, so it should be OK to skip for now.
Actual support will come later.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D19162366

fbshipit-source-id: 798f06d0ddef0c8458237ce88b62159cc77eec8b
2020-01-21 22:18:49 -08:00
James Reed
1ecad2bb2b Test passing custom class instance to bound method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32320

Test Plan: Imported from OSS

Differential Revision: D19437335

Pulled By: jamesr66a

fbshipit-source-id: 8f5166dbe6fc5704b12b6224932460b12be0d39b
2020-01-17 23:09:38 -08:00
James Reed
c7078a1ce8 Fix returning instance of custom class from method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32312

Test Plan: Imported from OSS

Differential Revision: D19433511

Pulled By: jamesr66a

fbshipit-source-id: f048d5f60eaba992ee42fea2d318a59b3a156578
2020-01-17 23:09:34 -08:00
Elias Ellison
e7bc1663bd fix unchecked cast alias analysis (#32309)
Summary:
Unchecked cast just refines the type of a value; the value itself stays the same, so the output should alias the input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32309

Differential Revision: D19439037

Pulled By: eellison

fbshipit-source-id: fe6902d0d9a5a9ef5e9c13e1dbd056576d8c327e
2020-01-17 12:29:28 -08:00
Nikolay Korovaiko
53708e21ed classic fixed-point liveness
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31724

Differential Revision: D19426570

Pulled By: Krovatkin

fbshipit-source-id: 3387dfb25e6e9456d5d0517eac1d2e44e61d6813
2020-01-16 15:13:22 -08:00
Michael Suo
90c65b81c3 Define repr() on IValues (#32232)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32232

Previously, we were using `operator<<` as the default way of printing
IValue constants during serialization. The semantics of `operator<<`
were ill-defined, and this bit us in particular with strings and lack of
quoting.

This PR defines the role of `operator<<`: much like Python `str()`, it
is intended to produce a human-readable-ish representation for
debugging purposes.

This PR also defines a new `repr()` function on IValue that is intended
to produce a valid Python expression that can be used to recreate an
object with the same value. `repr()` is not defined on all IValue kinds
(notably tensors!) for this reason.

Test Plan: Imported from OSS

Differential Revision: D19417036

Pulled By: suo

fbshipit-source-id: c102d509eaf95a28b6a62280bc99ca6f09603de5
2020-01-15 17:35:41 -08:00
Richard Zou
19bbb4fccb Stop building documentation in pytorch_linux_xenial_cuda*_build (#32187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32187

Fixes #32058. Previously we would build documentation during the pytorch
linux cuda build. We don't actually need to do this because we have a
dedicated python_doc_build job that builds the docs. With this change,
the CUDA build should run ~10 minutes faster, giving devs faster signal.

Test Plan: - Check the CUDA (10.1) build on this PR, make sure it doesn't build the docs.

Differential Revision: D19400417

Pulled By: zou3519

fbshipit-source-id: e8fb2b818146f33330e06760377a9afbc18a71ed
2020-01-15 07:48:42 -08:00
Nikolay Korovaiko
02c3493a84 Fix an invalid peephole transformation if input/output values are written to (#28455)
Summary:
fixes https://github.com/pytorch/pytorch/issues/28360
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28455

Differential Revision: D19374601

Pulled By: Krovatkin

fbshipit-source-id: 622f24b40aba03e79e55a6b8d25d88417f7d8bad
2020-01-14 16:28:07 -08:00
davidriazati
61e509b992 Skip un-runnable tests (#31965)
Summary:
`test_init_ops` calls `orthogonal_`, which fails without LAPACK (this test was just missing a skip condition)

The cpp tests would fail with an `undefined symbol` error if run with `BUILD_TESTS=0`, so this PR skips them if that flag is `0`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31965

Pulled By: driazati

Differential Revision: D19320064

fbshipit-source-id: d1dcd36714107688ded25a414e8969abe026bd03
2020-01-14 11:36:52 -08:00
Jerry Zhang
1f34801460 More robust mangling (#31978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31978

Currently we keep a `mangleIndex_` that's internal to the compilation unit and
just increment the index when we find the original name is mangled; this doesn't
guarantee the new name is not already defined.
This PR fixes the problem by querying whether the new name is defined or not.
fixes: https://github.com/pytorch/pytorch/issues/31268

Test Plan:
fixes the issue

Imported from OSS

Differential Revision: D19350535

fbshipit-source-id: fe3262b2838d4208ab72e2cd4a5970b3a792ae86
2020-01-13 11:11:50 -08:00
Elias Ellison
8ecd3f783d check for object equality in constant pooling (#31800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31800

If we know that two constants are the same object, we can ignore other constraints and pool them together. This fixes an issue introduced by the other PR where quantization relied on constant pooling happening for correctness.

Test Plan: Imported from OSS

Differential Revision: D19269499

Pulled By: eellison

fbshipit-source-id: 9d4396125aa6899cb081863d463d4f024135cbf4
2020-01-08 16:47:07 -08:00
davidriazati
883fb5434a Use real argument names for Python functions (#29300)
Summary:
This hooks up `inspect` so that Python functions get their parameter
names attached instead of naming them `0, 1, 2, ...`. This also fixes
issue #28537 where `ignore` functions were improperly typing `self`.
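
A sketch of the mechanism (standard library only):

```
import inspect

def f(alpha, beta=2):
    return alpha + beta

# the real parameter names are recoverable and can be attached to
# graph inputs instead of `0, 1, 2, ...`
print([p.name for p in inspect.signature(f).parameters.values()])
# ['alpha', 'beta']
```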
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29300

Pulled By: driazati

Differential Revision: D19256434

fbshipit-source-id: 6a1fe7bd0afab708b8439517798955d0abfeb44c
2020-01-08 15:41:28 -08:00
Artem Volkhin
3a2757c682 Fix tracing for modules with List[Tensor] as output (#31343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31343

Fix an issue in TorchScript tracing for modules with `c10::List<at::Tensor>` as an output. TensorList was not supported properly.
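
A rough sketch of the shape of the case being fixed (illustrative; note that current tracing requires `strict=False` for mutable container outputs):

```
import torch
from typing import List

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
        return [x, x + 1]

traced = torch.jit.trace(M(), torch.randn(2), strict=False)
print(traced(torch.randn(2)))
```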

Test Plan: unit tests

Reviewed By: wanchaol

Differential Revision: D18850722

fbshipit-source-id: 87a223104d1361fe754d55deceeb1e8bbcad629b
2020-01-07 11:57:25 -08:00
Jerry Zhang
5579611544 Enable foldbn tests (#29220)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29220

Support for accessing constants was added in previous
PRs; this PR re-enables the foldbn tests

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D18846848

fbshipit-source-id: 90ceaf42539ffee80b984e0d8b2420da66c263c3
2020-01-04 11:47:01 -08:00
Jerry Zhang
ebe69236d1 Expose class constant through attr and setattr in object (#29219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29219

We added class constants in previous PRs; this PR allows access to
class constants through the object API

Test Plan:
build/bin/test_jit
python test/test_jit.py

Imported from OSS

Differential Revision: D18846851

fbshipit-source-id: 888a6517d5f747d1f8ced283c0c2c30b2f6c72c6
2020-01-04 11:09:35 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
Lu Fang
cb1af5f61f Revert D19233558: add float[] str[] constants
Test Plan: revert-hammer

Differential Revision:
D19233558

Original commit changeset: 4f7c6d9ddbe7

fbshipit-source-id: a5020a9169e349a5970323471d673e8cd7818c66
2019-12-31 11:57:34 -08:00
Elias Ellison
dd0f2f0c19 add float[] str[] constants (#31503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31503

Add support for float list and string list constants, which enables better constant propagation + constant pooling + freezing.
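
A small sketch of what this enables (the values are illustrative):

```
import torch

@torch.jit.script
def f():
    # float and string lists can now be baked in as single constants
    return [1.0, 2.0], ["a", "b"]
```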

Test Plan: Imported from OSS

Differential Revision: D19233558

Pulled By: eellison

fbshipit-source-id: 4f7c6d9ddbe7623757a9a20606ce5f394e14e93d
2019-12-30 11:58:17 -08:00
davidriazati
6064223808 @slowTest some slow tests (#31706)
Summary:
These are all the jit tests that take > 10 seconds according to `pytest test/test_jit.py --durations=15`

```
32.76s call     test/test_jit.py::TestModels::test_super_resolution
32.20s call     test/test_jit.py::TestModels::test_neural_style
30.90s call     test/test_jit.py::TestJit::test_export_batchnorm
25.95s call     test/test_jit.py::TestJit::test_dropout_module_requires_grad
22.24s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Transformer
12.38s call     test/test_jit.py::TestScript::test_fuser_double_float_codegen
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31706

Pulled By: driazati

Differential Revision: D19251567

fbshipit-source-id: 8e76f717506b8bf28d1a63ce302feb0446dc9141
2019-12-30 11:45:24 -08:00
Mingbo Wan
647569e546 get rid of choco install (#30897)
Summary:
7zip and cmake are part of the base image, so there is no need to re-install them. Removing the install step makes build/test more stable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30897

Differential Revision: D19232961

Pulled By: mingbowan

fbshipit-source-id: fa3bbd1325839a2a977bf13fdbd97fda43793b8d
2019-12-27 13:12:04 -08:00
davidriazati
446e9af5b9 Fix parsing of big float literals (#29940)
Summary:
Stacked PRs
 * **#29940 - [jit] Fix parsing of big float literals**
 * #29935 - [jit] Fix hex literal parsing
 * #29931 - [jit] Throw a better error for int too big for int64_t
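
A small sanity-check sketch for the bold item above (#29940); the literal is illustrative:

```
import torch

@torch.jit.script
def f() -> float:
    return 1.7976931348623157e308  # near-DBL_MAX literal

assert f() == 1.7976931348623157e308
```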
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29940

Pulled By: driazati

Differential Revision: D19186604

fbshipit-source-id: 6ef66588a5cf956f281e7bd1e5584ef06f5296e9
2019-12-23 17:21:07 -08:00
Gregory Chanan
68e5172382 Support optional float parameters (float?, optional<double>). (#31517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31517

This is going to be used by upsample (which currently uses magic values to represent optionals).

For now, we just introduce a fake function for testing (torch._test_optional_float(x)).

Test Plan: Imported from OSS

Differential Revision: D19198721

Pulled By: gchanan

fbshipit-source-id: 0a1382fde0927c5d277d02d62bfb31fb574b8c74
2019-12-23 08:33:39 -08:00
James Reed
7d630278da Separate torchbind from Python (#30242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/29501

Currently blocked on schema serialization issue

Test Plan: Imported from OSS

Differential Revision: D18463063

Pulled By: jamesr66a

fbshipit-source-id: c12a1b644eb9bf04e68ff93cccf91d6cb3e75359
2019-12-21 22:52:40 -08:00
Martin Yuan
11854bcd38 Add test to torch.jit.export_opnames, make the _C function private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31446

Test Plan: Imported from OSS

Differential Revision: D19172851

Pulled By: iseeyuan

fbshipit-source-id: f06d8766ed73c9abe4ebf41c402ee64880d745be
2019-12-20 13:38:43 -08:00
Nikolay Korovaiko
5375ceae80 run optimizations on pre-profiled graph (#31392)
Summary:
This is the first stab at running profile-insensitive optimizations on pre-profiled graphs. Running those optimizations has the potential to simplify graphs greatly before GuardElimination, and GuardElimination should then be able to remove more guards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31392

Differential Revision: D19173639

Pulled By: Krovatkin

fbshipit-source-id: 2485a2a598c10f9b5445efb30b16439ad4551b3f
2019-12-20 10:49:08 -08:00
Zachary DeVito
457286a383 fix missing type check in dictionary literal
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31375

Test Plan: Imported from OSS

Differential Revision: D19145440

Pulled By: zdevito

fbshipit-source-id: 69909089586149ef766b4858d3420864a81b2493
2019-12-19 16:22:36 -08:00
Nikolay Korovaiko
fc3103b116 fixing a naming issue in creating a residual loop node in a bailout graph (#31400)
Summary:
This addresses the issue of differentiating between the `%4` in
`%12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3)` and the `%4` in `%y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24` inside the loop's body when building a residual continuation loop, because these should be different values.

```
[DUMP profiling_graph_executor_impl.cpp:124] with prim::BailoutTemplate_0 = graph(%z.1 : int,
[DUMP profiling_graph_executor_impl.cpp:124]       %size.1 : int):
[DUMP profiling_graph_executor_impl.cpp:124]   %2 : Tensor = prim::Constant[value= 1  1 [ CPUDoubleType{2} ]]()
[DUMP profiling_graph_executor_impl.cpp:124]   %3 : Double(2) = prim::BailOut[index=0](%2, %z.1, %size.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %4 : int = prim::Constant[value=0]() # test_jit.py:3772:54
[DUMP profiling_graph_executor_impl.cpp:124]   %5 : None = prim::Constant()
[DUMP profiling_graph_executor_impl.cpp:124]   %6 : bool = prim::Constant[value=1]() # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]   %counters.1 : int[] = prim::ListConstruct()
[DUMP profiling_graph_executor_impl.cpp:124]   %8 : int = prim::Constant[value=8]()
[DUMP profiling_graph_executor_impl.cpp:124]   %9 : int = aten::__round_to_zero_floordiv(%size.1, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %10 : int = aten::mul(%9, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %11 : int = aten::sub(%size.1, %10)
[DUMP profiling_graph_executor_impl.cpp:124]   %12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.2 : int, %15 : int, %y.7 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %17 : Double(2) = prim::BailOut[index=1](%y.7, %z.1, %counters.1, %9, %11, %i.2, %15)
[DUMP profiling_graph_executor_impl.cpp:124]       %18 : int[] = aten::append(%counters.1, %15) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %19 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %20 : Tensor = aten::ones(%19, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %21 : Double(1) = prim::BailOut[index=2](%20, %z.1, %counters.1, %9, %11, %i.2, %15, %17)
[DUMP profiling_graph_executor_impl.cpp:124]       %22 : Tensor[] = prim::ListConstruct(%17, %21)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %24 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %25 : int = aten::add(%15, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %26 : int[] = aten::append(%counters.1, %25) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %27 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %28 : Tensor = aten::ones(%27, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %29 : Double(1) = prim::BailOut[index=3](%28, %z.1, %counters.1, %9, %11, %i.2, %y.5, %25)
[DUMP profiling_graph_executor_impl.cpp:124]       %30 : Tensor[] = prim::ListConstruct(%y.5, %29)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.9 : Double(4) = aten::cat(%30, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %32 : int = aten::add(%25, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %33 : int[] = aten::append(%counters.1, %32) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %34 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %35 : Tensor = aten::ones(%34, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %36 : Double(1) = prim::BailOut[index=4](%35, %z.1, %counters.1, %9, %11, %i.2, %y.9, %32)
[DUMP profiling_graph_executor_impl.cpp:124]       %37 : Tensor[] = prim::ListConstruct(%y.9, %36)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.10 : Double(5) = aten::cat(%37, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %39 : int = aten::add(%32, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %40 : int[] = aten::append(%counters.1, %39) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %41 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %42 : Tensor = aten::ones(%41, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %43 : Double(1) = prim::BailOut[index=5](%42, %z.1, %counters.1, %9, %11, %i.2, %y.10, %39)
[DUMP profiling_graph_executor_impl.cpp:124]       %44 : Tensor[] = prim::ListConstruct(%y.10, %43)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.11 : Double(6) = aten::cat(%44, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %46 : int = aten::add(%39, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %47 : int[] = aten::append(%counters.1, %46) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %48 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %49 : Tensor = aten::ones(%48, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %50 : Double(1) = prim::BailOut[index=6](%49, %z.1, %counters.1, %9, %11, %i.2, %y.11, %46)
[DUMP profiling_graph_executor_impl.cpp:124]       %51 : Tensor[] = prim::ListConstruct(%y.11, %50)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.12 : Double(7) = aten::cat(%51, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %53 : int = aten::add(%46, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %54 : int[] = aten::append(%counters.1, %53) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %55 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %56 : Tensor = aten::ones(%55, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %57 : Double(1) = prim::BailOut[index=7](%56, %z.1, %counters.1, %9, %11, %i.2, %y.12, %53)
[DUMP profiling_graph_executor_impl.cpp:124]       %58 : Tensor[] = prim::ListConstruct(%y.12, %57)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.13 : Double(8) = aten::cat(%58, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %60 : int = aten::add(%53, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %61 : int[] = aten::append(%counters.1, %60) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %62 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %63 : Tensor = aten::ones(%62, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %64 : Double(1) = prim::BailOut[index=8](%63, %z.1, %counters.1, %9, %11, %i.2, %y.13, %60)
[DUMP profiling_graph_executor_impl.cpp:124]       %65 : Tensor[] = prim::ListConstruct(%y.13, %64)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.14 : Double(9) = aten::cat(%65, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %67 : int = aten::add(%60, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %68 : int[] = aten::append(%counters.1, %67) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %69 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %70 : Tensor = aten::ones(%69, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %71 : Double(1) = prim::BailOut[index=9](%70, %z.1, %counters.1, %9, %11, %i.2, %y.14, %67)
[DUMP profiling_graph_executor_impl.cpp:124]       %72 : Tensor[] = prim::ListConstruct(%y.14, %71)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.15 : Tensor = aten::cat(%72, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %74 : int = aten::add(%67, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %74, %y.15)
[DUMP profiling_graph_executor_impl.cpp:124]   %75 : Double(10) = prim::BailOut[index=10](%y.1, %z.1, %counters.1, %11, %12)
[DUMP profiling_graph_executor_impl.cpp:124]   %76 : int, %y : Tensor = prim::Loop(%11, %6, %12, %75) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.1 : int, %79 : int, %y.6 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %81 : Double(*) = prim::BailOut[index=11](%y.6, %z.1, %counters.1, %11, %i.1, %79)
[DUMP profiling_graph_executor_impl.cpp:124]       %82 : int[] = aten::append(%counters.1, %79) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %83 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %84 : Tensor = aten::ones(%83, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %85 : Double(1) = prim::BailOut[index=12](%84, %counters.1, %11, %i.1, %79, %81)
[DUMP profiling_graph_executor_impl.cpp:124]       %86 : Tensor[] = prim::ListConstruct(%81, %85)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.4 : Tensor = aten::cat(%86, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %88 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %89 : int = aten::add(%79, %88)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %89, %y.4)
[DUMP profiling_graph_executor_impl.cpp:124]   %90 : Double(12) = prim::BailOut[index=13](%y, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %91 : (Tensor, int[]) = prim::TupleConstruct(%90, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   return (%91)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31400

Differential Revision: D19172750

Pulled By: Krovatkin

fbshipit-source-id: 85d3aac4e80b65b83b6be3c0bca8075a731a2b7e
2019-12-19 00:34:50 -08:00
Elias Ellison
fb24f7c4ad catch all exceptions in converting default values to ivalues (#31398)
Summary:
Previously we would only catch `py::cast_error` which led to incomprehensible error messages like: `TypeError: 'NoneType' object is not iterable`. We are running arbitrary pybind code here, and not doing anything with the error message, so we should be less restrictive with the types of errors we catch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31398

Differential Revision: D19166655

Pulled By: eellison

fbshipit-source-id: 84db8b3714c718b475913f2f4bb6f19e62f2d9ec
2019-12-18 20:27:46 -08:00
Jerry Zhang
fe707c7849 Use default_observer and default_weight_observer in tests (#31424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31424

att

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D19162368

fbshipit-source-id: 33b95ba643eeeae942283bbc33f7ceda8d14c431
2019-12-18 18:35:07 -08:00
James Reed
a3cdb7eca3 Fix default instantiation of dynamic quantized LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31433

Test Plan: Imported from OSS

Differential Revision: D19164539

Pulled By: jamesr66a

fbshipit-source-id: 7045817ab3dfb530c4480a10523c4c6bcdbfc7eb
2019-12-18 16:59:00 -08:00
davidriazati
148bcd3ee5 Add support for builtins as attributes (#31269)
Summary:
Fixes #27495

This adds builtins as another piece of a concrete type. They're separate from normal functions since they represent the `BuiltinFunction` sugared value (which is a direct call to a builtin op). It also moves the builtins-related logic from `jit/__init__.py` to `jit/_builtins.py` so it can be used from `jit/_recursive.py` to look up functions in the builtins table.
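
A sketch of the pattern this supports (the module is illustrative):

```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.op = torch.add  # a builtin stored as a module attribute

    def forward(self, x):
        return self.op(x, x)

print(torch.jit.script(M())(torch.ones(2)))  # tensor([2., 2.])
```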
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31269

Pulled By: driazati

Differential Revision: D19149779

fbshipit-source-id: d4e5e5d7d7d528b75a2f503e6004394251a4e82d
2019-12-18 15:24:45 -08:00
davidriazati
7692494c67 Fix hex literal parsing (#29935)
Summary:
Stacked PRs
 * #29940 - [jit] Fix parsing of big float literals
 * **#29935 - [jit] Fix hex literal parsing**
 * #29931 - [jit] Throw a better error for int too big for int64_t

Previously these were all parsed as `0`
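
A quick sanity check:

```
import torch

@torch.jit.script
def f() -> int:
    return 0xFF  # previously hex literals were parsed as 0

assert f() == 255
```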
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29935

Pulled By: driazati

Differential Revision: D19124944

fbshipit-source-id: 1ee0c1dee589933363a5efba069a2cfaf94373c5
2019-12-18 14:00:22 -08:00
davidriazati
1f50cfc24d Throw a better error for int too big for int64_t
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29931

Pulled By: driazati

Differential Revision: D19124934

fbshipit-source-id: 91841d7ba4f2f6142c51fba07b7faa14bb817e3a
2019-12-18 14:00:16 -08:00
Elias Ellison
fb30a48b4e add unsupported section (#31329)
Summary:
Add a section for unsupported ops and modules. Automatically generate the list of properties and attributes that aren't bound, and for ops that have semantic mismatches, set up tests so the docs stay up to date.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31329

Differential Revision: D19164472

Pulled By: eellison

fbshipit-source-id: 46290bb8a64d9de928cfb1eda5ff4558c3799c88
2019-12-18 13:56:02 -08:00
Alexander Stante
f30b14dead Fix handling of type comments in body (#30590)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/30477. Any type comment appearing in the body after the signature comment `# type: (...) -> ...` is now ignored instead of breaking parsing.
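
A sketch of the kind of function that previously confused signature parsing (contents illustrative):

```
import torch

@torch.jit.script
def f(x, y):
    # type: (int, int) -> int
    total = x + y  # type: int
    return total  # the body type comment above no longer trips parsing
```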
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30590

Differential Revision: D18887351

Pulled By: driazati

fbshipit-source-id: 162c652f6d7610d14609bbcb25aaa27cdd947a76
2019-12-12 18:19:30 -08:00
Elias Ellison
bee6344d4e remove / rewrite weak module tests (#31193)
Summary:
Remove most of the testing for `weak_script`, since we removed it. Refactor a few of the existing tests to use recursive scripting api.

Fix for https://github.com/pytorch/pytorch/issues/23965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31193

Differential Revision: D18966291

Pulled By: eellison

fbshipit-source-id: 6b1e18c293f55017868a14610d87b69be42bde12
2019-12-12 13:33:38 -08:00
Elias Ellison
56de8853da Resubmit overload v2 (#31123)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/30356 and https://github.com/pytorch/pytorch/pull/31014 :'(

The last commit contains the fix. There was an internal fbcode error: it could not compile the previous `impl_default->second.equal(default_val.second))` line. I tried various fixes in C++ internally but couldn't figure anything out. This is a good example of the programming cost of going from Python -> C++ for different types of objects, because the conceptual overhead has expanded in scope from (python) -> (python, c++, pybind).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31123

Differential Revision: D18936128

Pulled By: eellison

fbshipit-source-id: 7d8fd66a6dd4a3e9838f3a0b68c219b6565a9462
2019-12-12 07:54:23 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for: https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
davidriazati
679b20b1e4 Unify list elements for all list types (#30777)
Summary:
Previously list elements were only unified for tensor lists.
This improves error messages and expands the unification logic
to include all types.
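
A small sketch (assuming `Optional` unification, which this expands to non-tensor lists):

```
import torch
from typing import List, Optional

@torch.jit.script
def f() -> List[Optional[int]]:
    # int and None elements now unify to Optional[int]
    return [1, None]
```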
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30777

Pulled By: driazati

Differential Revision: D18837726

fbshipit-source-id: c4d275562a8429700987569426d694faa8f6002e
2019-12-11 17:00:52 -08:00
David Riazati
1f87e823b8 Make nn.Transformer TorchScript compatible (#28561)
Summary:
This makes `nn.Transformer` usable from TorchScript. It preserves backwards compatibility via `__setstate__` on the encoder/decoder.

Fixes https://github.com/pytorch/pytorch/issues/24173
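
A quick check that now works (the sizes are arbitrary):

```
import torch

model = torch.jit.script(torch.nn.Transformer(d_model=32, nhead=4))
src = torch.randn(10, 2, 32)   # (seq, batch, feature)
tgt = torch.randn(8, 2, 32)
print(model(src, tgt).shape)   # torch.Size([8, 2, 32])
```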
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28561

Differential Revision: D18124753

Pulled By: driazati

fbshipit-source-id: 7314843e5aa9c9bf974c4672e4edb24ed8ef4a6f
2019-12-11 10:57:31 -08:00
Alban Desmaison
717274c001 Add useful warnings for t.grad when it won't be populated for known reasons (#30531)
Summary:
Fix https://github.com/pytorch/pytorch/issues/2362 and https://github.com/pytorch/pytorch/issues/19778

To avoid issues with frozen models, we only consider warning for Tensors that require gradients and are neither leaves nor retain gradients.
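
A sketch of the newly warned-about access:

```
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                  # non-leaf that does not retain grad
y.sum().backward()
print(y.grad)              # None; accessing it now emits a warning
```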
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30531

Differential Revision: D18832767

Pulled By: albanD

fbshipit-source-id: 743e863dc14ab57713e66da78b2e4d759dfba0ff
2019-12-11 09:47:18 -08:00