Commit Graph

1509 Commits

Author SHA1 Message Date
Jerry Zhang
a00e12e755 [quant][graphmode] weight/bias of linear/conv can be reused for multiple ops (#35221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35221

When weight is reused, we only need to insert one observer for the weight

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20602492

fbshipit-source-id: e003e6316f6615f3526f0d00fb7b722148b4749b
2020-03-23 14:21:59 -07:00
Elias Ellison
4fae5a6721 Move module graph creation to testing utils (#34917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34917

Test Plan: Imported from OSS

Differential Revision: D20539338

Pulled By: eellison

fbshipit-source-id: 5c46c0ce50e5bcccf5abee264f432ded7d36d040
2020-03-23 11:59:02 -07:00
Elias Ellison
77ccb5c14d Move functional graph creation to testing utils (#34916)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34916

Test Plan: Imported from OSS

Differential Revision: D20539337

Pulled By: eellison

fbshipit-source-id: 9b777e369facebbe68fe198ca3eec055cf9c5257
2020-03-23 11:57:25 -07:00
Jerry Zhang
3e4076aa9c [quant][graphmode] quantization work for prim::If (#34518)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34518

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519606

fbshipit-source-id: 94d49e18d97df642cbcb446df12376f6d2a397bc
2020-03-23 09:54:24 -07:00
albanD
0e0386b434 Revert "[JIT] add id function (#34975)" (#35209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35209

This reverts commit 62f11f0a35.

Test Plan: Imported from OSS

Differential Revision: D20596847

Pulled By: albanD

fbshipit-source-id: e6777e42356aac772e59f0466a92cc13258218c1
2020-03-23 08:42:09 -07:00
Jerry Zhang
28bf0038e5 [quant][graphmode][fix] Insert dequantize before use node (#34411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34411

To make sure dequantize and the node that uses the dequantized value reside in the
same block, so that we can do quant fusion

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519603

fbshipit-source-id: 3e4c68d0a73142716e19ea6a64ae3a5d6d51fa41
2020-03-23 08:07:33 -07:00
Lu Fang
a100cf5146 Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
Test Plan: revert-hammer

Differential Revision:
D20541090

Original commit changeset: ce3d9391dd3c

fbshipit-source-id: acc1d660fbda611941381315507dfe594c385db1
2020-03-21 12:20:44 -07:00
James Reed
e0496a70fc [JIT][torchbind] Namespaces for torchbind classes (#35054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35054

Test Plan: Imported from OSS

Differential Revision: D20541090

Pulled By: jamesr66a

fbshipit-source-id: ce3d9391dd3cdf619042b8f6ba2645f4c1fc875c
2020-03-20 20:07:02 -07:00
Kimish Patel
3e58cba3c5 Fixes the Conv2d batch_norm folding for various cases. (#34932)
Summary:
This PR adds a preprocessing step to Conv2d-BatchNorm folding.
It traverses the module to check whether the bias of a Conv2d module is set to
None. If it is, it assumes that this is a traced module and inserts an
Optional[Tensor]-typed bias.
Furthermore, it inserts a getAttr for the bias in the forward graph and fixes the
_convolution op to take its value from the getAttr.
It also fixes parameter extraction from the BN module, which may not
have weight and bias attributes if affine was set to False. In scripted
mode such a BN module will have its weight and bias attributes set to None.
The eps value, which gets constant-propagated during tracing, is also
fixed.
A few test cases are added.
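For reference, the arithmetic behind Conv-BN folding can be sketched in plain Python (a per-channel scalar sketch under illustrative names, not the actual pass):

```python
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    # Folding y = BN(conv(x)) into a single conv:
    # scale = gamma / sqrt(var + eps), applied per output channel.
    scale = gamma / math.sqrt(var + eps)
    w_folded = w * scale                  # new conv weight
    b_folded = (b - mean) * scale + beta  # new conv bias
    return w_folded, b_folded

# Check on scalars: conv(x) = w*x + b followed by BN must equal
# folded_conv(x) = w_folded*x + b_folded.
w, b, gamma, beta, mean, var = 3.0, 1.0, 2.0, 0.5, 4.0, 9.0
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
x = 2.0
y_ref = (w * x + b - mean) * (gamma / math.sqrt(var + 1e-5)) + beta
assert abs(y_ref - (wf * x + bf)) < 1e-9
```

The same scale/shift algebra is what makes a bias of None problematic: folding always produces a bias, which is why the pass inserts an Optional[Tensor] bias first.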
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34932

Test Plan:
python test/test_jit.py TestJit.test_foldbn_trivial
python test/test_jit.py TestJit.test_foldbn_trivial_nobias
python test/test_jit.py TestJit.test_foldbn_in_submodule
python test/test_jit.py TestJit.test_foldbn_shared_classtype
python test/test_jit.py TestJit.test_foldbn_complex_cases
python test/test_jit.py TestJit.test_nofoldbn_complex_cases

Differential Revision: D20536478

Pulled By: kimishpatel

fbshipit-source-id: 4e842976a380d0575a71001bb4481390c08c259e
2020-03-20 20:06:44 -07:00
Elias Ellison
62f11f0a35 [JIT] add id function (#34975)
Summary:
add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value whose `id` can be zero. Python likewise only guarantees that `id()` gives semantically meaningful results for pointer types.
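The seen-set idiom this enables can be shown with a plain-Python sketch (ordinary Python, not TorchScript; the helper name is illustrative):

```python
def unique_modules(modules):
    # Deduplicate by identity, not equality, using id().
    seen = set()
    out = []
    for m in modules:
        if id(m) not in seen:
            seen.add(id(m))
            out.append(m)
    return out

shared = object()
mods = [shared, shared, object()]
assert len(unique_modules(mods)) == 2  # the shared module counts once
```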
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Differential Revision: D20549677

Pulled By: eellison

fbshipit-source-id: cca5ed4ef013f7540f93abf49f91f9830dfdca14
2020-03-20 20:03:10 -07:00
Elias Ellison
bcbde490e4 Fix flake (#34974)
Summary:
fix flake, add overload names
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34974

Differential Revision: D20519191

Pulled By: eellison

fbshipit-source-id: d08d36b64397287cad484690074e694d8a0e472e
2020-03-18 16:45:33 -07:00
Jerry Zhang
b2e5e0cad6 [quant][graphmode] quantization support for aten::reshape (#34803)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34803

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504457

fbshipit-source-id: 5ca691ef4880c72d30d62390e63e3288b2f06dce
2020-03-18 15:40:43 -07:00
Jerry Zhang
d77d907f0e [quant][graphmode] Add quantization support for aten::dropout (#34347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34347

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504453

fbshipit-source-id: 1bab29e21d0564ed88cdeb4894addfe00ebbd390
2020-03-18 14:35:27 -07:00
Michael
f3b8a470e1 Added functionality for all to take Lists as input (#34582)
Summary:
New pull request after rebase error in pull request https://github.com/pytorch/pytorch/issues/33923
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34582

Differential Revision: D20447689

Pulled By: eellison

fbshipit-source-id: 4296b64185eccb136b1b614b532deb3af20c7544
2020-03-18 12:01:30 -07:00
Jerry Zhang
841f7600bb [quant][graphmode] Quantization pattern for aten::linear (#33854)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33854

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20493031

fbshipit-source-id: bafd0a3ba5d07327d451b3915f043db33b012b53
2020-03-17 16:36:30 -07:00
Owen Anderson
a4224886f3 Eliminate guards through max_pool ops. (#34512)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34512

Differential Revision: D20478962

Pulled By: resistor

fbshipit-source-id: 86fc926305f95cae8b334ed344d8e0cdd1ef7b2b
2020-03-17 14:00:00 -07:00
James Reed
699a4ed8f5 [testing][do not land] (#34605)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34605

Test Plan: Imported from OSS

Differential Revision: D20393219

Pulled By: jamesr66a

fbshipit-source-id: c74d886f5f01061294203a002b72b75a3c446f09
2020-03-16 23:56:00 -07:00
peter
24c9e61e79 Enable JIT tests on Windows (#27029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27029

Reviewed By: eellison

Differential Revision: D20458664

Pulled By: jamesr66a

fbshipit-source-id: 22be918543703869f471e89b3478423198351bf3
2020-03-16 11:26:21 -07:00
Jerry Zhang
cec9758afa [quant][graphmode] Add quantization pattern for quantized::add_relu (#33532)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33532

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354880

fbshipit-source-id: ea608a5ace395a909851f9e577ffdcb51512a3af
2020-03-16 10:20:57 -07:00
Tugrul Ince
08bc3c6cbf Remove unnecessary import (#34778)
Summary:
https://github.com/pytorch/pytorch/issues/34563 accidentally introduced a lint error due to an unused import. This PR removes this import.

Jit tests run as expected after this change:
```
> python test/test_jit.py
.....
Ran 2435 tests in 100.077s

OK (skipped=140, expected failures=1)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34778

Differential Revision: D20459708

Pulled By: tugrulince

fbshipit-source-id: bb742085fafc849ff3d9507d1557556e01fbeb4b
2020-03-15 09:56:55 -07:00
Jerry Zhang
5710374e4e [reland][quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279) (#34744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34744

att

Test Plan: python test/test_jit.py

Differential Revision: D20449667

Pulled By: jerryzh168

fbshipit-source-id: 01bbc26604fac421dcaacaf4fa1b57731f1f08b7
2020-03-14 01:03:18 -07:00
Zachary DeVito
52005b551c invokeOperatorFromPython: support overloaded operator calling (#34671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34671

Like the python arg parser, this tries to convert to the schema in order.
It introduces schema_match_exception which gets thrown when the schema doesn't match,
allowing the overload handler to try the next option.

Behavior will not 100% match the schema argument parser but should work for
simple cases using custom binding.
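The try-in-order overload strategy can be sketched in plain Python (illustrative names; the real implementation is in C++ and uses schema matching, not isinstance checks):

```python
class SchemaMatchError(Exception):
    """Raised when arguments don't match an overload's schema."""

def dispatch(overloads, *args):
    # Try each overload in order; the first whose schema accepts
    # the arguments wins, mirroring the python arg parser.
    for fn in overloads:
        try:
            return fn(*args)
        except SchemaMatchError:
            continue
    raise TypeError("no matching overload")

def add_ints(a, b):
    if not (isinstance(a, int) and isinstance(b, int)):
        raise SchemaMatchError
    return a + b

def concat_strs(a, b):
    if not (isinstance(a, str) and isinstance(b, str)):
        raise SchemaMatchError
    return a + b

assert dispatch([add_ints, concat_strs], 1, 2) == 3
assert dispatch([add_ints, concat_strs], "a", "b") == "ab"
```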

Test Plan: Imported from OSS

Differential Revision: D20432206

Pulled By: zdevito

fbshipit-source-id: 280839a2205ea3497db3a9b5741fccc1e2bff9a8
2020-03-13 18:46:03 -07:00
Jerry Zhang
e7910aa9e5 [fix] use non-inplace for insert observer pass (#34190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34190

inplace modification of ClassType might affect other tests, so we want to do non-inplace modifications.
Actually the inplace argument will be removed soon.

Test Plan:
ci

Imported from OSS

Differential Revision: D20451765

fbshipit-source-id: e87ad528c4e7f84f5774b94a8e3e85568269682d
2020-03-13 17:25:07 -07:00
Tugrul Ince
c9023e3b12 Support left and right shift operators in JIT (#34563)
Summary:
With this PR, we can now support left and right shift operators in the JIT engine for <int, int> and <Tensor, int>.

Updated tests pass as expected:
```
> python test/test_jit.py
...
Ran 2427 tests in 84.861s

OK (skipped=139, expected failures=1)
```

Running the following code with Python results in the output below:
```
> cat ~/expressions.py
import torch

@torch.jit.script
def fn(a, b):
    # type: (int, int)
    return (
        a << b,  # supported
        b >> a,  # supported
        a & b,
        a | b,
        a ^ b
    )
print(fn.graph)
```

```
> python ~/expressions.py
graph(%a.1 : int,
      %b.1 : int):
  %4 : int = aten::leftshift(%a.1, %b.1) # /home/ince/expressions.py:7:8
  %7 : int = aten::rightshift(%b.1, %a.1) # /home/ince/expressions.py:8:8
  %10 : int = aten::__and__(%a.1, %b.1) # /home/ince/expressions.py:9:8
  %13 : int = aten::__or__(%a.1, %b.1) # /home/ince/expressions.py:10:8
  %16 : int = aten::__xor__(%a.1, %b.1) # /home/ince/expressions.py:11:8
  %17 : (int, int, int, int, int) = prim::TupleConstruct(%4, %7, %10, %13, %16)
  return (%17)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34563

Differential Revision: D20434209

Pulled By: tugrulince

fbshipit-source-id: 886386c59755106e17b84778b8e495b80a6269cd
2020-03-13 13:00:33 -07:00
Jerry Zhang
e9a660a160 Revert D20354878: [quant][graphmode] Add quantized conv2d-relu fusion pattern
Test Plan: revert-hammer

Differential Revision:
D20354878

Original commit changeset: 2b19797d4b3f

fbshipit-source-id: 18f447074794af0d579e145df02af47d01746921
2020-03-12 21:29:08 -07:00
Jerry Zhang
0ff4d37933 [quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33279

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354878

fbshipit-source-id: 2b19797d4b3fd96918164a58bfbd768211ad6c6d
2020-03-12 19:49:57 -07:00
Jerry Zhang
90ca7a1feb [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33927

Test Plan:
test will be added in later PRs

Imported from OSS

Differential Revision: D20354879

fbshipit-source-id: 03976f4b86c46dbdc4e45764a1e72f1a3855a404
2020-03-12 14:52:58 -07:00
Elias Ellison
514cba0661 [JIT] remove builtin interpolate functions (#34514)
Summary:
`torch.nn.functional.interpolate` was written as a builtin op when we scripted the standard library, because it has four possible overloads. As a result, whenever we make a change to `interpolate`, we need to make changes in two places, and it also makes it impossible to optimize the interpolate op. The builtin is tech debt.

I talked with ailzhang, and the symbolic script changes are good to remove (I guess that makes a third place where we needed to re-implement interpolate).

I'm trying to get rid of unnecessary builtin operators because we're standardizing mobile bytecode soon, so we should try to get this landed as soon as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34514

Differential Revision: D20391089

Pulled By: eellison

fbshipit-source-id: abc84cdecfac67332bcba6b308fca4db44303121
2020-03-12 09:21:33 -07:00
James Reed
1f834b5c2a [JIT] Torchbind error if python instantiate class that doesnt exist (#34568)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34568

Test Plan: Imported from OSS

Differential Revision: D20378106

Pulled By: jamesr66a

fbshipit-source-id: 395a3b05d23727b9cfd074440b2d0e8ef002ec09
2020-03-11 13:13:08 -07:00
ettiee
2cf576e9ea small typos (#34589)
Summary:
Spotted a couple of small typos 🙏
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34589

Differential Revision: D20387653

Pulled By: ngimel

fbshipit-source-id: 3089fe606ccb8c8ee57cf7a900aba714fd0ce567
2020-03-11 11:01:31 -07:00
Nikolay Korovaiko
e16908cb1f profile block outputs; helps guard elimination (#33889)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33889

Reviewed By: zdevito

Differential Revision: D20294979

Pulled By: Krovatkin

fbshipit-source-id: 2a68710ec8f8f854c99dfe173f49da442a39e498
2020-03-09 17:12:58 -07:00
Nikolay Korovaiko
0a4a558c2c Dictionary Constants (#32869)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32869

Differential Revision: D19909339

Pulled By: Krovatkin

fbshipit-source-id: 6fe2a9b470768f84b957c69cdf9af3a1bd9b1ca9
2020-03-09 16:12:36 -07:00
Jerry Zhang
2e7eef41ac [quant][graphmode] Swap quantized functional linear with aten::linear (#33853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33853

Quant fusion relies on inlining, but inlining breaks the CallFunction("linear", ...) into an if block,
and it is hard to recognize this block and swap it with quantized::linear. In order to
preserve the op, we swap all quantized functional linear calls with aten::linear.
They might produce different backward graphs, but this is called in the step before we get the quantized
model, so it shouldn't affect anything.
We'll integrate this with convert_script later in the new "finalize_quant" API

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20343873

fbshipit-source-id: 423e03bf893b79267d2dc97bc997ee1bfe54ec0f
2020-03-09 15:45:20 -07:00
davidriazati
2c0f3536b6 [jit] Make ModuleLists a sugared value (#34320)
Summary:
Previously when emitting subscripts we only emitted actual values, but
now they may sometimes emit a `ModuleValue`, so it should stay as a
`SugaredValue`. This allows for the result of the subscript to be
treated as a real module (i.e. you can just do `self.modlist[1](inputs)`
instead of `self.modlist[1].forward(inputs)`)
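The behavior described above can be illustrated with a plain-Python stand-in (illustrative classes, not the JIT implementation):

```python
class Module:
    """Minimal stand-in for an nn.Module: calling it dispatches to forward."""
    def __init__(self, scale):
        self.scale = scale
    def forward(self, x):
        return x * self.scale
    def __call__(self, x):
        return self.forward(x)

class ModuleList:
    """Subscripting yields the module itself, so it stays callable."""
    def __init__(self, mods):
        self._mods = list(mods)
    def __getitem__(self, idx):
        return self._mods[idx]

modlist = ModuleList([Module(2), Module(3)])
# Both spellings work, as in the scripted case after this change:
assert modlist[1](10) == modlist[1].forward(10) == 30
```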
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34320

Pulled By: driazati

Differential Revision: D20345642

fbshipit-source-id: 2bedf9a454af747b704422f6bbb8370cbdf4bf61
2020-03-09 15:36:46 -07:00
Jerry Zhang
776d2a1e8f [quant][graphmode] Handling ops doesn't require observation in insertObservers (#33481)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33481

We have to propagate the observed property of values through ops like max_pool2d and flatten, and
avoid inserting duplicated observers.
For example:
```
x1 = self.conv(x)
x2 = maxpool(x1)
x3 = self.conv(x2)
```
If x1 is observed, we should propagate this information through maxpool and
we should consider x2 as observed as well.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20261897

fbshipit-source-id: 7de354a3ccb2b6e1708f5c743d4d9f7272691a93
2020-03-09 13:15:54 -07:00
Adam Paszke
e3d50c4dda Retain the order of parameters while generating ConcreteModuleTypes (#34131)
Summary:
`ConcreteModuleTypeBuilder` used to keep parameters together with all others attributes in an `unordered_map` often leading to reordering them while building up the type. Parameter order is semantically meaningful, so we need to preserve it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34131

Differential Revision: D20331542

Pulled By: suo

fbshipit-source-id: 5b860025f7902654d6099751d3fb14b12f6f5a67
2020-03-09 10:25:45 -07:00
Shen Li
30680196e4 Revert D20121915: [JIT] Add support for list()
Test Plan: revert-hammer

Differential Revision:
D20121915

Original commit changeset: c6c4ef444dbf

fbshipit-source-id: 829adb58780f4d0f41acebb3e7640a9c68bdbc1b
2020-03-06 07:16:40 -08:00
Elias Ellison
38857734f0 [JIT] fix py35 test (#34350)
Summary:
test_module_interfaces was using syntax only supported in Python >= 3.6
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34350

Reviewed By: mrshenli

Differential Revision: D20298869

Pulled By: eellison

fbshipit-source-id: 22319ca403113cff2eedf57767bb34d9580e6db3
2020-03-05 21:31:19 -08:00
Elias Ellison
78aebbcb88 [JIT] add other module apis (#34106)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34106

Test Plan: Imported from OSS

Differential Revision: D20283996

Pulled By: eellison

fbshipit-source-id: 88e7bc4547e96717d6c8efe0b25ede0d198d9e68
2020-03-05 16:12:29 -08:00
Elias Ellison
f218842f2e [JIT] Add support for list() (#33818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33818

Test Plan: Imported from OSS

Differential Revision: D20121915

Pulled By: eellison

fbshipit-source-id: c6c4ef444dbf1d4134dccb28c13315e225945b64
2020-03-05 14:48:20 -08:00
Elias Ellison
479c3b0aa5 [JIT] add support for torch.norm (#33783)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33783

Fix for https://github.com/pytorch/pytorch/issues/20113

Test Plan: Imported from OSS

Differential Revision: D20121917

Pulled By: eellison

fbshipit-source-id: ffedcc40678cd80f5529ff9323088eed544e5158
2020-03-05 14:46:24 -08:00
Jerry Zhang
6f52562e75 [quant][graphmode] Add add_relu pattern in skip values (#32816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32816

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208786

fbshipit-source-id: ef84b77f46f88b192a75c123aabaa203836a7dfb
2020-03-04 09:36:02 -08:00
Jerry Zhang
e5bbd23ca7 [quant][graphmode] Skip quantizing input and output in matched module (#32814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32814

We skip quantization of the intermediate values for patterns like `Conv - ReLU`,
but currently we don't skip quantizing the input/output of the graphs of matched modules.
Since we have changed the way we add observers, this also needs to be updated.

Test Plan:
python test/test_jit.py -- 'TestJit.test_insert_observers_skip_values'

Imported from OSS

Differential Revision: D20208785

fbshipit-source-id: ce30f2c4c8ce737500d0b41357c80ec8b33aecf9
2020-03-03 18:38:36 -08:00
Elias Ellison
04378eb618 [JIT] Add modulelist indexing for integer literal (#29236)
Summary:
Allow indexing into ModuleLists with integer literals.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29236

Differential Revision: D19583935

Pulled By: eellison

fbshipit-source-id: 24d54051422a69769dac5e82f3bf622ded2bd8a6
2020-03-03 14:47:31 -08:00
Jerry Zhang
f26bbb5f86 [fix] flake8 lint error (#34146)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34146

Test Plan:
.

Imported from OSS

Differential Revision: D20228830

fbshipit-source-id: 41de3c27c10256939ae6309d25b0499f708a3dca
2020-03-03 13:15:27 -08:00
Zachary DeVito
358450e02b improved TorchScript traceback (#33834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33834

This changes how we report Tracebacks to make them more clear when
there are both serialized and non-serialized ranges. It now looks like:

```
Traceback (most recent call last):
  File "foo.py", line 25, in <module>
    s2(a, b)
  File "/scratch/zdevito/pytorch/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__.py", line 7, in forward
    x: Tensor,
    y: Tensor) -> Tensor:
    return (self).bar(x, y, )
            ~~~~~~~~~ <--- HERE
  def bar(self: __torch__.Moo,
    x: Tensor,
  File "code/__torch__.py", line 11, in bar
    x: Tensor,
    y: Tensor) -> Tensor:
    _0 = (self).baz(x, y, )
          ~~~~~~~~~ <--- HERE
    _1 = torch.ones([3], dtype=None, layout=None, device=None, pin_memory=None)
    return torch.add(_0, _1, alpha=1)
  File "code/__torch__.py", line 17, in baz
    x: Tensor,
    y: Tensor) -> Tensor:
    return torch.add(x, y, alpha=1)
           ~~~~~~~~~ <--- HERE

Traceback of TorchScript, original code (most recent call last):
  File "foo.py", line 11, in forward
    def forward(self, x, y):
        return self.bar(x, y)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 9, in bar
    def bar(self, x, y):
        return self.baz(x, y) + torch.ones(3)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 7, in baz
    def baz(self, x, y):
        return x + y
               ~~~~~ <--- HERE
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1
```

It follows the Python convention of having the most important information last,
reading from the bottom up.

Changes:
* Moved the error message to the end, to copy Python
* Report original traceback separate from serialized traceback
* Make sure root functions have names in the interpreter trace.

Test Plan: Imported from OSS

Differential Revision: D20126136

Pulled By: zdevito

fbshipit-source-id: fd01f9985e5d74e04c4d064c02e8bc320f4fac13
2020-03-03 12:27:38 -08:00
Jerry Zhang
5b9f1ada30 [quant][graphmode] Observing input/output values in call site (#33277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33277

Currently we insert observers in the called graph, which is incorrect since graphs can be shared,
and the decision of whether to insert an observer might depend on where the graph is called.
For example, for the call sequence `self.conv1(self.conv2(x))`, we can't insert observers correctly
if `self.conv1` and `self.conv2` share the same type in the current implementation, because we insert
observers in the graph of the forward method of Conv2d right now, and this call sequence requires us to insert
only one observer for the output of self.conv2/input of self.conv1.
We'll need to insert observers for the input/output values of the graph at the call site instead.
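The call-site placement can be sketched with a toy observer in plain Python (`Observer` and the wiring are illustrative, not the actual pass):

```python
class Observer:
    """Records the values flowing through one point in the program."""
    def __init__(self):
        self.values = []
    def __call__(self, x):
        self.values.append(x)
        return x

def conv_forward(x):
    # Stands in for the single forward graph shared by conv1 and conv2.
    return x + 1

# Observers are placed at the call sites, not inside conv_forward,
# so the shared graph needs only one observer between the two calls:
obs_in, obs_mid, obs_out = Observer(), Observer(), Observer()
y = obs_out(conv_forward(obs_mid(conv_forward(obs_in(3)))))
assert obs_mid.values == [4]  # output of conv2 == input of conv1
assert y == 5
```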

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208787

fbshipit-source-id: 739e1d877639c0d0ed24e573bbd36211defa6836
2020-03-03 10:53:24 -08:00
davidriazati
11843049d5 [jit] Fix flipped PackedSequence outputs in script (#32955)
Summary:
Stacked PRs
 * **#32955 - [jit] Fix flipped PackedSequence outputs in script**
 * #32953 - [jit] Support properties on `Device`

Fixes #32605

Pull Request resolved: https://github.com/pytorch/pytorch/pull/32955

Pulled By: driazati

Differential Revision: D20165514

fbshipit-source-id: a130c438b40e51ec27d36f021b0dc7869570aa6a
2020-03-02 13:50:36 -08:00
Zino Benaissa
cab8772c6c Freezing Torchscript modules (#32178)
Summary:
This patch enables folding GetAttr nodes with their corresponding
values. The _jit_pass_freeze_module API returns a new TorchScript module
where all function calls and attribute accesses are inlined.
Usage:

```
frozen_model = torch._C._freeze_module(scripted_model._c)
frozen_model.forward(...)
```

This API currently optimizes the forward method. We will follow up
to preserve and optimize methods and attributes that are annotated as
torch.jit.interface.

Several future improvements to JIT optimizations are required to further
clean up/de-sugar the graph and eliminate redundancies.
Ideally, we want to produce a graph that can easily be lowered to
GLOW and other low-level backends.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32178

Differential Revision: D19419640

Pulled By: bzinodev

fbshipit-source-id: 52baffaba9bca2cd60a8e747baa68d57711ad42b
2020-03-02 11:38:36 -08:00
Basil Hosmer
ad769d74d9 Collapse _like overloads into a single overload. (#33705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33705

The fact that there were two overloads appears to be a historical
artifact that dates back to when goldsborough originally added these
bindings in the first place.  If TensorOptions is made optional,
then you only need one overload, not two, as they are exactly redundant
with each other.  When MemoryFormat was added, it was made a little
harder to do this, as the C++ syntax at::empty_like(t, memory_format) would
not work if you collapsed the overload; but now it works because TensorOptions
supports MemoryFormat.

The upshot is, I can get rid of all the overloads and just have one overload.
Amazingly, this change is backwards compatible, as the test attests.  While
I was at it, I also deleted the overload name from the functions entirely.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D20073355

Pulled By: bhosmer

fbshipit-source-id: c6a8908213b32ccf6737ea864d135e2cce34f56b
2020-03-01 19:40:22 -08:00
Elias Ellison
85b1c45a45 [JIT] fix alias assertion (#33952)
Summary:
This bug has been hit a couple times recently. We need to handle all bivariant types, not just optional, when asserting mutability/immutability of pointed-to elements in alias analysis.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33952

Differential Revision: D20166025

Pulled By: eellison

fbshipit-source-id: cf3df9897a639641ef8303a08ba2b13523d01ef1
2020-02-28 19:54:29 -08:00
davidriazati
2111c4ff0c [jit] Add missing tensor properties (#33906)
Summary:
Fixes #30775

This adds TorchScript implementations (copied from `python_variable.cpp`) for the remaining `Tensor` properties that were missing from the JIT, in addition to a test that ensures new properties will trigger a failure so we can decide whether we want to add them as well.

For `some_tensor`, adds:

* `some_tensor.T`
* `some_tensor.ndim`
* `some_tensor.is_leaf`
* `some_tensor.name`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33906

Pulled By: driazati

Differential Revision: D20153288

fbshipit-source-id: 2ddc48a14267077bc176065267e5ce52181b3d6b
2020-02-28 19:06:11 -08:00
Michael Suo
bd7e9c490a [jit] stop printing crap in test_jit (#33917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33917

Test Plan: Imported from OSS

Differential Revision: D20150750

Pulled By: suo

fbshipit-source-id: 9a35298a8856d423fb6b9043174853cccf968706
2020-02-27 19:06:43 -08:00
Brian Vaughan
910acafc79 Revert D20124224: [jit] stop printing crap in test_jit
Test Plan: revert-hammer

Differential Revision:
D20124224

Original commit changeset: 9241d21fdf94

fbshipit-source-id: 0680f9db922f9a33a4e859eedd142b87a51bbede
2020-02-27 13:40:34 -08:00
Brian Vaughan
243af17d65 Revert D20103905: [jit] Fix flipped PackedSequence outputs in script
Test Plan: revert-hammer

Differential Revision:
D20103905

Original commit changeset: 84081213ed21

fbshipit-source-id: 2b260654fac87e52fbaf8035018e4ea484928af1
2020-02-27 13:29:35 -08:00
Jerry Zhang
afbd04449e [quant][graphmode] Swap dequantize after inline for ops that doesn't require observation (#33173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33173

How do we deal with ops that are defined for both floating-point and quantized Tensors?

Category of ops: the ones that don't require observers, which means the quantization parameters (scale/zero_point) of the output of the op can be inferred from the quantization parameters of its inputs.
For example:
avg_pool, max_pool, flatten, transpose, upsample

A related question is how we deal with things like adaptive_avg_pool2d, which does not need to be observed and works with quantized tensors as well. If we insert quant/dequant for them, even the quant fusion becomes a numerically changing operation, because the scale/zero_point for input and output are different.

Proposal

We can swap the operator with dequantize whenever we see it. For example, say aten::general_op is defined for both floating-point and quantized tensors:

```
%r = aten::conv(...)
%q = quantize(%r)
%dq = dequantize(%q)
%f = aten::general_op(%dq)
...
```

When we detect that all inputs of aten::general_op are produced by dequantize, we first delete all the dequantizes for the inputs and then insert a dequantize for each use of the output of the aten::general_op; note that this should work generally for all the cases we might encounter.

After the transformation we'll have:

```
%r = aten::conv(...)
%q = quantize(%r)
%x = aten::general_op(%q)
%f = dequantize(%x)
...
```

1. Multiple inputs
    1. We need to make sure all inputs of the aten::general_op are produced by dequantize before we do this transformation
2. Input used by multiple operators
    1. We already handle this by inserting a dequantize for each use of the value
3. Output used by multiple operators
    1. We'll reuse the code that inserts dequantize (might need some refactoring)

Note that concat does not belong to this category right now, since it does not inherit quantization parameters from its inputs.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20123590

fbshipit-source-id: de2febe1f37e4079457a23acaeccbc6d9c9e1f8a
2020-02-27 12:42:29 -08:00
Shihao Xu
9733711394 [JIT] Support calling Tensor.element_size() in TorchScript (#33808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33808

# Problem

https://github.com/pytorch/pytorch/issues/33620
ghstack-source-id: 99073701

Test Plan:
```
buck test mode/dev-nosan //caffe2/test:jit -- test_numel

buck test mode/dev-nosan //caffe2/test:jit -- test_element_size

buck build mode/dev-nosan //caffe2/test:jit \
&& buck-out/gen/caffe2/test/jit\#binary.par -r test_numel

buck build mode/dev-nosan //caffe2/test:jit \
&& buck-out/gen/caffe2/test/jit\#binary.par -r test_element_size
```

Compile error

P126667043

Generated code,
```
buck-out/dev/gen/caffe2/generate-code=register_aten_ops_0.cpp/register_aten_ops_0.cpp

buck-out/dev/gen/caffe2/generate-code=register_aten_ops_2.cpp/register_aten_ops_2.cpp
```
P126667064

Differential Revision: D7050644

fbshipit-source-id: 20dbdb9c500b6d7683c23e3049d43ed0ca06d831
2020-02-26 22:30:44 -08:00
davidriazati
cea0cc8ca8 [jit] Unify augmented assign handling (#33578)
Summary:
Stacked PRs
 * **#33578 - [jit] Unify augmented assign handling**
 * #32993 - [jit] Fix aug assign for non-tensor attributes

We handle augmented assignments to `Select` and `Var` statements differently, but the actual in place update is the same for both, so this PR factors it out into a method so we don't have 2 code paths doing the same thing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/33578

Pulled By: driazati

Differential Revision: D20127647

fbshipit-source-id: 94f37acbd2551498de9d2ca09a514508266f7d31
2020-02-26 19:13:15 -08:00
Jerry Zhang
4c33222c51 [quant][graphmode] Replicate dequantize nodes (#33531)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33531

We already insert dequantize for each use of a value, but there might still be cases where a value
is only seen to be used multiple times after inlining. This pass replicates dequantize nodes after
inlining to ensure that the output of each dequantize is used by exactly one node, which is necessary
to preserve quantization patterns like `dequant - conv - quant`
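
The replication step can be sketched on a toy IR; node shapes and names here are illustrative, not the real JIT data structures:

```python
# Toy IR sketch (not the real JIT pass): each node is a tuple of
# (op, input_names, output_name). Replicating a multiply-used dequantize
# gives every consumer its own private dequantize output.

def replicate_dequantize(nodes):
    # Count how many times each value is consumed.
    uses = {}
    for _, inputs, _ in nodes:
        for name in inputs:
            uses[name] = uses.get(name, 0) + 1

    rewritten = []
    replacements = {}  # old dequantize output -> iterator of fresh names
    for op, inputs, out in nodes:
        if op == "dequantize" and uses.get(out, 0) > 1:
            # One fresh dequantize node per use of the old output.
            fresh = [f"{out}.{i}" for i in range(uses[out])]
            for name in fresh:
                rewritten.append((op, inputs, name))
            replacements[out] = iter(fresh)
        else:
            # Rewrite consumers to use their own replicated output.
            new_inputs = [next(replacements[i]) if i in replacements else i
                          for i in inputs]
            rewritten.append((op, new_inputs, out))
    return rewritten

graph = [
    ("quantize", ["x"], "q"),
    ("dequantize", ["q"], "dq"),   # used by both conv2d and relu below
    ("conv2d", ["dq"], "c"),
    ("relu", ["dq"], "r"),
]
new_graph = replicate_dequantize(graph)
```

After the pass, each of `conv2d` and `relu` consumes a dedicated dequantize output, so the `dequant - op - quant` pattern around each consumer stays matchable.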

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20123591

fbshipit-source-id: 6edb10a4566538bcf9379d332233f870372b7a63
2020-02-26 18:59:16 -08:00
davidriazati
2b9fa4a756 [jit] Fix flipped PackedSequence outputs in script (#32955)
Summary:
Stacked PRs
 * **#32955 - [jit] Fix flipped PackedSequence outputs in script**
 * #32953 - [jit] Support properties on `Device`

Fixes #32605
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32955

Pulled By: driazati

Differential Revision: D20103905

fbshipit-source-id: 84081213ed214846e563b9f05bcab0210bb1a71b
2020-02-26 18:53:27 -08:00
Michael Suo
150e025be8 [jit] stop printing crap in test_jit (#33779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33779

This should eliminate random warnings and print spew from test_jit.

It also fixes a bug where we weren't properly comparing captured outputs
(!)

Test Plan: Imported from OSS

Differential Revision: D20124224

Pulled By: suo

fbshipit-source-id: 9241d21fdf9470531b0437427b28e325cdf08d3a
2020-02-26 18:46:03 -08:00
Elias Ellison
857eb4145e [JIT] add support for torch.cdist (#33737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33737

Test Plan: Imported from OSS

Differential Revision: D20121916

Pulled By: eellison

fbshipit-source-id: b0427bbfd3ade1f3129c4a95a542fbc32c3abd76
2020-02-26 18:37:37 -08:00
Elias Ellison
f31b1d3453 [JIT] add support for lu_unpack (#33736)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353
2020-02-26 18:37:33 -08:00
Elias Ellison
4543cf4eb1 [JIT] add support for torch.lu to torchscript (#33724)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33724

Fix for https://github.com/pytorch/pytorch/issues/33381, partial fix of https://github.com/pytorch/pytorch/issues/30786

Test Plan: Imported from OSS

Differential Revision: D20077321

Pulled By: eellison

fbshipit-source-id: a1e6a0370712b36c9f66979098ac2f9d500ca5f6
2020-02-26 18:37:28 -08:00
Elias Ellison
fddf73250d [JIT] fix resolving of functions in torch/functional. fix compilation of torch.stft (#33504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33504

Fix resolution of functions that are bound onto torch in torch/functional.py. This does not fix compilation of all of those functions; those will be done in follow-ups. torch.stft is done as a start.

Fixes #21478

Test Plan: Imported from OSS

Differential Revision: D20014591

Pulled By: eellison

fbshipit-source-id: bb362f1b5479adbb890e72a54111ef716679d127
2020-02-26 18:35:43 -08:00
Elias Ellison
057fd5e10d add support for _modules, reducing special casing of nn.Sequential (#29495)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29495

This PR adds support for `_modules`, making it so we no longer need to special-case support for `nn.Sequential`. I was getting internal errors around the previous approach using `self.define()`, so I am adding this PR as part of the stack.

Fix for https://github.com/pytorch/pytorch/issues/28998

Test Plan: Imported from OSS

Differential Revision: D18412561

Pulled By: eellison

fbshipit-source-id: a8b24ebee39638fccf63b2701f65f8bb0de84faa
2020-02-26 18:07:19 -08:00
David Riazati
51e405743f Revert D20010383: [jit] Unify augmented assign handling
Test Plan: revert-hammer

Differential Revision:
D20010383

Original commit changeset: 52e559ce907e

fbshipit-source-id: 7ca938070d5e98c91e7a7b8485a3c1e790c3ceb2
2020-02-26 14:22:14 -08:00
davidriazati
867990dc17 [jit] Unify augmented assign handling (#33578)
Summary:
Stacked PRs
 * **#33578 - [jit] Unify augmented assign handling**
 * #32993 - [jit] Fix aug assign for non-tensor attributes

We handle augmented assignments to `Select` and `Var` statements differently, but the actual in place update is the same for both, so this PR factors it out into a method so we don't have 2 code paths doing the same thing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33578

Pulled By: driazati

Differential Revision: D20010383

fbshipit-source-id: 52e559ce907e95e5c169ab9d9690d0d235db36f3
2020-02-26 14:09:40 -08:00
Jerry Zhang
479e474a37 [quant][graphmode] FoldConvBatchNorm2d support shared ClassTypes (#32379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32379

Folding Conv2d - BatchNorm2d modules means recalculating the weight and bias of the Conv2d module by incorporating the parameters
of BatchNorm2d, and also changing the method calls to call only the forward of the Conv2d module. This involves changes to both module
types and the graph, because the bias of Conv2d is a parameter when it has a value and an attribute when it is
None (since JIT code assumes in multiple places that parameters are Tensors). Therefore
we'll need to remove the bias attribute when it is None and add a bias attribute later. Since a ClassType might be shared, we separate
remove and add into separate steps and also keep track of the processed graphs to avoid modifying a graph and type multiple times.
However, we'll also have to record the slot index of bias so we can replay the slot removal on other instances of the Conv2d module.
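
The arithmetic behind the fold itself (independent of the ClassType bookkeeping above) is the standard Conv-BN folding identity; a scalar sketch, with all variable names illustrative:

```python
import math

# Conv: y = w*x + b. BatchNorm: (y - mean)/sqrt(var + eps) * gamma + beta.
# Folding collapses the two into a single affine map y' = w_f*x + b_f.
def fold_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / math.sqrt(var + eps)
    return w * scale, beta + (b - mean) * scale

# Illustrative parameters for one "channel".
w, b = 2.0, 0.5
gamma, beta, mean, var = 1.5, -1.0, 0.25, 4.0
w_f, b_f = fold_conv_bn(w, b, gamma, beta, mean, var)

x = 3.0
# Running conv then batchnorm explicitly...
direct = (w * x + b - mean) / math.sqrt(var + 1e-5) * gamma + beta
# ...matches running the single folded conv.
folded = w_f * x + b_f
```

The same identity applies per output channel of a real Conv2d, with the conv weight scaled along its output-channel dimension.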

Test Plan:
tbd

Imported from OSS

Differential Revision: D20078719

fbshipit-source-id: cee5cf3764f3e0c0a4a2a167b78dbada2e3835cc
2020-02-24 17:29:13 -08:00
Thomas Viehmann
481e7f2e78 catch and propagate warnings for JIT ScriptMethods (#33010)
Summary:
We align it with ScriptFunctions by using the HANDLE_TH_ERRORS/END_HANDLE_TH_ERRORS_PYBIND macros.

Fixes https://github.com/pytorch/pytorch/issues/24155  or https://github.com/pytorch/pytorch/issues/24828 ?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33010

Differential Revision: D20053585

Pulled By: suo

fbshipit-source-id: c8876b54069285ba9638bb2328fd8738b59c396d
2020-02-24 10:28:17 -08:00
Nikolay Korovaiko
a7e22b4c6a add bailout checks to checkScript (#32802)
Summary:
This adds enough infrastructure to run bailout checks in `checkScript`. I'll need to figure out the best way to enable it for nightly builds now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32802

Differential Revision: D19974718

Pulled By: Krovatkin

fbshipit-source-id: 40485503f6d3ae14edcce98e1eec1f0559f3ad08
2020-02-21 21:18:54 -08:00
davidriazati
ee28831341 [jit] Fix aug assign for non-tensor attributes (#32993)
Summary:
Instead of erroring out this de-sugars augmented assignments to class
members from `self.a += 1` to `self.a = self.a + 1`.
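
The desugaring is observable in plain Python too; this toy class (illustrative only) shows the two forms producing the same result:

```python
class Counter:
    def __init__(self):
        self.a = 0

c1, c2 = Counter(), Counter()
c1.a += 1        # augmented assignment on a class member
c2.a = c2.a + 1  # what the compiler de-sugars it to
```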

Fixes #32973
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32993

Pulled By: driazati

Differential Revision: D19737636

fbshipit-source-id: 07307cde88d8c348a7affdafe26db21c74e28ec0
2020-02-21 08:42:35 -08:00
Hong Xu
a6a72ac68f Fix all occurrences of C416. (#33429)
Summary:
C416: Unnecessary (list/set) comprehension - rewrite using list/set().

See https://pypi.org/project/flake8-comprehensions/
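
For reference, the rewrite C416 asks for (plain Python, no PyTorch specifics):

```python
items = (3, 1, 2)

# Flagged by C416: the comprehension adds nothing over the constructor.
as_list_bad = [x for x in items]
as_set_bad = {x for x in items}

# Preferred rewrites: call the constructor directly.
as_list = list(items)
as_set = set(items)
```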
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33429

Differential Revision: D19972858

Pulled By: ezyang

fbshipit-source-id: faac042a94c59d737bd5ae983121a0a029346e23
2020-02-21 08:32:22 -08:00
Elias Ellison
faa800eb5b [JIT] remove inline everything jitter skip (#33468)
Summary:
The `not inline_everything` check was causing the jitter check to be skipped whenever we emitted a function. Thanks SplitInfinity for pointing this out.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33468

Differential Revision: D19975934

Pulled By: eellison

fbshipit-source-id: 03faf8d2fd93f148100d8cf49cb67b8e15cf1f04
2020-02-20 15:58:25 -08:00
Michael Suo
416413dec4 [jit] add inlined_graph method to ScriptFunctions (#33508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33508

Ever since we switched to not inlining by default, some users have
complained, since they relied on inlining occurring to, e.g., process the
graph with some other tool. Add an `inlined_graph` method for convenience in
those cases.

Test Plan: Imported from OSS

Differential Revision: D19977638

Pulled By: suo

fbshipit-source-id: fe1fa92ff888959203d5d1995930d488b5f9e24c
2020-02-19 15:41:25 -08:00
Zachary DeVito
83c347ff4a Remove prim::Constant op (#32804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32804

Constants are interpreter primitives so the op was not actually used.
This cleans up some of the logic around it.

This also fixes constant prop such that failures to look up an op
do not silently stop constant propagation. Instead, only errors
inside the op implementation itself will do this.

Test Plan: Imported from OSS

Differential Revision: D19673156

Pulled By: zdevito

fbshipit-source-id: 7beee59a6a67a6c2f8261d86bd505280fefa999e
2020-02-18 15:06:56 -08:00
Owen Anderson
1d743e3154 Add guard elimination support for aten::unsqueeze. (#33371)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33371

Differential Revision: D19920041

Pulled By: resistor

fbshipit-source-id: 906af47676dba014c31eef069a4753207f2efc60
2020-02-18 13:22:58 -08:00
Owen Anderson
d35a4c202e Add support for aten::slice to guard elimination. (#33311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33311

Differential Revision: D19911105

Pulled By: resistor

fbshipit-source-id: 402cfe5f2e03a62b78ed13157e1462cefd9eeafb
2020-02-14 22:54:37 -08:00
Elias Ellison
bf16688538 [JIT] peephole optimize values with NoneType (#33264)
Summary:
If a value has the type None, we can always replace it with a None constant.
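
A minimal sketch of the idea on a toy IR (node shape and names are illustrative, not the real JIT data structures; a later dead-code pass would remove the original producers):

```python
# Toy IR: each node is (op, input_names, output_name, output_type).
def peephole_none(nodes):
    """Redirect every use of a NoneType-typed value to one None constant."""
    none_values = {out for _, _, out, ty in nodes if ty == "NoneType"}
    rewritten = []
    if none_values:
        # A single shared None constant replaces all NoneType values.
        rewritten.append(("prim::Constant", [], "none", "NoneType"))
    for op, inputs, out, ty in nodes:
        new_inputs = ["none" if i in none_values else i for i in inputs]
        rewritten.append((op, new_inputs, out, ty))
    return rewritten

graph = [
    ("f", [], "a", "NoneType"),  # statically known to produce None
    ("g", ["a"], "b", "int"),
]
new_graph = peephole_none(graph)
```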
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33264

Differential Revision: D19878695

Pulled By: eellison

fbshipit-source-id: 5d0e7ffb37c5747997df093fec3183039d8dff4d
2020-02-13 12:03:49 -08:00
davidriazati
f61b45fc89 [jit] Support properties on Device (#32953)
Summary:
Stacked PRs
 * #32955 - [jit] Fix flipped PackedSequence outputs in script
 * **#32953 - [jit] Support properties on `Device`**

PyTorch devices have an `index` and a `type` property. This PR adds support for both in TorchScript
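
The scripted behavior mirrors eager mode; a tiny stand-in class (not the real `torch.device`) shows the two properties:

```python
class Device:
    """Toy stand-in for torch.device exposing .type and .index."""
    def __init__(self, spec):
        if ":" in spec:
            # e.g. "cuda:1" -> type "cuda", index 1
            self.type, idx = spec.split(":")
            self.index = int(idx)
        else:
            # e.g. "cpu" -> no index
            self.type, self.index = spec, None

d = Device("cuda:1")
```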
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32953

Pulled By: driazati

Differential Revision: D19849320

fbshipit-source-id: ce845258c6110058dd9ea1f759ef74b7ed2e786e
2020-02-12 18:59:10 -08:00
Zachary DeVito
99349defc1 remove unnecessary Node* ops (#32760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32760

Minor changes to the way ops are implemented to remove incidental use of Node*
in the operator implementation.

Current state for operators that previously took Node:

```
TBD:

USES NODE: prim::DifferentiableGraph(...) -> (...)
USES NODE: prim::profile(...) -> (...)
USES NODE: prim::FusionGroup(...) -> (...)
USES NODE: prim::PythonOp(...) -> (...)
USES NODE: prim::ImplicitTensorToNum(Tensor a) -> Scalar # next PR

Should be made interpreter primitives:

USES NODE: prim::TupleUnpack(...) -> (...)
USES NODE: prim::TupleSlice(...) -> (...)
USES NODE: prim::TupleConstruct(...) -> (...)
USES NODE: prim::ListUnpack(...) -> (...)
USES NODE: prim::ListConstruct(...) -> (...)
USES NODE: prim::DictConstruct(...) -> (...)
USES NODE: prim::Constant() -> (...)
USES NODE: prim::isinstance(...) -> (...)
USES NODE: prim::CreateObject(...) -> (...)
USES NODE: prim::fork(...) -> (...)
USES NODE: aten::warn(str message, *, int stacklevel=2) -> () # need stack level information, so ideally in interpreter so it can look at the stack

Should be made into vararg operators, i.e. the operators last argument should be an IValue
that contains the number of arguments.

USES NODE: prim::FusedConcat(...) -> (...)
USES NODE: prim::MMTreeReduce(...) -> (...)
USES NODE: prim::MMBatchSide(...) -> (...)
USES NODE: prim::ConstantChunk(...) -> (...)
USES NODE: prim::AutogradAnyNonZero(...) -> bool
USES NODE: prim::BroadcastSizes(...) -> (...)
USES NODE: prim::ChunkSizes(...) -> (...)
USES NODE: aten::format(str self, ...) -> str
USES NODE: prim::Print(...) -> (...)

fixed:

USES NODE: aten::extend(Tensor[](a!) self, Tensor [] other) -> ()
USES NODE: aten::copy(Tensor[](a) self) -> Tensor[]
USES NODE: aten::extend(int[](a!) self, int [] other) -> ()
USES NODE: aten::copy(int[](a) self) -> int[]
USES NODE: aten::extend(float[](a!) self, float [] other) -> ()
USES NODE: aten::copy(float[](a) self) -> float[]
USES NODE: aten::extend(bool[](a!) self, bool [] other) -> ()
USES NODE: aten::copy(bool[](a) self) -> bool[]
USES NODE: aten::extend(t[](a!) self, t [] other) -> ()
USES NODE: aten::copy(t[](a) self) -> t[]
USES NODE: aten::keys(Dict(str, t) self) -> str[](*)
USES NODE: aten::values(Dict(str, t) self) -> t[](*)
USES NODE: aten::dict((str, tVal)[] inputs) -> Dict(str, tVal)
USES NODE: aten::keys(Dict(int, t) self) -> int[](*)
USES NODE: aten::values(Dict(int, t) self) -> t[](*)
USES NODE: aten::dict((int, tVal)[] inputs) -> Dict(int, tVal)
USES NODE: aten::keys(Dict(float, t) self) -> float[](*)
USES NODE: aten::values(Dict(float, t) self) -> t[](*)
USES NODE: aten::dict((float, tVal)[] inputs) -> Dict(float, tVal)
USES NODE: aten::keys(Dict(Tensor, t) self) -> Tensor[](*)
USES NODE: aten::values(Dict(Tensor, t) self) -> t[](*)
USES NODE: aten::dict((Tensor, tVal)[] inputs) -> Dict(Tensor, tVal)
USES NODE: aten::test_vartype2(t a, t[] b) -> (t[])
USES NODE: aten::_ncf_unsqueeze(Tensor self, int ndim) -> Tensor
USES NODE: aten::_ncf_view(Tensor self, int[] input_shape, int normalized_ndim) -> Tensor
USES NODE: prim::is_none(int? a) -> bool
USES NODE: aten::__interpolate(Tensor input, int? size = None, float[]? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int[]? size = None, float[]? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int? size = None, float? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int[]? size = None, float? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::sorted(t[](a) self) -> (t[])
USES NODE: aten::sort(t[](a!) self, bool reverse=False) -> ()
USES NODE: aten::test_vartype(t[] a, t b) -> (t)
USES NODE: prim::unchecked_unwrap_optional(t(a)? optional) -> t(a)
USES NODE: prim::unchecked_cast(...) -> (...)
USES NODE: aten::dict() -> Dict(str, Tensor)
USES NODE: prim::Load(...) -> (...)
USES NODE: prim::Store(...) -> (...)
USES NODE: prim::Drop(...) -> (...)
USES NODE: aten::tensor(t[] data, *, ScalarType? dtype=None, Device? device=None, bool requires_grad=False) -> Tensor
USES NODE: aten::as_tensor(t[] data, *, ScalarType? dtype=None, Device? device=None) -> Tensor
```

Test Plan: Imported from OSS

Differential Revision: D19615387

Pulled By: zdevito

fbshipit-source-id: 95298c3c4249b9f812c332d13f0fb79daeecb662
2020-02-12 14:49:02 -08:00
Hongyu Cai
de27f4261d [jit] remove redundant variables from JIT TestCase
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29091

Differential Revision: D19746083

Pulled By: suo

fbshipit-source-id: 76fd71740fe7a3f52da361d96a7b694ec208de24
2020-02-07 10:42:33 -08:00
Michael Suo
df1d68d52e [jit] fix parser for one-line functions (#32941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32941

The Python grammar allows single-statement one-line functions, so we
should allow them in the string parser.

Test Plan: Imported from OSS

Differential Revision: D19704153

Pulled By: suo

fbshipit-source-id: 8c06cc9c600aa2a9567b484a1ecc0360aad443e3
2020-02-05 13:11:47 -08:00
James Reed
f393adc0ed [JIT] Fix python pickle serialization for torchbind (#32878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32878
ghstack-source-id: 97736045

Test Plan: Imported from OSS

Differential Revision: D19669879

fbshipit-source-id: 23ea91cffe7344d1eed014e2509983c281dd18d3
2020-02-04 19:29:55 -08:00
James Reed
bc4790b3aa [JIT] Trace uses of torchbind classes as module attributes (#32833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32833
ghstack-source-id: 97736046

Test Plan: Imported from OSS

Differential Revision: D19645714

fbshipit-source-id: 10a7271f13c3588aea666b44b916e90ba7b3c666
2020-02-04 19:28:37 -08:00
Elias Ellison
040bc1d0e1 [JIT] make is_scripting a condvalue (#32871)
Summary:
Add `torch.jit.is_scripting` to the list of CondValues, i.e. values that, when used as the input to an if statement, cause us to compile only one side of the if. I'm not sure if we actually want this PR.

Pros:
- Makes it easier to add features that are not yet supported in TorchScript (like has_torch_function)
- The current idiom of writing `torch.jit.is_scripting` and factoring out the block to a function annotated with `torch.jit.ignore` is functionally equivalent and much more cumbersome

Cons:
- Makes it easier to add features that are not yet supported in TorchScript
- Perhaps is confusing as a reader what is being compiled. Potentially could give all caps name or otherwise change name to make it more visually stand out.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32871

Differential Revision: D19670383

Pulled By: eellison

fbshipit-source-id: 5257b0bd23c66f199d59a7f2c911e948301e5588
2020-01-31 18:23:42 -08:00
Elias Ellison
10bd21d550 [JIT] fix nested select assign (#32877)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/31902

```
self.sub.a = 1
 ```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32877

Differential Revision: D19670322

Pulled By: eellison

fbshipit-source-id: 6d8f350b4d1169be1d2a56050fccd7c246ad9212
2020-01-31 16:58:26 -08:00
Hong Xu
b16dab8a41 Coding header is better specified in lowercase letters (#32850)
Summary:
The Python document <https://www.python.org/dev/peps/pep-0263/> gives
all its examples using lowercase letters. Although it doesn't say so
directly, the following paragraph seems to indicate that uppercase
letters aren't legitimate:

> If a source file uses both the UTF-8 BOM mark signature and a magic encoding comment, the only allowed encoding for the comment is 'utf-8'.  Any other encoding will cause an error.

My Emacs also complains about the uppercase letters every time I save
the file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32850

Differential Revision: D19663281

Pulled By: ezyang

fbshipit-source-id: 48127d3c2fd6e22dd732a2766913735136ec2ebc
2020-01-31 10:02:30 -08:00
Michael Suo
3552be1090 [jit] fix the NoneType param/buffer hack (#32745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32745

Some parameters (like `bias` in conv) are optional. To achieve this
previously, you had to add `bias` as a constant, which would invoke some
pretty weird behavior in the frontend, summarized as:
```
if bias is not None:
  add it as a parameter normally
else: # bias is None
  add it as a constant with the value None
```

There are several bad things about this:
1. Bias is not a constant. Marking it `__constants__` is confusing.
2. It basically relies on an implementation detail (the frontend
processes parameters before constants) to work.

Okay, whatever. I don't even know why we did this originally, but
getting rid of it doesn't break anything, so I assume improved NoneType
refinement has made this a non-issue.

Note on perf: this will make no difference; if bias was `None` it's still
folded out today, if bias is a Tensor it would be added as a parameter
both before and after this change

Test Plan: Imported from OSS

Differential Revision: D19628634

Pulled By: suo

fbshipit-source-id: d9128a09c5d096b938fcf567b8c23b09ac9ab37f
2020-01-29 17:04:39 -08:00
James Reed
465ebd58ba [JIT] pickle serialization for custom bound classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32604

Test Plan: Imported from OSS

Differential Revision: D19566633

fbshipit-source-id: 9387d3ff45cbd6ccde49ce190a52859481cc301c
2020-01-28 11:02:59 -08:00
James Reed
1719da13f9 [JIT] Support for registering C++ lambdas as methods on custom C++ class
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32553

Test Plan: Imported from OSS

Differential Revision: D19543269

Pulled By: jamesr66a

fbshipit-source-id: 7e566650295e9d1c4f2f716470e061308a6210a0
2020-01-28 11:01:07 -08:00
Michael Suo
63170431f9 [jit] fix segfault on missing getstate (#32642)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32642

Previously, if we defined `__setstate__` but not `__getstate__`, we
would segfault. This PR turns that into a comprehensible error message
(and improves another error message as well).

Fixes https://github.com/pytorch/pytorch/issues/25886

Test Plan: Imported from OSS

Differential Revision: D19596463

Pulled By: suo

fbshipit-source-id: dbe76bc36bc747d65fb0223184c009e0e9ba072c
2020-01-28 01:25:37 -08:00
James Reed
d68592a440 [JIT] Fix classes as attributes in recursive scripting
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32594

Test Plan: Imported from OSS

Differential Revision: D19562951

Pulled By: jamesr66a

fbshipit-source-id: 3d5491c1c23456f107390a78be16da687de951e6
2020-01-27 20:37:48 -08:00
Jerry Zhang
91f10a1de1 [quant][graphmode][refactor] Better API for fold_convbn (#32380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32380

We'll clone the module first and then fold conv bn and return a new
module

Test Plan:
.

Imported from OSS

Differential Revision: D19508033

fbshipit-source-id: 328e91a2c9420761c904a7f2b62dab4cfaaa31ac
2020-01-24 15:46:47 -08:00
Nikolay Korovaiko
7d0f0b62de API for testing bailouts (#32518)
Summary:
This API seems to be quite useful to make sure all bailouts in a graph are triggered. I used it for testing torchvision models, and I was wondering if this might be something we actually want to have. zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32518

Differential Revision: D19553147

Pulled By: Krovatkin

fbshipit-source-id: 7542c99051588b622091aec6d041c70731ca5d26
2020-01-24 11:19:41 -08:00
James Reed
6745bfc31c Revert "Remove __torch__ from custom class qualname" (#32514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32514

This reverts commit c7fdf5b251.

Test Plan: Imported from OSS

Differential Revision: D19525532

Pulled By: jamesr66a

fbshipit-source-id: 126f4e87250a2ac739bd7aa161a0f7b39f143d38
2020-01-23 14:56:25 -08:00
James Reed
69f9bf8893 [JIT] Support returning tuple from custom bound C++ method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32477

Test Plan: Imported from OSS

Differential Revision: D19509927

Pulled By: jamesr66a

fbshipit-source-id: 7d407150402cc19344c3ec3b4a27b3d7c464e8ac
2020-01-23 14:56:15 -08:00
James Reed
7e14c420ae [JIT] Test __getstate__ and __setstate__ for custom bound C++ classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32470

Test Plan: Imported from OSS

Differential Revision: D19508250

Pulled By: jamesr66a

fbshipit-source-id: 481299fb3c18fa874c2a1d2993984bb6b3193bac
2020-01-23 14:56:06 -08:00
James Reed
dbd29e5668 [JIT] Passing custom class as arg (#32260)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32260

This makes it so you can actually pass the custom class as an arg to ScriptFunctions

Test Plan: Imported from OSS

Differential Revision: D19424252

Pulled By: jamesr66a

fbshipit-source-id: c3530186619655781dedbea03c2ad321aaff1cb8
2020-01-23 14:54:59 -08:00
Elias Ellison
ef94496b36 [JIT] throw if no self arg on ignored methods (#32503)
Summary:
There was a user who did this and it would segfault.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32503

Differential Revision: D19538481

Pulled By: eellison

fbshipit-source-id: dc3752028b9eff6ac88c025e8a2b5f8fd44ce32f
2020-01-23 14:27:00 -08:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Yanli Zhao
193ac31441 [jit] Enable IValue to hold a PyObject (#32491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32491

This PR enables IValue to hold a pure PyObject by adding a
new enum tag and a new jit_type to denote the existence of PyObject in IValue and
the JIT type system. We don't, and don't plan to, expose this to users.

This is the basic piece that enables IValue to be adopted more broadly, like
making RRef always hold an IValue; it might also simplify some compiler
logic
ghstack-source-id: 97039980

Test Plan: Imported from OSS

Differential Revision: D19502234

fbshipit-source-id: 90be001706d707d376cfbea25980fd82980df84a
2020-01-22 15:48:32 -08:00
Elias Ellison
38d122eca9 implement tuple constants (#31841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31841

Add Tuple Constants to JIT. The constraint here is that all elements of a tuple must themselves be insertable as constants. Previously tuples were special-cased in constant propagation, but now that more passes insert constants, such as freezing, we should just make tuples representable as constants.
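
The constraint can be sketched as a recursive check (illustrative only, not the real constant-insertion logic; the leaf-type set is an assumption for the sketch):

```python
# Types assumed constant-representable at the leaves of this sketch.
CONSTANT_LEAF_TYPES = (int, float, bool, str, type(None))

def insertable_as_constant(value):
    """A tuple is a constant iff every element is itself a constant."""
    if isinstance(value, tuple):
        return all(insertable_as_constant(v) for v in value)
    return isinstance(value, CONSTANT_LEAF_TYPES)
```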

Test Plan: Imported from OSS

Differential Revision: D19439514

Pulled By: eellison

fbshipit-source-id: 3810ba08ee349fa5598f4b53ea64525996637b1a
2020-01-22 12:13:31 -08:00
Elias Ellison
adf0916606 Add str[] float[] constants resubmit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31791

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D19439513

Pulled By: eellison

fbshipit-source-id: a04c7401687b051f0d4fb4794963931ebe004194
2020-01-22 12:11:58 -08:00
peter
b77c25dec0 Fix dll load logic for Python 3.8 on Windows (#32215)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31181 and https://github.com/pytorch/pytorch/pull/31162#discussion_r362495611.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32215

Differential Revision: D19501869

Pulled By: ezyang

fbshipit-source-id: 363824e52d2592ad968ecf1df345aa4c0daff915
2020-01-22 08:33:34 -08:00
Jerry Zhang
44b270d892 insert_quant_dequant pass support shared class types (#31408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31408

We'll error out when a graph is quantized with different QSchemes.
This only occurs when we have two modules of the same type (e.g. two Conv2d modules initialized with
the same arguments) that are quantized with two configs that would produce different quantized graphs, for example
per-tensor affine and per-channel affine. This is a rare case, so it should be OK to skip for now.
Actual support will come later.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D19162366

fbshipit-source-id: 798f06d0ddef0c8458237ce88b62159cc77eec8b
2020-01-21 22:18:49 -08:00
James Reed
1ecad2bb2b Test passing custom class instance to bound method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32320

Test Plan: Imported from OSS

Differential Revision: D19437335

Pulled By: jamesr66a

fbshipit-source-id: 8f5166dbe6fc5704b12b6224932460b12be0d39b
2020-01-17 23:09:38 -08:00
James Reed
c7078a1ce8 Fix returning instance of custom class from method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32312

Test Plan: Imported from OSS

Differential Revision: D19433511

Pulled By: jamesr66a

fbshipit-source-id: f048d5f60eaba992ee42fea2d318a59b3a156578
2020-01-17 23:09:34 -08:00
Elias Ellison
e7bc1663bd fix unchecked cast alias analysis (#32309)
Summary:
Unchecked cast just refines the type of a value; the value stays the same, so the output should alias the input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32309

Differential Revision: D19439037

Pulled By: eellison

fbshipit-source-id: fe6902d0d9a5a9ef5e9c13e1dbd056576d8c327e
2020-01-17 12:29:28 -08:00
Nikolay Korovaiko
53708e21ed classic fixed-point liveness
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31724

Differential Revision: D19426570

Pulled By: Krovatkin

fbshipit-source-id: 3387dfb25e6e9456d5d0517eac1d2e44e61d6813
2020-01-16 15:13:22 -08:00
Michael Suo
90c65b81c3 Define repr() on IValues (#32232)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32232

Previously, we were using `operator<<` as the default way of printing
IValue constants during serialization. The semantics of `operator<<`
were ill-defined, and this bit us in particular with strings and their lack
of quoting.

This PR defines the role of `operator<<`: much like Python `str()`, it
is intended to produce a human-readable-ish representation for
debugging purposes.

This PR also defines a new `repr()` function on IValue that is intended
to produce a valid Python expression that can be used to recreate an
object with the same value. `repr()` is not defined on all IValue kinds
(notably tensors!) for this reason.
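
The split mirrors Python's own `str()`/`repr()` contract, where `repr()` of a simple value round-trips through `eval`:

```python
s = 'say "hi"'

# str(): human-readable, no quoting -- fine for debugging, unusable for
# serialization because it is not a valid expression.
# repr(): a valid Python expression that recreates an equal value.
round_tripped = eval(repr(s))
```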

Test Plan: Imported from OSS

Differential Revision: D19417036

Pulled By: suo

fbshipit-source-id: c102d509eaf95a28b6a62280bc99ca6f09603de5
2020-01-15 17:35:41 -08:00
Richard Zou
19bbb4fccb Stop building documentation in pytorch_linux_xenial_cuda*_build (#32187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32187

Fixes #32058. Previously we would build documentation during the pytorch
linux cuda build. We don't actually need to do this because we have a
dedicated python_doc_build job that builds the docs. With this change,
the CUDA build should run ~10 minutes faster, giving devs faster signal.

Test Plan: - Check the CUDA (10.1) build on this PR, make sure it doesn't build the docs.

Differential Revision: D19400417

Pulled By: zou3519

fbshipit-source-id: e8fb2b818146f33330e06760377a9afbc18a71ed
2020-01-15 07:48:42 -08:00
Nikolay Korovaiko
02c3493a84 Fix an invalid peephole transformation if input/output values are written to (#28455)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/28360
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28455

Differential Revision: D19374601

Pulled By: Krovatkin

fbshipit-source-id: 622f24b40aba03e79e55a6b8d25d88417f7d8bad
2020-01-14 16:28:07 -08:00
davidriazati
61e509b992 Skip un-runnable tests (#31965)
Summary:
`test_init_ops` calls `orthogonal_` which fails without lapack (this test was just missing a skip condition)

The cpp tests would fail with a `undefined symbol` error if run with `BUILD_TESTS=0`, so this PR skips them if that flag is `0`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31965

Pulled By: driazati

Differential Revision: D19320064

fbshipit-source-id: d1dcd36714107688ded25a414e8969abe026bd03
2020-01-14 11:36:52 -08:00
Jerry Zhang
1f34801460 More robust mangling (#31978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31978

Currently we keep a `mangleIndex_` that's internal to the compilation unit and
just increment the index when we find the original name is mangled, but this
doesn't guarantee the new name is not already defined.
This PR fixes the problem by querying whether the new name is defined or not.
fixes: https://github.com/pytorch/pytorch/issues/31268

Test Plan:
fixes the issue

Imported from OSS

Differential Revision: D19350535

fbshipit-source-id: fe3262b2838d4208ab72e2cd4a5970b3a792ae86
2020-01-13 11:11:50 -08:00
Elias Ellison
8ecd3f783d check for object equality in constant pooling (#31800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31800

If we know that two constants are the same object, we can ignore other constraints and pool them together. This fixes an issue introduced by the other PR where quantization relied on constant pooling happening for correctness.

Test Plan: Imported from OSS

Differential Revision: D19269499

Pulled By: eellison

fbshipit-source-id: 9d4396125aa6899cb081863d463d4f024135cbf4
2020-01-08 16:47:07 -08:00
davidriazati
883fb5434a Use real argument names for Python functions (#29300)
Summary:
This hooks up `inspect` so that Python functions get their parameter
names attached instead of being named `0, 1, 2, ...`. This also fixes
issue #28537 where `ignore` functions were improperly typing `self`.
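A minimal sketch of how the stdlib `inspect` module recovers real parameter names (the `scale` function is made up, for illustration only):

```python
import inspect

def scale(tensor, factor=2.0):  # hypothetical function standing in for a user script fn
    return tensor * factor

sig = inspect.signature(scale)
# real names instead of 0, 1, 2, ...
print(list(sig.parameters))               # ['tensor', 'factor']
print(sig.parameters["factor"].default)   # 2.0
```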
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29300

Pulled By: driazati

Differential Revision: D19256434

fbshipit-source-id: 6a1fe7bd0afab708b8439517798955d0abfeb44c
2020-01-08 15:41:28 -08:00
Artem Volkhin
3a2757c682 Fix tracing for modules with List[Tensor] as output (#31343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31343

Fix an issue in TorchScript tracing for modules with `c10::List<at::Tensor>` as an output. TensorList was not supported properly.

Test Plan: unit tests

Reviewed By: wanchaol

Differential Revision: D18850722

fbshipit-source-id: 87a223104d1361fe754d55deceeb1e8bbcad629b
2020-01-07 11:57:25 -08:00
Jerry Zhang
5579611544 Enable foldbn tests (#29220)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29220

Support for accessing constants was added in previous
PRs; this PR re-enables the foldbn tests

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D18846848

fbshipit-source-id: 90ceaf42539ffee80b984e0d8b2420da66c263c3
2020-01-04 11:47:01 -08:00
Jerry Zhang
ebe69236d1 Expose class constant through attr and setattr in object (#29219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29219

We added class constants in previous PRs; this PR allows access to
class constants through the object API

Test Plan:
build/bin/test_jit
python test/test_jit.py

Imported from OSS

Differential Revision: D18846851

fbshipit-source-id: 888a6517d5f747d1f8ced283c0c2c30b2f6c72c6
2020-01-04 11:09:35 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
Lu Fang
cb1af5f61f Revert D19233558: add float[] str[] constants
Test Plan: revert-hammer

Differential Revision:
D19233558

Original commit changeset: 4f7c6d9ddbe7

fbshipit-source-id: a5020a9169e349a5970323471d673e8cd7818c66
2019-12-31 11:57:34 -08:00
Elias Ellison
dd0f2f0c19 add float[] str[] constants (#31503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31503

Add support for float list and string list constants, which enables better constant propagation + constant pooling + freezing.

Test Plan: Imported from OSS

Differential Revision: D19233558

Pulled By: eellison

fbshipit-source-id: 4f7c6d9ddbe7623757a9a20606ce5f394e14e93d
2019-12-30 11:58:17 -08:00
davidriazati
6064223808 @slowTest some slow tests (#31706)
Summary:
These are all the jit tests that take > 10 seconds according to `pytest test/test_jit.py --durations=15`

```
32.76s call     test/test_jit.py::TestModels::test_super_resolution
32.20s call     test/test_jit.py::TestModels::test_neural_style
30.90s call     test/test_jit.py::TestJit::test_export_batchnorm
25.95s call     test/test_jit.py::TestJit::test_dropout_module_requires_grad
22.24s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Transformer
12.38s call     test/test_jit.py::TestScript::test_fuser_double_float_codegen
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31706

Pulled By: driazati

Differential Revision: D19251567

fbshipit-source-id: 8e76f717506b8bf28d1a63ce302feb0446dc9141
2019-12-30 11:45:24 -08:00
Mingbo Wan
647569e546 get rid of choco install (#30897)
Summary:
7zip and cmake are part of the base image, so there is no need to re-install them. Removing the install step makes build/test more stable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30897

Differential Revision: D19232961

Pulled By: mingbowan

fbshipit-source-id: fa3bbd1325839a2a977bf13fdbd97fda43793b8d
2019-12-27 13:12:04 -08:00
davidriazati
446e9af5b9 Fix parsing of big float literals (#29940)
Summary:
Stacked PRs
 * **#29940 - [jit] Fix parsing of big float literals**
 * #29935 - [jit] Fix hex literal parsing
 * #29931 - [jit] Throw a better error for int too big for int64_t
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29940

Pulled By: driazati

Differential Revision: D19186604

fbshipit-source-id: 6ef66588a5cf956f281e7bd1e5584ef06f5296e9
2019-12-23 17:21:07 -08:00
Gregory Chanan
68e5172382 Support optional float parameters (float?, optional<double>). (#31517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31517

This is going to be used by upsample (which currently uses magic values to represent optionals).

For now, we just introduce a fake function for testing (torch._test_optional_float(x)).

Test Plan: Imported from OSS

Differential Revision: D19198721

Pulled By: gchanan

fbshipit-source-id: 0a1382fde0927c5d277d02d62bfb31fb574b8c74
2019-12-23 08:33:39 -08:00
James Reed
7d630278da Separate torchbind from Python (#30242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/29501

Currently blocked on schema serialization issue

Test Plan: Imported from OSS

Differential Revision: D18463063

Pulled By: jamesr66a

fbshipit-source-id: c12a1b644eb9bf04e68ff93cccf91d6cb3e75359
2019-12-21 22:52:40 -08:00
Martin Yuan
11854bcd38 Add test to torch.jit.export_opnames, make the _C function private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31446

Test Plan: Imported from OSS

Differential Revision: D19172851

Pulled By: iseeyuan

fbshipit-source-id: f06d8766ed73c9abe4ebf41c402ee64880d745be
2019-12-20 13:38:43 -08:00
Nikolay Korovaiko
5375ceae80 run optimizations on pre-profiled graph (#31392)
Summary:
This is the first stab at running profile-insensitive optimizations on pre-profiled graphs. Running those optimizations has the potential to simplify graphs greatly before GuardElimination, which should then be able to remove more guards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31392

Differential Revision: D19173639

Pulled By: Krovatkin

fbshipit-source-id: 2485a2a598c10f9b5445efb30b16439ad4551b3f
2019-12-20 10:49:08 -08:00
Zachary DeVito
457286a383 fix missing type check in dictionary literal
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31375

Test Plan: Imported from OSS

Differential Revision: D19145440

Pulled By: zdevito

fbshipit-source-id: 69909089586149ef766b4858d3420864a81b2493
2019-12-19 16:22:36 -08:00
Nikolay Korovaiko
fc3103b116 fixing a naming issue in creating a residual loop node in a bailout graph (#31400)
Summary:
This addresses the issue of differentiating between the `%4` in
`%12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3)` and the `%4` in `%y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24` inside the loop's body in a residual continuation loop, because these should be different values.

```
[DUMP profiling_graph_executor_impl.cpp:124] with prim::BailoutTemplate_0 = graph(%z.1 : int,
[DUMP profiling_graph_executor_impl.cpp:124]       %size.1 : int):
[DUMP profiling_graph_executor_impl.cpp:124]   %2 : Tensor = prim::Constant[value= 1  1 [ CPUDoubleType{2} ]]()
[DUMP profiling_graph_executor_impl.cpp:124]   %3 : Double(2) = prim::BailOut[index=0](%2, %z.1, %size.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %4 : int = prim::Constant[value=0]() # test_jit.py:3772:54
[DUMP profiling_graph_executor_impl.cpp:124]   %5 : None = prim::Constant()
[DUMP profiling_graph_executor_impl.cpp:124]   %6 : bool = prim::Constant[value=1]() # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]   %counters.1 : int[] = prim::ListConstruct()
[DUMP profiling_graph_executor_impl.cpp:124]   %8 : int = prim::Constant[value=8]()
[DUMP profiling_graph_executor_impl.cpp:124]   %9 : int = aten::__round_to_zero_floordiv(%size.1, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %10 : int = aten::mul(%9, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %11 : int = aten::sub(%size.1, %10)
[DUMP profiling_graph_executor_impl.cpp:124]   %12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.2 : int, %15 : int, %y.7 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %17 : Double(2) = prim::BailOut[index=1](%y.7, %z.1, %counters.1, %9, %11, %i.2, %15)
[DUMP profiling_graph_executor_impl.cpp:124]       %18 : int[] = aten::append(%counters.1, %15) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %19 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %20 : Tensor = aten::ones(%19, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %21 : Double(1) = prim::BailOut[index=2](%20, %z.1, %counters.1, %9, %11, %i.2, %15, %17)
[DUMP profiling_graph_executor_impl.cpp:124]       %22 : Tensor[] = prim::ListConstruct(%17, %21)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %24 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %25 : int = aten::add(%15, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %26 : int[] = aten::append(%counters.1, %25) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %27 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %28 : Tensor = aten::ones(%27, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %29 : Double(1) = prim::BailOut[index=3](%28, %z.1, %counters.1, %9, %11, %i.2, %y.5, %25)
[DUMP profiling_graph_executor_impl.cpp:124]       %30 : Tensor[] = prim::ListConstruct(%y.5, %29)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.9 : Double(4) = aten::cat(%30, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %32 : int = aten::add(%25, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %33 : int[] = aten::append(%counters.1, %32) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %34 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %35 : Tensor = aten::ones(%34, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %36 : Double(1) = prim::BailOut[index=4](%35, %z.1, %counters.1, %9, %11, %i.2, %y.9, %32)
[DUMP profiling_graph_executor_impl.cpp:124]       %37 : Tensor[] = prim::ListConstruct(%y.9, %36)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.10 : Double(5) = aten::cat(%37, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %39 : int = aten::add(%32, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %40 : int[] = aten::append(%counters.1, %39) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %41 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %42 : Tensor = aten::ones(%41, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %43 : Double(1) = prim::BailOut[index=5](%42, %z.1, %counters.1, %9, %11, %i.2, %y.10, %39)
[DUMP profiling_graph_executor_impl.cpp:124]       %44 : Tensor[] = prim::ListConstruct(%y.10, %43)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.11 : Double(6) = aten::cat(%44, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %46 : int = aten::add(%39, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %47 : int[] = aten::append(%counters.1, %46) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %48 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %49 : Tensor = aten::ones(%48, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %50 : Double(1) = prim::BailOut[index=6](%49, %z.1, %counters.1, %9, %11, %i.2, %y.11, %46)
[DUMP profiling_graph_executor_impl.cpp:124]       %51 : Tensor[] = prim::ListConstruct(%y.11, %50)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.12 : Double(7) = aten::cat(%51, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %53 : int = aten::add(%46, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %54 : int[] = aten::append(%counters.1, %53) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %55 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %56 : Tensor = aten::ones(%55, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %57 : Double(1) = prim::BailOut[index=7](%56, %z.1, %counters.1, %9, %11, %i.2, %y.12, %53)
[DUMP profiling_graph_executor_impl.cpp:124]       %58 : Tensor[] = prim::ListConstruct(%y.12, %57)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.13 : Double(8) = aten::cat(%58, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %60 : int = aten::add(%53, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %61 : int[] = aten::append(%counters.1, %60) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %62 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %63 : Tensor = aten::ones(%62, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %64 : Double(1) = prim::BailOut[index=8](%63, %z.1, %counters.1, %9, %11, %i.2, %y.13, %60)
[DUMP profiling_graph_executor_impl.cpp:124]       %65 : Tensor[] = prim::ListConstruct(%y.13, %64)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.14 : Double(9) = aten::cat(%65, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %67 : int = aten::add(%60, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %68 : int[] = aten::append(%counters.1, %67) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %69 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %70 : Tensor = aten::ones(%69, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %71 : Double(1) = prim::BailOut[index=9](%70, %z.1, %counters.1, %9, %11, %i.2, %y.14, %67)
[DUMP profiling_graph_executor_impl.cpp:124]       %72 : Tensor[] = prim::ListConstruct(%y.14, %71)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.15 : Tensor = aten::cat(%72, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %74 : int = aten::add(%67, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %74, %y.15)
[DUMP profiling_graph_executor_impl.cpp:124]   %75 : Double(10) = prim::BailOut[index=10](%y.1, %z.1, %counters.1, %11, %12)
[DUMP profiling_graph_executor_impl.cpp:124]   %76 : int, %y : Tensor = prim::Loop(%11, %6, %12, %75) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.1 : int, %79 : int, %y.6 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %81 : Double(*) = prim::BailOut[index=11](%y.6, %z.1, %counters.1, %11, %i.1, %79)
[DUMP profiling_graph_executor_impl.cpp:124]       %82 : int[] = aten::append(%counters.1, %79) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %83 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %84 : Tensor = aten::ones(%83, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %85 : Double(1) = prim::BailOut[index=12](%84, %counters.1, %11, %i.1, %79, %81)
[DUMP profiling_graph_executor_impl.cpp:124]       %86 : Tensor[] = prim::ListConstruct(%81, %85)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.4 : Tensor = aten::cat(%86, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %88 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %89 : int = aten::add(%79, %88)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %89, %y.4)
[DUMP profiling_graph_executor_impl.cpp:124]   %90 : Double(12) = prim::BailOut[index=13](%y, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %91 : (Tensor, int[]) = prim::TupleConstruct(%90, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   return (%91)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31400

Differential Revision: D19172750

Pulled By: Krovatkin

fbshipit-source-id: 85d3aac4e80b65b83b6be3c0bca8075a731a2b7e
2019-12-19 00:34:50 -08:00
Elias Ellison
fb24f7c4ad catch all exceptions in converting default values to ivalues (#31398)
Summary:
Previously we would only catch `py::cast_error`, which led to incomprehensible error messages like `TypeError: 'NoneType' object is not iterable`. We are running arbitrary pybind code here and not doing anything with the error message, so we should be less restrictive with the types of errors we catch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31398

Differential Revision: D19166655

Pulled By: eellison

fbshipit-source-id: 84db8b3714c718b475913f2f4bb6f19e62f2d9ec
2019-12-18 20:27:46 -08:00
Jerry Zhang
fe707c7849 Use default_observer and default_weight_observer in tests (#31424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31424

att

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D19162368

fbshipit-source-id: 33b95ba643eeeae942283bbc33f7ceda8d14c431
2019-12-18 18:35:07 -08:00
James Reed
a3cdb7eca3 Fix default instantation of dynamic quantized LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31433

Test Plan: Imported from OSS

Differential Revision: D19164539

Pulled By: jamesr66a

fbshipit-source-id: 7045817ab3dfb530c4480a10523c4c6bcdbfc7eb
2019-12-18 16:59:00 -08:00
davidriazati
148bcd3ee5 Add support for builtins as attributes (#31269)
Summary:
Fixes #27495

This adds builtins as another piece of a concrete type. They're separate from normal functions since they represent the `BuiltinFunction` sugared value (which is a direct call to a builtin op). It also moves the builtins related logic from `jit/__init__.py` to `jit/_builtins.py` so it can be used from `jit/_recursive.py` to look up functions in the builtins table.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31269

Pulled By: driazati

Differential Revision: D19149779

fbshipit-source-id: d4e5e5d7d7d528b75a2f503e6004394251a4e82d
2019-12-18 15:24:45 -08:00
davidriazati
7692494c67 Fix hex literal parsing (#29935)
Summary:
Stacked PRs
 * #29940 - [jit] Fix parsing of big float literals
 * **#29935 - [jit] Fix hex literal parsing**
 * #29931 - [jit] Throw a better error for int too big for int64_t

Previously these were all parsed as `0`
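For reference, the plain-Python semantics the parser should match — each of these previously came back as `0`:

```python
# Hex literals must evaluate to their integer value, not 0
values = [0x10, 0xFF, 0xDEADBEEF]
print(values)  # [16, 255, 3735928559]
```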
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29935

Pulled By: driazati

Differential Revision: D19124944

fbshipit-source-id: 1ee0c1dee589933363a5efba069a2cfaf94373c5
2019-12-18 14:00:22 -08:00
davidriazati
1f50cfc24d Throw a better error for int too big for int64_t
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29931

Pulled By: driazati

Differential Revision: D19124934

fbshipit-source-id: 91841d7ba4f2f6142c51fba07b7faa14bb817e3a
2019-12-18 14:00:16 -08:00
Elias Ellison
fb30a48b4e add unsupported section (#31329)
Summary:
Add a section for unsupported ops and modules. Automatically generate the properties and attributes that aren't bound, and for ops that have semantic mismatches, set up tests so the docs stay up to date.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31329

Differential Revision: D19164472

Pulled By: eellison

fbshipit-source-id: 46290bb8a64d9de928cfb1eda5ff4558c3799c88
2019-12-18 13:56:02 -08:00
Alexander Stante
f30b14dead Fix handling of type comments in body (#30590)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/30477. Any type comment after `# type: (...) -> ` is ignored.
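A sketch of the pattern at issue — a signature type comment followed by a body-level type comment, which must not be confused with the signature (the `add` function is made up, for illustration):

```python
def add(x, y):
    # type: (int, int) -> int
    total = x + y  # type: int  # body-level type comment after the signature one
    return total

print(add(1, 2))  # 3
```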
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30590

Differential Revision: D18887351

Pulled By: driazati

fbshipit-source-id: 162c652f6d7610d14609bbcb25aaa27cdd947a76
2019-12-12 18:19:30 -08:00
Elias Ellison
bee6344d4e remove / rewrite weak module tests (#31193)
Summary:
Remove most of the testing for `weak_script`, since we removed it. Refactor a few of the existing tests to use the recursive scripting API.

Fix for https://github.com/pytorch/pytorch/issues/23965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31193

Differential Revision: D18966291

Pulled By: eellison

fbshipit-source-id: 6b1e18c293f55017868a14610d87b69be42bde12
2019-12-12 13:33:38 -08:00
Elias Ellison
56de8853da Resubmit overload v2 (#31123)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/30356 and https://github.com/pytorch/pytorch/pull/31014 :'(

The last commit contains the fix. There was an internal fbcode build error that could not compile the previous `impl_default->second.equal(default_val.second))` line. I tried various fixes in C++ internally but couldn't figure anything out. This is a good example of the programming cost of going from Python to C++ for different types of objects, because the conceptual overhead has expanded in scope from (python) to (python, c++, pybind).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31123

Differential Revision: D18936128

Pulled By: eellison

fbshipit-source-id: 7d8fd66a6dd4a3e9838f3a0b68c219b6565a9462
2019-12-12 07:54:23 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
davidriazati
679b20b1e4 Unify list elements for all list types (#30777)
Summary:
Previously list elements were only unified for tensor lists.
This improves error messages and expands the unification logic
to include all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30777

Pulled By: driazati

Differential Revision: D18837726

fbshipit-source-id: c4d275562a8429700987569426d694faa8f6002e
2019-12-11 17:00:52 -08:00
David Riazati
1f87e823b8 Make nn.Transformer TorchScript compatible (#28561)
Summary:
This makes `nn.Transformer` usable from TorchScript. It preserves backwards compatibility via `__setstate__` on the encoder/decoder.

Fixes https://github.com/pytorch/pytorch/issues/24173
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28561

Differential Revision: D18124753

Pulled By: driazati

fbshipit-source-id: 7314843e5aa9c9bf974c4672e4edb24ed8ef4a6f
2019-12-11 10:57:31 -08:00
Alban Desmaison
717274c001 Add useful warnings for t.grad when it won't be populated for known reasons (#30531)
Summary:
Fix https://github.com/pytorch/pytorch/issues/2362 and https://github.com/pytorch/pytorch/issues/19778

To avoid issues with frozen models, we only consider warning for Tensors that require gradients and are neither leaves nor set to retain gradients.
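A small sketch of the behavior being warned about — `.grad` is only populated on leaf tensors (or tensors that call `retain_grad()`):

```python
import torch

x = torch.ones(3, requires_grad=True)  # leaf tensor
y = x * 2                              # non-leaf intermediate
y.sum().backward()

print(x.grad)  # populated: tensor([2., 2., 2.])
print(y.grad)  # None -- accessing it is what triggers the new warning
```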
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30531

Differential Revision: D18832767

Pulled By: albanD

fbshipit-source-id: 743e863dc14ab57713e66da78b2e4d759dfba0ff
2019-12-11 09:47:18 -08:00
Elias Ellison
9f3fe78239 peephole optimize type refinements (#31024)
Summary:
Peephole optimize out type refinements when they are no longer refining the type.
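A hedged sketch of the kind of no-op refinement this pass can now remove — `x` is already statically a `Tensor`, so the `isinstance` check refines nothing:

```python
import torch

@torch.jit.script
def f(x: torch.Tensor):
    # this refinement does not narrow the type and can be peepholed away
    if isinstance(x, torch.Tensor):
        return x + 1
    return x

print(f(torch.zeros(1)))  # tensor([1.])
```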
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31024

Differential Revision: D18920958

Pulled By: eellison

fbshipit-source-id: 6d05d9812b9f9dcf001de760a78a2042fb832773
2019-12-10 18:32:28 -08:00
Pieter Noordhuis
78a00d72b4 Revert D18899127: resubmit polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18899127

Original commit changeset: 9049b8718926

fbshipit-source-id: c70a8aa4120aa757dce0926a8ab3cc5c92cd6041
2019-12-10 10:51:07 -08:00
Elias Ellison
af4040d808 resubmit polish up overloads on free functions (#31014)
Summary:
Resubmitting https://github.com/pytorch/pytorch/pull/30356

The second commit reintroduces a deleted function whose removal caused the previous revert.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31014

Differential Revision: D18899127

Pulled By: eellison

fbshipit-source-id: 9049b8718926c329d9cb46bb96eac6c278e9b866
2019-12-10 07:57:47 -08:00
Elias Ellison
f48a8901c5 Add floor_divide function (#30493)
Summary:
Adds `torch.floor_divide`, following numpy's `floor_divide` API. I only implemented the out-of-place version; I can add the in-place version if requested.

Also fixes  https://github.com/pytorch/pytorch/issues/27512
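NumPy's `floor_divide`, like Python's `//` operator, floors toward negative infinity rather than truncating toward zero; a quick reminder of those semantics in plain Python:

```python
# Floor division rounds toward negative infinity, not toward zero
print(7 // 2)    # 3
print(-7 // 2)   # -4  (truncation would give -3)
print(7.0 // 2)  # 3.0 -- also defined for floats
```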
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30493

Differential Revision: D18896211

Pulled By: eellison

fbshipit-source-id: ee401c96ab23a62fc114ed3bb9791b8ec150ecbd
2019-12-10 07:51:39 -08:00