Commit Graph

1509 Commits

Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
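
For illustration, a minimal sketch of the pattern this fixes (module and method names are hypothetical):

```python
import torch

class M(torch.nn.Module):
    @staticmethod
    def double(x: torch.Tensor) -> torch.Tensor:
        return 2 * x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Accessing a @staticmethod through `self` now compiles in TorchScript.
        return self.double(x)

scripted = torch.jit.script(M())
```
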
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
Will Feng (FAIAR)
38d141ede5 Support having a different forward method when we are not in scripting mode (#38158)
Summary:
TorchScript currently doesn't support `*args, **kwargs` in method signatures, which are used extensively in the forward methods of DPER3 low-level modules. To make DPER3 low-level modules scriptable, I was thinking about a solution of having a forward method *only* for TorchScript, and replacing the forward method when we are not in scripting mode.

This solution works today, and I would like to add a test to make sure it will always work in the future.
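
A minimal sketch of the pattern under test (names are hypothetical): script the fixed-signature `forward`, and swap in a flexible eager implementation when not in scripting mode.

```python
import torch

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # fixed signature that TorchScript can compile
        return x + 1

def _eager_forward(self, *args, **kwargs):
    # flexible signature, usable only outside of TorchScript
    return args[0] + 1

m = M()
scripted = torch.jit.script(m)         # compiles the TorchScript-friendly forward
m.forward = _eager_forward.__get__(m)  # eager calls keep the *args/**kwargs version
```
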
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158

Differential Revision: D21485657

Pulled By: yf225

fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
2020-05-14 12:13:06 -07:00
David Reiss
7f7fdb1013 Remove a use of checkScript(str) (#35623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.

Test Plan: CI

Differential Revision: D20842874

Pulled By: dreiss

fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
2020-05-14 10:07:58 -07:00
Hong Xu
336e1ec592 Clean up error handling in is_nonzero and where in TensorCompare.cpp (#38150)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38150

Differential Revision: D21539736

Pulled By: ezyang

fbshipit-source-id: e390c12f5948192a552d66dcd1bb89b2cb45f170
2020-05-13 20:19:40 -07:00
Elias Ellison
8d883f5c7c [JIT] [Easy] Add location to implicit conversions (#38442)
Summary:
Previously, we weren't adding the location to implicit conversions, so the error message wouldn't show the location when these ops failed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38442

Differential Revision: D21563500

Pulled By: eellison

fbshipit-source-id: 19dd786ab8580f11ed919aac669efeed0ef52dcb
2020-05-13 18:02:41 -07:00
Michael Suo
2efa7e04c2 [jit] move torchbind tests to separate file (#37473)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37473

Test Plan: Imported from OSS

Differential Revision: D21297541

Pulled By: suo

fbshipit-source-id: 65c48094b1f26fbbf251021957257ce04279922b
2020-05-13 17:37:00 -07:00
anjali411
1676c7d618 Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only (#38399)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38399

Test Plan: Imported from OSS

Differential Revision: D21555941

Pulled By: anjali411

fbshipit-source-id: ea9f5a76590c5bab3df6a540617b074238bfb535
2020-05-13 16:41:09 -07:00
Michael Suo
167a978a03 Fix method stub creation for function attributes (#37994)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37994

Before, reassigning a method in a module (like `forward = _forward`)
didn't work, because we looked at the function object's name for our def
name when building the AST. Make that overridable to handle cases like
reassignment.
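
A sketch of the now-working pattern (hypothetical module):

```python
import torch

class M(torch.nn.Module):
    def _forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

    # Reassignment: the function object's __name__ is "_forward", but the
    # method must be compiled under the name "forward".
    forward = _forward

scripted = torch.jit.script(M())
```
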

Test Plan: Imported from OSS

Differential Revision: D21444535

Pulled By: suo

fbshipit-source-id: 4f045f18b5a146edc8005689af525d7d7ed8dd5f
2020-05-12 23:20:35 -07:00
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the https://github.com/pytorch/pytorch/pull/33783 stack to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
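
For example, a call like the following should now compile; with literal flag values, boolean_dispatch statically selects the overload, so the return type is fixed (a sketch):

```python
import torch

@torch.jit.script
def count_unique(x: torch.Tensor):
    # With return_counts=True as a literal, boolean_dispatch picks the
    # overload whose return type is a (values, counts) tuple at compile time.
    values, counts = torch.unique(x, sorted=True, return_counts=True)
    return values, counts
```
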
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Kimish Patel
f954dd7823 Add dropout removal pass. (#38253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38253

This pass removes dropout and dropout_ nodes when training is false. It
requires the freeze_module pass (which does both inlining and constant
propagation) to have been run first; without it, the training variable
remains an attribute instead of a constant.
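
For illustration, a sketch of the kind of module the pass targets (using `torch.jit.freeze` from current PyTorch; at the time the freeze_module pass was exposed internally):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = torch.nn.Dropout(0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dropout(x)

m = torch.jit.script(M()).eval()
# Freezing inlines the graph and constant-propagates self.training to
# False, at which point the dropout node is an identity and can be removed.
frozen = torch.jit.freeze(m)
```
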
ghstack-source-id: 103939141

Test Plan: python test/test_jit.py TestScript.test_remove_dropout

Reviewed By: dreiss

Differential Revision: D21505863

fbshipit-source-id: 42ea45804e4653b625b6a254c8d8480757264aa8
2020-05-12 14:38:34 -07:00
James Reed
a553935e3c [JIT] Expose magic methods on script::Object (#38167)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38167

Test Plan: Imported from OSS

Differential Revision: D21486709

Pulled By: jamesr66a

fbshipit-source-id: 17b44d979fc658768b0d64f7d8af6fb684043ea3
2020-05-11 15:01:15 -07:00
Vitaly Fedyunin
57d01be92b Replacing assertEqual with assertEqualIgnoreType wherever types mismatch (#38102)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38102

Test Plan: Imported from OSS

Differential Revision: D21477060

Pulled By: VitalyFedyunin

fbshipit-source-id: 25e0fd837ca9bfccf0ce994c80f7790c894096d4
2020-05-09 14:48:55 -07:00
Ailing Zhang
e84aa0211d [JIT] Support List variable in adv indexing. (#37966)
Summary:
Follow-up to https://github.com/pytorch/pytorch/issues/37848: I realized that it's better to condition on the `Value` type instead of the token type. So now it also supports indexing through list variables (it used to be list literals only).
Also, our eager frontend apparently accepts indexing with a float list as well, so this matches that edge-case behavior too.
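
A sketch of the newly supported form:

```python
import torch

@torch.jit.script
def pick(x: torch.Tensor):
    idx = [0, 2]   # a list *variable*, not just a list literal
    return x[idx]  # compiled like x[torch.tensor(idx)]
```
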
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37966

Reviewed By: suo

Differential Revision: D21439642

Pulled By: ailzhang

fbshipit-source-id: cedb8431ef38747d4aa9909a6bbf8e954dbe0e25
2020-05-08 15:40:11 -07:00
James Reed
c1e7758b5e Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38101

Original commit changeset: 29e8a4d3b8bf
ghstack-source-id: 103730417

Test Plan: waitforsadcastle

Differential Revision: D21471381

fbshipit-source-id: a922cdf31ba32021e7264ae1454c646c0bfd7ef4
2020-05-08 10:53:06 -07:00
Ailing Zhang
9232356e5f remove uses of type() and type_as() part 1. (#38029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38029

Differential Revision: D21468523

Pulled By: ailzhang

fbshipit-source-id: 14b7185d43eb03f630cfaa2d70e02d637ff8551b
2020-05-08 08:16:24 -07:00
Nikita Shulga
4bc0a7f86a Revert D20229168: [quantization] Use torchbind for Linear PackedParams
Test Plan: revert-hammer

Differential Revision:
D20229168

Original commit changeset: 3607cac9aa5b

fbshipit-source-id: 29e8a4d3b8bffd95ff6a58b46c4f1c1e23770304
2020-05-07 19:47:45 -07:00
James Reed
eaf9b28c55 [quantization] Use torchbind for Linear PackedParams (#34140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34140

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D20229168

Pulled By: jamesr66a

fbshipit-source-id: 3607cac9aa5b4b044572329742baed03350491c6
2020-05-07 19:03:44 -07:00
eellison
d5df055bbb [WIP][JIT] Add JIT backend registration API (#35833)
Summary:
**Summary**
This commit adds `torch::jit::RegisterBackend`, an API that allows
external backends to be registered for the execution of JIT subgraphs
outside the JIT interpreter. In order to register an external backend,
one must extend the provided abstract class `PyTorchBackendInterface` and provide
two additional functions: one that creates an instance of the aforementioned subclass
of `PyTorchBackendInterface`, and another that preprocesses a `ScriptModule` so that
it can run on the backend. Then, a `ScriptModule` that can compile and execute a given
JIT subgraph using the functions provided at registration time is generated
for each registered backend.

**Testing**
This commit adds a unit test that uses a minimal test backend
to make sure that the registration endpoint and generated
`ScriptModule` work.

```
$ python test/test_jit.py TestBackends
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.183s

OK

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35833

Differential Revision: D21231955

Pulled By: SplitInfinity

fbshipit-source-id: 452db1123d0e5d83f97fe5da8a00fdfdb50dbef9
2020-05-07 18:15:26 -07:00
Elias Ellison
f5b3125af7 [JIT] Peephole optimize list ops (#37612)
Summary:
Peephole optimize  `len(li)` and `li[index]` patterns.

This changes the Profiled Graph IR for the following tests:
```
(Test Name, Num ifs loops, Num non-tensor nodes)
Before:
('test_nn_Conv1d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv2d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv1d_circular_stride2_pad2', 5, 31)
('test_nn_Conv2d_circular_stride2_pad2', 5, 31)
('test_nn_Conv3d_circular_stride2_pad2', 5, 31)
('test_nn_Conv1d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv2d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv3d_replicate_stride2_pad2', 3, 14)
After
('test_nn_Conv1d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv2d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv1d_circular_stride2_pad2', 0, 4)
('test_nn_Conv2d_circular_stride2_pad2', 0, 7)
('test_nn_Conv3d_circular_stride2_pad2', 0, 10)
('test_nn_Conv1d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv2d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv3d_replicate_stride2_pad2', 0, 2)
```
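
A sketch of the `len(li)` / `li[index]` folding described above (hypothetical function):

```python
import torch

@torch.jit.script
def f(x: torch.Tensor):
    sizes = [x.size(0), x.size(1)]
    # Since `sizes` is never mutated, the pass can fold len(sizes) to the
    # constant 2 and sizes[0] directly to x.size(0), removing the list ops.
    return len(sizes) + sizes[0]
```
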
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37612

Differential Revision: D21352676

Pulled By: eellison

fbshipit-source-id: f8a0e7653b7a6a4c769f075de9b3044242ca9336
2020-05-06 15:55:18 -07:00
Elias Ellison
28ac5cdc91 fix profiling test (#37961)
Summary:
this is failing in the profiling_executor job
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37961

Differential Revision: D21434341

Pulled By: eellison

fbshipit-source-id: b34f94b1595ef6f6edee76cd200f951a2ef21f22
2020-05-06 15:04:44 -07:00
Elias Ellison
0e3a05ec00 [JIT] rename enable_profiling_mode to enable_profiling_mode_for_profiling_tests (#37825)
Summary:
The existing context manager only conditionally enabled profiling mode, which was counterintuitive. When we changed the default executor, it broke internal benchmarking as a result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37825

Differential Revision: D21404611

Pulled By: eellison

fbshipit-source-id: 306b3c333ef4eb44ab6a6e5ab4e0682e5ce312ce
2020-05-06 11:30:02 -07:00
Ailing Zhang
dd618216c5 [JIT] Support adv indexing using list. (#37848)
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`

This PR adds support for indexing through list which is equivalent to tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`

Note: for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616). Now that it's fixed, we probably want to mark it as BC-breaking.
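
A sketch of the corrected semantics on a 1-D tensor:

```python
import torch

@torch.jit.script
def pick(x: torch.Tensor):
    # Now lowered like x[torch.tensor([0, 1, 5])]; the old AST conversion
    # treated this as x[0, 1, 5] (multi-dimensional indexing) instead.
    return x[[0, 1, 5]]

assert torch.equal(pick(torch.arange(6)), torch.tensor([0, 1, 5]))
```
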
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848

Reviewed By: suo

Differential Revision: D21409840

Pulled By: ailzhang

fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
2020-05-06 10:44:48 -07:00
Jerry Zhang
70f375becf [quant] ConvPackedParams with TorchBind (#35923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923

(Note: this ignores all push blocking failures!)

Test Plan:
tbd

Imported from OSS

Differential Revision: D20957089

fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0
2020-05-05 20:18:36 -07:00
Michael Suo
bd220b336b [jit] fix trace checking reporting divergent names (#37842)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842

Fixes https://github.com/pytorch/pytorch/issues/23993.

Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```

This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

Test Plan: Imported from OSS

Differential Revision: D21406478

Pulled By: suo

fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
2020-05-05 13:39:41 -07:00
Elias Ellison
23d0441da7 [JIT] Fix GetAttr inconsistency (#37424)
Summary:
We were previously only looking at class attributes, which didn't include methods etc., and would silently give wrong semantics. This makes hasAttr go through the same resolution as our other attribute lookups.
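
A sketch of the kind of check that now resolves correctly (hypothetical module):

```python
import torch

class M(torch.nn.Module):
    def helper(self) -> int:
        return 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Previously this only consulted class attributes, so methods like
        # `helper` could be reported missing; now the check uses the same
        # resolution as other attribute lookups.
        if hasattr(self, "helper"):
            return x + self.helper()
        return x

scripted = torch.jit.script(M())
```
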
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37424

Differential Revision: D21282633

Pulled By: eellison

fbshipit-source-id: 8e970f365c2740d137a02331739c2ed93747b918
2020-05-05 09:06:51 -07:00
Michael Suo
804e32a467 split out docs tests into separate job (#37793)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37793

Test Plan: Imported from OSS

Differential Revision: D21392798

Pulled By: suo

fbshipit-source-id: 172fb0522d0b168ca19a382e5fb1eb87b6390acc
2020-05-04 17:58:04 -07:00
Michael Suo
b7f258bbd3 add fmt to libtorch_python.so (#37560)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37560

Test Plan: Imported from OSS

Differential Revision: D21320059

Pulled By: suo

fbshipit-source-id: 95cfe7cf26c515fdfcb4621cc58266d838a38a3e
2020-05-04 10:14:37 -07:00
Nikolay Korovaiko
831c8f362f fix the incorrect merge of profiling information of two tensor types for the same value (#36806)
Summary:
As part of moving to dynamic shapes, we are now passing `frame_id` to each profiling callback. Implementing that requires copying profiling callbacks into the Interpreter, so the `first`s are actually different for every run. The dynamic-shapes merging algorithm won't be using `first`, but in the meantime, while we get there, this should be a good enough fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36806

Differential Revision: D21307173

Pulled By: Krovatkin

fbshipit-source-id: 7dade56ebcc72ebd40bb7f3d636c7b83c99b628f
2020-05-01 12:53:25 -07:00
Michael Voznesensky
91e74fd843 [JIT] Adds a code_with_constants method to module printing (#37586)
Summary:
Closes https://github.com/pytorch/pytorch/issues/36625
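
A sketch of the new accessor (hypothetical module; return details per the `code_with_constants` documentation):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

scripted = torch.jit.script(M())
code, consts = scripted.code_with_constants
print(code)  # source text; constants that can't be inlined print as CONSTANTS.c0, c1, ...
# `consts` maps those CONSTANTS.cN names back to the underlying values.
```
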
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37586

Differential Revision: D21331385

Pulled By: suo

fbshipit-source-id: 752e63eac8bdd06c6719efb972cdc832ad7c1535
2020-04-30 20:44:01 -07:00
James Reed
d3d10cc14a Add tests for lower_graph and fix unpack() ops dispatch (#37540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37540

ghstack-source-id: 103169129

Test Plan:
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph_conv \(test_jit\.TestScript\)'
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph \(test_jit\.TestScript\)'

Differential Revision: D21313433

fbshipit-source-id: bb9942272784e517b07537ee4c149b9dc4df4c2a
2020-04-30 10:55:05 -07:00
Michael Suo
896f8130a6 Revert D21297549: [jit] fix trace checking reporting divergent names
Test Plan: revert-hammer

Differential Revision:
D21297549

Original commit changeset: 981d5879a4a2

fbshipit-source-id: 9be6e88007c644914973a305f9e7a961ef11a815
2020-04-29 16:16:44 -07:00
Michael Suo
4bfa51d405 [jit] fix trace checking reporting divergent names (#37464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464

Fixes https://github.com/pytorch/pytorch/issues/23993.

There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.
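
A sketch of why the memoized copy matters:

```python
import copy
import torch

t = torch.ones(1)
a, b, c = copy.deepcopy((t, t, t))
# deepcopy's memo table preserves aliasing: all three elements are copies
# of the *same* object, matching the aliasing of the original inputs.
assert a is b and b is c
```
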

Test Plan: Imported from OSS

Differential Revision: D21297549

Pulled By: suo

fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
2020-04-28 23:52:57 -07:00
Elias Ellison
a55d80e1c5 [JIT] remove dominated guards of functional values (#37105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37105

If a value isn't mutated anywhere and is guarded by a node, then we can remove all other guards that are dominated by the first guard.

This reduces the (test name, Ifs/Loops, non-tensor nodes excluding getAttr and Bailouts) counts relative to the previous PR for the following tests:
```
Before:  ('upsample', 0, 13)
After:  ('upsample', 0, 5)
Before:  ('upsample', 0, 2)
After:  ('upsample', 0, 1)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 18)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 20)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 13)
After:  ('interpolate', 1, 11)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 15)
After:  ('interpolate', 1, 13)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 25)
After:  ('interpolate', 1, 21)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 27)
After:  ('interpolate', 1, 23)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('test_nn_BatchNorm1d_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input', 1, 2)
Before:  ('test_nn_BatchNorm1d_affine_simple_average', 2, 5)
After:  ('test_nn_BatchNorm1d_affine_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm1d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm1d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm2d', 2, 3)
After:  ('test_nn_BatchNorm2d', 1, 2)
Before:  ('test_nn_BatchNorm2d_2d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm2d_2d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm2d_momentum', 2, 3)
After:  ('test_nn_BatchNorm2d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm2d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm2d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm2d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm2d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm3d', 2, 3)
After:  ('test_nn_BatchNorm3d', 1, 2)
Before:  ('test_nn_BatchNorm3d_3d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm3d_3d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm3d_momentum', 2, 3)
After:  ('test_nn_BatchNorm3d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm3d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm3d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm3d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm3d_zero_batch', 1, 2)
Before:  ('test_nn_Transformer', 127, 467)
After:  ('test_nn_Transformer', 122, 450)
```

Test Plan: Imported from OSS

Differential Revision: D21215652

Pulled By: eellison

fbshipit-source-id: 0365fc2e351caca7e1ccaa25428908a26e3f5343
2020-04-28 23:28:18 -07:00
Elias Ellison
45e8451b33 optimize is_float_point calls (#37012)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37012

Removes an if statement in `torch.nn.functional.affine_grid`

Test Plan: Imported from OSS

Differential Revision: D21160755

Pulled By: eellison

fbshipit-source-id: 8b030936c9fbdb05b44abc9f254805d102f2acc2
2020-04-28 23:28:12 -07:00
Elias Ellison
cde1350a5d Add support for generic list constants (#36953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36953

Add support for generic lists as constants; generic dicts & tuples are already implemented. This is a pretty common pattern, and it cuts down on the number of non-tensor nodes executed in the interpolate tests.
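
A sketch of the kind of list that can now be treated as a constant (hypothetical function):

```python
import torch

@torch.jit.script
def mode_name(i: int) -> str:
    # An unmodified list of strings can now be baked into the graph as a
    # single constant node instead of being constructed at runtime.
    modes = ["nearest", "linear", "bilinear"]
    return modes[i]
```
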

Test Plan: Imported from OSS

Differential Revision: D21160761

Pulled By: eellison

fbshipit-source-id: 1e6b7b25b7580f09067794772d44e615601c60c4
2020-04-28 23:28:07 -07:00
Elias Ellison
92129956cf Add size peephole optimization (#36758)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36758

Test Plan: Imported from OSS

Differential Revision: D21160760

Pulled By: eellison

fbshipit-source-id: 9cdb8eeffa71fb4670a811347ae4fad2a82ae1d8
2020-04-28 23:27:52 -07:00
Michael Suo
92b9089fd9 [jit] Fix pretty printing of functions (#37432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37432

Fixes https://github.com/pytorch/pytorch/issues/36803.

Test Plan: Imported from OSS

Differential Revision: D21284735

Pulled By: suo

fbshipit-source-id: 8c673099b3171070bff80fd1defc91487f66d4b3
2020-04-28 21:30:49 -07:00
mattip
ec8006cc16 [ONNX] fix provider_version and add consistency test (#36797)
Summary:
forward port the test from pr gh-36795, xref issue gh-32561
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36797

Differential Revision: D21257034

Pulled By: ezyang

fbshipit-source-id: d217da0e74f00a433c904defc0bf3eb5f594fd5e
2020-04-27 11:00:23 -07:00
Nikita Shulga
47c4dca1ab Remove python-2 or python<3.5 checks from unit tests (#37252)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37252

Test Plan: CI

Differential Revision: D21241083

Pulled By: malfet

fbshipit-source-id: 44164b822f7905288abb2beda0175d2162d86143
2020-04-24 17:42:04 -07:00
Zachary DeVito
b6bb644e41 Fix long line splitting issue in python_print (#37088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37088

For an inlined expression tree like `(e_0, (e_1, e_long))`, the previous
algorithm only scanned the same statement as `e_long`, splitting the
inlined expressions across lines. Because it did not scan `e_0`, `e_0`
would still get emitted inline, causing it to reverse order with `e_1` and
`e_long`. The new algorithm scans starting at `e_long` and goes all
the way back up the expression until it reaches the end of the inlined
statement. Caching of what has already been scanned has been added so that
if there were a second long expression `e_long2` after `e_long`, it would not
rescan and re-inline the statements that were already split.

Test Plan: Imported from OSS

Differential Revision: D21180394

Pulled By: zdevito

fbshipit-source-id: 4d142c83a04c89a47d04282f67a513f82cf153c0
2020-04-24 15:14:39 -07:00
moto
5a27ec09b8 Add Inverse Short Time Fourier Transform in ATen native (#35569)
Summary:
Ported `torchaudio`'s implementation (tests and documentation as well) to ATen.

Note
 - Batch packing/unpacking is performed in Python. The ATen implementation expects a 4D input tensor.
 - `hop_length` is initialized in the same way as in the `stft` implementation. [Torchaudio's version tried to mimic the same behavior but is slightly different](7da61a4bee/torchaudio/functional.py (L152-L157)).

Closes https://github.com/pytorch/pytorch/issues/34827
Relates https://github.com/pytorch/pytorch/issues/3775
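
A round-trip sketch using the complex-tensor `stft`/`istft` interface of current PyTorch (at the time, the ATen implementation took the real-valued (..., freq, frame, 2) layout):

```python
import torch

x = torch.randn(4000)
n_fft, hop = 400, 100
window = torch.hann_window(n_fft)
spec = torch.stft(x, n_fft, hop_length=hop, window=window, return_complex=True)
# Inverse STFT reconstructs the signal; `length` trims padding back to the input size.
recon = torch.istft(spec, n_fft, hop_length=hop, window=window, length=x.numel())
assert torch.allclose(x, recon, atol=1e-5)
```
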
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35569

Differential Revision: D21178090

Pulled By: mthrok

fbshipit-source-id: 2701a8b241a36a6fb1b740c2fb2b07cb938185d4
2020-04-24 12:14:55 -07:00
Vishwak Srinivasan
fd5b5cd604 Allowing casting str to int in JIT (#36016)
Summary:
Changelog:
- Allow int(str) in TorchScript
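
A sketch of the new cast:

```python
import torch

@torch.jit.script
def parse(s: str) -> int:
    return int(s)  # casting str to int now compiles in TorchScript

assert parse("42") == 42
```
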
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36016

Test Plan:
- Added tests in test_jit.py

Closes https://github.com/pytorch/pytorch/issues/35948

Differential Revision: D21076438

Pulled By: driazati

fbshipit-source-id: d0753dc0e1c79f4f943c303b58b2d228856ba793
2020-04-23 14:26:24 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Mikhail Zolotukhin
359e7f4bba Teach IRParser to parse strides along with sizes in a tensor type. (#36951)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36951

Test Plan: Imported from OSS

Differential Revision: D21139940

Pulled By: ZolotukhinM

fbshipit-source-id: b56a1fddfc9de4684da3ba9a462e344d0985e8b6
2020-04-21 17:27:15 -07:00
Mike Ruberry
bcdb0727c2 Revert D20907254: Fix long line splitting issue in python_print
Test Plan: revert-hammer

Differential Revision:
D20907254

Original commit changeset: ebfc1a4eefc2

fbshipit-source-id: 76440a8649a17728c50e2f3eeb3744a2245f6daf
2020-04-21 16:24:32 -07:00
Zachary DeVito
bf676682e7 Fix long line splitting issue in python_print (#36188)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36188

* Need to remove the n^2 behavior when scanning whether to split or not;
  otherwise long inline chains take a long time re-scanning.

Test Plan: Imported from OSS

Differential Revision: D20907254

Pulled By: zdevito

fbshipit-source-id: ebfc1a4eefc26d5806381e7afd75b7a9cd4cde97
2020-04-21 15:46:42 -07:00
Mike Ruberry
71ec8b2002 Switches test_jit to use float32 as its default scalar type (#36982)
Summary:
Our test suite used to set double as its default scalar type, and when it was switched to not do so (to be more consistent with how users experience PyTorch), a few tests still had to set the default scalar type to double to function properly. Now that the JIT no longer creates double tensors so frequently, it appears that test_jit no longer needs to set double as its default scalar type either.
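
The mechanism in question, for reference (a sketch):

```python
import torch

# What the suite previously did at startup:
torch.set_default_dtype(torch.double)
assert torch.ones(3).dtype == torch.float64

# With this change, test_jit runs under the ordinary default:
torch.set_default_dtype(torch.float32)
assert torch.ones(3).dtype == torch.float32
```
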
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36982

Differential Revision: D21152120

Pulled By: mruberry

fbshipit-source-id: ea6d3c1ad55552dc5affa1fe1bd0e5189849e6d7
2020-04-21 11:23:28 -07:00
Brian Vaughan
54ed6fd3ee Use both absolute and relative tolerance in testing (#34258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34258

This PR allows both atol and rtol to be specified, uses defaults based on the prior analysis (spreadsheet attached to https://github.com/pytorch/pytorch/pull/32538), but retains the absolute tolerance behavior in cases where precision was previously specified explicitly.
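
The combined check has the usual mixed-tolerance form (a sketch; the per-dtype defaults shown are assumptions drawn from the linked analysis):

```python
import torch

def is_close(actual: torch.Tensor, expected: torch.Tensor,
             rtol: float = 1.3e-6, atol: float = 1e-5) -> bool:
    # Mixed tolerance, as in torch.allclose:
    #   |actual - expected| <= atol + rtol * |expected|
    return bool(((actual - expected).abs()
                 <= atol + rtol * expected.abs()).all())
```
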

Test Plan: Imported from OSS

Differential Revision: D21110255

Pulled By: nairbv

fbshipit-source-id: 57b3a004c7d5ac1be80ee765f03668b1b13f4a7e
2020-04-19 06:16:49 -07:00
Wanchao Liang
24aac32171 [jit] Add dictionary as output of tracer (#36696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36696

This PR adds dictionaries as a supported output of the tracer, gated by the strict
flag.
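
A sketch of the newly supported output (the dict-returning function is hypothetical):

```python
import torch

def f(x):
    return {"out": x + 1}

# With strict=False, the tracer accepts a dict as the traced output structure.
traced = torch.jit.trace(f, (torch.ones(1),), strict=False)
print(traced(torch.ones(1))["out"])
```
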

Test Plan: Imported from OSS

Reviewed By: houseroad

Differential Revision: D21056962

Pulled By: wanchaol

fbshipit-source-id: ace498182d636de853cf8a1efb3dc77f5d53db29
2020-04-16 18:12:38 -07:00
David Reiss
63e5058c88 Fix naming of "strides" method in TensorType (#36727)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36727

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.
Lint.

Differential Revision: D21076697

Pulled By: dreiss

fbshipit-source-id: dbd18cb41c7b26479984a7a7b12ad41a4c5b7658
2020-04-16 17:07:27 -07:00