Commit Graph

1913 Commits

jiej
e488e3c443 Exposing linear layer to fuser (#50856)
Summary:
1. Enable linear in autodiff;
2. Remove control flow in Python for linear.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50856

Reviewed By: pbelevich

Differential Revision: D26105797

Pulled By: eellison

fbshipit-source-id: 6f7cedb9f6e3e46daa24223d2a6080880498deb4
2021-02-02 15:39:01 -08:00
Meghan Lele
751c30038f [JIT] Properly convert Python strings implicitly to device (#51340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51340

**Summary**
`toIValue` assumes that any value passed for an argument of type
`torch.device` is a valid device object, even when it is not. This can
lead to device type arguments of functions being assigned incorrect
values (see #51098).

This commit adds an explicit check, using `THPDevice_Check`, that the
passed-in object is indeed a `torch.device`, and only then converts it
to an `IValue`. Since implicit conversion from strings to devices is
generally allowed, if `THPDevice_Check` fails, the object is assumed to
be a string and an `IValue` containing a `c10::Device` constructed from
that string is returned.
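
A minimal sketch of the behavior this change guards (hypothetical example, not from the PR):

```
import torch

@torch.jit.script
def to_device(x: torch.Tensor, d: torch.device) -> torch.Tensor:
    return x.to(d)

x = torch.zeros(2)
to_device(x, torch.device("cpu"))  # a real device object passes THPDevice_Check
to_device(x, "cpu")                # a valid string is implicitly converted to c10::Device
# to_device(x, "not a device")    # an invalid string now raises instead of being silently accepted
```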

**Test Plan**
This commit adds a unit test to `test_jit.py` to check that invalid
strings passed as devices are no longer silently accepted.

**Fixes**
This commit fixes #51098.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26187190

Pulled By: SplitInfinity

fbshipit-source-id: 48c990203431da30f9f09381cbec8218d763325b
2021-02-02 10:57:56 -08:00
Ansley Ussery
09e48dbd33 Handle error during dict expansion (#51374)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51374

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D26155995

Pulled By: ansley

fbshipit-source-id: 04e924cb641565341c570c6cf5e5eec42e4f9c8b
2021-01-29 18:46:10 -08:00
anjali411
f9f22c8b5c Add serialization logic for complex numbers (#51287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51287

This reverts commit dfdb1547b9.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26131165

Pulled By: anjali411

fbshipit-source-id: 047167fac594ddb670c5e169446e90e74991679a
2021-01-28 17:25:35 -08:00
Nikitha Malgi
b955da3310 Adding correct error message for for..else (#51258)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51040

========
Add an error message for the for..else statement in TorchScript
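
A hedged sketch of the construct that now produces a proper error (exact message wording per the PR):

```
import torch
from typing import List

def find(xs: List[int]) -> int:
    for i, x in enumerate(xs):
        if x == 0:
            return i
    else:            # for..else is not supported in TorchScript
        return -1

# torch.jit.script(find) now reports that for..else is unsupported,
# instead of failing with an unrelated error.
```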

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51258

Test Plan:
=====
pytest -k test_for_else test/test_jit.py

Reviewed By: pbelevich

Differential Revision: D26125148

Pulled By: nikithamalgifb

fbshipit-source-id: 82b67ab1c68e29312162ff5d73b82c8c0c9553df
2021-01-28 08:17:31 -08:00
Lillian Johnson
3b6f30824c OpInfo JIT op.output_func handling support (#50775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50775

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25964541

Pulled By: Lilyjjo

fbshipit-source-id: 8cf1ee9191d526cc46ae283f38c2d64bd60afdb2
2021-01-27 15:04:23 -08:00
Nikita Shulga
00adc7b07f Fix more JIT tests under Python-3.9 (#51182)
Summary:
Mostly replaces `global Foo` with `make_global(Foo)`.
The only real fix is generating the Subscript annotation, which is a follow-up from https://github.com/pytorch/pytorch/pull/48676

Fixes https://github.com/pytorch/pytorch/issues/49617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51182

Reviewed By: gmagogsfm

Differential Revision: D26095244

Pulled By: malfet

fbshipit-source-id: 0e043d9a2cf43fff71dfbb341f708cd7af87c39a
2021-01-27 10:57:03 -08:00
Thomas Viehmann
ac0a3cc5fd Merge CompilationUnit from torch._C and torch.jit (#50614)
Summary:
This simplifies our handling and allows easily passing CompilationUnits from Python to C++-defined functions via pybind11.

Discussed on Slack with SplitInfinity

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50614

Reviewed By: anjali411

Differential Revision: D25938005

Pulled By: SplitInfinity

fbshipit-source-id: 94aadf0c063ddfef7ca9ea17bfa998d8e7b367ad
2021-01-25 11:06:40 -08:00
Peter Bell
47f0bda3ef Improve complex support in common_nn test machinery (#50593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50593

There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for complex
types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors.

Also found a few places that explicitly cast inputs to floating point types,
which would drop the imaginary component before running the test.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25954050

Pulled By: mruberry

fbshipit-source-id: 1fa8e5af233aa095c839d5e2f860564baaf92aef
2021-01-22 09:44:45 -08:00
Lillian Johnson
3b88e1b0e7 [WIP] JIT Static Hooks: python tests (#49546)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49546

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D25771119

Pulled By: Lilyjjo

fbshipit-source-id: bf8a8e20f790691d3ff58fa9c8d0d9ab3e8322c4
2021-01-20 09:12:53 -08:00
Guilherme Leobas
a9e46f1413 add type annotations to torch.nn.modules.container (#48969)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48969

Reviewed By: mrshenli

Differential Revision: D25728987

Pulled By: walterddr

fbshipit-source-id: 02c3aa2078f4ed6cc6edd90ffe1177d789c328a9
2021-01-19 15:12:17 -08:00
Nikolay Korovaiko
8e60bf9034 add RequiresGradCheck (#50392)
Summary:
This change improves perf by 3-4% on fastrnns.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50392

Reviewed By: izdeby

Differential Revision: D25891392

Pulled By: Krovatkin

fbshipit-source-id: 44d9b6907d3975742c9d77102fe6a85aab2c08c0
2021-01-15 16:50:42 -08:00
Guilherme Leobas
0d981eea6c add type annotations to torch.nn.modules.conv (#49564)
Summary:
closes gh-49563

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49564

Reviewed By: albanD

Differential Revision: D25917441

Pulled By: walterddr

fbshipit-source-id: 491dc06cfc1bbf694dfd9ccefca4f55488a931b2
2021-01-15 11:16:11 -08:00
Guilherme Leobas
374951d102 Add type annotations to torch.nn.modules.padding (#49494)
Summary:
Closes gh-49492

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49494

Reviewed By: mruberry

Differential Revision: D25723837

Pulled By: walterddr

fbshipit-source-id: 92af0100f6d9e2bb25b259f5a7fe9d449ffb6443
2021-01-12 15:34:28 -08:00
Elias Ellison
a69f008cb7 [JIT] Factor out peephole to own test file (#50220)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50220

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856263

Pulled By: eellison

fbshipit-source-id: f3d918d860e64e788e0bb9b9cb85125660f834c6
2021-01-12 11:39:06 -08:00
Elias Ellison
035229c945 [JIT] Frozen Graph Conv-BN fusion (#50074)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50074

Adds Conv-BN fusion for models that have been frozen. I haven't explicitly tested perf yet, but it should be equivalent to the results from Chillee's PR [here](https://github.com/pytorch/pytorch/pull/47657) and [here](https://github.com/pytorch/pytorch/pull/47657#issuecomment-725752765). Click on the PR for details, but it's a good speedup.

In a later PR in the stack I plan on making this optimization on by default as part of `torch.jit.freeze`. I will also add a peephole pass in a later PR so that conv -> batchnorm2d doesn't generate a conditional checking the number of dims.
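
As a rough sketch of what Conv-BN fusion does on a frozen graph (standard folding math, not the PR's code), the BatchNorm affine transform is folded into the convolution's weight and bias:

```
import torch

def fold_conv_bn(conv_w, conv_b, bn_mean, bn_var, bn_gamma, bn_beta, eps=1e-5):
    # BN(conv(x)) = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    # is equivalent to a single conv with scaled weight and shifted bias.
    scale = bn_gamma / torch.sqrt(bn_var + eps)
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)   # scale each output channel
    fused_b = (conv_b - bn_mean) * scale + bn_beta
    return fused_w, fused_b
```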

Zino was working on freezing and left the team, so I'm not really sure who should be reviewing this, but I don't care too much so long as I get a review.

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856261

Pulled By: eellison

fbshipit-source-id: da58c4ad97506a09a5c3a15e41aa92bdd7e9a197
2021-01-12 11:37:32 -08:00
Thomas Viehmann
ea087e2d92 JIT: guard DifferentiableGraph node (#49433)
Summary:
This adds guarding for DifferentiableGraph nodes in order to not depend on …
It also bails out on required gradients for the CUDA fuser.

Fixes https://github.com/pytorch/pytorch/issues/49299

I still need to look into a handful of failing tests, but maybe it can serve as a basis for discussion.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49433

Reviewed By: ngimel

Differential Revision: D25681374

Pulled By: Krovatkin

fbshipit-source-id: 8e7be53a335c845560436c0cceeb5e154c9cf296
2021-01-08 20:01:27 -08:00
Richard Barnes
ec6d29d6fa Drop unused imports from test (#49973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49973

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727350

fbshipit-source-id: 237ec4edd85788de920663719173ebec7ddbae1c
2021-01-07 12:09:38 -08:00
Nikitha Malgi
12b73fdbbf Adding JIT support for cuda streams and events (#48020)
Summary:
=======

This PR addresses the following:

 * Adds JIT support for CUDA Streams
 * Adds JIT support for CUDA Events
 * Adds JIT support for CUDA Stream context manager

Testing:
======

python test/test_jit.py -v TestCUDA
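
A sketch in terms of the eager CUDA stream/event API that this PR makes usable in TorchScript (per the summary above; the exact scripted signatures are defined in the PR):

```
import torch

def overlap(x: torch.Tensor) -> torch.Tensor:
    s = torch.cuda.Stream()
    e = torch.cuda.Event()
    with torch.cuda.stream(s):   # the stream context manager, now scriptable
        y = x * 2
        e.record(s)
    e.synchronize()
    return y
```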

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48020

Reviewed By: navahgar

Differential Revision: D25725749

Pulled By: nikithamalgifb

fbshipit-source-id: b0addeb49630f8f0c430ed7badeca43bb9d2535c
2020-12-29 20:24:57 -08:00
peter
8d7338e820 Enable tests using named temp files on Windows (#49640)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49640

Reviewed By: ngimel

Differential Revision: D25681548

Pulled By: malfet

fbshipit-source-id: 0e2b25817c98d749920cb2b4079033a2ee8c1456
2020-12-29 09:57:35 -08:00
Ansley Ussery
58fe67967c Support the in operator with str (#47057)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47057
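
A minimal sketch of the newly supported construct (hypothetical example):

```
import torch

@torch.jit.script
def contains(haystack: str, needle: str) -> bool:
    return needle in haystack   # `in` on strings now works in TorchScript
```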

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24863370

Pulled By: ansley

fbshipit-source-id: 5d17165b06052f0a4676537c5f6757083185a591
2020-12-28 10:26:24 -08:00
Erjia Guan
b80a36614f Fix return type Any for Ternary ops (#49165)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49165

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D25463694

Pulled By: ejguan

fbshipit-source-id: 5cf907e8de6eeb0171d61175a60fac9812b76c6c
2020-12-21 10:12:41 -08:00
Peter Bell
5c25f8faf3 stft: Change require_complex warning to an error (#49022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

**BC-breaking note**:

Previously torch.stft took an optional `return_complex` parameter that indicated whether the output would be a floating point tensor or a complex tensor. By default `return_complex` was False to be consistent with the previous behavior of torch.stft. This PR changes this behavior so `return_complex` is a required argument.
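
A sketch of the new requirement (assuming a real-valued input signal):

```
import torch

x = torch.randn(1024)
# return_complex must now be passed explicitly:
spec = torch.stft(x, n_fft=256, return_complex=True)   # complex tensor output
# torch.stft(x, n_fft=256) now raises an error instead of warning.
```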

**PR Summary**:

* **#49022 stft: Change require_complex warning to an error**

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25658906

Pulled By: mruberry

fbshipit-source-id: 11932d1102e93f8c7bd3d2d0b2a607fd5036ec5e
2020-12-20 14:48:25 -08:00
Nikitha Malgi
e17f0fd676 Adding support for bitwise augassignment operators (#44621)
Summary:
========
Fixes #42915

This commit adds support for bitwise augmented-assignment shorthands in TorchScript, i.e. `|=`, `&=`, `^=`, `<<=`, `>>=`, `**=`
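
A minimal sketch of the now-supported shorthands (hypothetical example):

```
import torch

@torch.jit.script
def mask(x: int) -> int:
    x <<= 1     # bitwise augmented assignments now compile in TorchScript
    x |= 4
    x &= 0xFF
    x >>= 1
    return x
```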

Testing:
======
This commit also adds test for the above fix in test_jit.py
The test can be invoked by
pytest -k augassign test/test_jit.py


Pull Request resolved: https://github.com/pytorch/pytorch/pull/44621

Reviewed By: mrshenli

Differential Revision: D23906344

Pulled By: nikithamalgifb

fbshipit-source-id: 4c93a7430a625f698b163609ccec15e51417d564
2020-12-18 12:07:54 -08:00
albanD
ccd646696b Fix Module backward hooks for all Tensor inputs/outputs (#46163)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/598

This is BC-breaking as we now explicitly don't call the hook when there are no Tensors at the top level of the output.
This feature was not working anyway, as the returned grad_input/grad_output were wrong (not respecting the output structure, and passing wrong inputs for multi-Node Modules).

This is also BC-breaking as we now report the correct gradients for `nn.Module`s that contain multiple autograd `Node`s, whereas we used to return bad results before.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46163

Reviewed By: ailzhang, mruberry

Differential Revision: D24894180

Pulled By: albanD

fbshipit-source-id: e1b5d193d2818eb2f51e2a2722c7405c8bd13c2b
2020-12-18 09:04:36 -08:00
Ansley Ussery
d17dc37112 Add dict comprehension (#47774)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47774
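
A minimal sketch of the newly supported syntax (hypothetical example):

```
import torch
from typing import Dict, List

@torch.jit.script
def squares(xs: List[int]) -> Dict[int, int]:
    return {x: x * x for x in xs}   # dict comprehensions now compile
```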

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25615464

Pulled By: ansley

fbshipit-source-id: 10bba6f70e812fa580cbbbf097e93de7142484cc
2020-12-17 15:25:30 -08:00
Mike Ruberry
f5b68e74d7 Revert D25574962: [pytorch][PR] Updated derivative rules for complex svd and pinverse
Test Plan: revert-hammer

Differential Revision:
D25574962 (9955355853)

Original commit changeset: 832b61303e88

fbshipit-source-id: d73f77f3e51b0f535dad6d21c5bebf8d41a6bfbd
2020-12-17 00:59:43 -08:00
Nikitha Malgi
26e076d19e Adding fix for invalid annotation types for dictionary (#49425)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49362

**Summary:**
This PR fixes the issue where invalid annotation types are used for a dictionary.
An "unsupported annotation" assertion message is now generated for all invalid annotations.

**Test Case**:
python test/test_jit.py TestJit.test_dict_invalid_annotations

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49425

Reviewed By: navahgar

Differential Revision: D25601578

Pulled By: nikithamalgifb

fbshipit-source-id: 91633e3d0891bdcb5402f044a74d02fe352ecd6f
2020-12-17 00:28:29 -08:00
Mike Ruberry
47c65f8223 Revert D25569586: stft: Change require_complex warning to an error
Test Plan: revert-hammer

Differential Revision:
D25569586 (5874925b46)

Original commit changeset: 09608088f540

fbshipit-source-id: 6a5953b327a4a2465b046e29bb007a0c5f4cf14a
2020-12-16 16:21:52 -08:00
Peter Bell
5874925b46 stft: Change require_complex warning to an error (#49022)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49022

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25569586

Pulled By: mruberry

fbshipit-source-id: 09608088f540c2c3fc70465f6a23f2aec5f24f85
2020-12-16 12:47:56 -08:00
Ivan Yashchuk
9955355853 Updated derivative rules for complex svd and pinverse (#47761)
Summary:
Updated `svd_backward` to work correctly for complex-valued inputs.
Updated `common_methods_invocations.py` to take dtype, device arguments for input construction.
Removed `test_pinverse` from `test_autograd.py`, it is replaced by entries to `common_methods_invocations.py`.
Added `svd` and `pinverse` to list of complex tests.

References for complex-valued SVD differentiation:

- https://giggleliu.github.io/2019/04/02/einsumbp.html
- https://arxiv.org/abs/1909.02659

The derived rules assume gauge invariance of loss functions, so the result would not be correct for loss functions that are not gauge invariant.
https://re-ra.xyz/Gauge-Problem-in-Automatic-Differentiation/
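
For reference, this is the standard gauge freedom of the complex SVD (a general fact, not specific to this PR): each singular pair is determined only up to a joint phase, so a loss must be invariant under

```
A = U \Sigma V^{H}, \qquad
U' = U\,\operatorname{diag}(e^{i\phi_1}, \dots, e^{i\phi_k}), \quad
V' = V\,\operatorname{diag}(e^{i\phi_1}, \dots, e^{i\phi_k})
\;\Longrightarrow\; U' \Sigma V'^{H} = A .
```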

The same rule is implemented in Tensorflow and [BackwardsLinalg.jl](https://github.com/GiggleLiu/BackwardsLinalg.jl).

Ref. https://github.com/pytorch/pytorch/issues/33152

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47761

Reviewed By: izdeby

Differential Revision: D25574962

Pulled By: mruberry

fbshipit-source-id: 832b61303e883ad3a451b84850ccf0f36763a6f6
2020-12-16 12:32:22 -08:00
Chen Lai
717f31d984 Remove unused reconstruct_scopes function (#48822)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48822

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25325012

Pulled By: cccclai

fbshipit-source-id: 86ea4c0b2926257c0f82aa05cbcd83278b1b67f7
2020-12-11 23:43:36 -08:00
Tugsbayasgalan Manlaibaatar
42c78ed745 Tuple Slice with both negative and positive stepped size (#48660)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48660

We used to support tuple slicing only without a step size; this PR extends the feature to support arbitrary step sizes. We do this by manually reconstructing a new tuple in the IR instead of relying on the TupleSlice prim.
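
A minimal sketch of the extended slicing (hypothetical example):

```
import torch
from typing import Tuple

@torch.jit.script
def evens(t: Tuple[int, int, int, int]) -> Tuple[int, int]:
    return t[::2]    # arbitrary (including negative) step sizes now work
```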

Test Plan:
python tests

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D25359336

fbshipit-source-id: 28cde536f28dd8a00607814b2900765e177f0ed7
2020-12-11 11:00:38 -08:00
Peter Bell
533c837833 Register OpInfos for torch.fft transforms (#48427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48427

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25266218

Pulled By: mruberry

fbshipit-source-id: 406e7ed5956bc7445daf8c027c9b4d2c8ff88fa1
2020-12-07 17:19:29 -08:00
Ivan Yashchuk
cb285080b0 Added computing matrix condition numbers (linalg.cond) (#45832)
Summary:
This PR adds `torch.linalg.cond` for NumPy compatibility.

Ref https://github.com/pytorch/pytorch/issues/42666.
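
A minimal usage sketch (mirroring numpy.linalg.cond):

```
import torch

A = torch.randn(3, 3)
c2 = torch.linalg.cond(A)           # 2-norm condition number (the default)
c1 = torch.linalg.cond(A, p=1)      # 1-norm
cf = torch.linalg.cond(A, p='fro')  # Frobenius norm
```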

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45832

Reviewed By: ngimel

Differential Revision: D25183690

Pulled By: mruberry

fbshipit-source-id: a727959bfec2bc2dc36df59d9ef79c0534b68194
2020-12-04 02:23:57 -08:00
Lillian Johnson
c465602d78 Refactor existing JIT testing utils to enable new OpInfo test suite to reuse existing logic (#47695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47695

The method_tests from common_methods_invocations.py are being migrated into a new OpInfo class-based testing framework. The work in this commit pulls out the functions embedded in the old method_tests logic and places them in a location that both the old method_tests and the OpInfo tests can use.

Specifically: created torch/testing/_internal/common_jit.py from functions and methods in torch/testing/_internal/jit_utils.py and test/test_jit.py. Also created a new intermediate class, JitCommonTestCase, to house the moved methods, and slightly modified jit_metaprogramming_utils.py to work for OpInfo tests.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25212437

Pulled By: Lilyjjo

fbshipit-source-id: 97bc52c95d776d567750e7478fac722da30f4985
2020-12-02 19:54:30 -08:00
Ilia Cherniavskii
f7a8bf2855 Use libkineto in profiler (#46470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46470

Adding ability to use Kineto (CUPTI) to profile CUDA kernels

Test Plan:
USE_KINETO=1 USE_CUDA=1 USE_MKLDNN=1 BLAS=MKL BUILD_BINARY=1 python setup.py develop install
python test/test_profiler.py

python test/test_autograd.py -k test_profile
python test/test_autograd.py -k test_record

```
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
                       Memcpy HtoD (Pageable -> Device)         0.00%       0.000us         0.00%       0.000us       0.000us       2.000us        33.33%       2.000us       1.000us             2
                                      sgemm_32x32x32_NN         0.00%       0.000us         0.00%       0.000us       0.000us       2.000us        33.33%       2.000us       2.000us             1
void at::native::vectorized_elementwise_kernel<4, at...         0.00%       0.000us         0.00%       0.000us       0.000us       1.000us        16.67%       1.000us       1.000us             1
                       Memcpy DtoH (Device -> Pageable)         0.00%       0.000us         0.00%       0.000us       0.000us       1.000us        16.67%       1.000us       1.000us             1
                                            aten::randn         5.17%      74.000us         6.71%      96.000us      48.000us       0.000us         0.00%       0.000us       0.000us             2
                                            aten::empty         1.33%      19.000us         1.33%      19.000us       4.750us       0.000us         0.00%       0.000us       0.000us             4
                                          aten::normal_         1.05%      15.000us         1.05%      15.000us       7.500us       0.000us         0.00%       0.000us       0.000us             2
                                               aten::to        77.90%       1.114ms        91.61%       1.310ms     436.667us       0.000us         0.00%       3.000us       1.000us             3
                                    aten::empty_strided         2.52%      36.000us         2.52%      36.000us      12.000us       0.000us         0.00%       0.000us       0.000us             3
                                            aten::copy_         2.73%      39.000us        11.19%     160.000us      53.333us       0.000us         0.00%       3.000us       1.000us             3
                                        cudaMemcpyAsync         4.34%      62.000us         4.34%      62.000us      20.667us       0.000us         0.00%       0.000us       0.000us             3
                                  cudaStreamSynchronize         1.61%      23.000us         1.61%      23.000us       7.667us       0.000us         0.00%       0.000us       0.000us             3
                                               aten::mm         0.21%       3.000us         7.20%     103.000us     103.000us       0.000us         0.00%       2.000us       2.000us             1
                                           aten::stride         0.21%       3.000us         0.21%       3.000us       1.000us       0.000us         0.00%       0.000us       0.000us             3
                                       cudaLaunchKernel         2.45%      35.000us         2.45%      35.000us      17.500us       0.000us         0.00%       0.000us       0.000us             2
                                              aten::add         0.49%       7.000us         4.27%      61.000us      61.000us       0.000us         0.00%       1.000us       1.000us             1
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------
```

benchmark: https://gist.github.com/ilia-cher/a5a9eb6b68504542a3cad5150fc39b1a
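
A minimal sketch of producing a table like the one above via the autograd profiler (the Kineto/CUPTI path itself is enabled through the build flags shown in the Test Plan):

```
import torch

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
print(prof.key_averages().table(sort_by="cuda_time_total"))
```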

Reviewed By: Chillee

Differential Revision: D25142223

Pulled By: ilia-cher

fbshipit-source-id: b0dff46c28da5fb0a8e01cf548aa4f2b723fde80
2020-11-25 04:32:16 -08:00
Elias Ellison
d1b8da75e6 [JIT] Metacompile boolean constants (#46721)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46703

Previously, we would compile one side of an if-statement if it was a type-based expression we could statically resolve. I think it's reasonable to extend this metacompilation to booleans that are constant at compile time. There have been some instances where I've recommended unintuitive workarounds due to not having this behavior.

This is also possibly needed if we add boolean literals to schema declarations, which is a feature that might be needed to cleanup our `boolean_dispatch` mechanism.
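
A hedged sketch of what metacompiling constant booleans allows (assuming a `Final` module constant; the names and the unscriptable call are illustrative):

```
import torch
from typing import Final

class M(torch.nn.Module):
    fancy: Final[bool]

    def __init__(self, fancy: bool):
        super().__init__()
        self.fancy = fancy

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.fancy:
            # self.fancy is a compile-time constant, so when it is False
            # this branch need not compile at all.
            return x.fancy_unscriptable_op()  # hypothetical
        return x

torch.jit.script(M(False))  # the dead branch is skipped rather than compiled
```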

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46721

Reviewed By: ppwwyyxx

Differential Revision: D25008862

Pulled By: eellison

fbshipit-source-id: 5bc60a18f1021c010cb6abbeb5399c669fe04312
2020-11-20 11:17:15 -08:00
Elias Ellison
4380934b9b [JIT] Don't use specialized tensor type (#46130)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/46122

For `Any`, we infer the type of the ivalue to set the ivalue's type tag. When we saw a Tensor, we would use a specialized Tensor type, so when `Dict[str, Tensor]` was passed in as an `Any` arg it would be inferred as `Dict[str, Float(2, 2, 2, 2)]`, which breaks runtime `isinstance` checking.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46130

Reviewed By: glaringlee

Differential Revision: D24261447

Pulled By: eellison

fbshipit-source-id: 8a2bb26ce5b6c56c8dcd8db79e420f4b5ed83ed5
2020-11-13 18:34:40 -08:00
Tugsbayasgalan Manlaibaatar
29184f86b0 Correctly print out sign of near-zero double values (#47081)
Summary:
Inside IValue.h, we previously printed -0.0 as 0.0, which caused some inconsistency when using -0.0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47081

Test Plan:
A new test case inside test_jit that divides a tensor by -0. and checks if it outputs -inf for all modes.

Fixes https://github.com/pytorch/pytorch/issues/46848

Reviewed By: mrshenli

Differential Revision: D24688572

Pulled By: gmagogsfm

fbshipit-source-id: 01a9d3f782e0711dd10bf24e6f3aa62eee72c895
2020-11-07 01:25:47 -08:00
Zachary DeVito
ecfa7a27b8 [jit] fix traced training attribute (#47211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47211

The attribute is getting shadowed by the default one set on all modules,
and the __setattr__ on the TracedModule object prevents setting it correctly.

    import torch

    inp = torch.zeros(1, 3, 224, 224)
    model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
    model.eval()
    print(model.training)
    with torch.no_grad():
        traced = torch.jit.trace(model, inp)
    print(traced.training)
    traced.eval()
    print(traced.training)
    traced.training = False
    print(traced.training)
    torch.jit.freeze(traced)

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D24686690

Pulled By: zdevito

fbshipit-source-id: 9c1678dc68e9bf83176e9f5a20fa8f6bff5d69a0
2020-11-02 17:28:49 -08:00
tmanlaibaatar
fee585b5a3 Correctly mark unannotated NamedTuple field to be inferred TensorType (#46969)
Summary:
If there is no annotation given, we want to show users that the type is inferred

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46969

Test Plan:
Added a new test case that throws an error with the expected error message

Fixes https://github.com/pytorch/pytorch/issues/46326

Reviewed By: ZolotukhinM

Differential Revision: D24614450

Pulled By: gmagogsfm

fbshipit-source-id: dec555a53bfaa9cdefd3b21b5142f5e522847504
2020-10-29 12:07:40 -07:00
Michael Suo
dc8176356e Various cleanups to ir_emitter and friends (#46686)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46686

I was trying to page this code back in after a while and some things
stuck out as unnecessarily confusing.

1. Improve documentation of closures and fork stuff to be more accurate
to how we use them today.
2. Change `prim::LocalVariableScope` to `prim::ListComprehension`. It is
only ever used for list comprehensions, and in general the nodes
emitted by `ir_emitter` should correspond to concrete operations or
language features rather than semantic constraints.
3. Change the somewhat mysterious "inputs" and "attributes" argument
names throughout the codebase to be the more obvious "args" and "kwargs"
that they generally represent (I think "inputs" and "attributes" come
from the AST naming).

Test Plan: Imported from OSS

Reviewed By: navahgar, jamesr66a

Differential Revision: D24464197

Pulled By: suo

fbshipit-source-id: 1f4b1475b58b5690a0b204e705caceff969533b4
2020-10-28 16:28:05 -07:00
anjali411
d94bd998ec Update backward formulas (Re #44444) (#46275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46275

Re #44444

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D24285785

Pulled By: anjali411

fbshipit-source-id: c60ecd4fe4f144132085f2c91d3b950e92b2a491
2020-10-25 19:40:59 -07:00
Yanan Cao
13b7855f33 Support hashing of various data types by implementing generic hashing for IValues (#46441)
Summary:
It used to be that TorchScript only supported hashing of `int`, `float` and `str`. This PR adds hashing for many other types including `Tuple`, `bool`, `device` by implementing generic hashing on IValue.

* Tensor hashing follows eager behavior, which is identity-based (hash according to pointer address rather than tensor content).

Fixes https://github.com/pytorch/pytorch/issues/44038

This is based on suo's https://github.com/pytorch/pytorch/issues/44047, with some cleaning, more tests and fixing BC check issue.
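
A minimal sketch of the newly hashable types (hypothetical example):

```
import torch
from typing import Tuple

@torch.jit.script
def hashes(d: torch.device, t: Tuple[int, str]) -> Tuple[int, int, int]:
    # hash() now works on device, Tuple, and bool, not just int/float/str
    return hash(d), hash(t), hash(True)
```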

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46441

Reviewed By: robieta

Differential Revision: D24440713

Pulled By: gmagogsfm

fbshipit-source-id: 851f413f99b6f65084b551383ad21e558e7cabeb
2020-10-23 21:26:01 -07:00
Nikita Vedeneev
c31ced4246 make torch.lu differentiable. (#46284)
Summary:
As per title. Limitations: only for batches of square full-rank matrices.

CC albanD

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46284

Reviewed By: zou3519

Differential Revision: D24448266

Pulled By: albanD

fbshipit-source-id: d98215166268553a648af6bdec5a32ad601b7814
2020-10-23 10:13:46 -07:00
albanD
27e2ea4cea Make add_relu an internal function (#46676)
Summary:
Cleanup for 1.7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46676

Reviewed By: gchanan

Differential Revision: D24458565

Pulled By: albanD

fbshipit-source-id: b1e4b4630233d3f1a4bac20e3077411d1ae17f7b
2020-10-22 18:08:15 -07:00
Alexander Grund
93719440b8 Replace map(lambda constructs (#46462)
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal

Makes them more readable and possibly faster. Care has to be taken because `list(map(...))` creates the full list immediately, while `(x for x in xs)` is a generator expression which gets evaluated lazily. This is a benefit in cases where it is not required to actually create the list of values in memory (e.g. when passing to `tuple`, `extend`, or `join`).
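
The shape of the rewrite, sketched:

```
xs = [1, 2, 3]

ys = list(map(lambda x: x * 2, xs))   # before
ys = [x * 2 for x in xs]              # after: clearer, no per-item lambda call

total = sum(x * 2 for x in xs)        # generator expression when no list is
                                      # needed in memory
```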

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462

Reviewed By: zou3519

Differential Revision: D24422343

Pulled By: ezyang

fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
2020-10-22 09:50:22 -07:00
Rahul Nambiar
adbb50ea67 Enabling alias annotation checks for all operations during autograd tests (#46601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46601

* except excluded tests and magic methods.

https://github.com/pytorch/pytorch/issues/38731

Previously, we'd only run these tests for inplace operations. Since this is a lot more tests, fixed these issues that came up when running them -
- Updated schema of conj() to reflect existing behaviour.
- Updated deepEquals method in check_alias_annotation.cpp to re-use the overloaded == operator. Previous implementation did not cover all types of IValues.
- Corrected the order inputs are passed in during autograd testing of 'view' & 'reshape'.
- Subbed out `aten::ger` with the func it's aliased to, `aten::outer`, for testing. The alias annotation checking code doesn't handle aliased operators properly.
ghstack-source-id: 114830903

Test Plan: Ran all tests in test:jit and verified they pass.

Reviewed By: eellison

Differential Revision: D24424955

fbshipit-source-id: 382d7e2585911b81b1573f21fff1d54a5e9a2054
2020-10-21 20:01:57 -07:00
Ansley Ussery
475b4e30e6 Allow for source code comments at any level of indentation (#46548)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46548

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24434778

Pulled By: ansley

fbshipit-source-id: e24ed73d497381e02ef1155622641027ae34770a
2020-10-21 13:49:42 -07:00
Lillian Johnson
f83cf2dab3 [JIT] adding torch.jit.isinstance support (#46062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46062

Adds support for torch.jit.isinstance in both eager and script mode

Example use:

```
import torch
from typing import Any, List

class TestModule(torch.nn.Module):
    def __init__(self):
        super(TestModule, self).__init__()

    def call(self, input1: str, input2: str) -> str:
        return input1

    def forward(self, input: Any) -> None:
        if torch.jit.isinstance(input, List[str]):
            for el in input:
                print(el)

TestModule().forward(["1","2"])
scripted_module = torch.jit.script(TestModule())
scripted_module(["1", "2"])
```

Test Plan: Imported from OSS

Reviewed By: bertmaher, zou3519

Differential Revision: D24264415

Pulled By: Lilyjjo

fbshipit-source-id: 039c95bddd854c414027ac8332832e6bc830b5b9
2020-10-20 16:47:49 -07:00
Ansley Ussery
fdc5261a20 Support %-based string formatting (#45976)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45976
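
A minimal sketch of the supported formatting (hypothetical example):

```
import torch

@torch.jit.script
def describe(name: str, score: float) -> str:
    return "%s scored %.2f" % (name, score)   # %-formatting now compiles
```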

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24374215

Pulled By: ansley

fbshipit-source-id: 2005fe7f09dc8d3c44c4bfdccab6b4dc46a5e517
2020-10-20 16:13:36 -07:00
Alexander Grund
5b0f400488 Replace list(map(...)) constructs by list comprehensions (#46461)
Summary:
As discussed in https://github.com/pytorch/pytorch/issues/46392 this makes the code more readable and possibly more performant.

It also fixes a bug detected by this change, where the argument order of `map` was confused: 030a24906e (diff-5bb26bd3a23ee3bb540aeadcc0385df2a4e48de39f87ed9ea76b21990738fe98L1537-R1537)

Fixes https://github.com/pytorch/pytorch/issues/46392

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46461

Reviewed By: ailzhang

Differential Revision: D24367015

Pulled By: ezyang

fbshipit-source-id: d55a67933cc22346b00544c9671f09982ad920e7
2020-10-19 18:42:49 -07:00
Yanan Cao
6a2f40dc66 Expose script_if_tracing as public API (#46494)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45921

`torch.jit._script_if_tracing` is still kept for BC
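
A minimal usage sketch of the now-public API:

```
import torch

@torch.jit.script_if_tracing
def pick(x: torch.Tensor) -> torch.Tensor:
    # Compiled only when invoked during torch.jit.trace, so this
    # data-dependent branch is preserved instead of being baked into the trace.
    if bool(x.sum() > 0):
        return x
    return -x
```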

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46494

Reviewed By: ZolotukhinM

Differential Revision: D24381621

Pulled By: gmagogsfm

fbshipit-source-id: 35d9f2da38c591039ba95cd95ef186e6c7e47586
2020-10-17 17:31:57 -07:00
Kurt Mohler
ef4817fe5a Add tensor_split function, based on numpy.array_split (#45168)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/9382
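
A minimal usage sketch (behavior mirrors numpy.array_split):

```
import torch

x = torch.arange(8)
torch.tensor_split(x, 3)        # 3 near-equal chunks of sizes 3, 3, 2
torch.tensor_split(x, [2, 5])   # split at indices 2 and 5: x[0:2], x[2:5], x[5:8]
```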

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45168

Reviewed By: ngimel

Differential Revision: D24166164

Pulled By: mruberry

fbshipit-source-id: 795459821e52885bc99623a01a2abec060995ce6
2020-10-07 23:14:48 -07:00
Elias Ellison
c86655a815 [JIT] Fix Dict bug in constant hashing (#45929)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45929

We were checking `and` when we should have been checking `or`.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D24148804

Pulled By: eellison

fbshipit-source-id: 9c394ea10ac91a588169d934b1e3208512c71b9d
2020-10-07 17:40:17 -07:00
Ansley Ussery
5072728d88 Fix stride printing/parsing formatting (#45156)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45156

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078695

Pulled By: ansley

fbshipit-source-id: dab993277d43b31105c38d12098c37653747b42a
2020-10-06 15:06:46 -07:00
Ansley Ussery
f18cc9c57d Change type inferred from empty annotation (#45360)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45360

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078645

Pulled By: ansley

fbshipit-source-id: 5d37d07df75bd7a2111d44638befe53c1021ee82
2020-10-05 15:16:56 -07:00
Edward Yang
546aab66c1 Revert D24027761: Update backward definition for more operators and reenable tests in test_ops.py
Test Plan: revert-hammer

Differential Revision:
D24027761 (7d809f5d8e)

Original commit changeset: c1f707c2a039

fbshipit-source-id: 30750d2f08886036fb8b2cd0ae51c7732d3b7b19
2020-10-02 18:52:57 -07:00
Yanan Cao
d150d3e276 Make sure each warnings.warn only executes once inside TorchScript. (#45382)
Summary:
* Add a pass at the end of runCleanupPasses to annotate `aten::warn` so that each has its own unique id
* Enhanced the interpreter so that it tracks which `aten::warn` have been executed before and skips them
* Improved insertInstruction so that it correctly checks for overflow

Fixes https://github.com/pytorch/pytorch/issues/45108
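
A minimal sketch of the fixed behavior (hypothetical example):

```
import torch
import warnings

@torch.jit.script
def noisy(x: torch.Tensor) -> torch.Tensor:
    for _ in range(100):
        warnings.warn("expensive path taken")  # now emitted once per warn site
    return x
```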

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45382

Reviewed By: mrshenli

Differential Revision: D24060677

Pulled By: gmagogsfm

fbshipit-source-id: 9221bc55b9ce36b374bdf614da3fe47496b481c1
2020-10-02 14:55:10 -07:00
anjali411
7d809f5d8e Update backward definition for more operators and reenable tests in test_ops.py (#44444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44444

This PR:
1. Fixes https://github.com/pytorch/pytorch/issues/41510. Updates backward formula for the following functions: `asin`, `acos`, `asinh`, `acosh`, `atan`, `atanh`, `div`, `log`, `log10`, `log2`, `log1p`, `pow`, `reciprocal`, `angle`.
2. Re-enables the tests in `test_ops.py`.
3. Adds dispatch for complex dtypes for `tanh_backward`.
4. Re-enables commented tests in `common_methods_invocation.py`.

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24027761

Pulled By: anjali411

fbshipit-source-id: c1f707c2a039149a6e04bbde53ee120d9119d99a
2020-10-02 13:37:10 -07:00
Malgi Nikitha Vivekananda
85a70ce71f Add multiline string dedent support (#45580)
Summary:
Fixes #44842
Summary
========
This PR adds support for multiline string dedents.

Test
=====
pytest -k test_multiline_string_dedents test/test_jit.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45580

Reviewed By: wconstab

Differential Revision: D24025866

Pulled By: nikithamalgifb

fbshipit-source-id: 0f49739fb93f70f73a8f367caca2887f558a3937
2020-09-30 16:08:26 -07:00
Nikolay Korovaiko
6ab1c0b1ca Disable a few tests in preparation to enabling PE+TE (#44815)
Summary:
Disable a few tests in preparation for enabling PE+TE
Next PR: https://github.com/pytorch/pytorch/pull/45396

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44815

Reviewed By: ZolotukhinM

Differential Revision: D23948445

Pulled By: Krovatkin

fbshipit-source-id: 93e641b7b8a3f13bd3fd3840116076553408f224
2020-09-28 12:55:12 -07:00
anjali411
9f67176b82 Complex gradcheck logic (#43208)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43208

This PR adds gradcheck for complex. The logic used for complex gradcheck is described in Section 3.5.3 here: https://arxiv.org/pdf/1701.00392.pdf
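
For reference, the Wirtinger derivatives the cited approach is built on (standard definitions, not from the PR): for z = x + iy,

```
\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right),
\qquad
\frac{\partial}{\partial z^{*}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right).
```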

More concretely, this PR introduces the following changes:
1. Updates get_numerical_jacobian to take as input a scalar value for the vector (v). Adds gradcheck logic for C -> C, C -> R, R -> C. For R -> C functions, only the real value of the gradient is propagated.
2. Adds backward definition for `torch.complex` and also adds a test to verify the definition added.
3. Updates backward for `mul`, `sin`, `cos`, `sinh`, `cosh`.
4. Adds tests for all `torch.real`, `torch.imag`, `torch.view_as_real`, `torch.view_as_complex`, `torch.conj`.

Follow up tasks:
1. Add more thorough tests for R -> C cases. Specifically, add R -> C test variants for functions, e.g. `torch.mul(complex_tensor, real_tensor)`
2. Add back commented test in `common_methods_invocation.py`.
3. Add more special case checking for complex gradcheck to make debugging easier.
4. Update complex autograd note.
5. disable complex autograd for operators not tested for complex.

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D23655088

Pulled By: anjali411

fbshipit-source-id: caa75e09864b5f6ead0f988f6368dce64cf15deb
2020-09-20 22:05:04 -07:00
Michael Suo
374e9373b5 [jit] Pull (most) tests out of libtorch_python (#44795)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44795

Today, we build our cpp tests twice, once as a standalone gtest binary,
and once linked in `libtorch_python` so we can call them from
`test_jit.py`.

This is convenient (it means that `test_jit.py` is a single entry point
for all our tests), but has a few drawbacks:
1. We can't actually use the gtest APIs, since we don't link gtest into
`libtorch_python`. We're stuck with the subset that we want to write
polyfills for, and an awkward registration scheme where you have to
write a test and then include it in `tests.h`.
2. More seriously, we register custom operators and classes in these
tests. In a world where we may be linking many `libtorch_python`s, this
has a tendency to cause errors with `libtorch`.

So now, only tests that explicitly require cooperation with Python are
built into `libtorch_python`. The rest are built into
`build/bin/test_jit`.

There are tests which require that we define custom classes and
operators. In these cases, I've built them into separate `.so`s that we
call `torch.ops.load_library()` on.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity, ZolotukhinM

Differential Revision: D23735520

Pulled By: suo

fbshipit-source-id: d146bf4e7eb908afa6f96b394e4d395d63ad72ff
2020-09-18 14:04:40 -07:00
Yanan Cao
174cbff00a Improve sugared value's error message (#42889)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **https://github.com/pytorch/pytorch/issues/42889 Improve sugared value's error message**

I think most (if not all) cases where this code path is reached can be attributed to closing over a global variable.
Improving error message to make this clearer to users.

close https://github.com/pytorch/pytorch/issues/41288

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42889

Reviewed By: SplitInfinity

Differential Revision: D23779347

Pulled By: gmagogsfm

fbshipit-source-id: ced702a96234040f79eb16ad998d202e360d6654
2020-09-18 11:01:40 -07:00
Yuxin Wu
9a007ba4cb [jit] stop parsing the block after seeing exit statements (#44870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44870

fix https://github.com/pytorch/pytorch/issues/44864

Test Plan: buck test mode/dev-nosan //caffe2/test:jit -- 'test_assert_is_script'

Reviewed By: eellison

Differential Revision: D23755094

fbshipit-source-id: ca3f8b27dc6f9dc9364a22a1bce0e2f588ed4308
2020-09-17 18:09:16 -07:00
Yanan Cao
2558e5769d Implement sort for list of tuples (#43448)
Summary:
* Implement tuple sort by traversing the contained IValue types and generating a lambda function as the comparator for sort.
* Tuple, class objects can now arbitrarily nest within each other and still be sortable
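
A minimal sketch of the newly supported sort (hypothetical example):

```
import torch
from typing import List, Tuple

@torch.jit.script
def sort_pairs(xs: List[Tuple[int, str]]) -> List[Tuple[int, str]]:
    xs.sort()   # lists of tuples (even nested ones) are now sortable
    return xs
```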

Fixes https://github.com/pytorch/pytorch/issues/43219

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43448

Reviewed By: eellison

Differential Revision: D23352273

Pulled By: gmagogsfm

fbshipit-source-id: b6efa8d00e112178de8256da3deebdba7d06c0e1
2020-09-17 11:20:56 -07:00
Yanan Cao
99093277c0 Support Python Slice class in TorchScript (#44335)
Summary:
Implements support for the [Python Slice class](https://docs.python.org/3/c-api/slice.html) (not the slice expression, which is already supported)

Slice object can be used in any place that supports slice expression, including multi-dim tensor slicing.
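
A minimal sketch of the newly supported construct (hypothetical example):

```
import torch

@torch.jit.script
def head(x: torch.Tensor) -> torch.Tensor:
    s = slice(0, 2)   # a slice object, usable wherever a slice expression is
    return x[s]
```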

Fixes https://github.com/pytorch/pytorch/issues/43511
Fixes https://github.com/pytorch/pytorch/issues/43125

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44335

Reviewed By: suo, jamesr66a

Differential Revision: D23682213

Pulled By: gmagogsfm

fbshipit-source-id: f74fe25370e89fbfd2b3727d95ce4e1c4ba8dec4
2020-09-17 00:41:53 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Yanan Cao
07d07e3c6c Remove EXPERIMENTAL_ENUM_SUPPORT feature guard (#44243)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41095

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44243

Reviewed By: ZolotukhinM

Differential Revision: D23605979

Pulled By: gmagogsfm

fbshipit-source-id: 098ae69049c4664ad5d1521c45b8a7dd22e72f6c
2020-09-16 11:45:59 -07:00
Elias Ellison
551494b01d [JIT] Fix torch.tensor for empty multidimensional-typed lists (#44652)
Summary:
We were hitting an assert error when you passed in an empty `List[List[int]]`; this fixes that error by not recursing into 0-element tensors.
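
A minimal sketch of the previously failing case (hypothetical example; presumably this now yields an empty tensor):

```
import torch
from typing import List

@torch.jit.script
def empty2d() -> torch.Tensor:
    xs: List[List[int]] = []
    return torch.tensor(xs)   # used to hit an assert; fixed by not recursing
                              # into 0-element tensors
```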

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44652

Reviewed By: ZolotukhinM

Differential Revision: D23688247

Pulled By: eellison

fbshipit-source-id: d48ea24893044fae96bc39f76c0f1f9726eaf4c7
2020-09-14 17:28:23 -07:00
Mike Ruberry
686e281bcf Updates div to perform true division (#42907)
Summary:
This PR:

- updates div to perform true division
- makes torch.true_divide an alias of torch.div

This follows on work in previous PyTorch releases that first deprecated div performing "integer" or "floor" division, then prevented it by throwing a runtime error.
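
A sketch of the BC-breaking behavior change:

```
import torch

a = torch.tensor([3])
b = torch.tensor([2])
torch.div(a, b)           # tensor([1.5000]): true division, even for int inputs
torch.true_divide(a, b)   # now simply an alias of torch.div
```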

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42907

Reviewed By: ngimel

Differential Revision: D23622114

Pulled By: mruberry

fbshipit-source-id: 414c7e3c1a662a6c3c731ad99cc942507d843927
2020-09-14 15:50:38 -07:00
Akihiro Nitta
84949672bf Fix exception chaining in test/ (#44193)
Summary:
## Motivation
This PR fixes https://github.com/pytorch/pytorch/issues/43770 and is the continuation of https://github.com/pytorch/pytorch/issues/43836.

## Description of the change
This PR fixes exception chaining only in files under `test/` where appropriate.
To fix exception chaining, I used either:
1. `raise new_exception from old_exception` where `new_exception` itself seems not descriptive enough to debug or `old_exception` delivers valuable information.
2. `raise new_exception from None` where raising both of `new_exception` and `old_exception` seems a bit noisy and redundant.
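
The two patterns, sketched:

```
def parse(s: str) -> int:
    try:
        return int(s)
    except ValueError as e:
        # 1. chain when the original exception carries useful context:
        raise RuntimeError("failed to parse value") from e

def lookup(d: dict, k: str):
    try:
        return d[k]
    except KeyError:
        # 2. suppress when chaining would just be noise:
        raise AttributeError(k) from None
```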

## List of lines containing `raise` in `except` clause:
I wrote [this simple script](https://gist.github.com/akihironitta/4223c1b32404b36c1b349d70c4c93b4d) using [ast](https://docs.python.org/3.8/library/ast.html#module-ast) to list lines where `raise`ing in `except` clause.

- [x] f8f35fddd4/test/test_cpp_extensions_aot.py (L16)
- [x] f8f35fddd4/test/test_jit.py (L2503)
- [x] f8f35fddd4/test/onnx/model_defs/word_language_model.py (L22)
- [x] f8f35fddd4/test/onnx/verify.py (L73)
- [x] f8f35fddd4/test/onnx/verify.py (L110)
- [x] f8f35fddd4/test/onnx/test_verify.py (L31)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L255)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L2992)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L3025)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L3712)
- [x] f8f35fddd4/test/distributed/test_distributed.py (L3180)
- [x] f8f35fddd4/test/distributed/test_distributed.py (L3198)
- [x] f8f35fddd4/test/distributed/test_data_parallel.py (L752)
- [x] f8f35fddd4/test/distributed/test_data_parallel.py (L776)
- [x] f8f35fddd4/test/test_type_hints.py (L151)
- [x] f8f35fddd4/test/test_jit_fuser.py (L771)
- [x] f8f35fddd4/test/test_jit_fuser.py (L773)
- [x] f8f35fddd4/test/test_dispatch.py (L105)
- [x] f8f35fddd4/test/test_distributions.py (L4738)
- [x] f8f35fddd4/test/test_nn.py (L9824)
- [x] f8f35fddd4/test/test_namedtensor.py (L843)
- [x] f8f35fddd4/test/test_jit_fuser_te.py (L875)
- [x] f8f35fddd4/test/test_jit_fuser_te.py (L877)
- [x] f8f35fddd4/test/test_dataloader.py (L31)
- [x] f8f35fddd4/test/test_dataloader.py (L43)
- [x] f8f35fddd4/test/test_dataloader.py (L365)
- [x] f8f35fddd4/test/test_dataloader.py (L391)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44193

Reviewed By: albanD

Differential Revision: D23681529

Pulled By: malfet

fbshipit-source-id: 7c2256ff17334625081137b35baeb816c1e53e0b
2020-09-14 14:20:16 -07:00
Nikolay Korovaiko
fe26102a0e Enable TE in test_jit.py (#44200)
Summary:
Enable TE in test_jit.py and adjust/fix tests accordingly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44200

Reviewed By: SplitInfinity

Differential Revision: D23673624

Pulled By: Krovatkin

fbshipit-source-id: 5999725c7aacc6ee77885eb855a41ddfb4d9a8d8
2020-09-13 15:58:20 -07:00
Mikhail Zolotukhin
c6febc6480 [JIT] Add a python hook for a function to interpret JIT graphs. (#44493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44493

This function allows executing a graph exactly as it is, without going
through a graph executor, which would run passes on the graph before
interpreting it. I found this feature extremely helpful when I worked on
a stress-testing script to shake out bugs from the TE fuser: I needed to
execute a very specific set of passes on a graph and nothing else, and
then execute exactly that.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23632505

Pulled By: ZolotukhinM

fbshipit-source-id: ea81fc838933743e2057312d3156b77284d832ef
2020-09-11 02:55:26 -07:00
Gregory Chanan
c8914afdfa Merge criterion_tests and new_criterion_tests. (#44398)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44398

These end up executing the same tests, so no reason to have them separate.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23600855

Pulled By: gchanan

fbshipit-source-id: 0952492771498bf813f1bf8e1d7c8dce574ec965
2020-09-10 08:29:59 -07:00
Gregory Chanan
fa158c4ca6 Combine criterion and new criterion tests in test_jit. (#43958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43958

There is no difference between these tests (I'm merging them), so let's merge them in the JIT as well.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D23452337

Pulled By: gchanan

fbshipit-source-id: e6d13cdb164205eec3dbb7cdcd0052b02c961778
2020-09-10 08:28:14 -07:00
Elias Ellison
b69c28d02c Improving ModuleList indexing error msg (#43361)
Summary:
Follow up to https://github.com/pytorch/pytorch/pull/41946/, to suggest enumerating a module as an alternative if a user tries indexing into a modulelist/sequential with a non-integer literal

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43361

Reviewed By: mrshenli

Differential Revision: D23602388

Pulled By: eellison

fbshipit-source-id: 51fa28d5bc45720529b3d45e92d367ee6c9e3316
2020-09-09 16:22:57 -07:00
Sujoy Saraswati
54931ebb7b Release saved variable from DifferentiableGraphBackward (#42994)
Summary:
When the backward ops execute via the autograd engine's evaluate_function(), fn.release_variables() is called to release the SavedVariables. For eager-mode ops, this releases the saved inputs that were required for the backward grad function. However, with TorchScript, we get a DifferentiableGraph, and DifferentiableGraphBackward() doesn't implement release_variables(). This causes the SavedVariables to stay alive longer. Implement release_variables() for DifferentiableGraphBackward to release these SavedVariables early.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42994

Reviewed By: izdeby

Differential Revision: D23503172

Pulled By: albanD

fbshipit-source-id: d87127498cfa72883ae6bb31d0e6c7056c4c36d4
2020-09-08 14:36:52 -07:00
Michael Suo
9dd8670d7d [jit] Better match behavior of loaded ScriptModules vs. freshly created ones (#43298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43298

IR emitter uses `ModuleValue` to represent ScriptModules and emit IR for
attribute access, submodule access, etc.

`ModuleValue` relies on two pieces of information, the JIT type of the
module, and the `ConcreteModuleType`, which encapsulates Python-only
information about the module.

ScriptModules loaded from a package used to create a dummy
ConcreteModuleType without any info in it. This led to divergences in
behavior during compilation.

This PR makes the two ways of constructing a ConcreteModuleType equivalent,
modulo any py-only information (which, by definition, is never present in
packaged files anyway).

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23228738

Pulled By: suo

fbshipit-source-id: f6a660f42272640ca1a1bb8c4ee7edfa2d1b07cc
2020-09-03 15:03:39 -07:00
Michael Suo
74f18476a2 [jit] fix segfault in attribute lookup on loaded ScriptModules (#43284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43284

The IR emitter looks for attributes on modules like:
1. Check the JIT type for the attribute
2. Check the originating Python class, in order to fulfill requests for, e.g. static methods or ignored methods.

In the case where you do:
```
inner_module = torch.jit.load("inner.pt")
wrapped = Wrapper(inner_module)  # wrap the loaded ScriptModule in an nn.Module
torch.jit.script(wrapped)
```

The IR emitter may check for attributes on `inner_module`. There is no
originating Python class for `inner_module`, since it was directly
compiled from the serialized format.

Due to a bug in the code, we don't guard for this case, and a segfault
results if the wrapper asks for an undefined attribute. The lookup in
this case looks like:
1. Check the JIT type for the attribute (not there!)
2. Check the originating Python class (this is a nullptr! segfault!)

This PR guards this case and properly just raises an attribute missing
compiler error instead of segfaulting.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23224337

Pulled By: suo

fbshipit-source-id: 0cf3060c427f2253286f76f646765ec37b9c4c49
2020-09-03 15:01:59 -07:00
Nikolay Korovaiko
f91bdbeabd Enable function calls in TEFuser and SpecializeAutogradZero (#43866)
Summary:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43866

Reviewed By: ezyang

Differential Revision: D23452798

Pulled By: Krovatkin

fbshipit-source-id: 2cff4c905bf1b5d9de56e7869458ffa6fce1f1b5
2020-09-03 14:42:52 -07:00
Lillian Johnson
e3cb582e05 Error printing extension support for multiline errors (#43807)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43807

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D23407457

Pulled By: Lilyjjo

fbshipit-source-id: 05a6a50dc39c00474d9087ef56028a2c183aa53a
2020-09-01 10:02:43 -07:00
Mikhail Zolotukhin
98b846cd1d [JIT] Remove loop peeling from the profiling executor pipeline. (#43847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43847

It seems to slow down two fastRNN benchmarks and does not speed up
others.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23416197

Pulled By: ZolotukhinM

fbshipit-source-id: 598144561979e84bcf6bccf9b0ca786f5af18383
2020-08-31 17:26:55 -07:00
Meghan Lele
87d7c362b1 [JIT] Add JIT support for torch.no_grad (#41371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41371

**Summary**
This commit enables the use of `torch.no_grad()` in a with item of a
with statement within JIT. Note that the use of this context manager as
a decorator is not supported.
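
A minimal sketch of the newly supported with item (hypothetical example):

```
import torch

@torch.jit.script
def eval_step(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():   # supported as a with item, not as a decorator
        y = x * 2
    return y
```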

**Test Plan**
This commit adds a test case to the existing with statements tests for
`torch.no_grad()`.

**Fixes**
This commit fixes #40259.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D22649519

Pulled By: SplitInfinity

fbshipit-source-id: 7fa675d04835377666dfd0ca4e6bc393dc541ab9
2020-08-27 15:32:57 -07:00
Elias Ellison
a4cf4c2437 refactor tests (#43631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43631

I added a new test for just profiler stuff - I don't think the test should go in test_jit.py. Maybe this should just go in test_tensorexpr_fuser, but I'm not really testing tensorexpr stuff either... LMK

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23358810

Pulled By: eellison

fbshipit-source-id: 074238e1b60e4c4a919a052b7a5312b790ad5d82
2020-08-27 14:35:33 -07:00
aizjForever
cdc3e232e9 Add __str__ and __repr__ bindings to SourceRange (#43601)
Summary:
Added the bindings for `__str__` and `__repr__` methods for SourceRange

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43601

Test Plan:
`python test/test_jit.py`

cc gmagogsfm

Reviewed By: agolynski

Differential Revision: D23366500

Pulled By: gmagogsfm

fbshipit-source-id: ab4be6e8f9ad5f67a323554437878198483f4320
2020-08-27 12:30:47 -07:00
Yuxin Wu
825ec18eed [jit] better error message (#43093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43093

Without this, it's hard to tell which module is going wrong.

Test Plan:
```
> TypeError:
> 'numpy.int64' object in attribute 'Linear.in_features' is not a valid constant.
> Valid constants are:
> 1. a nn.ModuleList
> 2. a value of type {bool, float, int, str, NoneType, torch.device, torch.layout, torch.dtype}
> 3. a list or tuple of (2)
```

Reviewed By: eellison

Differential Revision: D23148516

fbshipit-source-id: b86296cdeb7b47c9fd69b5cfa479914c58ef02e6
2020-08-17 14:57:56 -07:00
taivu
02c8ad70f2 Reconstruct scopes (#41615)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41615

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D22611331

Pulled By: taivu1998

fbshipit-source-id: d4ed4cf6360bc1f72ac9fa24bb4fcf6b7d9e7576
2020-08-13 22:38:16 -07:00
Mike Ruberry
bee174dc3f Adds linalg.det alias, fixes outer alias, updates alias testing (#42802)
Summary:
This PR:

- updates test_op_normalization.py, which verifies that aliases are correctly translated in the JIT
- adds torch.linalg.det as an alias for torch.det
- moves the torch.linalg.outer alias to torch.outer (to be consistent with NumPy)

The torch.linalg.outer alias was erroneously placed in the linalg namespace as a placeholder; NumPy describes outer as a "linear algebra op," but the function actually still lives in the main NumPy namespace.
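
A small sketch of the resulting namespacing (illustrative values):
```python
import torch

a = torch.randn(3, 3)
torch.allclose(torch.linalg.det(a), torch.det(a))  # True: alias and original agree

u, v = torch.arange(3.0), torch.arange(4.0)
torch.outer(u, v)  # lives in the main namespace, mirroring numpy.outer
```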

The updates to test_op_normalization are necessary. Previously it was using method_tests to generate tests, and method_tests assumes test suites using it also use the device generic framework, which test_op_normalization did not. For example, some ops require decorators like `skipCPUIfNoLapack`, which only works in device generic test classes. Moving test_op_normalization to the device generic framework also lets these tests run on CPU and CUDA.

Continued reliance on method_tests() is excessive since the test suite is only interested in testing aliasing, and a simpler and more readable `AliasInfo` class is used for the required information. An example impedance mismatch between method_tests and the new tests, for example, was how to handle ops in namespaces like torch.linalg.det. In the future this information will likely be folded into a common 'OpInfo' registry in the test suite.

The actual tests performed are similar to what they were previously: a scripted and traced version of the op is run and the test verifies that both graphs do not contain the alias name and do contain the aliased name.

The guidance for adding an alias has been updated accordingly.

cc mattip

Note:

ngimel suggests:
- deprecating and then removing the `torch.ger` name
- reviewing the implementation of `torch.outer`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42802

Reviewed By: zou3519

Differential Revision: D23059883

Pulled By: mruberry

fbshipit-source-id: 11321c2a7fb283a6e7c0d8899849ad7476be42d1
2020-08-11 21:48:31 -07:00
Yanan Cao
43613b4236 Fix incorrect aten::sorted.str return type (#42853)
Summary:
aten::sorted.str output type was incorrectly set to bool[] due to a copy-paste error. This PR fixes it.
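
A minimal sketch of the now-correct behavior (function name is illustrative):
```python
from typing import List

import torch

@torch.jit.script
def sort_names(names: List[str]) -> List[str]:
    # The return type is now correctly List[str], not List[bool].
    return sorted(names)
```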

Fixes https://fburl.com/0rv8amz7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42853

Reviewed By: yf225

Differential Revision: D23054907

Pulled By: gmagogsfm

fbshipit-source-id: a62968c90f0301d4a5546e6262cb9315401a9729
2020-08-11 14:01:23 -07:00
Yanan Cao
317b9d3bfc Implement sort for string in aten (#42398)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42398

Reviewed By: ailzhang

Differential Revision: D22884849

Pulled By: gmagogsfm

fbshipit-source-id: e53386949f0a5e166f3d1c2aa695294340bd1440
2020-08-04 15:25:35 -07:00
Yanan Cao
bdcf320bed Support custom exception message (#41907)
Summary:
Raise and assert used to produce the hard-coded error message "Exception"; the user-provided error message was ignored. This PR adds support for representing the user's error message in TorchScript.
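
For illustration, a sketch of a message that is now preserved (names and message are illustrative):
```python
import torch

@torch.jit.script
def check_positive(x: int) -> int:
    # The custom message below used to be replaced by a generic "Exception".
    assert x > 0, "expected a positive value, got " + str(x)
    return x
```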

This breaks backward compatibility because now we actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can break when scripting, but saved models can still continue to work.

Increased an op count in test_mobile_optimizer.py because now we need aten::format to form the actual exception message.

This is built upon an WIP PR:  https://github.com/pytorch/pytorch/pull/34112 by driazati

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907

Reviewed By: ngimel

Differential Revision: D22778301

Pulled By: gmagogsfm

fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
2020-08-01 13:03:45 -07:00
Elias Ellison
2285a2fc11 refactor canonical ordering to also be able to do isAfter checks (#42140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42140

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D22798378

Pulled By: eellison

fbshipit-source-id: d1a549f43b28fe927729597818a46674c58fe81d
2020-07-31 15:11:40 -07:00
Elias Ellison
0a64f99162 [JIT] Dont include view ops in autodiff graphs (#42027)
Summary:
View ops as outputs of differentiable subgraphs can cause incorrect differentiation. For now, do not include them in the subgraph. This was observed with our autograd tests for MultiheadAttention and nn.Transformer, which currently fail with the legacy executor. This commit fixes those test failures.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42027

Reviewed By: pbelevich

Differential Revision: D22798133

Pulled By: eellison

fbshipit-source-id: 2f6c08953317bbe013933c6faaad20100376c039
2020-07-29 10:17:33 -07:00
Yanan Cao
890b52e09f Reduce instability in runCleanUpPasses by reordering passes. (#41891)
Summary:
Currently, constant pooling runs before constant propagation, which can create more constants that need pooling. This gets in the way of serialization/deserialization stability because each time a user serializes and deserializes a module, runCleanUpPasses is called on it; doing so multiple times would lead to a different saved module each time.

This PR moves constant pooling after const propagation, which may slow down const propagation a little bit, but would otherwise side-step aforementioned problem.

test_constant_insertion in test_jit.py is also updated because, after fixing the pass ordering, the number of constants is no longer fixed, and it is extremely difficult to compute the exact number with the current convoluted test structure. So for now, I changed the test to check only that CSE doesn't change the number of "prim::constant" nodes rather than comparing against a known number. Also left a TODO to improve this test.

The ConstantPropagation pass is replaced by ConstantPropagationImmutableTypes because the latter is used in runCleanUpPasses. If not replaced, the former would create new CSE opportunities by folding more constants, which defeats the purpose of the test case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41891

Reviewed By: colesbury

Differential Revision: D22701540

Pulled By: gmagogsfm

fbshipit-source-id: 8e60dbdcc54a93dac111d81b8d88fb39387224f5
2020-07-24 11:39:20 -07:00
Elias Ellison
da3ff5e473 [JIT] don't count constants in subgraph size (#41436)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41436

Constants are not executed as instructions, so we should ignore them when counting subgraph size, just as we ignore them when counting block size for loop unrolling.

Test Plan: Imported from OSS

Reviewed By: Krovatkin, ZolotukhinM

Differential Revision: D22600608

Pulled By: eellison

fbshipit-source-id: 9770b21c936144a3d6a1df89cf3be5911095187e
2020-07-23 14:48:25 -07:00
Elias Ellison
6161730174 [JIT] move remove mutation to its own test file (#41502)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41502

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D22629270

Pulled By: eellison

fbshipit-source-id: fcec6ae4ff8f108164539d67427ef3d72fa07494
2020-07-20 12:03:28 -07:00
Yanan Cao
4a3aad354a [1/N] Implement Enum JIT support (#41390)
Summary:
* Add EnumType and AnyEnumType as first-class jit type
* Add Enum-typed IValue
* Enhanced aten::eq to support Enum

Supported:
* Enum-typed function arguments
* Using Enum types and comparing their values (see the sketch below)

TODO:
* Add Python sugared value for Enum
* Support getting name/value attrs of enums
* Support Enum-typed return values
* Support enum values of different types in the same Enum class
* Support serialization and deserialization
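
A rough sketch of the intended end state once the follow-ups land (names are illustrative):
```python
from enum import Enum

import torch

class Color(Enum):
    RED = 1
    GREEN = 2

@torch.jit.script
def is_red(c: Color) -> bool:
    # Enum-typed argument plus the enhanced aten::eq comparison.
    return c == Color.RED
```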

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41390

Reviewed By: eellison

Differential Revision: D22524388

Pulled By: gmagogsfm

fbshipit-source-id: 1627154a64e752d8457cd53270f3d14aea4b1150
2020-07-18 22:15:06 -07:00
Ilia Cherniavskii
e7a09b4d17 RecordFunction in Dispatcher (#37587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37587

Lifting RecordFunction up into the dispatcher code

Test Plan: Imported from OSS

Differential Revision: D21374246

fbshipit-source-id: 19f9c1719e6fd3990e451c5bbd771121e91128f7
2020-07-17 22:20:05 -07:00
Meghan Lele
f85a27e100 [JIT] Replace "blacklist" in test_jit.py (#41453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41453

**Test Plan**
`python test/test_jit.py`

**Fixes**
This commit partially addresses #41443.

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D22544268

Pulled By: SplitInfinity

fbshipit-source-id: 8b6b94211a626209c3960fda6c860593148dcbf2
2020-07-17 11:30:27 -07:00
Mikhail Zolotukhin
5d7046522b [JIT] Teach IRPrinter and IRParser to handle 'requires_grad' and 'device' as a part of type info. (#41507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41507

These fields have always been a part of tensor types, this change just
makes them serializable through IR dumps.

Test Plan: Imported from OSS

Reviewed By: Krovatkin, ngimel

Differential Revision: D22563661

Pulled By: ZolotukhinM

fbshipit-source-id: f01aaa130b7e0005bf1ff21f65827fc24755b360
2020-07-17 10:27:04 -07:00
Yuxin Wu
488ee3790e Support @torch.jit.unused on a @torch.no_grad decorated function (#41496)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41496

Use the wrapped function (instead of the wrapper) to obtain argument names.
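
A minimal sketch of the pattern that now scripts successfully (names are illustrative):
```python
import torch

class MyMod(torch.nn.Module):
    @torch.jit.unused
    @torch.no_grad()
    def fn(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fn(x)

# Argument names are now read from the wrapped function, so scripting works.
torch.jit.script(MyMod())
```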

Test Plan:
```
buck test mode/dev-nosan //caffe2/test:jit -- 'test_unused_decorator \(test_jit\.TestScript\)'
```

Before:
```
> Traceback (most recent call last):
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py", line 3014, in test_unused_decorator
>     torch.jit.script(MyMod())
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_script.py", line 888, in script
>     obj, torch.jit._recursive.infer_methods_to_compile
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 317, in create_script_module
>     return create_script_module_impl(nn_module, concrete_type, stubs_fn)
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 376, in create_script_module_impl
>     create_methods_from_stubs(concrete_type, stubs)
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 292, in create_methods_from_stubs
>     concrete_type._create_methods(defs, rcbs, defaults)
> RuntimeError:
> Non-static method does not have a self argument:
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py", line 3012
>             def forward(self, x):
>                 return self.fn(x)
>                        ~~~~~~~ <--- HERE
>
```

Reviewed By: eellison

Differential Revision: D22554479

fbshipit-source-id: 03e432ea92ed973cc57ff044da80ae7a36f6af4c
2020-07-15 16:54:43 -07:00
Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
Kimish Patel
c5dcf056ee JIT pass for add relu fusion. (#39343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39343

Building on top of the previous PR that adds a fused add_relu op, this PR adds
a JIT pass that transforms the input graph by finding all fusable instances of
add + relu and fusing them.
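
A sketch of the kind of pattern the pass targets (illustrative):
```python
import torch

@torch.jit.script
def add_then_relu(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # An add feeding directly into relu is fusable into a single add_relu op.
    return torch.relu(a + b)
```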

Test Plan:
python test/test_jit.py TestJit.test_add_relu_fusion

Imported from OSS

Differential Revision: D21822396

fbshipit-source-id: 12c7e8db54c6d70a2402b32cc06c7e305ffbb1be
2020-07-09 16:25:13 -07:00
Zino Benaissa
690946c49d Generalize constant_table from tensor only to ivalue (#40718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40718

Currently, every constant except tensors must be inlined during serialization;
tensors are stored in the constant table. This patch generalizes that capability
to any IValue. This is particularly useful for non-ASCII string literals, which
cannot be inlined.
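
For illustration, a sketch of the motivating case (the literal is illustrative):
```python
import torch

@torch.jit.script
def greeting() -> str:
    # A non-ASCII string literal: with this change it can be stored in the
    # constant table instead of having to be inlined into the code.
    return "héllo, wörld"
```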

Test Plan: Imported from OSS

Differential Revision: D22298169

Pulled By: bzinodev

fbshipit-source-id: 88cc59af9cc45e426ca8002175593b9e431f4bac
2020-07-09 09:09:40 -07:00
Dmytro Dzhulgakov
8e2841781e [easy] Use torch.typename in JIT error messages (#41024)
Summary:
Noticed while trying to script a model that happened to have numpy values as constants. The missing numpy prefix in the error message was quite confusing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41024

Differential Revision: D22426399

Pulled By: dzhulgakov

fbshipit-source-id: 06158b75355fac6871e4861f82fc637c2420e370
2020-07-08 21:49:37 -07:00
Michael Suo
c93e96fbd9 [jit] move script-related implementation out of torch/jit/__init__.py (#40902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40902

See the bottom of this stack for context.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22360210

Pulled By: suo

fbshipit-source-id: 4275127173a36982ce9ad357aa344435b98e1faf
2020-07-08 11:38:34 -07:00
Elias Ellison
37a572f33e fix grad thrashing of shape analysis (#40939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40939

Previously, when we would do shape analysis by running the op with representative inputs, we would always set the grad property to false. This led to a wrong static analysis when we would create differentiable subgraphs, and propagate shapes without also propagating requires_grad, and then uninline them.

Test Plan: Imported from OSS

Differential Revision: D22394676

Pulled By: eellison

fbshipit-source-id: 254e6e9f964b40d160befe0e125abe1b7aa2bd5e
2020-07-06 17:12:13 -07:00
Elias Ellison
4af8424377 shape analysis fix for default dtype (#40938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40938

already accepted in https://github.com/pytorch/pytorch/pull/40645

Test Plan: Imported from OSS

Reviewed By: jamesr66a, Krovatkin

Differential Revision: D22394675

Pulled By: eellison

fbshipit-source-id: 1e9dbb24a4cb564d9a68280d2166329ca9fb0425
2020-07-06 17:10:01 -07:00
Ailing Zhang
e75f12ac15 Check statstical diff rather than exact match for test_dropout_cuda. (#40883)
Summary:
There's is a TODO tracked in https://github.com/pytorch/pytorch/issues/40882

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40883

Reviewed By: pbelevich

Differential Revision: D22346087

Pulled By: ailzhang

fbshipit-source-id: b4789ca3a10f6a72c6e77276bde45633eb6cf545
2020-07-06 13:11:48 -07:00
Michael Suo
300a3aaaad [jit] move private implementation out of jit/__init__.py (#40807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40807

We pack a lot of logic into `jit/__init__.py`, making it unclear to
developers and users which parts of our API are public vs. internal. This
is one in a series of PRs intended to pull implementation out into
separate files, and leave `__init__.py` as a place to register the
public API.

This PR moves all the tracing-related stuff out, and fixes other spots up
as necessary. Followups will move other core APIs out.

The desired end-state is that we conform to the relevant rules in [PEP 8](https://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces). In particular:
- Internal implementation goes in modules prefixed by `_`.
- `__init__.py` exposes a public API from these private modules, and nothing more.
- We set `__all__` appropriately to declare our public API.
- All uses of JIT-internal functionality outside the JIT are removed (in particular, ONNX relies on a number of internal APIs). Since they will need to be imported explicitly, it will be easier to catch new uses of internal APIs in review.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22320645

Pulled By: suo

fbshipit-source-id: 0720ea9976240e09837d76695207e89afcc58270
2020-07-05 22:01:11 -07:00
Will Constable
8ecd4f36aa fix __len__, __contains__, getitem inherited from interface class derived from nn container (closes #40603) (#40789)
Summary:
Define static script implementations of `__len__` and `__contains__` on any subclass derived from a type such as ModuleList, Sequential, or ModuleDict. Implement `__getitem__` for classes derived from ModuleDict.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40789

Reviewed By: eellison

Differential Revision: D22325159

Pulled By: wconstab

fbshipit-source-id: fc1562c29640fe800e13b5a1dd48e595c2c7239b
2020-07-04 15:45:18 -07:00
Nikolay Korovaiko
8223858cc1 shape inference of undefined for prim::grad (#40866)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40866

Reviewed By: pbelevich

Differential Revision: D22358988

Pulled By: Krovatkin

fbshipit-source-id: 7118d7f8d4eaf056cfb71dc0d588d38b1dfb0fc7
2020-07-04 14:10:22 -07:00
Nikolay Korovaiko
88c0d886e3 update requires_gard on loop inputs correctly (master) (#40926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40926

Reviewed By: eellison

Differential Revision: D22359471

Pulled By: Krovatkin

fbshipit-source-id: 823e87674e2d2917f075255ec926e0485972f4e2
2020-07-04 13:58:29 -07:00
Elias Ellison
e1428cf41b [JIT] fix unfold shape analysis (#40749)
Summary:
unfold on a 0-dimensional tensor returns a 1-dimensional tensor
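
A quick sketch of the behavior the shape analysis now models:
```python
import torch

t = torch.tensor(1.0)     # 0-dimensional tensor
t.unfold(0, 1, 1).shape   # torch.Size([1]): the result is 1-dimensional
```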
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40749

Differential Revision: D22361481

Pulled By: eellison

fbshipit-source-id: 621597e5f97f6e39953eb86f8b85bb4142527a9f
2020-07-02 13:32:37 -07:00
Mikhail Zolotukhin
871bfaaba1 [JIT] Fix shape analysis for aten::masked_select. (#40753)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40753

The reference says that this op always returns a 1-D tensor, even if
the input and the mask are 0-D.
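
A quick sketch of the behavior being modeled:
```python
import torch

x = torch.tensor(5.0)         # 0-D input
mask = torch.tensor(True)     # 0-D mask
torch.masked_select(x, mask)  # tensor([5.]): the result is always 1-D
```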

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22300354

Pulled By: ZolotukhinM

fbshipit-source-id: f6952989c8facf87d73d00505bf6d41573eff2d6
2020-06-30 11:04:50 -07:00
Mikhail Zolotukhin
50d55b9f2b [JIT] Update type of the unsqueeze's output in shape analysis. (#40733)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40733

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22298537

Pulled By: ZolotukhinM

fbshipit-source-id: a5d4597ed10bcf14d1b28e914bf898d0cae5b4c0
2020-06-30 11:01:45 -07:00
Jeff Daily
ac8c8b028d [ROCm] restore jit tests (#40447)
Summary:
Remove `skipIfRocm` from most jit tests and enable `RUN_CUDA_HALF` tests for ROCm.

These changes passed more than three rounds of CI testing against the ROCm CI.

CC ezyang xw285cornell sunway513
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40447

Differential Revision: D22190711

Pulled By: xw285cornell

fbshipit-source-id: bac44825a2675d247b3abe2ec2f80420a95348a3
2020-06-27 01:03:59 -07:00
Will Constable
d855528186 wconstab/38034-sliced-sequential (#40445)
Summary:
Partial support for slicing of Sequential containers.

- works around missing Sequential slice functionality by converting to a tuple
- only supports iteration over the resulting tuple values, not a direct call() on the sliced Sequential (see the sketch below)
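
A minimal sketch of what this enables (module layout is illustrative):
```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(4, 4), torch.nn.ReLU(), torch.nn.Linear(4, 4))

    def forward(self, x):
        # Iterating over a slice is supported; calling the slice directly
        # (self.layers[1:3](x)) is not.
        for layer in self.layers[1:3]:
            x = layer(x)
        return x

scripted = torch.jit.script(Net())
```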
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40445

Differential Revision: D22192469

Pulled By: wconstab

fbshipit-source-id: 61c85deda2d58f6e3bea2f1fa1d5d5dde568b9b5
2020-06-24 09:05:51 -07:00
Elias Ellison
6468bc4637 [JIT] script if tracing fix (#40468)
Summary:
Currently, torchvision annotates `batched_nms` with `torch.jit.script` so the function gets compiled when it is traced and ONNX will work. Unfortunately, this means we are eagerly compiling batched_nms, which fails if torchvision isn't built with `torchvision.ops.nms`. As a result, torchvision doesn't work on torch hub right now.

`_script_if_tracing` could solve our problem here, but right now it does not correctly interact with recursive compilation. This PR fixes that bug.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40468

Reviewed By: jamesr66a

Differential Revision: D22195771

Pulled By: eellison

fbshipit-source-id: 83022ca0bab6d389a48a478aec03052c9282d2b7
2020-06-23 17:14:28 -07:00
Jerry Zhang
cbd53bfee8 [jit] Remove unnecessary clone APIs for script::Module and RecursiveScriptModule (#40297)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40297

Test Plan: Imported from OSS

Differential Revision: D22191660

fbshipit-source-id: 4b338ca82caaca04784bffe01fdae3d180c192f4
2020-06-23 16:03:22 -07:00
Jerry Zhang
f652abc1dd [jit] Enable copy.deepcopy and copy.copy for RecursiveScriptModule (#32685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32685

att

Test Plan:
.

Imported from OSS

Differential Revision: D21220755

fbshipit-source-id: 5c71e9bb9f43032cf60563a9e67579118a8d7e33
2020-06-23 09:21:12 -07:00
Wanchao Liang
4b028a8e07 [jit] support pad_sequence/pack_sequence (#39844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39844

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D22026720

Pulled By: wanchaol

fbshipit-source-id: cc51ea77eff3689e319ec7e89a54c788646b5940
2020-06-19 19:03:14 -07:00
Mike Ruberry
4f761f325c Back out "[pytorch][PR] Removes dunder div"
Summary: NVIDIA's Apex is updating to no longer rely on this behavior, but we're reverting this Python2->Python3 update to unblock internal apex users.

Test Plan: Sandcastle + OSS CI.

Reviewed By: ngimel

Differential Revision: D22146782

fbshipit-source-id: f9483d2cbf9dc3a469ad48a6c863edea3ae51070
2020-06-19 18:31:20 -07:00
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00
Wanchao Liang
442ec1dd4e [test] split remaining quantization tests out of test_jit (#40144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40144

as title, split remaining quantization tests out of test_jit to reduce
the size of test_jit

Test Plan: Imported from OSS

Differential Revision: D22085034

Pulled By: wanchaol

fbshipit-source-id: 0c8639da01ffc3e6a72e6f470837786c73a6b3f0
2020-06-18 13:39:13 -07:00
Wanchao Liang
693ab77c00 [test] split onnx export test out of test_jit (#40143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40143

as titled, to reduce size of test_jit

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22085036

Pulled By: wanchaol

fbshipit-source-id: 424f189fd3849c111d06ebe2e341da50d98fe0ec
2020-06-17 17:27:50 -07:00
Wanchao Liang
27d789500b [test] split tracer related tests out of test_jit (#40142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40142

test_jit is becoming huge again, which makes it hard for editors to load and
for us to write new tests; this splits out the tracer-related tests.

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22085035

Pulled By: wanchaol

fbshipit-source-id: 696bee84985ecfbfeac8e2ee5c27f1bdda8de394
2020-06-17 17:26:33 -07:00
Mike Ruberry
9d588f7ce2 Removes dunder div (#39151)
Summary:
BC-breaking note:

If a user is using one of these dunders directly, they will no longer be available. Users should update to Python 3 compatible dunders.

Original PR note:

`__div__` (and `__idiv__` and `__rdiv__`) are no longer special dunders in Python 3. This PR replaces them with the `__truediv__` (`__itruediv__`, `__rtruediv__`) dunders, since we no longer support Python 2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39151

Differential Revision: D22075713

Pulled By: mruberry

fbshipit-source-id: d318b47b51f7cc4c3728b1606a34d81e49ba0fa1
2020-06-16 23:02:20 -07:00
Shihao Xu
00651b8c93 [distribtued.nn] Implement TorchScript-compatible RemoteModule API (#37139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139

See design doc in https://github.com/pytorch/pytorch/issues/37136

ghstack-source-id: 105926270

Test Plan:
TODO:

- Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978
- Avoid generating the same template instances for Module that is not scriptable.
- Remove "infer_module_interface_cls".
- Use Python format instead of a CodeTemplate
- Use Python tempfile to track and delete the file. (Does it work if there is a crash?)

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork
```

buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes'

Differential Revision: D20499658

fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a
2020-06-15 19:07:35 -07:00
Nikita Shulga
c6b69a4e4d Delete Python <= 3.5 specific checks from the code (#39879)
Summary:
- Remove PY3 and PY34 checks from `torch/testing/_internal/common_utils.py`
- Remove PY35 global var from `torch.jit.annotations`
- Always call `try_get_real_signature` in `torch/jit/annotations.py`
- Use `map` instead of `imap`; since Python 2 is no longer supported, `map` is always lazy.
- Remove all pre-Python-3.6 checks from `torch/_six.py` and `torch/_appdirs.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39879

Differential Revision: D22037811

Pulled By: malfet

fbshipit-source-id: af0c79f976569c2059d39ecb49c6b8285161734f
2020-06-15 08:16:06 -07:00
Nikolay Korovaiko
7f55197a57 Peel Loop (#39434)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39434

Differential Revision: D21857037

Pulled By: Krovatkin

fbshipit-source-id: 6583da167fe93d96e93f1c3d71f46f94e7f4e982
2020-06-10 13:48:18 -07:00
Yanan Cao
c22bbb2124 [JIT] Add Type::repr_str to return human-readable str (#39544)
Summary:
Clearly indicating that a type was inferred by PyTorch rather than explicitly annotated by the user makes many error messages more user-friendly.

Currently, Type has two string conversion methods: str() for IR printing and python_str() for serialization and error message generation. If we want to include more information in type printing while maintaining serialization/deserialization correctness, we need to split python_str() into annotation_str() and repr_str().

annotation_str() is solely responsible for serialization; it strictly matches the format of Python type annotations. repr_str() is responsible for generating human-readable error messages that include information like "this type was inferred, not explicitly annotated"

Closes https://github.com/pytorch/pytorch/issues/39449
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39544

Differential Revision: D21978759

Pulled By: gmagogsfm

fbshipit-source-id: 733566f5a62e748b5ca4bb3c5943ebb6d5b664d0
2020-06-10 12:01:24 -07:00
Elias Ellison
428bc90978 [JIT] add dtype as type annotation (#39741)
Summary:
Make torch.dtype resolve as a type annotation.
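
A minimal usage sketch (function name is illustrative):
```python
import torch

@torch.jit.script
def zeros_with(d: torch.dtype) -> torch.Tensor:
    # torch.dtype is now usable as an annotation in scripted code.
    return torch.zeros(2, 2, dtype=d)

zeros_with(torch.float64)
```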
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39741

Reviewed By: jamesr66a

Differential Revision: D21956469

Pulled By: eellison

fbshipit-source-id: 492acd7403fa827a2e2c87fd08d31450fcb3a45e
2020-06-09 15:01:00 -07:00
James Reed
f1c60c04b8 [JIT] Fix module interface test (#39592)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39592

Test Plan: Imported from OSS

Differential Revision: D21909659

Pulled By: jamesr66a

fbshipit-source-id: 831ae6b158041d4241209cee50f7a4d09cd2fcb2
2020-06-09 12:13:58 -07:00
Nikolay Korovaiko
97a2918a07 reduce number of bailout nodes (#38281)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38281

Differential Revision: D21665509

Pulled By: Krovatkin

fbshipit-source-id: c2c34b759aec30d0a161e582030ba994192ee4ec
2020-06-05 13:45:37 -07:00
Yanan Cao
0031108b60 Support torch.Tensor subclass (like Parameter) input. (#39487)
Summary:
Currently, torch.Tensor subclasses (like torch.nn.Parameter) aren't supported as type annotations for TorchScript inputs. This PR allows them to be treated like torch.Tensor for compilation.
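
A minimal sketch of the now-accepted annotation (function name is illustrative):
```python
import torch

@torch.jit.script
def scale(p: torch.nn.Parameter) -> torch.Tensor:
    # A Tensor subclass annotation is treated like torch.Tensor here.
    return p * 2.0

scale(torch.nn.Parameter(torch.ones(3)))
```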

Closes https://github.com/pytorch/pytorch/issues/38235
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39487

Differential Revision: D21885827

Pulled By: gmagogsfm

fbshipit-source-id: 1ec51829b132b7b0293a6c526d73497b23dae113
2020-06-05 11:58:20 -07:00
Edward Yang
da2004e132 Upgrade lint. (#39483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39483

I fixed all of the new errors that occurred because of the upgrade.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21884575

Pulled By: ezyang

fbshipit-source-id: 45c8e1f1ecb410c8d7c46dd3922ad70e982a0685
2020-06-04 12:56:43 -07:00
Elias Ellison
49b69b2ade [JIT] fix broadcasting lists of ints (#39481)
Summary:
Previously, on conversion from Python to C++ the list was cast to a double list through a copy-paste error. It's pretty unusual for someone to script a broadcasting-list function directly, since it's an internal API, so it was unlikely to affect anyone.

Fix for https://github.com/pytorch/pytorch/issues/39450
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39481

Reviewed By: jamesr66a

Differential Revision: D21870557

Pulled By: eellison

fbshipit-source-id: e704e5e87d2702a270b7d65c4df444246a134480
2020-06-04 12:16:41 -07:00
Xiang Gao
ebd4125e7e [JIT] Make torch.unique_consecutive compatible (#39339)
Summary:
A `unique_consecutive` version of https://github.com/pytorch/pytorch/pull/38156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39339

Differential Revision: D21823997

Pulled By: eellison

fbshipit-source-id: d14596a36ba36497e296da5a344e0376cef56f1b
2020-06-02 14:54:29 -07:00
Meghan Lele
f4365cf5ba [JIT] Add support for saving/loading of lowered modules (#38893)
Summary:
**Summary**
This commit adds support for serialization and deserialization of
`ScriptModules` that have been lowered to a specific backend. Nothing
special was required to accomplish this, other than removing some code
in `unpickler.cpp` that guarded against the deserialization of `Any`
type objects. Now that lists and dicts are tagged with their types
during serialization, this check is no longer necessary.

**Test Plan**
This commit adds a unit test for testing that a lowered module still
produces the same results as Python and regular JIT after saving and
loading.

**Fixes**
This pull request fixes part of https://github.com/pytorch/pytorch/issues/37841.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38893

Differential Revision: D21825813

Pulled By: SplitInfinity

fbshipit-source-id: 77a7b84504e0dddf14c89b3ed5dd6b438c086f66
2020-06-01 23:50:52 -07:00
xuewenc
7836eaceee [JIT] JIT should let people know we inferred an argument as a tensor (#38527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38527

This PR solves issue #37200.
The error is encountered during IR generation while trying to resolve the call to sum.
We should let the user know that the value for argument 'dim' was inferred to be of type 'Tensor'
because it was not annotated with an explicit type.

Test Plan:
Add code to reproduce the issue (#37200)
`python test/test_jit.py TestJit.test_inferred_as_tensor`

Differential Revision: D21743876

Pulled By: superwizard2019

fbshipit-source-id: 370ca32afea4d53b44d454f650f7d3006f86bcc6
2020-05-29 10:41:50 -07:00
Mike Ruberry
13120bf677 Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to either require both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21740237

Pulled By: mruberry

fbshipit-source-id: acbc027aa1d7877a49664d94db9a5fff91a07042
2020-05-27 06:31:07 -07:00
Nikolay Korovaiko
9b95f757af move num_profiled_runs to common_utils (#38687)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38687

Differential Revision: D21634080

Pulled By: Krovatkin

fbshipit-source-id: 55513124caf3885e475ffecd9d9f3dbc4729a573
2020-05-27 01:14:01 -07:00
Rohan Varma
63e545e0fe Revert D21717199: [pytorch][PR] Updates assertEqual to require atol and rtol, removes positional atol
Test Plan: revert-hammer

Differential Revision:
D21717199

Original commit changeset: 9feb856f94ee

fbshipit-source-id: bfde9c39a5ce99f0ca6183a7dde703c65b7c8259
2020-05-26 18:23:59 -07:00
Mike Ruberry
6ddca30b2d Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to either require both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21717199

Pulled By: mruberry

fbshipit-source-id: 9feb856f94eee911b44f6c7140a1d07c1b026d3a
2020-05-26 08:30:23 -07:00
Elias Ellison
cd5d7a34b8 [JIT] Factor out aliases to separate test (#38746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38746

Factors out testing of op alias normalization so that there is a registry used for tests.

Test Plan: Imported from OSS

Differential Revision: D21673107

Pulled By: eellison

fbshipit-source-id: e06653cdf24f14a4253dd054e4d402d171d16a11
2020-05-21 21:47:24 -07:00
Elias Ellison
f90dc741eb [JIT] Normalize op aliases (#38735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38735

Follow up to my comment https://github.com/pytorch/pytorch/pull/36597/#issuecomment-613674329

This adds a pass to convert op aliases into a normalized form. Having two ops in our IR that do the same thing makes the IR harder to handle for downstream consumers, such as TorchScript passes but also ONNX, Glow, etc.
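
For illustration, a sketch of the normalization (illustrative function):
```python
import torch

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    return torch.absolute(x)

# After the normalization pass, the graph contains the canonical aten::abs
# rather than the aten::absolute alias.
print(f.graph)
```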

Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as glaringlee of the C++ API thinks we should.

Bike shedding: maybe this should be `CanonicalizeOps` instead

Test Plan: Imported from OSS

Differential Revision: D21673108

Pulled By: eellison

fbshipit-source-id: c328618907de1af22e07f57fd27fa619978c2817
2020-05-21 21:47:17 -07:00
Mike Ruberry
64584573f9 Updates tests for integer division deprecation (#38621)
Summary:
Updates our tests in preparation of integer division using torch.div and torch.addcdiv throwing a runtime error by avoiding integer division using torch.div. This creates a brief period where integer division using torch.div is untested, but that should be OK (since it will soon throw a runtime error).

These callsites were identified using https://github.com/pytorch/pytorch/issues/36897.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38621

Differential Revision: D21612823

Pulled By: mruberry

fbshipit-source-id: 749c03a69feae02590b4395335163d9bf047e162
2020-05-19 19:28:00 -07:00
Ilia Cherniavskii
235f62417d Fixes for profiling JIT code (#38453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38453

Two fixes:
 - RecordFunction in JIT interpreter should exist during the execution
   of the frame, and not just when we enter the frame
 - When creating a JIT continuation in the wait instruction, we want to
   preserve the original thread-local context; right now, when we resume
   execution in the continuation, we instead pick up the thread-local state
   of the thread that set the future's value (i.e. executed the forked task)

Test Plan: unittest, CI

Reviewed By: ngimel

Differential Revision: D21565959

Pulled By: ilia-cher

fbshipit-source-id: 206b98e3bfb0052fc8e4031da778e372cc71afc1
2020-05-19 15:50:42 -07:00
Michael Voznesensky
f6f1384811 [JIT] Refactor attributes to support buffers and parameters as first class citizens, add support for iterating over named_buffers() (#37905)
Summary:
First part of https://github.com/pytorch/pytorch/issues/36211 - still a WIP, but asking for commentary to ensure this is the direction we want to go in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37905

Differential Revision: D21633735

Pulled By: voznesenskym

fbshipit-source-id: f4e4302e40114513776c9e48867a90d72049e2e9
2020-05-18 23:23:43 -07:00
Elias Ellison
daa85cfe2e [JIT] Exit Transform Rewrite (#38282)
Summary:
After an early return, we conditionalize all further execution. This means that currently the pattern of
`if return elif return elif return` generates better code than `if return if return if return`. It's obviously not good to have semantically equivalent code generate worse IR, so we should rewrite the graph to handle this case. This came up in https://github.com/pytorch/pytorch/pull/37171

```
@torch.jit.script
def test_foo(x: bool, y: bool):
    if x:
        return 1
    return 2
print(test_foo.code)
```
generates:
```
def test_foo(x: bool,
    y: bool) -> int:
  _0 = uninitialized(int)
  if x:
    _1, _2 = True, 1
  else:
    _1, _2 = False, _0
  if _1:
    _3 = _2
  else:
    _3 = 2
  return _3
```
while
```
@torch.jit.script
def test_foo(x: bool, y: bool):
    if x:
        return 1
    else:
        return 2
print(test_foo.code)
```
generates:
```
def test_foo(x: bool,
    y: bool) -> int:
  if x:
    _0 = 1
  else:
    _0 = 2
  return _0
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38282

Differential Revision: D21576733

Pulled By: eellison

fbshipit-source-id: 80cf1ad7fbda6d8d58557abbfb21c90eafae7488
2020-05-15 12:22:28 -07:00
Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
Will Feng (FAIAR)
38d141ede5 Support having a different forward method when we are not in scripting mode (#38158)
Summary:
TorchScript currently doesn't support `*args, **kwargs` in method signatures, which are used extensively in DPER3 low-level modules' forward methods. In order to make DPER3 low-level modules scriptable, I was thinking about a solution of having a forward method *only* for TorchScript, and replacing the forward method when we are not in scripting mode.

This solution works today, and I would like to add a test to make sure it will always work in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158

Differential Revision: D21485657

Pulled By: yf225

fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
2020-05-14 12:13:06 -07:00
David Reiss
7f7fdb1013 Remove a use of checkScript(str) (#35623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.

Test Plan: CI

Differential Revision: D20842874

Pulled By: dreiss

fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
2020-05-14 10:07:58 -07:00
Hong Xu
336e1ec592 Clean up error handling in is_nonzero and where in TensorCompare.cpp (#38150)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38150

Differential Revision: D21539736

Pulled By: ezyang

fbshipit-source-id: e390c12f5948192a552d66dcd1bb89b2cb45f170
2020-05-13 20:19:40 -07:00
Elias Ellison
8d883f5c7c [JIT] [Easy] Add location to implicit conversions (#38442)
Summary:
Previously, we weren't adding the location to implicit conversions, so the error message wouldn't show location when these ops failed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38442

Differential Revision: D21563500

Pulled By: eellison

fbshipit-source-id: 19dd786ab8580f11ed919aac669efeed0ef52dcb
2020-05-13 18:02:41 -07:00
Michael Suo
2efa7e04c2 [jit] move torchbind tests to separate file (#37473)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37473

Test Plan: Imported from OSS

Differential Revision: D21297541

Pulled By: suo

fbshipit-source-id: 65c48094b1f26fbbf251021957257ce04279922b
2020-05-13 17:37:00 -07:00
anjali411
1676c7d618 Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only (#38399)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38399

Test Plan: Imported from OSS

Differential Revision: D21555941

Pulled By: anjali411

fbshipit-source-id: ea9f5a76590c5bab3df6a540617b074238bfb535
2020-05-13 16:41:09 -07:00
Michael Suo
167a978a03 Fix method stub creation for function attributes (#37994)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37994

Before, reassigning a method in a module (like `forward = _forward`)
didn't work, because we look at the function object's name for our def
name when building the AST. Make that overridable to handle cases like
reassignment.

Test Plan: Imported from OSS

Differential Revision: D21444535

Pulled By: suo

fbshipit-source-id: 4f045f18b5a146edc8005689af525d7d7ed8dd5f
2020-05-12 23:20:35 -07:00
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the https://github.com/pytorch/pytorch/pull/33783 stack to make functions in `torch/functional.py` resolve to their Python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
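
A minimal sketch of the now-compilable usage (names are illustrative):
```python
import torch

@torch.jit.script
def uniq(x: torch.Tensor):
    # Literal flag values let boolean_dispatch select the overload whose
    # return type matches (values, inverse_indices) here.
    values, inverse = torch.unique(x, sorted=True, return_inverse=True)
    return values, inverse
```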
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Kimish Patel
f954dd7823 Add dropout removal pass. (#38253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38253

This pass removes dropout and dropout_ nodes when training is false. It
requires the freeze_module pass to have been run first, which does both
inlining and constant propagation; without it, the training variable remains
an attribute instead of a constant.
ghstack-source-id: 103939141

Test Plan: python test/test_jit.py TestScript.test_remove_dropout

Reviewed By: dreiss

Differential Revision: D21505863

fbshipit-source-id: 42ea45804e4653b625b6a254c8d8480757264aa8
2020-05-12 14:38:34 -07:00
James Reed
a553935e3c [JIT] Expose magic methods on script::Object (#38167)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38167

Test Plan: Imported from OSS

Differential Revision: D21486709

Pulled By: jamesr66a

fbshipit-source-id: 17b44d979fc658768b0d64f7d8af6fb684043ea3
2020-05-11 15:01:15 -07:00
Vitaly Fedyunin
57d01be92b Replacing assertEqual with assertEqualIgnoreType wherever types mismatch (#38102)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38102

Test Plan: Imported from OSS

Differential Revision: D21477060

Pulled By: VitalyFedyunin

fbshipit-source-id: 25e0fd837ca9bfccf0ce994c80f7790c894096d4
2020-05-09 14:48:55 -07:00
Ailing Zhang
e84aa0211d [JIT]Support List variable in adv indexing. (#37966)
Summary:
Followup of https://github.com/pytorch/pytorch/issues/37848: I realized that it's better to condition on `Value` type instead of token type. So now it also supports indexing through list variables (it used to be list literals only).
Also, our eager frontend apparently accepts indexing with a float list as well, so this matches that edge-case behavior too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37966

Reviewed By: suo

Differential Revision: D21439642

Pulled By: ailzhang

fbshipit-source-id: cedb8431ef38747d4aa9909a6bbf8e954dbe0e25
2020-05-08 15:40:11 -07:00
James Reed
c1e7758b5e Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38101

Original commit changeset: 29e8a4d3b8bf
ghstack-source-id: 103730417

Test Plan: waitforsadcastle

Differential Revision: D21471381

fbshipit-source-id: a922cdf31ba32021e7264ae1454c646c0bfd7ef4
2020-05-08 10:53:06 -07:00
Ailing Zhang
9232356e5f remove uses of type() and type_as() part 1. (#38029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38029

Differential Revision: D21468523

Pulled By: ailzhang

fbshipit-source-id: 14b7185d43eb03f630cfaa2d70e02d637ff8551b
2020-05-08 08:16:24 -07:00
Nikita Shulga
4bc0a7f86a Revert D20229168: [quantization] Use torchbind for Linear PackedParams
Test Plan: revert-hammer

Differential Revision:
D20229168

Original commit changeset: 3607cac9aa5b

fbshipit-source-id: 29e8a4d3b8bffd95ff6a58b46c4f1c1e23770304
2020-05-07 19:47:45 -07:00
James Reed
eaf9b28c55 [quantization] Use torchbind for Linear PackedParams (#34140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34140

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D20229168

Pulled By: jamesr66a

fbshipit-source-id: 3607cac9aa5b4b044572329742baed03350491c6
2020-05-07 19:03:44 -07:00
eellison
d5df055bbb [WIP][JIT] Add JIT backend registration API (#35833)
Summary:
**Summary**
This commit adds `torch::jit::RegisterBackend`, an API that allows
external backends to be registered for the execution of JIT subgraphs
outside the JIT interpreter. In order to register an external backend,
one must extend the provided abstract class `PyTorchBackendInterface` and provide
two additional functions: one that creates an instance of the aforementioned subclass
of `PyTorchBackendInterface`, and another that preprocesses a `ScriptModule` so that
it can run on the backend. Then, a `ScriptModule` that can compile and execute a given
JIT subgraph using the functions provided at registration time is generated
for each registered backend.

**Testing**
This commit adds a unit test that uses a minimal test backend
to make sure that the registration endpoint and generated
`ScriptModule` work.

```
$ python test/test_jit.py TestBackends
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.183s

OK

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35833

Differential Revision: D21231955

Pulled By: SplitInfinity

fbshipit-source-id: 452db1123d0e5d83f97fe5da8a00fdfdb50dbef9
2020-05-07 18:15:26 -07:00
Elias Ellison
f5b3125af7 [JIT] Peephole optimize list ops (#37612)
Summary:
Peephole optimize  `len(li)` and `li[index]` patterns.

This changes the Profiled Graph IR for the following tests:
```
(Test Name, Num ifs loops, Num non-tensor nodes)
Before:
('test_nn_Conv1d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv2d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv1d_circular_stride2_pad2', 5, 31)
('test_nn_Conv2d_circular_stride2_pad2', 5, 31)
('test_nn_Conv3d_circular_stride2_pad2', 5, 31)
('test_nn_Conv1d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv2d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv3d_replicate_stride2_pad2', 3, 14)
After
('test_nn_Conv1d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv2d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv1d_circular_stride2_pad2', 0, 4)
('test_nn_Conv2d_circular_stride2_pad2', 0, 7)
('test_nn_Conv3d_circular_stride2_pad2', 0, 10)
('test_nn_Conv1d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv2d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv3d_replicate_stride2_pad2', 0, 2)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37612

Differential Revision: D21352676

Pulled By: eellison

fbshipit-source-id: f8a0e7653b7a6a4c769f075de9b3044242ca9336
2020-05-06 15:55:18 -07:00
Elias Ellison
28ac5cdc91 fix profiling test (#37961)
Summary:
this is failing in the profiling_executor job
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37961

Differential Revision: D21434341

Pulled By: eellison

fbshipit-source-id: b34f94b1595ef6f6edee76cd200f951a2ef21f22
2020-05-06 15:04:44 -07:00
Elias Ellison
0e3a05ec00 [JIT] rename enable_profiling_mode to enable_profiling_mode_for_profiling_tests (#37825)
Summary:
The existing contextmanager only conditionally enabled_profiling_mode, which was counter intuitive. When we changed the default executor it broke internal benchmarking as a result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37825

Differential Revision: D21404611

Pulled By: eellison

fbshipit-source-id: 306b3c333ef4eb44ab6a6e5ab4e0682e5ce312ce
2020-05-06 11:30:02 -07:00
Ailing Zhang
dd618216c5 [JIT]Support adv indexing using list. (#37848)
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`

This PR adds support for indexing through a list, which is equivalent to indexing with a tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`

Note: for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616). Now that it's fixed, we probably want to mark it as BC-breaking.
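
A minimal sketch of the supported forms (values are illustrative):
```python
import torch

@torch.jit.script
def pick(x: torch.Tensor) -> torch.Tensor:
    # List indices behave like the equivalent tensor indices.
    return x[[0, 1], [0, 1]]

pick(torch.arange(9.0).reshape(3, 3))  # tensor([0., 4.])
```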
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848

Reviewed By: suo

Differential Revision: D21409840

Pulled By: ailzhang

fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
2020-05-06 10:44:48 -07:00
Jerry Zhang
70f375becf [quant] ConvPackedParams with TorchBind (#35923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923

(Note: this ignores all push blocking failures!)

Test Plan:
tbd

Imported from OSS

Differential Revision: D20957089

fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0
2020-05-05 20:18:36 -07:00
Michael Suo
bd220b336b [jit] fix trace checking reporting divergent names (#37842)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842

Fixes https://github.com/pytorch/pytorch/issues/23993.

Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```

This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

Test Plan: Imported from OSS

Differential Revision: D21406478

Pulled By: suo

fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
2020-05-05 13:39:41 -07:00
Elias Ellison
23d0441da7 [JIT] Fix GetAttr inconsistency (#37424)
Summary:
We were previously only looking at class attributes, which didn't include methods etc., and would silently give wrong semantics. This makes hasAttr go through the same resolution as our other attribute lookups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37424

Differential Revision: D21282633

Pulled By: eellison

fbshipit-source-id: 8e970f365c2740d137a02331739c2ed93747b918
2020-05-05 09:06:51 -07:00
Michael Suo
804e32a467 split out docs tests into separate job (#37793)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37793

Test Plan: Imported from OSS

Differential Revision: D21392798

Pulled By: suo

fbshipit-source-id: 172fb0522d0b168ca19a382e5fb1eb87b6390acc
2020-05-04 17:58:04 -07:00
Michael Suo
b7f258bbd3 add fmt to libtorch_python.so (#37560)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37560

Test Plan: Imported from OSS

Differential Revision: D21320059

Pulled By: suo

fbshipit-source-id: 95cfe7cf26c515fdfcb4621cc58266d838a38a3e
2020-05-04 10:14:37 -07:00
Nikolay Korovaiko
831c8f362f fix the incorrect merge of profiling information of two tensor types for the same value (#36806)
Summary:
As part of moving to dynamic shapes, we are now passing `frame_id` to each profiling callback. The implementation of that requires copying profiling callbacks into the Interpreter, so the `first`s are actually different for every run. The dynamic-shapes merging algorithm won't be using `first`, but until we get there this should be a good enough fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36806

Differential Revision: D21307173

Pulled By: Krovatkin

fbshipit-source-id: 7dade56ebcc72ebd40bb7f3d636c7b83c99b628f
2020-05-01 12:53:25 -07:00
Michael Voznesensky
91e74fd843 [JIT] Adds a code_with_constants method to module printing (#37586)
Summary:
Closes https://github.com/pytorch/pytorch/issues/36625
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37586

Differential Revision: D21331385

Pulled By: suo

fbshipit-source-id: 752e63eac8bdd06c6719efb972cdc832ad7c1535
2020-04-30 20:44:01 -07:00
James Reed
d3d10cc14a Add tests for lower_graph and fix unpack() ops dispatch (#37540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37540

ghstack-source-id: 103169129

Test Plan:
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph_conv \(test_jit\.TestScript\)'
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph \(test_jit\.TestScript\)'

Differential Revision: D21313433

fbshipit-source-id: bb9942272784e517b07537ee4c149b9dc4df4c2a
2020-04-30 10:55:05 -07:00
Michael Suo
896f8130a6 Revert D21297549: [jit] fix trace checking reporting divergent names
Test Plan: revert-hammer

Differential Revision:
D21297549

Original commit changeset: 981d5879a4a2

fbshipit-source-id: 9be6e88007c644914973a305f9e7a961ef11a815
2020-04-29 16:16:44 -07:00
Michael Suo
4bfa51d405 [jit] fix trace checking reporting divergent names (#37464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464

Fixes https://github.com/pytorch/pytorch/issues/23993.

There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.
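For reference, a minimal sketch of how a shared memo dict preserves aliasing (`copy.deepcopy` consults `memo` by object id, so repeated references are copied once):
```
import copy
import torch

sample = torch.ones(1)
inputs = (sample, sample, sample)

# Per-element copies without a shared memo break aliasing: three tensors.
naive = tuple(copy.deepcopy(t) for t in inputs)
assert naive[0] is not naive[1]

# A shared memo dict preserves aliasing: one copy, referenced three times.
memo = {}
cloned = tuple(copy.deepcopy(t, memo) for t in inputs)
assert cloned[0] is cloned[1] is cloned[2]
```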

Test Plan: Imported from OSS

Differential Revision: D21297549

Pulled By: suo

fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
2020-04-28 23:52:57 -07:00
Elias Ellison
a55d80e1c5 [JIT] remove dominated guards of functional values (#37105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37105

If a value isn't mutated anywhere and is guarded by a node, then we can remove all other guards that are dominated by the first guard.

This reduces the number of (test name, Ifs/Loops, non-tensor nodes excluding getAttr and Bailouts) from the previous PR for the following tests:
```
Before:  ('upsample', 0, 13)
After:  ('upsample', 0, 5)
Before:  ('upsample', 0, 2)
After:  ('upsample', 0, 1)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 18)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 20)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 13)
After:  ('interpolate', 1, 11)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 15)
After:  ('interpolate', 1, 13)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 25)
After:  ('interpolate', 1, 21)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 27)
After:  ('interpolate', 1, 23)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('test_nn_BatchNorm1d_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input', 1, 2)
Before:  ('test_nn_BatchNorm1d_affine_simple_average', 2, 5)
After:  ('test_nn_BatchNorm1d_affine_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm1d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm1d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm2d', 2, 3)
After:  ('test_nn_BatchNorm2d', 1, 2)
Before:  ('test_nn_BatchNorm2d_2d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm2d_2d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm2d_momentum', 2, 3)
After:  ('test_nn_BatchNorm2d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm2d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm2d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm2d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm2d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm3d', 2, 3)
After:  ('test_nn_BatchNorm3d', 1, 2)
Before:  ('test_nn_BatchNorm3d_3d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm3d_3d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm3d_momentum', 2, 3)
After:  ('test_nn_BatchNorm3d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm3d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm3d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm3d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm3d_zero_batch', 1, 2)
Before:  ('test_nn_Transformer', 127, 467)
After:  ('test_nn_Transformer', 122, 450)
```

Test Plan: Imported from OSS

Differential Revision: D21215652

Pulled By: eellison

fbshipit-source-id: 0365fc2e351caca7e1ccaa25428908a26e3f5343
2020-04-28 23:28:18 -07:00
Elias Ellison
45e8451b33 optimize is_floating_point calls (#37012)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37012

Removes an if statement in `torch.nn.functional.affine_grid`

Test Plan: Imported from OSS

Differential Revision: D21160755

Pulled By: eellison

fbshipit-source-id: 8b030936c9fbdb05b44abc9f254805d102f2acc2
2020-04-28 23:28:12 -07:00
Elias Ellison
cde1350a5d Add support for generic list constants (#36953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36953

Add support for generic lists as a constant. generic dicts & tuples are already implemented. This is a pretty common pattern and cuts down on the number of non-tensor nodes executed in interpolate tests.

Test Plan: Imported from OSS

Differential Revision: D21160761

Pulled By: eellison

fbshipit-source-id: 1e6b7b25b7580f09067794772d44e615601c60c4
2020-04-28 23:28:07 -07:00
Elias Ellison
92129956cf Add size peephole optimization (#36758)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36758

Test Plan: Imported from OSS

Differential Revision: D21160760

Pulled By: eellison

fbshipit-source-id: 9cdb8eeffa71fb4670a811347ae4fad2a82ae1d8
2020-04-28 23:27:52 -07:00
Michael Suo
92b9089fd9 [jit] Fix pretty printing of functions (#37432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37432

Fixes https://github.com/pytorch/pytorch/issues/36803.

Test Plan: Imported from OSS

Differential Revision: D21284735

Pulled By: suo

fbshipit-source-id: 8c673099b3171070bff80fd1defc91487f66d4b3
2020-04-28 21:30:49 -07:00
mattip
ec8006cc16 [ONNX] fix provider_version and add consistency test (#36797)
Summary:
forward port the test from pr gh-36795, xref issue gh-32561
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36797

Differential Revision: D21257034

Pulled By: ezyang

fbshipit-source-id: d217da0e74f00a433c904defc0bf3eb5f594fd5e
2020-04-27 11:00:23 -07:00
Nikita Shulga
47c4dca1ab Remove python-2 or python<3.5 checks from unit tests (#37252)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37252

Test Plan: CI

Differential Revision: D21241083

Pulled By: malfet

fbshipit-source-id: 44164b822f7905288abb2beda0175d2162d86143
2020-04-24 17:42:04 -07:00
Zachary DeVito
b6bb644e41 Fix long line splitting issue in python_print (#37088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37088

For an inlined expression tree like `(e_0, (e_1, e_long))` the previous
algorithm only scanned the same statement as `e_long`, splitting the
inlined expressions across lines. Because it did not scan `e_0`, `e_0`
would still get emitted inline, causing it to reverse order with `e_1` and
`e_long`. The new algorithm scans starting at `e_long` and going all
the way back up the expression until it reaches the end of the inlined
statement. Caching of what has already been scanned has been added so that
if there is a second long expression `e_long2` after `e_long`, it does not
rescan and re-inline the statements that were already split.

Test Plan: Imported from OSS

Differential Revision: D21180394

Pulled By: zdevito

fbshipit-source-id: 4d142c83a04c89a47d04282f67a513f82cf153c0
2020-04-24 15:14:39 -07:00
moto
5a27ec09b8 Add Inverse Short Time Fourier Transform in ATen native (#35569)
Summary:
Ported `torchaudio`'s implementation (tests and documentation as well) to ATen.

Note
 - Batch packing/unpacking is performed in Python. The ATen implementation expects a 4D input tensor.
 - `hop_length` is initialized in the same way as in the `stft` implementation. [Torchaudio's version tried to mimic the same behavior but is slightly different](7da61a4bee/torchaudio/functional.py (L152-L157)).
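For reference, a round-trip sketch using today's `torch.stft`/`torch.istft` (keyword names follow the current API, which postdates this port):
```
import torch

x = torch.randn(2, 4000)                    # batch of two signals
window = torch.hann_window(400)
spec = torch.stft(x, n_fft=400, hop_length=160, window=window,
                  return_complex=True)
recon = torch.istft(spec, n_fft=400, hop_length=160, window=window,
                    length=x.size(1))
assert torch.allclose(x, recon, atol=1e-4)
```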

Closes https://github.com/pytorch/pytorch/issues/34827
Relates https://github.com/pytorch/pytorch/issues/3775
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35569

Differential Revision: D21178090

Pulled By: mthrok

fbshipit-source-id: 2701a8b241a36a6fb1b740c2fb2b07cb938185d4
2020-04-24 12:14:55 -07:00
Vishwak Srinivasan
fd5b5cd604 Allowing casting str to int in JIT (#36016)
Summary:
Changelog:
- Allow int(str) in TorchScript
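A minimal sketch of the newly allowed cast:
```
import torch

@torch.jit.script
def parse(s: str) -> int:
    return int(s)  # previously rejected by the TorchScript compiler

assert parse("42") == 42
```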
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36016

Test Plan:
- Added tests in test_jit.py

Closes https://github.com/pytorch/pytorch/issues/35948

Differential Revision: D21076438

Pulled By: driazati

fbshipit-source-id: d0753dc0e1c79f4f943c303b58b2d228856ba793
2020-04-23 14:26:24 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace changes might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Mikhail Zolotukhin
359e7f4bba Teach IRParser to parse strides along with sizes in a tensor type. (#36951)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36951

Test Plan: Imported from OSS

Differential Revision: D21139940

Pulled By: ZolotukhinM

fbshipit-source-id: b56a1fddfc9de4684da3ba9a462e344d0985e8b6
2020-04-21 17:27:15 -07:00
Mike Ruberry
bcdb0727c2 Revert D20907254: Fix long line splitting issue in python_print
Test Plan: revert-hammer

Differential Revision:
D20907254

Original commit changeset: ebfc1a4eefc2

fbshipit-source-id: 76440a8649a17728c50e2f3eeb3744a2245f6daf
2020-04-21 16:24:32 -07:00
Zachary DeVito
bf676682e7 Fix long line splitting issue in python_print (#36188)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36188

* Need to remove n^2 behavior for scanning whether to split or not
  otherwise long inline chains will take a long time re-scanning.

Test Plan: Imported from OSS

Differential Revision: D20907254

Pulled By: zdevito

fbshipit-source-id: ebfc1a4eefc26d5806381e7afd75b7a9cd4cde97
2020-04-21 15:46:42 -07:00
Mike Ruberry
71ec8b2002 Switches test_jit to use float32 as its default scalar type (#36982)
Summary:
Our test suite used to set double as its default scalar type, and when it was switched to not do so (to be more consistent with how users experience PyTorch), a few tests had to still set the default scalar type to double to function properly. Now that the jit no longer creates double tensors so frequently, it appears that test_jit no longer needs to set double as its default scalar type either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36982

Differential Revision: D21152120

Pulled By: mruberry

fbshipit-source-id: ea6d3c1ad55552dc5affa1fe1bd0e5189849e6d7
2020-04-21 11:23:28 -07:00
Brian Vaughan
54ed6fd3ee Use both absolute and relative tolerance in testing (#34258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34258

This PR allows both atol and rtol to be specified, uses defaults based on the prior analysis (spreadsheet attached to https://github.com/pytorch/pytorch/pull/32538), but retains the absolute tolerance behavior in cases where precision was previously specified explicitly.

Test Plan: Imported from OSS

Differential Revision: D21110255

Pulled By: nairbv

fbshipit-source-id: 57b3a004c7d5ac1be80ee765f03668b1b13f4a7e
2020-04-19 06:16:49 -07:00
Wanchao Liang
24aac32171 [jit] Add dictionary as output of tracer (#36696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36696

This PR adds dictionaries as a supported output of the tracer under the strict
flag.
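A minimal sketch, assuming the flag is the `strict` keyword of `torch.jit.trace`:
```
import torch

def fn(x):
    return {"doubled": x * 2, "halved": x / 2}

# strict=False opts in to container (e.g. dict) outputs from the tracer.
traced = torch.jit.trace(fn, torch.ones(3), strict=False)
print(traced(torch.ones(3))["doubled"])
```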

Test Plan: Imported from OSS

Reviewed By: houseroad

Differential Revision: D21056962

Pulled By: wanchaol

fbshipit-source-id: ace498182d636de853cf8a1efb3dc77f5d53db29
2020-04-16 18:12:38 -07:00
David Reiss
63e5058c88 Fix naming of "strides" method in TensorType (#36727)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36727

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.
Lint.

Differential Revision: D21076697

Pulled By: dreiss

fbshipit-source-id: dbd18cb41c7b26479984a7a7b12ad41a4c5b7658
2020-04-16 17:07:27 -07:00
Elias Ellison
54a575c9bd [JIT] fix torch.tensor jit dtype (#36587)
Summary:
Previously we were always creating a double tensor from `torch.tensor(1.)`, whereas python eager uses the current default dtype. Fix for https://github.com/pytorch/pytorch/issues/36369
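A sketch of the behavior after this fix, where scripted code matches eager:
```
import torch

torch.set_default_dtype(torch.float32)

@torch.jit.script
def make() -> torch.Tensor:
    return torch.tensor(1.)

# Both eager and scripted now respect the current default dtype.
assert torch.tensor(1.).dtype == torch.float32
assert make().dtype == torch.float32
```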
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36587

Differential Revision: D21043617

Pulled By: eellison

fbshipit-source-id: 38da303594f52e06941d86b6e57c4a06e7d36938
2020-04-16 10:55:49 -07:00
Elias Ellison
9cbeb0faed [JIT] Dont optimize shape peepholes on inline (#36404)
Summary:
With https://github.com/pytorch/pytorch/pull/35562, we are running peephole optimization on inlining to reduce the number of nodes that are copied.

The tracer encodes the sizes in the graph like:
```
graph(%0 : Double(7)):
  %1 : Function = prim::Constant[name="tensor_size"]()
  %2 : Tensor = prim::CallFunction(%1, %0)
  return (%2)
```

However, people would like to reuse the graph with different shapes, so running the size optimizations would invalidate that. Long term it might be better for the tracer to not include shape information, but there are downstream users of it.

This PR separates out FuseAddMM from the peephole pass so that there is now a single `disable_size_optimizations` parameter, and ONNX explicitly invokes FuseAddMM.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36404

Differential Revision: D20968974

Pulled By: eellison

fbshipit-source-id: 56f8f1699e3b0adeeccdfd5a67bb975fd41a2913
2020-04-15 17:49:48 -07:00
davidriazati
8d66f88eb1 [jit] Fix bound method copying (#36546)
Summary:
Previously we were copying the bound method of the original class to the
new script module class, which causes `self` to be wrong. This PR
changes it so we fetch the unbound function, then bind it to the new
script module, then attach it to the module.
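A minimal sketch of the general rebinding pattern in plain Python (not the exact code from this PR):
```
import types

class A:
    def who(self):
        return self

a, b = A(), A()

# Copying the bound method keeps the original receiver:
b.who = a.who
assert b.who() is a            # wrong `self`

# Fetch the underlying function and rebind it to the new object:
b.who = types.MethodType(a.who.__func__, b)
assert b.who() is b            # correct `self`
```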

Fixes #28280
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36546

Pulled By: driazati

Differential Revision: D21023329

fbshipit-source-id: 6b3f8404700860151792f669a9c02fbd13365272
2020-04-15 17:38:20 -07:00
Lu Fang
67e0bf14b7 Add support of Dict as output when connecting script and tracing (#36265)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36265

Reviewed By: hl475

Differential Revision: D20927160

Pulled By: houseroad

fbshipit-source-id: 5a63022e92d234b97b57d60ef7f7aa3bc41c2d22
2020-04-14 16:06:53 -07:00
Wanchao Liang
999d7f6ab2 [jit] tracer flag to guard risky behaviors (#36277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277

This PR introduces a flag to the tracer that guards risky behaviors
like adding a list/dict as the output of the tracer. Currently, to avoid
breaking BC for users, we throw a warning if the tracer output is a list, and
will throw an error when the tracer output is a dict, to enforce using this
flag (next PR).

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
2020-04-13 22:35:03 -07:00
Nikita Shulga
fd008bd170 Make patterns in test_unmatched_annotations more flexible (#36422)
Summary:
To make them compatible with python3.7 and python3.8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36422

Test Plan: CI

Differential Revision: D21006399

Pulled By: malfet

fbshipit-source-id: 725df277ff3e4479fc2c39d16a30fbf301fde9e5
2020-04-13 17:53:37 -07:00
Wanchao Liang
3526627f46 Use unittest assertWarns instead (#36411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36411

This PR removes the PyTorch-specific assertWarns and uses the unittest
one; it also formats some tests.

Test Plan: Imported from OSS

Differential Revision: D20998159

Pulled By: wanchaol

fbshipit-source-id: 1280ecff2dd293b95a639d13cc7417fc819c2201
2020-04-13 15:56:42 -07:00
Elias Ellison
8cb1950805 [JIT] fix alias assertion (#36178)
Summary:
AnyType wasn't listed as a mutable type, so the assertion triggered (yay!). Also update the `isMutableTypeInternal(from) != isMutableTypeInternal` logic to be more encompassing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36178

Differential Revision: D20922356

Pulled By: eellison

fbshipit-source-id: 7060a62b18e98dc24b6004a66225c196aadb566e
2020-04-09 18:25:18 -07:00
Jerry Zhang
358466f1da [quant] Move graph mode quantization tests to test_quantize_script.py (#36324)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36324

Test Plan:
.

Imported from OSS

Differential Revision: D20948046

fbshipit-source-id: 2dd8f0c6fbe8fd84293420b97592dc586d25def9
2020-04-09 16:10:18 -07:00
Mike Ruberry
62f9312abd Revert D20783298: Fix naming of "strides" method in TensorType
Test Plan: revert-hammer

Differential Revision:
D20783298

Original commit changeset: 8fcc146284af

fbshipit-source-id: 30e3cb6d7a30d82048534d4d2e794b7e08ae01bb
2020-04-09 04:24:43 -07:00
David Reiss
16980e455f Fix naming of "strides" method in TensorType (#35170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35170

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.

Imported from OSS

Differential Revision: D20783298

fbshipit-source-id: 8fcc146284af022ec1afe8d651baf6721b190ad3
2020-04-08 15:59:28 -07:00
Edward Yang
83907ded1d Revert D20895316: [pytorch][PR] [JIT] List reland
Test Plan: revert-hammer

Differential Revision:
D20895316

Original commit changeset: 9a2bc0e6bdcb

fbshipit-source-id: d135f0038cf240a0973ecfcd540121cbd4ecb5a7
2020-04-08 14:40:10 -07:00
Elias Ellison
9ada7abc18 [JIT] fix comprehension scope writes (#36105)
Summary:
In a comprehension like:
```
    def f()->int:
        i = 1
        x = [i for i in range(7)]
        return i
```
the variables inside the comprehension do not write to the function environment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36105

Differential Revision: D20880699

Pulled By: eellison

fbshipit-source-id: 40af0f7470e0baeff7ef158cb461bf85c816d169
2020-04-08 10:00:45 -07:00
Elias Ellison
2afe171538 [JIT] List reland (#36146)
Summary:
Relanding https://github.com/pytorch/pytorch/pull/33783
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36146

Differential Revision: D20895316

Pulled By: eellison

fbshipit-source-id: 9a2bc0e6bdcbd43f9abe51eadaa28f90bccafcc9
2020-04-07 16:18:30 -07:00
Elias Ellison
6bc8ffe824 [JIT] Optimize before inlining (#35562)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/35424, only this time I run optimizations in the right order so the PR description is actually true.

This speeds up the inlining pass of FairSeq model from 180s -> 13s, and MaskRCNN model from 5s -> 1.5s.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35562

Differential Revision: D20738922

Pulled By: eellison

fbshipit-source-id: 1439cf9d1f0bc780e2d64a744694f8b3b7ba4b70
2020-04-07 09:42:26 -07:00
James Reed
3228939f23 [JIT] Fix fake_range() (#36083)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36083

Test Plan: Imported from OSS

Differential Revision: D20874745

Pulled By: jamesr66a

fbshipit-source-id: fc57defefbc8e9840b8d5bac89b4146179e00b06
2020-04-06 14:12:35 -07:00
davidriazati
71669f0249 Fix flake8 (#35968)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35968

Pulled By: driazati

Differential Revision: D20845617

fbshipit-source-id: 1b1cedb9c5c721f7f7edf94b91fbbb97d249bc2a
2020-04-03 14:02:37 -07:00
davidriazati
6e13a7787b [jit] Fix type comparisons segfault (#35929)
Summary:
Pybind will convert `None`s to `nullptr`s, so this adds a check to make
sure those don't get into the actual type comparison logic. Fixes #35778
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35929

Pulled By: driazati

Differential Revision: D20831278

fbshipit-source-id: 5800050e5eec280072afde58141ad00c1e8db8e2
2020-04-03 11:33:48 -07:00
Zachary DeVito
9097b55479 Propagate static_if more completely. (#35834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35834

This handles the cases we did not handle before in AND and OR statements:

    static_true || <unknown> -> static_true
    static_false && <unknown> -> static_false

Test Plan: Imported from OSS

Differential Revision: D20801125

Pulled By: zdevito

fbshipit-source-id: 0ef94c3a14c7af91580fc5248a4ccfd9e8d6d481
2020-04-02 11:44:34 -07:00
Michael Suo
866d9d4e6a [jit] Fix name collision on load (#35720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35720

When modules are saved, all relevant types are serialized according to
their qualified name with a compilation unit. Since qualified names are
guaranteed to be unique within a compilation unit, this normally works
fine.

On load, all types are registered in a compilation unit owned by the
script::Module. Type names are not unique across compilation units, so
if you load two modules with colliding type names, make them submodules
of yet another module, and save that module, there is the potential of a
name collision. See the added tests for examples if that description is
confusing.

The solution is to unique type names when serializing code by mangling
them if we detect a name collision.
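A toy sketch of uniquing by mangling (the suffix scheme here is illustrative, not the actual one used by the serializer):
```
import itertools

_counter = itertools.count()

def mangle(qualname, taken):
    # Keep the original name if it's free; otherwise append a unique suffix.
    new_name = qualname
    while new_name in taken:
        new_name = "{}.__mangled_{}".format(qualname, next(_counter))
    taken.add(new_name)
    return new_name

taken = set()
assert mangle("m.Foo", taken) == "m.Foo"
assert mangle("m.Foo", taken) == "m.Foo.__mangled_0"
```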

Test Plan: Imported from OSS

Differential Revision: D20749423

Pulled By: suo

fbshipit-source-id: a8827ff1d4a89f3e7964dbbb49b4381863da3e6a
2020-04-01 00:02:38 -07:00
Elias Ellison
1ec0676a33 [JIT] register list prim ops cleanup (#35768)
Summary:
This is a follow up from https://github.com/pytorch/pytorch/pull/34520, which removed specialized list ops. This removes templating from list ops.

It also has one other minor change: moving `aten::len(t[]) -> int` to `aten::len(Any[]) -> int` so that `len()` can be called on heterogeneous tuples.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35768

Differential Revision: D20772943

Pulled By: eellison

fbshipit-source-id: bc36a00920bc94ca8c5aa9eb7d5d7a640388ffbb
2020-03-31 19:24:59 -07:00
Jerry Zhang
9650f465ce [quant][graphmode] Quantization support for at::sort (#35571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35571

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20769874

fbshipit-source-id: 7d6805754416fd9c4a3d84d42af756e1926111c2
2020-03-31 14:54:16 -07:00
Jerry Zhang
4e19e02976 [quant][graphmode] Quantization support for quantized::add_scalar_relu and quantized::add_scalar_relu_out (#35509)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35509

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20742138

fbshipit-source-id: f6216d0af5da2bd5629aa4909f05dcde7853c8b8
2020-03-30 14:44:38 -07:00
Jerry Zhang
340048b67c [quant][graphmode] Remove unused patterns (#35385)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35385

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655298

fbshipit-source-id: bc5eda2640a809adb55d3d645c65fb02a6f2f444
2020-03-29 23:48:15 -07:00
Jerry Zhang
86be6443d8 [quant][graphmode] Quantization support for aten::conv3d (#35347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35347

Test Plan:
python test/test_jit.py TestJit.test_quantized_conv3d

Imported from OSS

Differential Revision: D20655304

fbshipit-source-id: 2ab6a977eda9064fbb8051669738f37b90f13b6f
2020-03-29 17:39:06 -07:00
Jerry Zhang
efec027653 [quant][graphmode] prepare_script takes original qconfig_dict (#35335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35335

We'll script the qconfig_dict in `prepare_script`

Test Plan:
regression tests in `python test/test_jit.py`

Imported from OSS

Differential Revision: D20655311

fbshipit-source-id: 002bfd905ff9a9b298a8073d42e12cfffcd1eb71
2020-03-28 18:36:46 -07:00
Jerry Zhang
444332710c [quant][graphmode] Quantization support for quantized::add_scalar (#35334)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35334

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655299

fbshipit-source-id: 66e1fa215a4a40f40dc7abe442c05bb5b6b20cfe
2020-03-28 14:00:44 -07:00
Nick Korovaiko
76d5102587 add a cuda/fuser job for legacy graph executor (#35419)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35419

Differential Revision: D20719013

Pulled By: Krovatkin

fbshipit-source-id: 745d9523a5a9b7b4b556a075351ea58a82501dff
2020-03-28 12:11:18 -07:00
Jerry Zhang
f1d69cb2f8 [quant][graphmode] Quantization support for permute and repeat_interleave (#35332)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35332

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655306

fbshipit-source-id: 43dce62ce178d5c7e68b27fd88ed5d2958014c7b
2020-03-27 22:40:25 -07:00
Jerry Zhang
df27b32014 [quant][graphmode] Make interpolate/upsample work again (#35130)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35130

Test Plan:
python test/test_jit.py TestJit.test_swap_dequantize_all_ops

Imported from OSS

Differential Revision: D20655303

fbshipit-source-id: 5ad8c6de28bcabffdfab4c9bc6a61f19f1d061cc
2020-03-27 22:38:57 -07:00
Jerry Zhang
76a8d30693 [quant][graphmode] Fold quantized prepacking ops (#35077)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35077

Fold the prepack ops: `quantized::linear_prepack` and `quantized::conv2d_prepack` after
`freeze`

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655301

fbshipit-source-id: fbb4223323f788c88db7b55cfafda46fad106d49
2020-03-27 17:51:51 -07:00
Nikolay Korovaiko
9e22d15f14 Enable tensorexpr cpp tests in CI. try #2 (#35454)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35454

Differential Revision: D20665160

Pulled By: Krovatkin

fbshipit-source-id: e04cbe92b2ee5a3288f3c4e5c83533bfea85bf85
2020-03-27 12:09:55 -07:00
Martin Yuan
da4e68faed Make operator names consistent between export_opnames and the lite interpreter (#34674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34674

Two changes to make sure the op_names dumped in export_opnames() are consistent with what is actually used in bytecode.
* Inline graph before dumping the operator names.
* Use code of the graph (which is used in bytecode) instead of the nodes of graph.

Test Plan: Imported from OSS

Differential Revision: D20610715

Pulled By: iseeyuan

fbshipit-source-id: 53fa9c3b36f4f242b7f2b99b421f4adf20d4b1f6
2020-03-26 22:50:59 -07:00
Ailing Zhang
77bbbf042d [JIT] Support converting str to float. (#35352)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35352

Differential Revision: D20649286

Pulled By: ailzhang

fbshipit-source-id: e9b09bddd0fe3c962a7514d45fd069cd0b4e6df1
2020-03-26 20:24:59 -07:00
Edward Yang
e0c227d376 Revert D20655246: [jit] add module interface tests to test_jit
Test Plan: revert-hammer

Differential Revision:
D20655246

Original commit changeset: 9e1f865b3f2d

fbshipit-source-id: 241f10738df714efb662f1c53551617dd1558b13
2020-03-26 06:53:19 -07:00
Suraj Menon
aa01a95c6d Revert D20630760: [pytorch][PR] Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP]
Test Plan: revert-hammer

Differential Revision:
D20630760

Original commit changeset: 7d2f27aca6b1

fbshipit-source-id: 28ac92b3390651a4a67061d6ebf208515b9b9463
2020-03-25 20:34:46 -07:00
Nikolay Korovaiko
f3a5081bd4 Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP] (#34897)
Summary:
This PR adds the tensorexpr cpp tests to test_jit.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34897

Differential Revision: D20630760

Pulled By: Krovatkin

fbshipit-source-id: 7d2f27aca6b1e23e3ffed1c765d8f590688118e3
2020-03-25 17:23:48 -07:00
Jerry Zhang
ccc0e35275 [quant][graphmode] quantization support for prim::CallFunction (#34855)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34855

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655305

fbshipit-source-id: 44cc3525967048fb9d9c145b342ac7d76b22e4db
2020-03-25 17:17:19 -07:00
Wanchao Liang
d7c255d2fc [jit] add module interface tests to test_jit (#35417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35417

Surprised it's not getting run by test_jit; added it.

Test Plan: Imported from OSS

Differential Revision: D20655246

Pulled By: wanchaol

fbshipit-source-id: 9e1f865b3f2d23b63d4d605aaf2dc3a483a4f0e1
2020-03-25 15:25:28 -07:00
Jerry Zhang
15e5453977 [reland][quant][graphmode] Add quantization support for aten::cat (#34346) (#35337)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35337

Test Plan: python test/test_jit.py

Differential Revision: D20648201

Pulled By: jerryzh168

fbshipit-source-id: f6570c3ee2f48a9bc6373d2af873824ac2c8ef62
2020-03-25 12:45:21 -07:00
Elias Ellison
5b2f8cef08 [JIT] Functional Graph Pass (#33020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33020

This is a pass to create functional blocks. The other PRs in the stack help avoid some of the limitations that are often found in graphs. It's possible that this would work well with a graph that is frozen. Follow-up work items that will help this pass:

- We don't currently have any capacity in alias analysis to tell whether a Value that came from the wildcard set "re-escapes" back into the wildcard set.
- More comments on the semantics of the graph and correctness conditions
- We could consider using dynamic dag if the perf of this is a limitation.
- Potentially make Functional Graphs Functional Blocks instead, so that we do not repeatedly copy constants, and to make the IR easier to read.

Test Plan: Imported from OSS

Differential Revision: D20603188

Pulled By: eellison

fbshipit-source-id: 6822a6e65f4cc2676f8f6445fe8aa1cb858ebeeb
2020-03-24 23:44:18 -07:00
Alban Desmaison
ee7cd84fac Revert D20589145: [quant][graphmode] Add quantization support for aten::cat
Test Plan: revert-hammer

Differential Revision:
D20589145

Original commit changeset: c9159fffa88c

fbshipit-source-id: c6b8db13ed1ed19f4437b2fa70d88ce139d445e1
2020-03-24 16:24:22 -07:00
Jerry Zhang
6b5740c5f6 [quant][graphmode] Add quantization support for aten::cat (#34346)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34346

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20589145

fbshipit-source-id: c9159fffa88cf25fcdccfcc4eef2622cf4b250b5
2020-03-24 13:56:43 -07:00
davidriazati
44622bbda9 [jit] Add lazy script decorator (#34935)
Summary:
Stacked PRs
 * #34938 - [jit] Remove stray `script`
 * **#34935 - [jit] Add lazy script decorator**

Some users maintain libraries of code that is largely traceable but not
scriptable. However, some functions may need to be `torch.jit.script`ed if
they contain control flow, so that the tracer will use the compiler version.
This, however, impacts library startup time as in #33418, so this PR adds
a workaround in the form of a `torch.jit._lazy_script_while_tracing`
decorator that will only initialize the compiler if the function is called while
actually tracing.
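A hypothetical usage sketch (the decorator is internal, so the name and exact behavior may change):
```
import torch

@torch.jit._lazy_script_while_tracing
def loop_body(x, n: int):
    # Contains control flow, so the tracer needs the compiled version,
    # but compilation is deferred until this is first hit during tracing.
    for _ in range(n):
        x = x + 1
    return x
```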

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34935

Pulled By: driazati

Differential Revision: D20569778

fbshipit-source-id: d87c88c02b1abc86b283729ab8db94285d7d4853
2020-03-24 13:43:18 -07:00
James Reed
618c6214aa [reapply][JIT] Namespaces for TorchBind (#35254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35254

Reapply D20541090 with some BC fixes
ghstack-source-id: 100733987

Test Plan: buck test mode/dev-nosan //caffe2/torch/fb/predictor/model_repo/tests:ai_infra_representative_model_shard_6_test -- 'RepresentativeModelTest\/ShardedRepresentativeModelTest\.RunModel\/0'

Reviewed By: zdevito

Differential Revision: D20607111

fbshipit-source-id: 80f148d860571208c93e9308128cd480ff089f74
2020-03-24 00:39:48 -07:00
Jerry Zhang
537fdd77d5 [quant][graphmode] quantization support for view, transpose, contiguous (#34854)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34854

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524456

fbshipit-source-id: e6e8fc3db6cccbd32c210d04f921274d81996fe2
2020-03-23 22:33:39 -07:00
Jerry Zhang
4a96911629 [quant][graphmode] quantization support for aten::chunk (#34806)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34806

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524454

fbshipit-source-id: 92ac9bc251581e963258cb90dc3de73f8508c822
2020-03-23 22:33:34 -07:00
Jerry Zhang
ac4a0224f3 [quant][graphmode] Replicate quantize node for prim::If (#34804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34804

We want to replicate the quantize node for return values in blocks of prim::If
in order to create the quantization patterns.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524453

fbshipit-source-id: 2268ac555f646158f4e1ffc98ccc8101d7504194
2020-03-23 21:20:45 -07:00
Jerry Zhang
eff68bc872 [quant][graphmode] quantization support for aten::add (#34572)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34572

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519607

fbshipit-source-id: c57e062cffc24a47a76b73b58aff7f9ef80183fa
2020-03-23 17:52:28 -07:00
Elias Ellison
7ab25b2e6b [JIT] add id function (#34975)
Summary:
Add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value that can be zero. Python also only guarantees that `id()` gives semantically meaningful results for pointer types.

EDIT: now only allowing id on class types
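In eager Python this is the familiar identity-based dedup pattern that the scripted `id` enables; a minimal eager sketch:
```
import torch

mods = [torch.nn.Linear(2, 2)]
mods.append(mods[0])              # the same module object, referenced twice
mods.append(torch.nn.Linear(2, 2))

seen, unique = set(), []
for m in mods:
    if id(m) not in seen:         # identity, not structural equality
        seen.add(id(m))
        unique.append(m)
assert len(unique) == 2
```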
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Reviewed By: driazati

Differential Revision: D20599564

Pulled By: eellison

fbshipit-source-id: 3c6666a9b9b0258198adc70969dd6332e3375e4f
2020-03-23 17:10:13 -07:00
Jerry Zhang
a00e12e755 [quant][graphmode] weight/bias of linear/conv can be reused for multiple ops (#35221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35221

When weight is reused, we only need to insert one observer for the weight

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20602492

fbshipit-source-id: e003e6316f6615f3526f0d00fb7b722148b4749b
2020-03-23 14:21:59 -07:00
Elias Ellison
4fae5a6721 Move module graph creation to testing utils (#34917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34917

Test Plan: Imported from OSS

Differential Revision: D20539338

Pulled By: eellison

fbshipit-source-id: 5c46c0ce50e5bcccf5abee264f432ded7d36d040
2020-03-23 11:59:02 -07:00
Elias Ellison
77ccb5c14d Move functional graph creation to testing utils (#34916)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34916

Test Plan: Imported from OSS

Differential Revision: D20539337

Pulled By: eellison

fbshipit-source-id: 9b777e369facebbe68fe198ca3eec055cf9c5257
2020-03-23 11:57:25 -07:00
Jerry Zhang
3e4076aa9c [quant][graphmode] quantization work for prim::If (#34518)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34518

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519606

fbshipit-source-id: 94d49e18d97df642cbcb446df12376f6d2a397bc
2020-03-23 09:54:24 -07:00
albanD
0e0386b434 Revert "[JIT] add id function (#34975)" (#35209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35209

This reverts commit 62f11f0a35.

Test Plan: Imported from OSS

Differential Revision: D20596847

Pulled By: albanD

fbshipit-source-id: e6777e42356aac772e59f0466a92cc13258218c1
2020-03-23 08:42:09 -07:00
Jerry Zhang
28bf0038e5 [quant][graphmode][fix] Insert dequantize before use node (#34411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34411

To make sure dequantize and the node that uses the dequantized value reside in the same
block, so that we can do quant fusion.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519603

fbshipit-source-id: 3e4c68d0a73142716e19ea6a64ae3a5d6d51fa41
2020-03-23 08:07:33 -07:00
Lu Fang
a100cf5146 Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
Test Plan: revert-hammer

Differential Revision:
D20541090

Original commit changeset: ce3d9391dd3c

fbshipit-source-id: acc1d660fbda611941381315507dfe594c385db1
2020-03-21 12:20:44 -07:00
James Reed
e0496a70fc [JIT][torchbind] Namespaces for torchbind classes (#35054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35054

Test Plan: Imported from OSS

Differential Revision: D20541090

Pulled By: jamesr66a

fbshipit-source-id: ce3d9391dd3cdf619042b8f6ba2645f4c1fc875c
2020-03-20 20:07:02 -07:00
Kimish Patel
3e58cba3c5 Fixes the Conv2d batch_norm folding for various cases. (#34932)
Summary:
This PR adds a preprocessing step to Conv2d-BatchNorm folding.
It traverses the module to check if the bias of a Conv2d module is set to
None. If it is, it assumes that this is a traced module and inserts an
Optional[Tensor]-typed bias.
Furthermore, it inserts a getAttr for the bias in the forward graph and fixes
the _convolution op to take values from the getAttr.
It also fixes parameter extraction from a BN module, which may not
have weight and bias attributes if affine was set to False. In scripted
mode such a BN module will get weight and bias attributes set to None.
eps gets constant-propagated in tracing; this is also fixed.
A few test cases are added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34932

Test Plan:
python test/test_jit.py TestJit.test_foldbn_trivial
python test/test_jit.py TestJit.test_foldbn_trivial_nobias
python test/test_jit.py TestJit.test_foldbn_in_submodule
python test/test_jit.py TestJit.test_foldbn_shared_classtype
python test/test_jit.py TestJit.test_foldbn_complex_cases
python test/test_jit.py TestJit.test_nofoldbn_complex_cases

Differential Revision: D20536478

Pulled By: kimishpatel

fbshipit-source-id: 4e842976a380d0575a71001bb4481390c08c259e
2020-03-20 20:06:44 -07:00
Elias Ellison
62f11f0a35 [JIT] add id function (#34975)
Summary:
Add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value that can be zero. Python also only guarantees that `id()` gives semantically meaningful results for pointer types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Differential Revision: D20549677

Pulled By: eellison

fbshipit-source-id: cca5ed4ef013f7540f93abf49f91f9830dfdca14
2020-03-20 20:03:10 -07:00
Elias Ellison
bcbde490e4 Fix flake (#34974)
Summary:
fix flake, add overload names
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34974

Differential Revision: D20519191

Pulled By: eellison

fbshipit-source-id: d08d36b64397287cad484690074e694d8a0e472e
2020-03-18 16:45:33 -07:00
Jerry Zhang
b2e5e0cad6 [quant][graphmode] quantization support for aten::reshape (#34803)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34803

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504457

fbshipit-source-id: 5ca691ef4880c72d30d62390e63e3288b2f06dce
2020-03-18 15:40:43 -07:00
Jerry Zhang
d77d907f0e [quant][graphmode] Add quantization support for aten::dropout (#34347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34347

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504453

fbshipit-source-id: 1bab29e21d0564ed88cdeb4894addfe00ebbd390
2020-03-18 14:35:27 -07:00
Michael
f3b8a470e1 Added functionality for all to take Lists as input (#34582)
Summary:
New pull request after rebase error in pull request https://github.com/pytorch/pytorch/issues/33923
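A minimal sketch of the added capability (assuming the standard scripting API):
```
import torch
from typing import List

@torch.jit.script
def all_positive(xs: List[int]) -> bool:
    return all([x > 0 for x in xs])

assert all_positive([1, 2, 3])
assert not all_positive([1, -2, 3])
```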
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34582

Differential Revision: D20447689

Pulled By: eellison

fbshipit-source-id: 4296b64185eccb136b1b614b532deb3af20c7544
2020-03-18 12:01:30 -07:00
Jerry Zhang
841f7600bb [quant][graphmode] Quantization pattern for aten::linear (#33854)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33854

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20493031

fbshipit-source-id: bafd0a3ba5d07327d451b3915f043db33b012b53
2020-03-17 16:36:30 -07:00
Owen Anderson
a4224886f3 Eliminate guards through max_pool ops. (#34512)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34512

Differential Revision: D20478962

Pulled By: resistor

fbshipit-source-id: 86fc926305f95cae8b334ed344d8e0cdd1ef7b2b
2020-03-17 14:00:00 -07:00
James Reed
699a4ed8f5 [testing][do not land] (#34605)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34605

Test Plan: Imported from OSS

Differential Revision: D20393219

Pulled By: jamesr66a

fbshipit-source-id: c74d886f5f01061294203a002b72b75a3c446f09
2020-03-16 23:56:00 -07:00
peter
24c9e61e79 Enable JIT tests on Windows (#27029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27029

Reviewed By: eellison

Differential Revision: D20458664

Pulled By: jamesr66a

fbshipit-source-id: 22be918543703869f471e89b3478423198351bf3
2020-03-16 11:26:21 -07:00
Jerry Zhang
cec9758afa [quant][graphmode] Add quantization pattern for quantized::add_relu (#33532)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33532

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354880

fbshipit-source-id: ea608a5ace395a909851f9e577ffdcb51512a3af
2020-03-16 10:20:57 -07:00
Tugrul Ince
08bc3c6cbf Remove unnecessary import (#34778)
Summary:
https://github.com/pytorch/pytorch/issues/34563 accidentally introduced a lint error due to an unused import. This PR removes this import.

Jit tests run as expected after this change:
```
> python test/test_jit.py
.....
Ran 2435 tests in 100.077s

OK (skipped=140, expected failures=1)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34778

Differential Revision: D20459708

Pulled By: tugrulince

fbshipit-source-id: bb742085fafc849ff3d9507d1557556e01fbeb4b
2020-03-15 09:56:55 -07:00
Jerry Zhang
5710374e4e [reland][quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279) (#34744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34744

att

Test Plan: python test/test_jit.py

Differential Revision: D20449667

Pulled By: jerryzh168

fbshipit-source-id: 01bbc26604fac421dcaacaf4fa1b57731f1f08b7
2020-03-14 01:03:18 -07:00
Zachary DeVito
52005b551c invokeOperatorFromPython: support overloaded operator calling (#34671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34671

Like the python arg parser, this tries to convert to each schema in order.
It introduces schema_match_exception, which gets thrown when the schema doesn't match,
allowing the overload handler to try the next option.

Behavior will not 100% match the python arg parser but should work for
simple cases using custom bindings.
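The dispatch strategy, sketched in Python with a hypothetical stand-in for the C++ schema_match_exception:
```
class SchemaMatchError(Exception):
    """Hypothetical stand-in for the C++ schema_match_exception."""

def invoke_overloaded(overloads, args, kwargs):
    # Try each registered overload in order, like the python arg parser.
    for op in overloads:
        try:
            return op(*args, **kwargs)
        except SchemaMatchError:
            continue  # this schema didn't match; try the next one
    raise RuntimeError("no overload matched the given arguments")
```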

Test Plan: Imported from OSS

Differential Revision: D20432206

Pulled By: zdevito

fbshipit-source-id: 280839a2205ea3497db3a9b5741fccc1e2bff9a8
2020-03-13 18:46:03 -07:00
Jerry Zhang
e7910aa9e5 [fix] use non-inplace for insert observer pass (#34190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34190

inplace modification of ClassType might affect other tests, so we want to do non-inplace modifications.
Actually the inplace argument will be removed soon.

Test Plan:
ci

Imported from OSS

Differential Revision: D20451765

fbshipit-source-id: e87ad528c4e7f84f5774b94a8e3e85568269682d
2020-03-13 17:25:07 -07:00
Tugrul Ince
c9023e3b12 Support left and right shift operators in JIT (#34563)
Summary:
With this PR, we can now support left and right shift operators in the JIT engine for <int, int> and <Tensor, int>.

Updated tests pass as expected:
```
> python test/test_jit.py
...
Ran 2427 tests in 84.861s

OK (skipped=139, expected failures=1)
```

Running the following code with Python results in the output below:
```
> cat ~/expressions.py
import torch

torch.jit.script
def fn(a, b):
    # type: (int, int)
    return (
        a << b,  # supported
        b >> a,  # supported
        a & b,
        a | b,
        a ^ b
    )
print(fn.graph)
```

```
> python ~/expressions.py
graph(%a.1 : int,
      %b.1 : int):
  %4 : int = aten::leftshift(%a.1, %b.1) # /home/ince/expressions.py:7:8
  %7 : int = aten::rightshift(%b.1, %a.1) # /home/ince/expressions.py:8:8
  %10 : int = aten::__and__(%a.1, %b.1) # /home/ince/expressions.py:9:8
  %13 : int = aten::__or__(%a.1, %b.1) # /home/ince/expressions.py:10:8
  %16 : int = aten::__xor__(%a.1, %b.1) # /home/ince/expressions.py:11:8
  %17 : (int, int, int, int, int) = prim::TupleConstruct(%4, %7, %10, %13, %16)
  return (%17)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34563

Differential Revision: D20434209

Pulled By: tugrulince

fbshipit-source-id: 886386c59755106e17b84778b8e495b80a6269cd
2020-03-13 13:00:33 -07:00
Jerry Zhang
e9a660a160 Revert D20354878: [quant][graphmode] Add quantized conv2d-relu fusion pattern
Test Plan: revert-hammer

Differential Revision:
D20354878

Original commit changeset: 2b19797d4b3f

fbshipit-source-id: 18f447074794af0d579e145df02af47d01746921
2020-03-12 21:29:08 -07:00
Jerry Zhang
0ff4d37933 [quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33279

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354878

fbshipit-source-id: 2b19797d4b3fd96918164a58bfbd768211ad6c6d
2020-03-12 19:49:57 -07:00
Jerry Zhang
90ca7a1feb [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33927

Test Plan:
test will be added in later PRs

Imported from OSS

Differential Revision: D20354879

fbshipit-source-id: 03976f4b86c46dbdc4e45764a1e72f1a3855a404
2020-03-12 14:52:58 -07:00
Elias Ellison
514cba0661 [JIT] remove builtin interpolate functions (#34514)
Summary:
`torch.nn.functional.interpolate` was written as a builtin op when we scripted the standard library, because it has four possible overloads. As a result, whenever we make a change to `interpolate`, we need to make changes in two places, and it also makes it impossible to optimize the interpolate op. The builtin is tech debt.

I talked with ailzhang, and the symbolic script changes are good to remove (i guess that makes a third place we needed to re-implement interpolate).

I'm trying to get rid of unnecessary builtin operators because we're standardizing mobile bytecode soon, so we should try to get this landed as soon as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34514

Differential Revision: D20391089

Pulled By: eellison

fbshipit-source-id: abc84cdecfac67332bcba6b308fca4db44303121
2020-03-12 09:21:33 -07:00
James Reed
1f834b5c2a [JIT] Torchbind error if Python instantiates a class that doesn't exist (#34568)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34568

Test Plan: Imported from OSS

Differential Revision: D20378106

Pulled By: jamesr66a

fbshipit-source-id: 395a3b05d23727b9cfd074440b2d0e8ef002ec09
2020-03-11 13:13:08 -07:00
ettiee
2cf576e9ea small typos (#34589)
Summary:
Spotted a couple of small typos 🙏
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34589

Differential Revision: D20387653

Pulled By: ngimel

fbshipit-source-id: 3089fe606ccb8c8ee57cf7a900aba714fd0ce567
2020-03-11 11:01:31 -07:00
Nikolay Korovaiko
e16908cb1f profile block outputs; helps guard elimination (#33889)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33889

Reviewed By: zdevito

Differential Revision: D20294979

Pulled By: Krovatkin

fbshipit-source-id: 2a68710ec8f8f854c99dfe173f49da442a39e498
2020-03-09 17:12:58 -07:00
Nikolay Korovaiko
0a4a558c2c Dictionary Constants (#32869)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32869

Differential Revision: D19909339

Pulled By: Krovatkin

fbshipit-source-id: 6fe2a9b470768f84b957c69cdf9af3a1bd9b1ca9
2020-03-09 16:12:36 -07:00
Jerry Zhang
2e7eef41ac [quant][graphmode] Swap quantized functional linear with aten::linear (#33853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33853

Quant fusion relies on inlining, but inlining will break the CallFunction("linear", ...) into an if block.
It would be hard to recognize this block and swap it with quantized::linear, so in order to
preserve the op we swap all quantized functional linear calls with aten::linear.
They might produce different backward graphs, but this is called in the step before we get the quantized
model, so it shouldn't affect anything.
We'll integrate this with convert_script later in the new "finalize_quant" API.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20343873

fbshipit-source-id: 423e03bf893b79267d2dc97bc997ee1bfe54ec0f
2020-03-09 15:45:20 -07:00
davidriazati
2c0f3536b6 [jit] Make ModuleLists a sugared value (#34320)
Summary:
Previously when emitting subscripts we only emitted actual values, but
now they may sometimes emit a `ModuleValue`, so it should stay as a
`SugaredValue`. This allows for the result of the subscript to be
treated as a real module (i.e. you can just do `self.modlist[1](inputs)`
instead of `self.modlist[1].forward(inputs)`)
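A minimal sketch of the now-supported direct call:
```
import torch

class Stack(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(4, 4) for _ in range(2)])

    def forward(self, x):
        x = self.layers[0](x)  # direct call; previously needed .forward(x)
        return self.layers[1](x)

scripted = torch.jit.script(Stack())
print(scripted(torch.randn(1, 4)).shape)
```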
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34320

Pulled By: driazati

Differential Revision: D20345642

fbshipit-source-id: 2bedf9a454af747b704422f6bbb8370cbdf4bf61
2020-03-09 15:36:46 -07:00
Jerry Zhang
776d2a1e8f [quant][graphmode] Handling ops doesn't require observation in insertObservers (#33481)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33481

We have to propagate the observed property of values through ops like max_pool2d and flatten, and
avoid inserting duplicated observers.
For example:
```
x1 = self.conv(x)
x2 = maxpool(x1)
x3 = self.conv(x2)
```
If x1 is observed, we should propagate this information through maxpool and
we should consider x2 as observed as well.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20261897

fbshipit-source-id: 7de354a3ccb2b6e1708f5c743d4d9f7272691a93
2020-03-09 13:15:54 -07:00
Adam Paszke
e3d50c4dda Retain the order of parameters while generating ConcreteModuleTypes (#34131)
Summary:
`ConcreteModuleTypeBuilder` used to keep parameters together with all other attributes in an `unordered_map`, often reordering them while building up the type. Parameter order is semantically meaningful, so we need to preserve it.
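
A hedged illustration of why this matters (hypothetical module, not from the PR): anything that iterates parameters should see the same order on the scripted module as on the eager one.

```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(2, 2))
        self.bias = nn.Parameter(torch.randn(2))

    def forward(self, x):
        return x @ self.weight + self.bias

m = M()
s = torch.jit.script(m)
# With an unordered map, "bias" could have come out before "weight";
# after this change the scripted order matches eager registration order.
print([name for name, _ in m.named_parameters()])
print([name for name, _ in s.named_parameters()])
```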
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34131

Differential Revision: D20331542

Pulled By: suo

fbshipit-source-id: 5b860025f7902654d6099751d3fb14b12f6f5a67
2020-03-09 10:25:45 -07:00
Shen Li
30680196e4 Revert D20121915: [JIT] Add support for list()
Test Plan: revert-hammer

Differential Revision:
D20121915

Original commit changeset: c6c4ef444dbf

fbshipit-source-id: 829adb58780f4d0f41acebb3e7640a9c68bdbc1b
2020-03-06 07:16:40 -08:00
Elias Ellison
38857734f0 [JIT] fix py35 test (#34350)
Summary:
`test_module_interfaces` was using syntax only supported on Python >= 3.6.
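
For context, a couple of constructs that parse only on Python >= 3.6 (illustrative; the commit doesn't say which one the test used):

```
# PEP 526 variable annotations and PEP 498 f-strings are both
# Python >= 3.6 features; on 3.5 either line is a SyntaxError.
count: int = 0
message = f"count = {count}"
```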
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34350

Reviewed By: mrshenli

Differential Revision: D20298869

Pulled By: eellison

fbshipit-source-id: 22319ca403113cff2eedf57767bb34d9580e6db3
2020-03-05 21:31:19 -08:00
Elias Ellison
78aebbcb88 [JIT] add other module apis (#34106)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34106

Test Plan: Imported from OSS

Differential Revision: D20283996

Pulled By: eellison

fbshipit-source-id: 88e7bc4547e96717d6c8efe0b25ede0d198d9e68
2020-03-05 16:12:29 -08:00
Elias Ellison
f218842f2e [JIT] Add support for list() (#33818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33818
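
A hedged guess at the surface this covers (the commit body doesn't spell it out): `list(...)` applied to an existing list inside TorchScript, e.g. to make a shallow copy.

```
import torch
from typing import List

@torch.jit.script
def copy_list(xs: List[int]) -> List[int]:
    # list(...) over an existing list makes a shallow copy
    return list(xs)

print(copy_list([1, 2, 3]))
```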

Test Plan: Imported from OSS

Differential Revision: D20121915

Pulled By: eellison

fbshipit-source-id: c6c4ef444dbf1d4134dccb28c13315e225945b64
2020-03-05 14:48:20 -08:00
Elias Ellison
479c3b0aa5 [JIT] add support for torch.norm (#33783)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33783

Fix for https://github.com/pytorch/pytorch/issues/20113
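
A minimal usage sketch, assuming the fix makes the keyword-argument overloads of `torch.norm` scriptable (per the linked issue):

```
import torch

@torch.jit.script
def row_norms(x: torch.Tensor) -> torch.Tensor:
    # torch.norm with p and dim keyword arguments inside TorchScript
    return torch.norm(x, p=2, dim=1)

print(row_norms(torch.randn(3, 4)))
```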

Test Plan: Imported from OSS

Differential Revision: D20121917

Pulled By: eellison

fbshipit-source-id: ffedcc40678cd80f5529ff9323088eed544e5158
2020-03-05 14:46:24 -08:00
Jerry Zhang
6f52562e75 [quant][graphmode] Add add_relu pattern in skip values (#32816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32816

As titled.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208786

fbshipit-source-id: ef84b77f46f88b192a75c123aabaa203836a7dfb
2020-03-04 09:36:02 -08:00
Jerry Zhang
e5bbd23ca7 [quant][graphmode] Skip quantizing input and output in matched module (#32814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32814

We skip quantization for the intermediate values of patterns like `Conv - ReLU`, but currently we don't skip quantizing the input/output of the graphs of matched modules. Since we have now changed the way we add observers, this also needs to be updated.
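
A hypothetical module of the matched shape, to make the intent concrete:

```
import torch
import torch.nn as nn

class ConvReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # For a fused Conv-ReLU pattern only the input of conv and the
        # output of relu need observers; the value between conv and relu
        # (and, per this change, the matched submodule graphs' own
        # inputs/outputs) should not get extra observers.
        return self.relu(self.conv(x))

print(ConvReLU()(torch.randn(1, 3, 4, 4)).shape)
```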

Test Plan:
python test/test_jit.py -- 'TestJit.test_insert_observers_skip_values'

Imported from OSS

Differential Revision: D20208785

fbshipit-source-id: ce30f2c4c8ce737500d0b41357c80ec8b33aecf9
2020-03-03 18:38:36 -08:00
Elias Ellison
04378eb618 [JIT] Add modulelist indexing for integer literal (#29236)
Summary:
Allow indexing into ModuleLists with integer literals.
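
An illustrative example (not from the PR) of what now compiles:

```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 4)])

    def forward(self, x):
        # Indexing with an integer *literal* lets the compiler resolve the
        # submodule statically; a runtime-computed index would not work.
        return self.layers[1](self.layers[0](x))

print(torch.jit.script(M())(torch.randn(2, 4)))
```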
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29236

Differential Revision: D19583935

Pulled By: eellison

fbshipit-source-id: 24d54051422a69769dac5e82f3bf622ded2bd8a6
2020-03-03 14:47:31 -08:00
Jerry Zhang
f26bbb5f86 [fix] flake8 lint error (#34146)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34146

Test Plan:
.

Imported from OSS

Differential Revision: D20228830

fbshipit-source-id: 41de3c27c10256939ae6309d25b0499f708a3dca
2020-03-03 13:15:27 -08:00
Zachary DeVito
358450e02b improved TorchScript traceback (#33834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33834

This changes how we report tracebacks to make them clearer when
there are both serialized and non-serialized ranges. It now looks like:

```
Traceback (most recent call last):
  File "foo.py", line 25, in <module>
    s2(a, b)
  File "/scratch/zdevito/pytorch/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__.py", line 7, in forward
    x: Tensor,
    y: Tensor) -> Tensor:
    return (self).bar(x, y, )
            ~~~~~~~~~ <--- HERE
  def bar(self: __torch__.Moo,
    x: Tensor,
  File "code/__torch__.py", line 11, in bar
    x: Tensor,
    y: Tensor) -> Tensor:
    _0 = (self).baz(x, y, )
          ~~~~~~~~~ <--- HERE
    _1 = torch.ones([3], dtype=None, layout=None, device=None, pin_memory=None)
    return torch.add(_0, _1, alpha=1)
  File "code/__torch__.py", line 17, in baz
    x: Tensor,
    y: Tensor) -> Tensor:
    return torch.add(x, y, alpha=1)
           ~~~~~~~~~ <--- HERE

Traceback of TorchScript, original code (most recent call last):
  File "foo.py", line 11, in forward
    def forward(self, x, y):
        return self.bar(x, y)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 9, in bar
    def bar(self, x, y):
        return self.baz(x, y) + torch.ones(3)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 7, in baz
    def baz(self, x, y):
        return x + y
               ~~~~~ <--- HERE
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1
```

This follows the Python convention of putting the most important information
last and reading from the bottom up.
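
A hedged reconstruction of the `foo.py` driver that would produce a traceback like the one above, inferred from the frames shown (it is not included in the commit):

```
import torch

class Moo(torch.nn.Module):
    def baz(self, x, y):
        return x + y

    def bar(self, x, y):
        return self.baz(x, y) + torch.ones(3)

    def forward(self, x, y):
        return self.bar(x, y)

s = torch.jit.script(Moo())
s.save("moo.pt")
s2 = torch.jit.load("moo.pt")  # serialized ranges come from the saved code

a = torch.rand(3, 4)
b = torch.rand(3, 5)  # mismatched dim 1 triggers the error shown above
s2(a, b)
```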

Changes:
* Moved the error message to the end, to match Python
* Report the original traceback separately from the serialized traceback
* Make sure root functions have names in the interpreter trace.

Test Plan: Imported from OSS

Differential Revision: D20126136

Pulled By: zdevito

fbshipit-source-id: fd01f9985e5d74e04c4d064c02e8bc320f4fac13
2020-03-03 12:27:38 -08:00