Commit Graph

1913 Commits

Author SHA1 Message Date
Ilia Cherniavskii
e7a09b4d17 RecordFunction in Dispatcher (#37587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37587

Lifting RecordFunction up into the dispatcher code

Test Plan: Imported from OSS

Differential Revision: D21374246

fbshipit-source-id: 19f9c1719e6fd3990e451c5bbd771121e91128f7
2020-07-17 22:20:05 -07:00
Meghan Lele
f85a27e100 [JIT] Replace "blacklist" in test_jit.py (#41453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41453

**Test Plan**
`python test/test_jit.py`

**Fixes**
This commit partially addresses #41443.

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D22544268

Pulled By: SplitInfinity

fbshipit-source-id: 8b6b94211a626209c3960fda6c860593148dcbf2
2020-07-17 11:30:27 -07:00
Mikhail Zolotukhin
5d7046522b [JIT] Teach IRPrinter and IRParser to handle 'requires_grad' and 'device' as a part of type info. (#41507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41507

These fields have always been a part of tensor types, this change just
makes them serializable through IR dumps.

Test Plan: Imported from OSS

Reviewed By: Krovatkin, ngimel

Differential Revision: D22563661

Pulled By: ZolotukhinM

fbshipit-source-id: f01aaa130b7e0005bf1ff21f65827fc24755b360
2020-07-17 10:27:04 -07:00
Yuxin Wu
488ee3790e Support @torch.jit.unused on a @torch.no_grad decorated function (#41496)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41496

Use the wrapped function (instead of the wrapper) to obtain argument names.
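
A minimal sketch of the pattern this enables, loosely following the test described below (module and method names are illustrative):
```
import torch

class MyMod(torch.nn.Module):
    @torch.jit.unused
    @torch.no_grad()
    def fn(self, x: torch.Tensor) -> torch.Tensor:
        # left uncompiled; calling it from scripted code raises at runtime
        return x + 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fn(x)

# Scripting now succeeds because argument names come from the wrapped function,
# not from torch.no_grad's wrapper.
scripted = torch.jit.script(MyMod())
```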

Test Plan:
```
buck test mode/dev-nosan //caffe2/test:jit -- 'test_unused_decorator \(test_jit\.TestScript\)'
```

Before:
```
> Traceback (most recent call last):
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py", line 3014, in test_unused_decorator
>     torch.jit.script(MyMod())
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_script.py", line 888, in script
>     obj, torch.jit._recursive.infer_methods_to_compile
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 317, in create_script_module
>     return create_script_module_impl(nn_module, concrete_type, stubs_fn)
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 376, in create_script_module_impl
>     create_methods_from_stubs(concrete_type, stubs)
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/torch/jit/_recursive.py", line 292, in create_methods_from_stubs
>     concrete_type._create_methods(defs, rcbs, defaults)
> RuntimeError:
> Non-static method does not have a self argument:
>   File "/data/users/yuxinwu/fbsource2/fbcode/buck-out/dev/gen/caffe2/test/jit#binary,link-tree/test_jit.py", line 3012
>             def forward(self, x):
>                 return self.fn(x)
>                        ~~~~~~~ <--- HERE
>
```

Reviewed By: eellison

Differential Revision: D22554479

fbshipit-source-id: 03e432ea92ed973cc57ff044da80ae7a36f6af4c
2020-07-15 16:54:43 -07:00
Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
Kimish Patel
c5dcf056ee JIT pass for add relu fusion. (#39343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39343

Building on top of the previous PR that adds a fused add_relu op, this PR adds
a JIT pass to transform the input graph, finding all fusable instances of add
+ relu and fusing them.
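
A hedged sketch of the eager pattern the pass targets (module name is illustrative); the graph dump shows the aten::add / aten::relu pair the pass looks for:
```
import torch

class AddRelu(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        z = x + y                 # add ...
        return torch.relu(z)      # ... immediately followed by relu: a fusion candidate

scripted = torch.jit.script(AddRelu())
print(scripted.graph)
```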

Test Plan:
python test/test_jit.py TestJit.test_add_relu_fusion

Imported from OSS

Differential Revision: D21822396

fbshipit-source-id: 12c7e8db54c6d70a2402b32cc06c7e305ffbb1be
2020-07-09 16:25:13 -07:00
Zino Benaissa
690946c49d Generalize constant_table from tensor only to ivalue (#40718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40718

Currently, all constants except tensors must be inlined during serialization;
tensors are stored in the constant table. This patch generalizes this capability
to any IValue. This is particularly useful for non-ASCII string literals that
cannot be inlined.

Test Plan: Imported from OSS

Differential Revision: D22298169

Pulled By: bzinodev

fbshipit-source-id: 88cc59af9cc45e426ca8002175593b9e431f4bac
2020-07-09 09:09:40 -07:00
Dmytro Dzhulgakov
8e2841781e [easy] Use torch.typename in JIT error messages (#41024)
Summary:
Noticed while trying to script one of the models, which happened to have numpy values as constants. The lack of the numpy prefix in the error message was quite confusing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41024

Differential Revision: D22426399

Pulled By: dzhulgakov

fbshipit-source-id: 06158b75355fac6871e4861f82fc637c2420e370
2020-07-08 21:49:37 -07:00
Michael Suo
c93e96fbd9 [jit] move script-related implementation out of torch/jit/__init__.py (#40902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40902

See the bottom of this stack for context.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22360210

Pulled By: suo

fbshipit-source-id: 4275127173a36982ce9ad357aa344435b98e1faf
2020-07-08 11:38:34 -07:00
Elias Ellison
37a572f33e fix grad thrashing of shape analysis (#40939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40939

Previously, when we would do shape analysis by running the op with representative inputs, we would always set the grad property to false. This led to incorrect static analysis when we would create differentiable subgraphs, propagate shapes without also propagating requires_grad, and then uninline them.

Test Plan: Imported from OSS

Differential Revision: D22394676

Pulled By: eellison

fbshipit-source-id: 254e6e9f964b40d160befe0e125abe1b7aa2bd5e
2020-07-06 17:12:13 -07:00
Elias Ellison
4af8424377 shape analysis fix for default dtype' (#40938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40938

already accepted in https://github.com/pytorch/pytorch/pull/40645

Test Plan: Imported from OSS

Reviewed By: jamesr66a, Krovatkin

Differential Revision: D22394675

Pulled By: eellison

fbshipit-source-id: 1e9dbb24a4cb564d9a68280d2166329ca9fb0425
2020-07-06 17:10:01 -07:00
Ailing Zhang
e75f12ac15 Check statistical diff rather than exact match for test_dropout_cuda. (#40883)
Summary:
There is a TODO tracked in https://github.com/pytorch/pytorch/issues/40882

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40883

Reviewed By: pbelevich

Differential Revision: D22346087

Pulled By: ailzhang

fbshipit-source-id: b4789ca3a10f6a72c6e77276bde45633eb6cf545
2020-07-06 13:11:48 -07:00
Michael Suo
300a3aaaad [jit] move private implementation out of jit/__init__.py (#40807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40807

We pack a lot of logic into `jit/__init__.py`, making it unclear to
developers and users which parts of our API are public vs. internal. This
is one in a series of PRs intended to pull implementation out into
separate files, and leave `__init__.py` as a place to register the
public API.

This PR moves all the tracing-related stuff out, and fixes other spots up
as necessary. Followups will move other core APIs out.

The desired end-state is that we conform to the relevant rules in [PEP 8](https://www.python.org/dev/peps/pep-0008/#public-and-internal-interfaces). In particular:
- Internal implementation goes in modules prefixed by `_`.
- `__init__.py` exposes a public API from these private modules, and nothing more.
- We set `__all__` appropriately to declare our public API.
- All uses of JIT-internal functionality outside the JIT are removed (in particular, ONNX relies on a number of internal APIs). Since they will need to be imported explicitly, it will be easier to catch new uses of internal APIs in review.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22320645

Pulled By: suo

fbshipit-source-id: 0720ea9976240e09837d76695207e89afcc58270
2020-07-05 22:01:11 -07:00
Will Constable
8ecd4f36aa fix __len__, __contains__, getitem inherited from interface class derived from nn container (closes #40603) (#40789)
Summary:
Define a static script implementation of __len__ and __contains__ on any subclass derived from a type such as ModuleList, Sequential, or ModuleDict. Implement getitem for classes derived from ModuleDict.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40789

Reviewed By: eellison

Differential Revision: D22325159

Pulled By: wconstab

fbshipit-source-id: fc1562c29640fe800e13b5a1dd48e595c2c7239b
2020-07-04 15:45:18 -07:00
Nikolay Korovaiko
8223858cc1 shape inference of undefined for prim::grad (#40866)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40866

Reviewed By: pbelevich

Differential Revision: D22358988

Pulled By: Krovatkin

fbshipit-source-id: 7118d7f8d4eaf056cfb71dc0d588d38b1dfb0fc7
2020-07-04 14:10:22 -07:00
Nikolay Korovaiko
88c0d886e3 update requires_grad on loop inputs correctly (master) (#40926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40926

Reviewed By: eellison

Differential Revision: D22359471

Pulled By: Krovatkin

fbshipit-source-id: 823e87674e2d2917f075255ec926e0485972f4e2
2020-07-04 13:58:29 -07:00
Elias Ellison
e1428cf41b [JIT] fix unfold shape analysis (#40749)
Summary:
unfold on a 0-dimensioned tensor returns a 1-dim tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40749
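
A small illustration of the behavior the shape analysis now models:
```
import torch

x = torch.tensor(3.0)      # 0-dim tensor
y = x.unfold(0, 1, 1)
print(y.dim(), y.shape)    # 1 torch.Size([1]): unfold on a 0-dim tensor yields a 1-dim tensor
```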

Differential Revision: D22361481

Pulled By: eellison

fbshipit-source-id: 621597e5f97f6e39953eb86f8b85bb4142527a9f
2020-07-02 13:32:37 -07:00
Mikhail Zolotukhin
871bfaaba1 [JIT] Fix shape analysis for aten::masked_select. (#40753)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40753

The reference says that this op always returns a 1-D tensor, even if
the input and the mask are 0-D.
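
A small illustration of the documented behavior that shape analysis now reflects:
```
import torch

x = torch.tensor(5.0)           # 0-D input
mask = torch.tensor(True)       # 0-D mask
out = torch.masked_select(x, mask)
print(out.shape)                # torch.Size([1]): the result is always 1-D
```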

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22300354

Pulled By: ZolotukhinM

fbshipit-source-id: f6952989c8facf87d73d00505bf6d41573eff2d6
2020-06-30 11:04:50 -07:00
Mikhail Zolotukhin
50d55b9f2b [JIT] Update type of the unsqueeze's output in shape analysis. (#40733)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40733

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22298537

Pulled By: ZolotukhinM

fbshipit-source-id: a5d4597ed10bcf14d1b28e914bf898d0cae5b4c0
2020-06-30 11:01:45 -07:00
Jeff Daily
ac8c8b028d [ROCm] restore jit tests (#40447)
Summary:
Remove `skipIfRocm` from most jit tests and enable `RUN_CUDA_HALF` tests for ROCm.

These changes passed more than three rounds of CI testing against the ROCm CI.

CC ezyang xw285cornell sunway513
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40447

Differential Revision: D22190711

Pulled By: xw285cornell

fbshipit-source-id: bac44825a2675d247b3abe2ec2f80420a95348a3
2020-06-27 01:03:59 -07:00
Will Constable
d855528186 wconstab/38034-sliced-sequential (#40445)
Summary:
Partial support for slicing of Sequential containers.

- works around missing Sequential slice functionality
   by converting to tuple
- only supports iteration of resulting tuple values,
   not direct call() on the sliced sequential
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40445
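
A minimal sketch of the supported pattern (class and attribute names are illustrative): iterating over a slice of a Sequential works, while calling the slice directly does not:
```
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # the slice is converted to a tuple, so its values can be iterated
        for layer in self.seq[:2]:
            x = layer(x)
        return x

scripted = torch.jit.script(Net())
print(scripted(torch.randn(1, 4)).shape)
```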

Differential Revision: D22192469

Pulled By: wconstab

fbshipit-source-id: 61c85deda2d58f6e3bea2f1fa1d5d5dde568b9b5
2020-06-24 09:05:51 -07:00
Elias Ellison
6468bc4637 [JIT] script if tracing fix (#40468)
Summary:
Currently, torchvision annotates `batched_nms` with `torch.jit.script` so the function gets compiled when it is traced and ONNX will work. Unfortunately, this means we are eagerly compiling batched_nms, which fails if torchvision isn't built with `torchvision.ops.nms`. As a result, torchvision doesn't work on torch hub right now.

`_script_if_tracing` could solve our problem here, but right now it does not correctly interact with recursive compilation. This PR fixes that bug.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40468
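
A hedged sketch of the decorator this fixes (function names are illustrative; `_script_if_tracing` is the internal API named in this PR). It defers compilation until the decorated function is actually called while tracing:
```
import torch

@torch.jit._script_if_tracing
def double_it(x: torch.Tensor) -> torch.Tensor:
    return x * 2

def f(x):
    return double_it(x)   # compiled lazily, only because we are tracing

traced = torch.jit.trace(f, torch.randn(3))
```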

Reviewed By: jamesr66a

Differential Revision: D22195771

Pulled By: eellison

fbshipit-source-id: 83022ca0bab6d389a48a478aec03052c9282d2b7
2020-06-23 17:14:28 -07:00
Jerry Zhang
cbd53bfee8 [jit] Remove unnecessary clone APIs for script::Module and RecursiveScriptModule (#40297)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40297

Test Plan: Imported from OSS

Differential Revision: D22191660

fbshipit-source-id: 4b338ca82caaca04784bffe01fdae3d180c192f4
2020-06-23 16:03:22 -07:00
Jerry Zhang
f652abc1dd [jit] Enable copy.deepcopy and copy.copy for RecursiveScriptModule (#32685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32685

att
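
A minimal sketch of the newly supported behavior:
```
import copy
import torch
import torch.nn as nn

scripted = torch.jit.script(nn.Linear(2, 2))
deep = copy.deepcopy(scripted)    # now produces an independent RecursiveScriptModule
shallow = copy.copy(scripted)
print(deep(torch.randn(1, 2)).shape)   # torch.Size([1, 2])
```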

Test Plan:
.

Imported from OSS

Differential Revision: D21220755

fbshipit-source-id: 5c71e9bb9f43032cf60563a9e67579118a8d7e33
2020-06-23 09:21:12 -07:00
Wanchao Liang
4b028a8e07 [jit] support pad_sequence/pack_sequence (#39844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39844
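
A minimal sketch of the newly scriptable call (function name is illustrative):
```
from typing import List

import torch
from torch.nn.utils.rnn import pad_sequence

@torch.jit.script
def pad_all(seqs: List[torch.Tensor]) -> torch.Tensor:
    return pad_sequence(seqs, batch_first=True)

out = pad_all([torch.ones(2, 3), torch.ones(4, 3)])
print(out.shape)   # torch.Size([2, 4, 3])
```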

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D22026720

Pulled By: wanchaol

fbshipit-source-id: cc51ea77eff3689e319ec7e89a54c788646b5940
2020-06-19 19:03:14 -07:00
Mike Ruberry
4f761f325c Back out "[pytorch][PR] Removes dunder div"
Summary: NVIDIA's Apex is updating to no longer rely on this behavior, but we're reverting this Python2->Python3 update to unblock internal apex users.

Test Plan: Sandcastle + OSS CI.

Reviewed By: ngimel

Differential Revision: D22146782

fbshipit-source-id: f9483d2cbf9dc3a469ad48a6c863edea3ae51070
2020-06-19 18:31:20 -07:00
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.
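
A hedged sketch modeled on the kind of test this adds (class and function names are illustrative): a scripted class exposing `__enter__`/`__exit__` used as the context manager in a scripted with statement.
```
from typing import Any

import torch

@torch.jit.script
class Counter(object):
    def __init__(self, start: int):
        self.count = start

    def __enter__(self):
        self.count += 1
        return self.count

    def __exit__(self, type: Any, value: Any, tb: Any):
        self.count -= 1

@torch.jit.script
def add_in_ctx(x: torch.Tensor, c: Counter) -> torch.Tensor:
    with c as n:   # lowered to prim::Enter / prim::Exit in the IR
        y = x + n
    return y
```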

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00
Wanchao Liang
442ec1dd4e [test] split remaining quantization tests out of test_jit (#40144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40144

as title, split remaining quantization tests out of test_jit to reduce
the size of test_jit

Test Plan: Imported from OSS

Differential Revision: D22085034

Pulled By: wanchaol

fbshipit-source-id: 0c8639da01ffc3e6a72e6f470837786c73a6b3f0
2020-06-18 13:39:13 -07:00
Wanchao Liang
693ab77c00 [test] split onnx export test out of test_jit (#40143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40143

as titled, to reduce size of test_jit

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22085036

Pulled By: wanchaol

fbshipit-source-id: 424f189fd3849c111d06ebe2e341da50d98fe0ec
2020-06-17 17:27:50 -07:00
Wanchao Liang
27d789500b [test] split tracer related tests out of test_jit (#40142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40142

test_jit is becoming huge again, which makes it hard for editors to load and
for us to write new tests; this splits out the tracer-related tests.

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22085035

Pulled By: wanchaol

fbshipit-source-id: 696bee84985ecfbfeac8e2ee5c27f1bdda8de394
2020-06-17 17:26:33 -07:00
Mike Ruberry
9d588f7ce2 Removes dunder div (#39151)
Summary:
BC-breaking note:

If a user is using one of these dunders directly, they will no longer be available. Users should update to Python 3-compatible dunders.

Original PR note:

`__div__` (and `__idiv__` and `__rdiv__`) are no longer special dunders in Python 3. This PR replaces them with the `__truediv__` (`__itruediv__`, `__rtruediv__`) dunders, since we no longer support Python 2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39151

Differential Revision: D22075713

Pulled By: mruberry

fbshipit-source-id: d318b47b51f7cc4c3728b1606a34d81e49ba0fa1
2020-06-16 23:02:20 -07:00
Shihao Xu
00651b8c93 [distributed.nn] Implement TorchScript-compatible RemoteModule API (#37139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139

See design doc in https://github.com/pytorch/pytorch/issues/37136

ghstack-source-id: 105926270

Test Plan:
TODO:

- Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978
- Avoid generating the same template instances for Modules that are not scriptable.
- Remove "infer_module_interface_cls".
- Use Python format instead of a CodeTemplate.
- Use Python tempfile to track and delete files. Does it work if there is a crash?

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork
```

buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes'

Differential Revision: D20499658

fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a
2020-06-15 19:07:35 -07:00
Nikita Shulga
c6b69a4e4d Delete Python <= 3.5 specific checks from the code (#39879)
Summary:
Remove PY3 and PY34 checks from `torch/testing/_internal/common_utils.py`
 Remove PY35 global var from `torch.jit.annotations`
Always call `try_get_real_signature` in `torch/jit/annotations.py`
Use `map` instead of `imap`; since Python 2 is no longer supported, map is always lazy.
Remove all pre Python-3.6 checks from `torch/_six.py` and `torch/_appdirs.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39879

Differential Revision: D22037811

Pulled By: malfet

fbshipit-source-id: af0c79f976569c2059d39ecb49c6b8285161734f
2020-06-15 08:16:06 -07:00
Nikolay Korovaiko
7f55197a57 Peel Loop (#39434)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39434

Differential Revision: D21857037

Pulled By: Krovatkin

fbshipit-source-id: 6583da167fe93d96e93f1c3d71f46f94e7f4e982
2020-06-10 13:48:18 -07:00
Yanan Cao
c22bbb2124 [JIT] Add Type::repr_str to return human-readable str (#39544)
Summary:
Clearly expressing that a type was inferred by PyTorch rather than explicitly annotated by the user makes many error messages more user-friendly.

Currently, Type has two string conversion methods: str() for IR printing and python_str() for serialization and error message generation. If we want to include more information in type printing while maintaining serialization/deserialization correctness, we need to split python_str() into annotation_str() and repr_str().

annotation_str() is solely responsible for serialization; it strictly matches the format of a Python type annotation. repr_str() is responsible for generating a human-readable error message that includes information like "this type is inferred, not explicitly annotated".

Closes https://github.com/pytorch/pytorch/issues/39449
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39544

Differential Revision: D21978759

Pulled By: gmagogsfm

fbshipit-source-id: 733566f5a62e748b5ca4bb3c5943ebb6d5b664d0
2020-06-10 12:01:24 -07:00
Elias Ellison
428bc90978 [JIT] add dtype as type annotation (#39741)
Summary:
make torch.dtype resolve as a type annotation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39741
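
A minimal sketch of the new annotation support (function name is illustrative):
```
import torch

@torch.jit.script
def cast_to(x: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    return x.to(dtype)

print(cast_to(torch.randn(3), torch.float16).dtype)   # torch.float16
```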

Reviewed By: jamesr66a

Differential Revision: D21956469

Pulled By: eellison

fbshipit-source-id: 492acd7403fa827a2e2c87fd08d31450fcb3a45e
2020-06-09 15:01:00 -07:00
James Reed
f1c60c04b8 [JIT] Fix module interface test (#39592)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39592

Test Plan: Imported from OSS

Differential Revision: D21909659

Pulled By: jamesr66a

fbshipit-source-id: 831ae6b158041d4241209cee50f7a4d09cd2fcb2
2020-06-09 12:13:58 -07:00
Nikolay Korovaiko
97a2918a07 reduce number of bailout nodes (#38281)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38281

Differential Revision: D21665509

Pulled By: Krovatkin

fbshipit-source-id: c2c34b759aec30d0a161e582030ba994192ee4ec
2020-06-05 13:45:37 -07:00
Yanan Cao
0031108b60 Support torch.Tensor subclass (like Parameter) input. (#39487)
Summary:
Currently, torch.Tensor subclasses (like torch.nn.Parameter) aren't supported type annotations for TorchScript inputs. This PR allows them to be treated like torch.Tensor for compilation.

Closes https://github.com/pytorch/pytorch/issues/38235
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39487
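
A hedged sketch of the new support (function name is illustrative): a Tensor subclass such as Parameter is accepted as an input annotation and treated as a Tensor during compilation.
```
import torch
from torch.nn import Parameter

@torch.jit.script
def scale(w: Parameter, x: torch.Tensor) -> torch.Tensor:
    return w * x

print(scale(Parameter(torch.ones(3)), torch.arange(3.0)))
```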

Differential Revision: D21885827

Pulled By: gmagogsfm

fbshipit-source-id: 1ec51829b132b7b0293a6c526d73497b23dae113
2020-06-05 11:58:20 -07:00
Edward Yang
da2004e132 Upgrade lint. (#39483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39483

I fixed all of the new errors that occurred because of the upgrade.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21884575

Pulled By: ezyang

fbshipit-source-id: 45c8e1f1ecb410c8d7c46dd3922ad70e982a0685
2020-06-04 12:56:43 -07:00
Elias Ellison
49b69b2ade [JIT] fix broadcasting lists of ints (#39481)
Summary:
Previously, on conversion from Python to C++ it was cast to a double list through bad copy-paste. It's pretty unusual for someone to script a broadcasting list function directly since it's an internal API, so it was unlikely to affect anyone.

Fix for https://github.com/pytorch/pytorch/issues/39450
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39481

Reviewed By: jamesr66a

Differential Revision: D21870557

Pulled By: eellison

fbshipit-source-id: e704e5e87d2702a270b7d65c4df444246a134480
2020-06-04 12:16:41 -07:00
Xiang Gao
ebd4125e7e [JIT] Make torch.unique_consecutive compatible (#39339)
Summary:
A `unique_consecutive` version of https://github.com/pytorch/pytorch/pull/38156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39339
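
A minimal sketch of the newly scriptable op (function name is illustrative):
```
import torch

@torch.jit.script
def dedupe_runs(x: torch.Tensor) -> torch.Tensor:
    return torch.unique_consecutive(x)

print(dedupe_runs(torch.tensor([1, 1, 2, 2, 3, 1])))   # tensor([1, 2, 3, 1])
```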

Differential Revision: D21823997

Pulled By: eellison

fbshipit-source-id: d14596a36ba36497e296da5a344e0376cef56f1b
2020-06-02 14:54:29 -07:00
Meghan Lele
f4365cf5ba [JIT] Add support for saving/loading of lowered modules (#38893)
Summary:
**Summary**
This commit adds support for serialization and deserialization of
`ScriptModules` that have been lowered to a specific backend. Nothing
special was required to accomplish this, other than removing some code
in `unpickler.cpp` that guarded against the deserialization of `Any`
type objects. Now that lists and dicts are tagged with their types
during serialization, this check is no longer necessary.

**Test Plan**
This commit adds a unit test for testing that a lowered module still
produces the same results as Python and regular JIT after saving and
loading.

**Fixes**
This pull request fixes part of https://github.com/pytorch/pytorch/issues/37841.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38893

Differential Revision: D21825813

Pulled By: SplitInfinity

fbshipit-source-id: 77a7b84504e0dddf14c89b3ed5dd6b438c086f66
2020-06-01 23:50:52 -07:00
xuewenc
7836eaceee [JIT] JIT should let people know we inferred an argument as a tensor (#38527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38527

This PR solves issue #37200.
The error is encountered during IR generation while trying to resolve the call to sum.
It should let the user know it inferred the value for argument 'dim' to be of type 'Tensor'
because it was not annotated with an explicit type.
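
A hedged sketch of the situation the message targets (function name is illustrative): 'dim' has no annotation, so it is inferred to be a Tensor and the call to sum cannot be resolved.
```
import torch

def reduce_over(x, dim):
    return x.sum(dim)

try:
    torch.jit.script(reduce_over)
except RuntimeError as e:
    print(e)   # the error now points out that 'dim' was inferred to be of type 'Tensor'
```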

Test Plan:
Add code to reproduce the issue (#37200)
`python test/test_jit.py TestJit.test_inferred_as_tensor`

Differential Revision: D21743876

Pulled By: superwizard2019

fbshipit-source-id: 370ca32afea4d53b44d454f650f7d3006f86bcc6
2020-05-29 10:41:50 -07:00
Mike Ruberry
13120bf677 Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to either require both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21740237

Pulled By: mruberry

fbshipit-source-id: acbc027aa1d7877a49664d94db9a5fff91a07042
2020-05-27 06:31:07 -07:00
Nikolay Korovaiko
9b95f757af move num_profiled_runs to common_utils (#38687)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38687

Differential Revision: D21634080

Pulled By: Krovatkin

fbshipit-source-id: 55513124caf3885e475ffecd9d9f3dbc4729a573
2020-05-27 01:14:01 -07:00
Rohan Varma
63e545e0fe Revert D21717199: [pytorch][PR] Updates assertEqual to require atol and rtol, removes positional atol
Test Plan: revert-hammer

Differential Revision:
D21717199

Original commit changeset: 9feb856f94ee

fbshipit-source-id: bfde9c39a5ce99f0ca6183a7dde703c65b7c8259
2020-05-26 18:23:59 -07:00
Mike Ruberry
6ddca30b2d Updates assertEqual to require atol and rtol, removes positional atol (#38872)
Summary:
This updates assertEqual and assertEqual-like functions to either require both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.

In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but requiring it be specified should be clear, and we can easily update the signature to make "msg" an optional positional argument in the future, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872

Differential Revision: D21717199

Pulled By: mruberry

fbshipit-source-id: 9feb856f94eee911b44f6c7140a1d07c1b026d3a
2020-05-26 08:30:23 -07:00
Elias Ellison
cd5d7a34b8 [JIT] Factor out aliases to separate test (#38746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38746

Factors out testing of op alias normalization so that there is a registry used for tests.

Test Plan: Imported from OSS

Differential Revision: D21673107

Pulled By: eellison

fbshipit-source-id: e06653cdf24f14a4253dd054e4d402d171d16a11
2020-05-21 21:47:24 -07:00
Elias Ellison
f90dc741eb [JIT] Normalize op aliases (#38735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38735

Follow up to my comment https://github.com/pytorch/pytorch/pull/36597/#issuecomment-613674329

This adds a pass to convert op aliases into a normalized form. Having two ops in our IR that do the same thing makes the IR harder to handle for downstream consumers, such as TorchScript passes but also ONNX, Glow, etc.

Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as glaringlee of the C++ API thinks we should.

Bike shedding: maybe this should be `CanonicalizeOps` instead

Test Plan: Imported from OSS

Differential Revision: D21673108

Pulled By: eellison

fbshipit-source-id: c328618907de1af22e07f57fd27fa619978c2817
2020-05-21 21:47:17 -07:00
Mike Ruberry
64584573f9 Updates tests for integer division deprecation (#38621)
Summary:
Updates our tests, in preparation for integer division using torch.div and torch.addcdiv throwing a runtime error, by avoiding integer division using torch.div. This creates a brief period where integer division using torch.div is untested, but that should be OK (since it will soon throw a runtime error).

These callsites were identified using https://github.com/pytorch/pytorch/issues/36897.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38621

Differential Revision: D21612823

Pulled By: mruberry

fbshipit-source-id: 749c03a69feae02590b4395335163d9bf047e162
2020-05-19 19:28:00 -07:00
Ilia Cherniavskii
235f62417d Fixes for profiling JIT code (#38453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38453

Two fixes:
 - RecordFunction in the JIT interpreter should exist during the execution
   of the frame, and not just when we enter the frame
 - When creating a JIT continuation in the wait instruction, we want to
   preserve the original thread-local context; right now, when we resume
   execution in the continuation, we preserve the thread-local state of the
   thread that set the future value (i.e. executed a forked task)

Test Plan: unittest, CI

Reviewed By: ngimel

Differential Revision: D21565959

Pulled By: ilia-cher

fbshipit-source-id: 206b98e3bfb0052fc8e4031da778e372cc71afc1
2020-05-19 15:50:42 -07:00
Michael Voznesensky
f6f1384811 [JIT] Refactor attributes to support buffers and parameters as first class citizens, add support for iterating over named_buffers() (#37905)
Summary:
First part of https://github.com/pytorch/pytorch/issues/36211 - still a WIP, but asking for commentary to ensure this is the direction we want to go in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37905

Differential Revision: D21633735

Pulled By: voznesenskym

fbshipit-source-id: f4e4302e40114513776c9e48867a90d72049e2e9
2020-05-18 23:23:43 -07:00
Elias Ellison
daa85cfe2e [JIT] Exit Transform Rewrite (#38282)
Summary:
After an early return, we conditionalize all further execution. This means that currently the pattern of
`if return elif return elif return` generates better code than `if return if return if return`. It's obviously not good to have semantically equivalent code generate worse IR, so we should rewrite the graph to handle this case. This came up in https://github.com/pytorch/pytorch/pull/37171

```
@torch.jit.script
def test_foo(x: bool, y: bool):
    if x:
        return 1
    return 2
print(test_foo.code)
```
generates:
```
def test_foo(x: bool,
    y: bool) -> int:
  _0 = uninitialized(int)
  if x:
    _1, _2 = True, 1
  else:
    _1, _2 = False, _0
  if _1:
    _3 = _2
  else:
    _3 = 2
  return _3
```
while
```
@torch.jit.script
def test_foo(x: bool, y: bool):
    if x:
        return 1
    else:
        return 2
print(test_foo.code)
```
generates:
```
def test_foo(x: bool,
    y: bool) -> int:
  if x:
    _0 = 1
  else:
    _0 = 2
  return _0
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38282

Differential Revision: D21576733

Pulled By: eellison

fbshipit-source-id: 80cf1ad7fbda6d8d58557abbfb21c90eafae7488
2020-05-15 12:22:28 -07:00
Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702
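
A minimal sketch of the fixed pattern (module and method names are illustrative):
```
import torch
import torch.nn as nn

class M(nn.Module):
    @staticmethod
    def shift(x: torch.Tensor) -> torch.Tensor:
        return x + 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shift(x)   # accessing the staticmethod through self now scripts

scripted = torch.jit.script(M())
print(scripted(torch.zeros(2)))   # tensor([1., 1.])
```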

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
Will Feng (FAIAR)
38d141ede5 Support having a different forward method when we are not in scripting mode (#38158)
Summary:
TorchScript currently doesn't support `*args, **kwargs` in method signatures, which are used extensively in DPER3 low-level modules' forward methods. In order to make DPER3 low-level modules scriptable, I was thinking about a solution of having a forward method *only* for TorchScript, and replacing the forward method when we are not in scripting mode.

This solution works today, and I would like to add a test to make sure it will always work in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158

Differential Revision: D21485657

Pulled By: yf225

fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
2020-05-14 12:13:06 -07:00
David Reiss
7f7fdb1013 Remove a use of checkScript(str) (#35623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.

Test Plan: CI

Differential Revision: D20842874

Pulled By: dreiss

fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
2020-05-14 10:07:58 -07:00
Hong Xu
336e1ec592 Clean up error handling in is_nonzero and where in TensorCompare.cpp (#38150)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38150

Differential Revision: D21539736

Pulled By: ezyang

fbshipit-source-id: e390c12f5948192a552d66dcd1bb89b2cb45f170
2020-05-13 20:19:40 -07:00
Elias Ellison
8d883f5c7c [JIT] [Easy] Add location to implicit conversions (#38442)
Summary:
Previously, we weren't adding the location to implicit conversions, so the error message wouldn't show the location when these ops failed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38442

Differential Revision: D21563500

Pulled By: eellison

fbshipit-source-id: 19dd786ab8580f11ed919aac669efeed0ef52dcb
2020-05-13 18:02:41 -07:00
Michael Suo
2efa7e04c2 [jit] move torchbind tests to separate file (#37473)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37473

Test Plan: Imported from OSS

Differential Revision: D21297541

Pulled By: suo

fbshipit-source-id: 65c48094b1f26fbbf251021957257ce04279922b
2020-05-13 17:37:00 -07:00
anjali411
1676c7d618 Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only (#38399)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38399

Test Plan: Imported from OSS

Differential Revision: D21555941

Pulled By: anjali411

fbshipit-source-id: ea9f5a76590c5bab3df6a540617b074238bfb535
2020-05-13 16:41:09 -07:00
Michael Suo
167a978a03 Fix method stub creation for function attributes (#37994)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37994

Before, reassigning a method in a module (like `forward = _forward`)
didn't work, because we look at the function object's name for our def
name when building the AST. Make that overridable to handle cases like
reassignment.

Test Plan: Imported from OSS

Differential Revision: D21444535

Pulled By: suo

fbshipit-source-id: 4f045f18b5a146edc8005689af525d7d7ed8dd5f
2020-05-12 23:20:35 -07:00
Elias Ellison
eb3e9872c9 [JIT] make torch.unique compilable (#38156)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/37986

Follows the stack in https://github.com/pytorch/pytorch/pull/33783 to make functions in `torch/functional.py` resolve to their python implementations. Because the return type of `torch.unique` depends on `return_inverse` and `return_counts`, I had to refactor the implementation to use our boolean_dispatch mechanism.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38156
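
A minimal sketch of the newly compilable call (function name is illustrative); because the return type depends on them, `return_inverse`/`return_counts` must be literal booleans in scripted code:
```
from typing import Tuple

import torch

@torch.jit.script
def unique_with_counts(x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    return torch.unique(x, sorted=True, return_counts=True)

vals, counts = unique_with_counts(torch.tensor([3, 1, 3, 2, 1]))
print(vals, counts)   # tensor([1, 2, 3]) tensor([2, 1, 2])
```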

Differential Revision: D21504449

Pulled By: eellison

fbshipit-source-id: 7efb1dff3b5c00655da10168403ac4817286ff59
2020-05-12 18:37:53 -07:00
Kimish Patel
f954dd7823 Add dropout removal pass. (#38253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38253

This pass removes dropout and dropout_ nodes when training is false. It
requires the freeze_module pass to have been run, which does both inlining and constant
propagation; without it, the training variable remains an attribute instead of
a constant.
ghstack-source-id: 103939141

Test Plan: python test/test_jit.py TestScript.test_remove_dropout

Reviewed By: dreiss

Differential Revision: D21505863

fbshipit-source-id: 42ea45804e4653b625b6a254c8d8480757264aa8
2020-05-12 14:38:34 -07:00
James Reed
a553935e3c [JIT] Expose magic methods on script::Object (#38167)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38167

Test Plan: Imported from OSS

Differential Revision: D21486709

Pulled By: jamesr66a

fbshipit-source-id: 17b44d979fc658768b0d64f7d8af6fb684043ea3
2020-05-11 15:01:15 -07:00
Vitaly Fedyunin
57d01be92b Replacing assertEqual with assertEqualIgnoreType wherever types missmatch (#38102)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38102

Test Plan: Imported from OSS

Differential Revision: D21477060

Pulled By: VitalyFedyunin

fbshipit-source-id: 25e0fd837ca9bfccf0ce994c80f7790c894096d4
2020-05-09 14:48:55 -07:00
Ailing Zhang
e84aa0211d [JIT]Support List variable in adv indexing. (#37966)
Summary:
Followup of https://github.com/pytorch/pytorch/issues/37848. I realized that it's better to condition on `Value` type instead of token type, so now it also supports indexing through list variables (it used to be list literals only).
Also, apparently our eager frontend accepts indexing with a float list as well, so this matches that edge-case behavior too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37966
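
A minimal sketch of the new support (function name is illustrative): indexing a tensor with a list variable rather than a list literal.
```
from typing import List

import torch

@torch.jit.script
def pick_rows(x: torch.Tensor, rows: List[int]) -> torch.Tensor:
    return x[rows]

print(pick_rows(torch.arange(9).reshape(3, 3), [0, 2]))
```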

Reviewed By: suo

Differential Revision: D21439642

Pulled By: ailzhang

fbshipit-source-id: cedb8431ef38747d4aa9909a6bbf8e954dbe0e25
2020-05-08 15:40:11 -07:00
James Reed
c1e7758b5e Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38101

Original commit changeset: 29e8a4d3b8bf
ghstack-source-id: 103730417

Test Plan: waitforsadcastle

Differential Revision: D21471381

fbshipit-source-id: a922cdf31ba32021e7264ae1454c646c0bfd7ef4
2020-05-08 10:53:06 -07:00
Ailing Zhang
9232356e5f remove uses of type() and type_as() part 1. (#38029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38029

Differential Revision: D21468523

Pulled By: ailzhang

fbshipit-source-id: 14b7185d43eb03f630cfaa2d70e02d637ff8551b
2020-05-08 08:16:24 -07:00
Nikita Shulga
4bc0a7f86a Revert D20229168: [quantization] Use torchbind for Linear PackedParams
Test Plan: revert-hammer

Differential Revision:
D20229168

Original commit changeset: 3607cac9aa5b

fbshipit-source-id: 29e8a4d3b8bffd95ff6a58b46c4f1c1e23770304
2020-05-07 19:47:45 -07:00
James Reed
eaf9b28c55 [quantization] Use torchbind for Linear PackedParams (#34140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34140

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D20229168

Pulled By: jamesr66a

fbshipit-source-id: 3607cac9aa5b4b044572329742baed03350491c6
2020-05-07 19:03:44 -07:00
eellison
d5df055bbb [WIP][JIT] Add JIT backend registration API (#35833)
Summary:
**Summary**
This commit adds `torch::jit::RegisterBackend`, an API that allows
external backends to be registered for the execution of JIT subgraphs
outside the JIT interpreter. In order to register an external backend,
one must extend the provided abstract class `PyTorchBackendInterface` and provide
two additional functions: one that creates an instance of the aforementioned subclass
of `PyTorchBackendInterface`, and another that preprocesses a `ScriptModule` so that
it can run on the backend. Then, a `ScriptModule` that can compile and execute a given
JIT subgraph using the functions provided at registration time is generated
for each registered backend.

**Testing**
This commit adds a unit test that uses a minimal test backend
to make sure that the registration endpoint and generated
`ScriptModule` work.

```
$ python test/test_jit.py TestBackends
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.183s

OK

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35833

Differential Revision: D21231955

Pulled By: SplitInfinity

fbshipit-source-id: 452db1123d0e5d83f97fe5da8a00fdfdb50dbef9
2020-05-07 18:15:26 -07:00
Elias Ellison
f5b3125af7 [JIT] Peephole optimize list ops (#37612)
Summary:
Peephole optimize  `len(li)` and `li[index]` patterns.

This changes the Profiled Graph IR for the following tests:
```
(Test Name, Num ifs loops, Num non-tensor nodes)
Before:
('test_nn_Conv1d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv2d_reflect_stride2_pad2', 3, 14)
('test_nn_Conv1d_circular_stride2_pad2', 5, 31)
('test_nn_Conv2d_circular_stride2_pad2', 5, 31)
('test_nn_Conv3d_circular_stride2_pad2', 5, 31)
('test_nn_Conv1d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv2d_replicate_stride2_pad2', 3, 14)
('test_nn_Conv3d_replicate_stride2_pad2', 3, 14)
After
('test_nn_Conv1d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv2d_reflect_stride2_pad2', 0, 2)
('test_nn_Conv1d_circular_stride2_pad2', 0, 4)
('test_nn_Conv2d_circular_stride2_pad2', 0, 7)
('test_nn_Conv3d_circular_stride2_pad2', 0, 10)
('test_nn_Conv1d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv2d_replicate_stride2_pad2', 0, 2)
('test_nn_Conv3d_replicate_stride2_pad2', 0, 2)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37612

Differential Revision: D21352676

Pulled By: eellison

fbshipit-source-id: f8a0e7653b7a6a4c769f075de9b3044242ca9336
2020-05-06 15:55:18 -07:00
Elias Ellison
28ac5cdc91 fix profiling test (#37961)
Summary:
this is failing in the profiling_executor job
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37961

Differential Revision: D21434341

Pulled By: eellison

fbshipit-source-id: b34f94b1595ef6f6edee76cd200f951a2ef21f22
2020-05-06 15:04:44 -07:00
Elias Ellison
0e3a05ec00 [JIT] rename enable_profiling_mode to enable_profiling_mode_for_profiling_tests (#37825)
Summary:
The existing context manager only conditionally enabled profiling mode, which was counterintuitive. When we changed the default executor, this broke internal benchmarking as a result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37825

Differential Revision: D21404611

Pulled By: eellison

fbshipit-source-id: 306b3c333ef4eb44ab6a6e5ab4e0682e5ce312ce
2020-05-06 11:30:02 -07:00
Ailing Zhang
dd618216c5 [JIT]Support adv indexing using list. (#37848)
Summary:
We used to only support indexing through
- numbers like `x[0, 1]`
- tuple like `x[(0, 1)]`
- tensor like `x[torch.tensor([0, 1])]`

This PR adds support for indexing through list which is equivalent to tensor.
- `x[[0, 1, 5]]`
- `x[[0, 1], [0, 1]]`
- `x[[[0, 1], [0, 1]], [[0, 1], [0, 1]]]`

Note: for `x[[0, 1, 5]]` we had a bug in the AST conversion code, so we used to treat it like `x[0, 1, 5]`, which means it might accidentally run and produce a wrong result (fixes https://github.com/pytorch/pytorch/issues/37286, fixes https://github.com/pytorch/pytorch/issues/18616); now that it's fixed we probably want to mark it as BC-breaking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37848

Reviewed By: suo

Differential Revision: D21409840

Pulled By: ailzhang

fbshipit-source-id: 6f2d962885c6dc009cb384d98be1822f5ca7a189
2020-05-06 10:44:48 -07:00
Jerry Zhang
70f375becf [quant] ConvPackedParams with TorchBind (#35923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923

(Note: this ignores all push blocking failures!)

Test Plan:
tbd

Imported from OSS

Differential Revision: D20957089

fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0
2020-05-05 20:18:36 -07:00
Michael Suo
bd220b336b [jit] fix trace checking reporting divergent names (#37842)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37842

Fixes https://github.com/pytorch/pytorch/issues/23993.

Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample1 = torch.ones(1)
sample2 = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample1, sample2,),))
> produces a graph with something like:
> %sample1, %sample2 = prim::TupleUnpack(%input)
```

This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
> produces a graph with something like
> %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

Test Plan: Imported from OSS

Differential Revision: D21406478

Pulled By: suo

fbshipit-source-id: 3c7066b95d4a6e9b528888309954b02dadbc1a07
2020-05-05 13:39:41 -07:00
Elias Ellison
23d0441da7 [JIT] Fix GetAttr inconsistency (#37424)
Summary:
We were previously only looking at class attributes, so that didn't include methods etc, and would silently give wrong semantics. This makes hasAttr go through the same resolution as our other attribute lookups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37424

Differential Revision: D21282633

Pulled By: eellison

fbshipit-source-id: 8e970f365c2740d137a02331739c2ed93747b918
2020-05-05 09:06:51 -07:00
Michael Suo
804e32a467 split out docs tests into separate job (#37793)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37793

Test Plan: Imported from OSS

Differential Revision: D21392798

Pulled By: suo

fbshipit-source-id: 172fb0522d0b168ca19a382e5fb1eb87b6390acc
2020-05-04 17:58:04 -07:00
Michael Suo
b7f258bbd3 add fmt to libtorch_python.so (#37560)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37560

Test Plan: Imported from OSS

Differential Revision: D21320059

Pulled By: suo

fbshipit-source-id: 95cfe7cf26c515fdfcb4621cc58266d838a38a3e
2020-05-04 10:14:37 -07:00
Nikolay Korovaiko
831c8f362f fix the incorrect merge of profiling information of two tensor types for the same value (#36806)
Summary:
As a part of moving to dynamic shapes, we are now passing `frame_id` to each profiling callback. The implementation of that requires copying profiling callbacks into the Interpreter, so `first`s are actually different for every run. The dynamic-shapes merging algorithm won't be using `first`, but in the meantime, while we get there, this should be a good enough fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36806

Differential Revision: D21307173

Pulled By: Krovatkin

fbshipit-source-id: 7dade56ebcc72ebd40bb7f3d636c7b83c99b628f
2020-05-01 12:53:25 -07:00
Michael Voznesensky
91e74fd843 [JIT] Adds a code_with_constants method to module printing (#37586)
Summary:
Closes https://github.com/pytorch/pytorch/issues/36625
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37586

Differential Revision: D21331385

Pulled By: suo

fbshipit-source-id: 752e63eac8bdd06c6719efb972cdc832ad7c1535
2020-04-30 20:44:01 -07:00
James Reed
d3d10cc14a Add tests for lower_graph and fix unpack() ops dispatch (#37540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37540

ghstack-source-id: 103169129

Test Plan:
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph_conv \(test_jit\.TestScript\)'
buck test mode/no-gpu mode/dev //caffe2/test:jit -- 'test_lower_graph \(test_jit\.TestScript\)'

Differential Revision: D21313433

fbshipit-source-id: bb9942272784e517b07537ee4c149b9dc4df4c2a
2020-04-30 10:55:05 -07:00
Michael Suo
896f8130a6 Revert D21297549: [jit] fix trace checking reporting divergent names
Test Plan: revert-hammer

Differential Revision:
D21297549

Original commit changeset: 981d5879a4a2

fbshipit-source-id: 9be6e88007c644914973a305f9e7a961ef11a815
2020-04-29 16:16:44 -07:00
Michael Suo
4bfa51d405 [jit] fix trace checking reporting divergent names (#37464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37464

Fixes https://github.com/pytorch/pytorch/issues/23993.

There are two fixes here:
1. Previously our name lookup function for the tracer was looking in
f.globals for names. For example:
```
sample = torch.ones(1)
traced = torch.jit.trace(my_mod, ((sample, sample,),))
# produces a graph with something like
# %sample, %sample = prim::TupleUnpack(%input)
```
This is not great if you are, e.g., trace checking, because a non-local
bit of interpreter state affects the graph produced:
```
traced = torch.jit.trace(my_mod, _clone_inputs((sample, sample,),))
# produces a graph with something like
# %0, %1 = prim::TupleUnpack(%input)
```
I have removed this functionality, as I don't think it provides huge
value. Things that look locally for names will still work, so e.g.
inputs, intermediate variables, and the like will be named correctly.

2. Previously, our input cloning for trace checking didn't do a memoized
deep copy. So:
```
_clone_inputs((sample, sample, sample))
```
produces a tuple with three non-aliased tensors. That's wrong! Use
copy.deepcopy with a memoization argument to fix this.

Test Plan: Imported from OSS

Differential Revision: D21297549

Pulled By: suo

fbshipit-source-id: 981d5879a4a244520dd68489767129ff357f1497
2020-04-28 23:52:57 -07:00
Elias Ellison
a55d80e1c5 [JIT] remove dominated guards of functional values (#37105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37105

If a value isn't mutated anywhere and is guarded by a node, then we can remove all other guards that are dominated by the first guard.

This reduces the number of (test name, Ifs/Loops, non-tensor nodes excluding getAttr and Bailouts) from the previous PR for the following tests:
```
Before:  ('upsample', 0, 13)
After:  ('upsample', 0, 5)
Before:  ('upsample', 0, 2)
After:  ('upsample', 0, 1)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 12)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 7)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 17)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 18)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 21)
After:  ('interpolate', 1, 20)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 13)
After:  ('interpolate', 1, 11)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 15)
After:  ('interpolate', 1, 13)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('interpolate', 1, 25)
After:  ('interpolate', 1, 21)
Before:  ('interpolate', 0, 1)
After:  ('interpolate', 0, 0)
Before:  ('interpolate', 1, 27)
After:  ('interpolate', 1, 23)
Before:  ('interpolate', 0, 3)
After:  ('interpolate', 0, 2)
Before:  ('test_nn_BatchNorm1d_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input', 1, 2)
Before:  ('test_nn_BatchNorm1d_affine_simple_average', 2, 5)
After:  ('test_nn_BatchNorm1d_affine_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm1d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_3d_input_not_affine', 2, 3)
After:  ('test_nn_BatchNorm1d_3d_input_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm1d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm1d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm2d', 2, 3)
After:  ('test_nn_BatchNorm2d', 1, 2)
Before:  ('test_nn_BatchNorm2d_2d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm2d_2d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm2d_momentum', 2, 3)
After:  ('test_nn_BatchNorm2d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm2d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm2d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm2d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm2d_zero_batch', 1, 2)
Before:  ('test_nn_BatchNorm3d', 2, 3)
After:  ('test_nn_BatchNorm3d', 1, 2)
Before:  ('test_nn_BatchNorm3d_3d_simple_average', 2, 5)
After:  ('test_nn_BatchNorm3d_3d_simple_average', 1, 4)
Before:  ('test_nn_BatchNorm3d_momentum', 2, 3)
After:  ('test_nn_BatchNorm3d_momentum', 1, 2)
Before:  ('test_nn_BatchNorm3d_not_affine', 2, 3)
After:  ('test_nn_BatchNorm3d_not_affine', 1, 2)
Before:  ('test_nn_BatchNorm3d_zero_batch', 2, 3)
After:  ('test_nn_BatchNorm3d_zero_batch', 1, 2)
Before:  ('test_nn_Transformer', 127, 467)
After:  ('test_nn_Transformer', 122, 450)
```

Test Plan: Imported from OSS

Differential Revision: D21215652

Pulled By: eellison

fbshipit-source-id: 0365fc2e351caca7e1ccaa25428908a26e3f5343
2020-04-28 23:28:18 -07:00
Elias Ellison
45e8451b33 optimize is_float_point calls (#37012)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37012

Removes an if statement in `torch.nn.functional.affine_grid`

Test Plan: Imported from OSS

Differential Revision: D21160755

Pulled By: eellison

fbshipit-source-id: 8b030936c9fbdb05b44abc9f254805d102f2acc2
2020-04-28 23:28:12 -07:00
Elias Ellison
cde1350a5d Add support for generic list constants (#36953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36953

Add support for generic lists as a constant. Generic dicts & tuples are already implemented. This is a pretty common pattern and cuts down on the number of non-tensor nodes executed in interpolate tests.
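For illustration, a hedged sketch of the kind of scripted code that benefits (the list contents are made up): a constant list literal can now be emitted as a single constant node instead of being rebuilt element by element.

```python
import torch
from typing import List

@torch.jit.script
def supported_modes() -> List[str]:
    # With generic list constants, this literal can be represented as a
    # single constant in the IR rather than constructed at runtime.
    return ["nearest", "bilinear", "trilinear"]

print(supported_modes())
```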

Test Plan: Imported from OSS

Differential Revision: D21160761

Pulled By: eellison

fbshipit-source-id: 1e6b7b25b7580f09067794772d44e615601c60c4
2020-04-28 23:28:07 -07:00
Elias Ellison
92129956cf Add size peephole optimziation (#36758)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36758

Test Plan: Imported from OSS

Differential Revision: D21160760

Pulled By: eellison

fbshipit-source-id: 9cdb8eeffa71fb4670a811347ae4fad2a82ae1d8
2020-04-28 23:27:52 -07:00
Michael Suo
92b9089fd9 [jit] Fix pretty printing of functions (#37432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37432

Fixes https://github.com/pytorch/pytorch/issues/36803.

Test Plan: Imported from OSS

Differential Revision: D21284735

Pulled By: suo

fbshipit-source-id: 8c673099b3171070bff80fd1defc91487f66d4b3
2020-04-28 21:30:49 -07:00
mattip
ec8006cc16 [ONNX] fix provider_version and add consistency test (#36797)
Summary:
forward port the test from pr gh-36795, xref issue gh-32561
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36797

Differential Revision: D21257034

Pulled By: ezyang

fbshipit-source-id: d217da0e74f00a433c904defc0bf3eb5f594fd5e
2020-04-27 11:00:23 -07:00
Nikita Shulga
47c4dca1ab Remove python-2 or python<3.5 checks from unit tests (#37252)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37252

Test Plan: CI

Differential Revision: D21241083

Pulled By: malfet

fbshipit-source-id: 44164b822f7905288abb2beda0175d2162d86143
2020-04-24 17:42:04 -07:00
Zachary DeVito
b6bb644e41 Fix long line splitting issue in python_print (#37088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37088

For an inlined expression tree like `(e_0, (e_1, e_long))` the previous
algorithm only scanned the same statement as `e_long`, splitting the
inlined expressions across lines. Because it did not scan `e_0`, `e_0`
would still get emitted inline, causing it to reverse order with `e_1` and
`e_long`. The new algorithm starts scanning at `e_long` and goes all
the way back up the expression until it reaches the end of the inlined
statement. Caching of what has already been scanned has been added so that
if there were a second long expression `e_long2` after `e_long`, it would not
rescan and re-inline the statements that were already split.

Test Plan: Imported from OSS

Differential Revision: D21180394

Pulled By: zdevito

fbshipit-source-id: 4d142c83a04c89a47d04282f67a513f82cf153c0
2020-04-24 15:14:39 -07:00
moto
5a27ec09b8 Add Inverse Short Time Fourier Transform in ATen native (#35569)
Summary:
Ported `torchaudio`'s implementation (test, and documentation as well) to ATen.

Note
 - Batch packing/unpacking is performed in Python. The ATen implementation expects a 4D input tensor.
 - `hop_length` is initialized in the same way as in the `stft` implementation. [Torchaudio's version tried to mimic the same behavior but was slightly different](7da61a4bee/torchaudio/functional.py (L152-L157)).
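For illustration, a hedged round-trip sketch of the ported API (parameter values are made up, and the snippet is written against the current `torch.stft`, which asks for `return_complex=True`):

```python
import torch

x = torch.randn(2, 16000)                  # small batch of waveforms
window = torch.hann_window(400)
spec = torch.stft(x, n_fft=400, hop_length=160, window=window,
                  return_complex=True)
recon = torch.istft(spec, n_fft=400, hop_length=160, window=window,
                    length=x.size(1))
print((x - recon).abs().max())             # should be close to zero
```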

Closes https://github.com/pytorch/pytorch/issues/34827
Relates https://github.com/pytorch/pytorch/issues/3775
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35569

Differential Revision: D21178090

Pulled By: mthrok

fbshipit-source-id: 2701a8b241a36a6fb1b740c2fb2b07cb938185d4
2020-04-24 12:14:55 -07:00
Vishwak Srinivasan
fd5b5cd604 Allowing casting str to int in JIT (#36016)
Summary:
Changelog:
- Allow int(str) in TorchScript
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36016
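A minimal sketch of what this enables in TorchScript:

```python
import torch

@torch.jit.script
def parse(n: str) -> int:
    return int(n)

print(parse("42"))  # 42
```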

Test Plan:
- Added tests in test_jit.py

Closes https://github.com/pytorch/pytorch/issues/35948

Differential Revision: D21076438

Pulled By: driazati

fbshipit-source-id: d0753dc0e1c79f4f943c303b58b2d228856ba793
2020-04-23 14:26:24 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Mikhail Zolotukhin
359e7f4bba Teach IRParser to parse strides along with sizes in a tensor type. (#36951)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36951

Test Plan: Imported from OSS

Differential Revision: D21139940

Pulled By: ZolotukhinM

fbshipit-source-id: b56a1fddfc9de4684da3ba9a462e344d0985e8b6
2020-04-21 17:27:15 -07:00
Mike Ruberry
bcdb0727c2 Revert D20907254: Fix long line splitting issue in python_print
Test Plan: revert-hammer

Differential Revision:
D20907254

Original commit changeset: ebfc1a4eefc2

fbshipit-source-id: 76440a8649a17728c50e2f3eeb3744a2245f6daf
2020-04-21 16:24:32 -07:00
Zachary DeVito
bf676682e7 Fix long line splitting issue in python_print (#36188)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36188

* Need to remove n^2 behavior for scanning whether to split or not,
  otherwise long inline chains will take a long time re-scanning.

Test Plan: Imported from OSS

Differential Revision: D20907254

Pulled By: zdevito

fbshipit-source-id: ebfc1a4eefc26d5806381e7afd75b7a9cd4cde97
2020-04-21 15:46:42 -07:00
Mike Ruberry
71ec8b2002 Switches test_jit to use float32 as its default scalar type (#36982)
Summary:
Our test suite used to set double as its default scalar type, and when it was switched to not do so (to be more consistent with how users experience PyTorch), a few tests still had to set the default scalar type to double to function properly. Now that the jit no longer creates double tensors so frequently, it appears that test_jit no longer needs to set double as its default scalar type either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36982

Differential Revision: D21152120

Pulled By: mruberry

fbshipit-source-id: ea6d3c1ad55552dc5affa1fe1bd0e5189849e6d7
2020-04-21 11:23:28 -07:00
Brian Vaughan
54ed6fd3ee Use both absolute and relative tolerance in testing (#34258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34258

This PR allows both atol and rtol to be specified, uses defaults based on the prior analysis (spreadsheet attached to https://github.com/pytorch/pytorch/pull/32538), but retains the absolute tolerance behavior in cases where precision was previously specified explicitly.
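For reference, the combined check has the usual `allclose`-style form; the tolerance values below are illustrative, not the exact defaults chosen by this PR:

```python
import torch

def is_close(a: torch.Tensor, b: torch.Tensor,
             rtol: float = 1.3e-6, atol: float = 1e-5) -> bool:
    # Combined absolute/relative tolerance: |a - b| <= atol + rtol * |b|
    return bool(((a - b).abs() <= atol + rtol * b.abs()).all())

print(is_close(torch.ones(3), torch.ones(3) + 1e-7))  # True
```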

Test Plan: Imported from OSS

Differential Revision: D21110255

Pulled By: nairbv

fbshipit-source-id: 57b3a004c7d5ac1be80ee765f03668b1b13f4a7e
2020-04-19 06:16:49 -07:00
Wanchao Liang
24aac32171 [jit] Add dictionary as output of tracer (#36696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36696

This PR adds dictionaries as a supported output of the tracer under the strict
flag.
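A hedged sketch of the resulting usage (the traced function below is made up):

```python
import torch

def f(x):
    return {"plus_one": x + 1, "times_two": x * 2}

# Dict outputs are accepted when the tracer's strict checking is relaxed.
traced = torch.jit.trace(f, torch.randn(3), strict=False)
print(traced(torch.randn(3)).keys())
```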

Test Plan: Imported from OSS

Reviewed By: houseroad

Differential Revision: D21056962

Pulled By: wanchaol

fbshipit-source-id: ace498182d636de853cf8a1efb3dc77f5d53db29
2020-04-16 18:12:38 -07:00
David Reiss
63e5058c88 Fix naming of "strides" method in TensorType (#36727)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36727

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.
Lint.

Differential Revision: D21076697

Pulled By: dreiss

fbshipit-source-id: dbd18cb41c7b26479984a7a7b12ad41a4c5b7658
2020-04-16 17:07:27 -07:00
Elias Ellison
54a575c9bd [JIT] fix torch.tensor jit dtype (#36587)
Summary:
Previously we were always creating a double tensor from `torch.tensor(1.)`, whereas Python eager mode uses the current default dtype. Fix for https://github.com/pytorch/pytorch/issues/36369
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36587
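A hedged sketch of the fixed behavior:

```python
import torch

@torch.jit.script
def make() -> torch.Tensor:
    return torch.tensor(1.)

# Scripted code now follows the current default dtype, matching eager mode.
print(torch.tensor(1.).dtype, make().dtype)  # torch.float32 torch.float32
```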

Differential Revision: D21043617

Pulled By: eellison

fbshipit-source-id: 38da303594f52e06941d86b6e57c4a06e7d36938
2020-04-16 10:55:49 -07:00
Elias Ellison
9cbeb0faed [JIT] Dont optimize shape peepholes on inline (#36404)
Summary:
With https://github.com/pytorch/pytorch/pull/35562, we are running peephole optimization on inlining to reduce the number of nodes that are copied.

The tracer encodes the sizes in the graph like:
```
graph(%0 : Double(7)):
  %1 : Function = prim::Constant[name="tensor_size"]()
  %2 : Tensor = prim::CallFunction(%1, %0)
  return (%2)
```

However, people would like to reuse the graph with different shapes, so running size invalidations would break that. Long term it might be better for the tracer to not include shape information, but there are downstream users of that.

Separates out FuseAddMM from the peephole pass so that there is now a single `disable_size_optimizations` parameter, and ONNX explicitly invokes FuseAddMM.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36404

Differential Revision: D20968974

Pulled By: eellison

fbshipit-source-id: 56f8f1699e3b0adeeccdfd5a67bb975fd41a2913
2020-04-15 17:49:48 -07:00
davidriazati
8d66f88eb1 [jit] Fix bound method copying (#36546)
Summary:
Previously we were copying the bound method of the original class to the
new script module class, which caused `self` to be wrong. This PR
changes it so we fetch the unbound function, bind it to the new
script module, and then attach it to the module.

Fixes #28280
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36546

Pulled By: driazati

Differential Revision: D21023329

fbshipit-source-id: 6b3f8404700860151792f669a9c02fbd13365272
2020-04-15 17:38:20 -07:00
Lu Fang
67e0bf14b7 Add support of Dict as output when connecting script and tracing (#36265)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36265

Reviewed By: hl475

Differential Revision: D20927160

Pulled By: houseroad

fbshipit-source-id: 5a63022e92d234b97b57d60ef7f7aa3bc41c2d22
2020-04-14 16:06:53 -07:00
Wanchao Liang
999d7f6ab2 [jit] tracer flag to guard risky behaivors (#36277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277

This PR introduces a flag to the tracer that guards risky behaviors
like returning a list/dict as the output of the tracer. Currently, to avoid
breaking BC for users, we emit a warning if the tracer output is a list, and
will throw an error when the tracer output is a dict to enforce the use of this
flag (next PR).

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
2020-04-13 22:35:03 -07:00
Nikita Shulga
fd008bd170 Make patterns in test_unmatched_annotations more flexible (#36422)
Summary:
To make them compatible with python3.7 and python3.8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36422

Test Plan: CI

Differential Revision: D21006399

Pulled By: malfet

fbshipit-source-id: 725df277ff3e4479fc2c39d16a30fbf301fde9e5
2020-04-13 17:53:37 -07:00
Wanchao Liang
3526627f46 Use unittest assertWarns instead (#36411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36411

This PR removes the PyTorch-specific assertWarns helper and uses the unittest
one; it also formats some tests.

Test Plan: Imported from OSS

Differential Revision: D20998159

Pulled By: wanchaol

fbshipit-source-id: 1280ecff2dd293b95a639d13cc7417fc819c2201
2020-04-13 15:56:42 -07:00
Elias Ellison
8cb1950805 [JIT] fix alias assertion (#36178)
Summary:
AnyType wasn't listed as a mutable type, so the assertion triggered (yay!). Also update the `isMutableTypeInternal(from) != isMutableTypeInternal` logic to be more encompassing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36178

Differential Revision: D20922356

Pulled By: eellison

fbshipit-source-id: 7060a62b18e98dc24b6004a66225c196aadb566e
2020-04-09 18:25:18 -07:00
Jerry Zhang
358466f1da [quant] Move graph mode quantization tests to test_quantize_script.py (#36324)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36324

Test Plan:
.

Imported from OSS

Differential Revision: D20948046

fbshipit-source-id: 2dd8f0c6fbe8fd84293420b97592dc586d25def9
2020-04-09 16:10:18 -07:00
Mike Ruberry
62f9312abd Revert D20783298: Fix naming of "strides" method in TensorType
Test Plan: revert-hammer

Differential Revision:
D20783298

Original commit changeset: 8fcc146284af

fbshipit-source-id: 30e3cb6d7a30d82048534d4d2e794b7e08ae01bb
2020-04-09 04:24:43 -07:00
David Reiss
16980e455f Fix naming of "strides" method in TensorType (#35170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35170

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.

Imported from OSS

Differential Revision: D20783298

fbshipit-source-id: 8fcc146284af022ec1afe8d651baf6721b190ad3
2020-04-08 15:59:28 -07:00
Edward Yang
83907ded1d Revert D20895316: [pytorch][PR] [JIT] List reland
Test Plan: revert-hammer

Differential Revision:
D20895316

Original commit changeset: 9a2bc0e6bdcb

fbshipit-source-id: d135f0038cf240a0973ecfcd540121cbd4ecb5a7
2020-04-08 14:40:10 -07:00
Elias Ellison
9ada7abc18 [JIT] fix comprehension scope writes (#36105)
Summary:
In a comprehension like:
```
    def f()->int:
        i = 1
        x = [i for i in range(7)]
        return i
```
the variables inside the comprehension do not write to the function environment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36105

Differential Revision: D20880699

Pulled By: eellison

fbshipit-source-id: 40af0f7470e0baeff7ef158cb461bf85c816d169
2020-04-08 10:00:45 -07:00
Elias Ellison
2afe171538 [JIT] List reland (#36146)
Summary:
Relanding https://github.com/pytorch/pytorch/pull/33783
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36146

Differential Revision: D20895316

Pulled By: eellison

fbshipit-source-id: 9a2bc0e6bdcbd43f9abe51eadaa28f90bccafcc9
2020-04-07 16:18:30 -07:00
Elias Ellison
6bc8ffe824 [JIT] Optimize before inlining (#35562)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/35424, only this time I run optimizations in the right order so the PR description is actually true.

This speeds up the inlining pass of FairSeq model from 180s -> 13s, and MaskRCNN model from 5s -> 1.5s.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35562

Differential Revision: D20738922

Pulled By: eellison

fbshipit-source-id: 1439cf9d1f0bc780e2d64a744694f8b3b7ba4b70
2020-04-07 09:42:26 -07:00
James Reed
3228939f23 [JIT] Fix fake_range() (#36083)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36083

Test Plan: Imported from OSS

Differential Revision: D20874745

Pulled By: jamesr66a

fbshipit-source-id: fc57defefbc8e9840b8d5bac89b4146179e00b06
2020-04-06 14:12:35 -07:00
davidriazati
71669f0249 Fix flake8 (#35968)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35968

Pulled By: driazati

Differential Revision: D20845617

fbshipit-source-id: 1b1cedb9c5c721f7f7edf94b91fbbb97d249bc2a
2020-04-03 14:02:37 -07:00
davidriazati
6e13a7787b [jit] Fix type comparisons segfault (#35929)
Summary:
Pybind will convert `None`s to `nullptr`s, so this adds a check to make
sure those don't get into the actual type comparison logic. Fixes #35778
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35929

Pulled By: driazati

Differential Revision: D20831278

fbshipit-source-id: 5800050e5eec280072afde58141ad00c1e8db8e2
2020-04-03 11:33:48 -07:00
Zachary DeVito
9097b55479 Propagate static_if more completely. (#35834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35834

This handles the cases we did not handle before in AND and OR statements:

    static_true || <unknown> -> static_true
    static_false && <unknown> -> static_false
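A minimal Python sketch of the propagation rules above (`None` standing in for a statically unknown value; this is not the actual C++ pass):

```python
from typing import Optional

def static_or(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    if a is True or b is True:
        return True        # static_true || <unknown> -> static_true
    if a is False and b is False:
        return False
    return None            # still unknown

def static_and(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    if a is False or b is False:
        return False       # static_false && <unknown> -> static_false
    if a is True and b is True:
        return True
    return None            # still unknown
```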

Test Plan: Imported from OSS

Differential Revision: D20801125

Pulled By: zdevito

fbshipit-source-id: 0ef94c3a14c7af91580fc5248a4ccfd9e8d6d481
2020-04-02 11:44:34 -07:00
Michael Suo
866d9d4e6a [jit] Fix name collision on load (#35720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35720

When modules are saved, all relevant types are serialized according to
their qualified name within a compilation unit. Since qualified names are
guaranteed to be unique within a compilation unit, this normally works
fine.

On load, all types are registered in a compilation unit owned by the
script::Module. Type names are not unique across compilation units, so
if you load two modules with colliding type names, make them submodules
of yet another module, and save that module, there is the potential of a
name collision. See the added tests for examples if that description is
confusing.

The solution is to unique type names when serializing code by mangling
them if we detect a name collision.

Test Plan: Imported from OSS

Differential Revision: D20749423

Pulled By: suo

fbshipit-source-id: a8827ff1d4a89f3e7964dbbb49b4381863da3e6a
2020-04-01 00:02:38 -07:00
Elias Ellison
1ec0676a33 [JIT] register list prim ops cleanup (#35768)
Summary:
This is a follow up from https://github.com/pytorch/pytorch/pull/34520, which removed specialized list ops. This removes templating from list ops.

It also has one other minor change: moving `aten::len(t[]) -> int` to `aten::len(Any[]) -> int` so that `len()` can be called on heterogeneous tuples.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35768
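A hedged sketch of the `len()` change (the tuple type below is illustrative):

```python
import torch
from typing import Tuple

@torch.jit.script
def tuple_len(t: Tuple[int, str, torch.Tensor]) -> int:
    # len() now dispatches through a single aten::len(Any[]) overload,
    # so it also works on a heterogeneous tuple.
    return len(t)

print(tuple_len((1, "a", torch.zeros(2))))  # 3
```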

Differential Revision: D20772943

Pulled By: eellison

fbshipit-source-id: bc36a00920bc94ca8c5aa9eb7d5d7a640388ffbb
2020-03-31 19:24:59 -07:00
Jerry Zhang
9650f465ce [quant][graphmode] Quantization support for at::sort (#35571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35571

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20769874

fbshipit-source-id: 7d6805754416fd9c4a3d84d42af756e1926111c2
2020-03-31 14:54:16 -07:00
Jerry Zhang
4e19e02976 [quant][graphmode] Quantization support for quantized::add_scalar_relu and quantized::add_scalar_relu_out (#35509)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35509

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20742138

fbshipit-source-id: f6216d0af5da2bd5629aa4909f05dcde7853c8b8
2020-03-30 14:44:38 -07:00
Jerry Zhang
340048b67c [quant][graphmode] Remove unused patterns (#35385)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35385

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655298

fbshipit-source-id: bc5eda2640a809adb55d3d645c65fb02a6f2f444
2020-03-29 23:48:15 -07:00
Jerry Zhang
86be6443d8 [quant][graphmode] Quantization support for aten::conv3d (#35347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35347

Test Plan:
python test/test_jit.py TestJit.test_quantized_conv3d

Imported from OSS

Differential Revision: D20655304

fbshipit-source-id: 2ab6a977eda9064fbb8051669738f37b90f13b6f
2020-03-29 17:39:06 -07:00
Jerry Zhang
efec027653 [quant][graphmode] prepare_script takes original qconfig_dict (#35335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35335

We'll script the qconfig_dict in `prepare_script`

Test Plan:
regression tests in `python test/test_jit.py`

Imported from OSS

Differential Revision: D20655311

fbshipit-source-id: 002bfd905ff9a9b298a8073d42e12cfffcd1eb71
2020-03-28 18:36:46 -07:00
Jerry Zhang
444332710c [quant][graphmode] Quantization support for quantized::add_scalar (#35334)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35334

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655299

fbshipit-source-id: 66e1fa215a4a40f40dc7abe442c05bb5b6b20cfe
2020-03-28 14:00:44 -07:00
Nick Korovaiko
76d5102587 add a cuda/fuser job for legacy graph executor (#35419)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35419

Differential Revision: D20719013

Pulled By: Krovatkin

fbshipit-source-id: 745d9523a5a9b7b4b556a075351ea58a82501dff
2020-03-28 12:11:18 -07:00
Jerry Zhang
f1d69cb2f8 [quant][graphmode] Quantization support for permute and repeat_interleave (#35332)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35332

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655306

fbshipit-source-id: 43dce62ce178d5c7e68b27fd88ed5d2958014c7b
2020-03-27 22:40:25 -07:00
Jerry Zhang
df27b32014 [quant][graphmode] Make interpolate/upsample work again (#35130)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35130

Test Plan:
python test/test_jit.py TestJit.test_swap_dequantize_all_ops

Imported from OSS

Differential Revision: D20655303

fbshipit-source-id: 5ad8c6de28bcabffdfab4c9bc6a61f19f1d061cc
2020-03-27 22:38:57 -07:00
Jerry Zhang
76a8d30693 [quant][graphmode] Fold quantized prepacking ops (#35077)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35077

Fold the prepack ops: `quantized::linear_prepack` and `quantized::conv2d_prepack` after
`freeze`

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655301

fbshipit-source-id: fbb4223323f788c88db7b55cfafda46fad106d49
2020-03-27 17:51:51 -07:00
Nikolay Korovaiko
9e22d15f14 Enable tensorexpr cpp tests in CI. try #2 (#35454)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35454

Differential Revision: D20665160

Pulled By: Krovatkin

fbshipit-source-id: e04cbe92b2ee5a3288f3c4e5c83533bfea85bf85
2020-03-27 12:09:55 -07:00
Martin Yuan
da4e68faed Make operator names consistent between export_opnames and the lite interpreter (#34674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34674

Two changes to make sure the op_names dumped in export_opnames() are consistent with what is actually used in the bytecode:
* Inline the graph before dumping the operator names.
* Use the code of the graph (which is what the bytecode uses) instead of the nodes of the graph.
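A hedged usage sketch of the Python entry point involved (the module and ops are made up):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        return torch.relu(x + 1)

scripted = torch.jit.script(M())
# Operator names as seen after inlining, i.e. what the bytecode will
# actually reference.
print(torch.jit.export_opnames(scripted))
```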

Test Plan: Imported from OSS

Differential Revision: D20610715

Pulled By: iseeyuan

fbshipit-source-id: 53fa9c3b36f4f242b7f2b99b421f4adf20d4b1f6
2020-03-26 22:50:59 -07:00
Ailing Zhang
77bbbf042d [JIT]Support converting str to float. (#35352)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35352

Differential Revision: D20649286

Pulled By: ailzhang

fbshipit-source-id: e9b09bddd0fe3c962a7514d45fd069cd0b4e6df1
2020-03-26 20:24:59 -07:00
Edward Yang
e0c227d376 Revert D20655246: [jit] add module interface tests to test_jit
Test Plan: revert-hammer

Differential Revision:
D20655246

Original commit changeset: 9e1f865b3f2d

fbshipit-source-id: 241f10738df714efb662f1c53551617dd1558b13
2020-03-26 06:53:19 -07:00
Suraj Menon
aa01a95c6d Revert D20630760: [pytorch][PR] Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP]
Test Plan: revert-hammer

Differential Revision:
D20630760

Original commit changeset: 7d2f27aca6b1

fbshipit-source-id: 28ac92b3390651a4a67061d6ebf208515b9b9463
2020-03-25 20:34:46 -07:00
Nikolay Korovaiko
f3a5081bd4 Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP] (#34897)
Summary:
This  PR add tensorexpr cpp tests to test_jit.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34897

Differential Revision: D20630760

Pulled By: Krovatkin

fbshipit-source-id: 7d2f27aca6b1e23e3ffed1c765d8f590688118e3
2020-03-25 17:23:48 -07:00
Jerry Zhang
ccc0e35275 [quant][graphmode] quantization support for prim::CallFunction (#34855)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34855

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20655305

fbshipit-source-id: 44cc3525967048fb9d9c145b342ac7d76b22e4db
2020-03-25 17:17:19 -07:00
Wanchao Liang
d7c255d2fc [jit] add module interface tests to test_jit (#35417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35417

Surprised it's not getting run by test_jit; added it.

Test Plan: Imported from OSS

Differential Revision: D20655246

Pulled By: wanchaol

fbshipit-source-id: 9e1f865b3f2d23b63d4d605aaf2dc3a483a4f0e1
2020-03-25 15:25:28 -07:00
Jerry Zhang
15e5453977 [reland][quant][graphmode] Add quantization support for aten::cat (#34346) (#35337)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35337

Test Plan: python test/test_jit.py

Differential Revision: D20648201

Pulled By: jerryzh168

fbshipit-source-id: f6570c3ee2f48a9bc6373d2af873824ac2c8ef62
2020-03-25 12:45:21 -07:00
Elias Ellison
5b2f8cef08 [JIT] Functional Graph Pass (#33020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33020

This is a pass to create functional blocks. The other PRs in the stack help avoid some of the limitations that are often found in graphs. It's possible that this would work well with a graph that is frozen. Follow-up work items that will help this pass:

- We don't currently have any capacity in alias analysis to tell whether a Value that came from the wildcard set "re-escapes" back into the wildcard set.
- More comments on the semantics of the graph and correctness conditions
- We could consider using dynamic dag if the perf of this is a limitation.
- Potentially make Functional Graphs Functional Blocks instead, so that we do not repeatedly copy constants, and also to make the IR easier to read.

Test Plan: Imported from OSS

Differential Revision: D20603188

Pulled By: eellison

fbshipit-source-id: 6822a6e65f4cc2676f8f6445fe8aa1cb858ebeeb
2020-03-24 23:44:18 -07:00
Alban Desmaison
ee7cd84fac Revert D20589145: [quant][graphmode] Add quantization support for aten::cat
Test Plan: revert-hammer

Differential Revision:
D20589145

Original commit changeset: c9159fffa88c

fbshipit-source-id: c6b8db13ed1ed19f4437b2fa70d88ce139d445e1
2020-03-24 16:24:22 -07:00
Jerry Zhang
6b5740c5f6 [quant][graphmode] Add quantization support for aten::cat (#34346)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34346

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20589145

fbshipit-source-id: c9159fffa88cf25fcdccfcc4eef2622cf4b250b5
2020-03-24 13:56:43 -07:00
davidriazati
44622bbda9 [jit] Add lazy script decorator (#34935)
Summary:
Stacked PRs
 * #34938 - [jit] Remove stray `script`
 * **#34935 - [jit] Add lazy script decorator**

Some users maintain libraries of code that is largely traceable but not
scriptable. However, some functions may need to be `torch.jit.script`ed if
they contain control flow, so that the tracer will use the compiled version.
This, however, impacts library start-up time (see #33418), so this PR adds
a workaround in the form of `torch.jit._lazy_script_while_tracing`,
which only initializes the compiler if the function is called while
actually tracing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34935

Pulled By: driazati

Differential Revision: D20569778

fbshipit-source-id: d87c88c02b1abc86b283729ab8db94285d7d4853
2020-03-24 13:43:18 -07:00
James Reed
618c6214aa [reapply][JIT] Namespaces for TorchBind (#35254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35254

Reapply D20541090 with some BC fixes
ghstack-source-id: 100733987

Test Plan: buck test mode/dev-nosan //caffe2/torch/fb/predictor/model_repo/tests:ai_infra_representative_model_shard_6_test -- 'RepresentativeModelTest\/ShardedRepresentativeModelTest\.RunModel\/0'

Reviewed By: zdevito

Differential Revision: D20607111

fbshipit-source-id: 80f148d860571208c93e9308128cd480ff089f74
2020-03-24 00:39:48 -07:00
Jerry Zhang
537fdd77d5 [quant][graphmode] quantization support for view, transpose, contiguos (#34854)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34854

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524456

fbshipit-source-id: e6e8fc3db6cccbd32c210d04f921274d81996fe2
2020-03-23 22:33:39 -07:00
Jerry Zhang
4a96911629 [quant][graphmode] quantization support for aten::chunk (#34806)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34806

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524454

fbshipit-source-id: 92ac9bc251581e963258cb90dc3de73f8508c822
2020-03-23 22:33:34 -07:00
Jerry Zhang
ac4a0224f3 [quant][graphmode] Replicate quantize node for prim::If (#34804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34804

We want to replicate the quantize node for return values in blocks of prim::If
in order to create the quantization patterns.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20524453

fbshipit-source-id: 2268ac555f646158f4e1ffc98ccc8101d7504194
2020-03-23 21:20:45 -07:00
Jerry Zhang
eff68bc872 [quant][graphmode] quantization support for aten::add (#34572)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34572

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519607

fbshipit-source-id: c57e062cffc24a47a76b73b58aff7f9ef80183fa
2020-03-23 17:52:28 -07:00
Elias Ellison
7ab25b2e6b [JIT] add id function (#34975)
Summary:
Add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value that can be zero. Python also only guarantees that `id()` gives semantically meaningful results for pointer types.

EDIT: now only allowing id on class types
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Reviewed By: driazati

Differential Revision: D20599564

Pulled By: eellison

fbshipit-source-id: 3c6666a9b9b0258198adc70969dd6332e3375e4f
2020-03-23 17:10:13 -07:00
Jerry Zhang
a00e12e755 [quant][graphmode] weight/bias of linear/conv can be reused for multiple ops (#35221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35221

When weight is reused, we only need to insert one observer for the weight

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20602492

fbshipit-source-id: e003e6316f6615f3526f0d00fb7b722148b4749b
2020-03-23 14:21:59 -07:00
Elias Ellison
4fae5a6721 Move module graph creation to testing utils (#34917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34917

Test Plan: Imported from OSS

Differential Revision: D20539338

Pulled By: eellison

fbshipit-source-id: 5c46c0ce50e5bcccf5abee264f432ded7d36d040
2020-03-23 11:59:02 -07:00
Elias Ellison
77ccb5c14d Move functional graph creation to testing utils (#34916)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34916

Test Plan: Imported from OSS

Differential Revision: D20539337

Pulled By: eellison

fbshipit-source-id: 9b777e369facebbe68fe198ca3eec055cf9c5257
2020-03-23 11:57:25 -07:00
Jerry Zhang
3e4076aa9c [quant][graphmode] quantization work for prim::If (#34518)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34518

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519606

fbshipit-source-id: 94d49e18d97df642cbcb446df12376f6d2a397bc
2020-03-23 09:54:24 -07:00
albanD
0e0386b434 Revert "[JIT] add id function (#34975)" (#35209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35209

This reverts commit 62f11f0a35.

Test Plan: Imported from OSS

Differential Revision: D20596847

Pulled By: albanD

fbshipit-source-id: e6777e42356aac772e59f0466a92cc13258218c1
2020-03-23 08:42:09 -07:00
Jerry Zhang
28bf0038e5 [quant][graphmode][fix] Insert dequantize before use node (#34411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34411

Make sure dequantize and the node that uses the dequantized value reside in the same
block, so that we can do quant fusion.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20519603

fbshipit-source-id: 3e4c68d0a73142716e19ea6a64ae3a5d6d51fa41
2020-03-23 08:07:33 -07:00
Lu Fang
a100cf5146 Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
Test Plan: revert-hammer

Differential Revision:
D20541090

Original commit changeset: ce3d9391dd3c

fbshipit-source-id: acc1d660fbda611941381315507dfe594c385db1
2020-03-21 12:20:44 -07:00
James Reed
e0496a70fc [JIT][torchbind] Namespaces for torchbind classes (#35054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35054

Test Plan: Imported from OSS

Differential Revision: D20541090

Pulled By: jamesr66a

fbshipit-source-id: ce3d9391dd3cdf619042b8f6ba2645f4c1fc875c
2020-03-20 20:07:02 -07:00
Kimish Patel
3e58cba3c5 Fixes the Conv2d batch_norm folding for various cases. (#34932)
Summary:
This PR adds a preprocessing step to Conv2d-BatchNorm folding.
It traverses the module to check if the bias of the Conv2d module is set to
None. If it is, it assumes that this is a traced module and inserts a
bias of type Optional[Tensor].
Furthermore, it inserts a getAttr for the bias in the forward graph and fixes
the _convolution op to take values from that getAttr.
It also fixes parameter extraction from the BN module, which may not
have weight and bias attributes if affine was set to False. In scripted
mode such a BN module will get weight and bias attributes set to None.
The case of eps, which gets constant-propagated during tracing, is also
fixed.
A few test cases are added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34932

Test Plan:
python test/test_jit.py TestJit.test_foldbn_trivial
python test/test_jit.py TestJit.test_foldbn_trivial_nobias
python test/test_jit.py TestJit.test_foldbn_in_submodule
python test/test_jit.py TestJit.test_foldbn_shared_classtype
python test/test_jit.py TestJit.test_foldbn_complex_cases
python test/test_jit.py TestJit.test_nofoldbn_complex_cases

Differential Revision: D20536478

Pulled By: kimishpatel

fbshipit-source-id: 4e842976a380d0575a71001bb4481390c08c259e
2020-03-20 20:06:44 -07:00
Elias Ellison
62f11f0a35 [JIT] add id function (#34975)
Summary:
Add an `id` function to give users a way of keeping a `seen` set of nn modules.
In practice, this is only used between values of `T` and `T` or `T` and `Optional[T]`, so in this implementation I made it so that None is the only value that can be zero. Python also only guarantees that `id()` gives semantically meaningful results for pointer types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34975

Differential Revision: D20549677

Pulled By: eellison

fbshipit-source-id: cca5ed4ef013f7540f93abf49f91f9830dfdca14
2020-03-20 20:03:10 -07:00
Elias Ellison
bcbde490e4 Fix flake (#34974)
Summary:
fix flake, add overload names
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34974

Differential Revision: D20519191

Pulled By: eellison

fbshipit-source-id: d08d36b64397287cad484690074e694d8a0e472e
2020-03-18 16:45:33 -07:00
Jerry Zhang
b2e5e0cad6 [quant][graphmode] quantization support for aten::rehshape (#34803)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34803

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504457

fbshipit-source-id: 5ca691ef4880c72d30d62390e63e3288b2f06dce
2020-03-18 15:40:43 -07:00
Jerry Zhang
d77d907f0e [quant][graphmode] Add quantization support for aten::dropout (#34347)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34347

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20504453

fbshipit-source-id: 1bab29e21d0564ed88cdeb4894addfe00ebbd390
2020-03-18 14:35:27 -07:00
Michael
f3b8a470e1 Added functionality for all to take Lists as input (#34582)
Summary:
New pull request after a rebase error in pull request https://github.com/pytorch/pytorch/issues/33923
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34582
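A hedged sketch of the new builtin support:

```python
import torch
from typing import List

@torch.jit.script
def every_positive(xs: List[int]) -> bool:
    return all([x > 0 for x in xs])

print(every_positive([1, 2, 3]))   # True
print(every_positive([1, -2, 3]))  # False
```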

Differential Revision: D20447689

Pulled By: eellison

fbshipit-source-id: 4296b64185eccb136b1b614b532deb3af20c7544
2020-03-18 12:01:30 -07:00
Jerry Zhang
841f7600bb [quant][graphmode] Quantization pattern for aten::linear (#33854)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33854

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20493031

fbshipit-source-id: bafd0a3ba5d07327d451b3915f043db33b012b53
2020-03-17 16:36:30 -07:00
Owen Anderson
a4224886f3 Eliminate guards through max_pool ops. (#34512)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34512

Differential Revision: D20478962

Pulled By: resistor

fbshipit-source-id: 86fc926305f95cae8b334ed344d8e0cdd1ef7b2b
2020-03-17 14:00:00 -07:00
James Reed
699a4ed8f5 [testing][do not land] (#34605)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34605

Test Plan: Imported from OSS

Differential Revision: D20393219

Pulled By: jamesr66a

fbshipit-source-id: c74d886f5f01061294203a002b72b75a3c446f09
2020-03-16 23:56:00 -07:00
peter
24c9e61e79 Enable JIT tests on Windows (#27029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27029

Reviewed By: eellison

Differential Revision: D20458664

Pulled By: jamesr66a

fbshipit-source-id: 22be918543703869f471e89b3478423198351bf3
2020-03-16 11:26:21 -07:00
Jerry Zhang
cec9758afa [quant][graphmode] Add quantization pattern for quantized::add_relu (#33532)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33532

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354880

fbshipit-source-id: ea608a5ace395a909851f9e577ffdcb51512a3af
2020-03-16 10:20:57 -07:00
Tugrul Ince
08bc3c6cbf Remove unnecessary import (#34778)
Summary:
https://github.com/pytorch/pytorch/issues/34563 accidentally introduced a lint error due to an unused import. This PR removes this import.

Jit tests run as expected after this change:
```
> python test/test_jit.py
.....
Ran 2435 tests in 100.077s

OK (skipped=140, expected failures=1)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34778

Differential Revision: D20459708

Pulled By: tugrulince

fbshipit-source-id: bb742085fafc849ff3d9507d1557556e01fbeb4b
2020-03-15 09:56:55 -07:00
Jerry Zhang
5710374e4e [reland][quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279) (#34744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34744

att

Test Plan: python test/test_jit.py

Differential Revision: D20449667

Pulled By: jerryzh168

fbshipit-source-id: 01bbc26604fac421dcaacaf4fa1b57731f1f08b7
2020-03-14 01:03:18 -07:00
Zachary DeVito
52005b551c invokeOperatorFromPython: support overloaded operator calling (#34671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34671

Like the python arg parser, this tries to convert to the schema in order.
It introduces schema_match_exception which gets thrown when the schema doesn't match,
allowing the overload handler to try the next option.

Behavior will not 100% match the schema argument parser but should work for
simple cases using custom binding.

Test Plan: Imported from OSS

Differential Revision: D20432206

Pulled By: zdevito

fbshipit-source-id: 280839a2205ea3497db3a9b5741fccc1e2bff9a8
2020-03-13 18:46:03 -07:00
Jerry Zhang
e7910aa9e5 [fix] use non-inplace for insert observer pass (#34190)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34190

inplace modification of ClassType might affect other tests, so we want to do non-inplace modifications.
Actually the inplace argument will be removed soon.

Test Plan:
ci

Imported from OSS

Differential Revision: D20451765

fbshipit-source-id: e87ad528c4e7f84f5774b94a8e3e85568269682d
2020-03-13 17:25:07 -07:00
Tugrul Ince
c9023e3b12 Support left and right shift operators in JIT (#34563)
Summary:
With this PR, we can now support left and right shift operators in the JIT engine for <int, int> and <Tensor, int>.

Updated tests pass as expected:
```
> python test/test_jit.py
...
Ran 2427 tests in 84.861s

OK (skipped=139, expected failures=1)
```

Running the following code with Python results in the output below:
```
> cat ~/expressions.py
import torch

torch.jit.script
def fn(a, b):
    # type: (int, int)
    return (
        a << b,  # supported
        b >> a,  # supported
        a & b,
        a | b,
        a ^ b
    )
print(fn.graph)
```

```
> python ~/expressions.py
graph(%a.1 : int,
      %b.1 : int):
  %4 : int = aten::leftshift(%a.1, %b.1) # /home/ince/expressions.py:7:8
  %7 : int = aten::rightshift(%b.1, %a.1) # /home/ince/expressions.py:8:8
  %10 : int = aten::__and__(%a.1, %b.1) # /home/ince/expressions.py:9:8
  %13 : int = aten::__or__(%a.1, %b.1) # /home/ince/expressions.py:10:8
  %16 : int = aten::__xor__(%a.1, %b.1) # /home/ince/expressions.py:11:8
  %17 : (int, int, int, int, int) = prim::TupleConstruct(%4, %7, %10, %13, %16)
  return (%17)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34563

Differential Revision: D20434209

Pulled By: tugrulince

fbshipit-source-id: 886386c59755106e17b84778b8e495b80a6269cd
2020-03-13 13:00:33 -07:00
Jerry Zhang
e9a660a160 Revert D20354878: [quant][graphmode] Add quantized conv2d-relu fusion pattern
Test Plan: revert-hammer

Differential Revision:
D20354878

Original commit changeset: 2b19797d4b3f

fbshipit-source-id: 18f447074794af0d579e145df02af47d01746921
2020-03-12 21:29:08 -07:00
Jerry Zhang
0ff4d37933 [quant][graphmode] Add quantized conv2d-relu fusion pattern (#33279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33279

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20354878

fbshipit-source-id: 2b19797d4b3fd96918164a58bfbd768211ad6c6d
2020-03-12 19:49:57 -07:00
Jerry Zhang
90ca7a1feb [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33927

Test Plan:
test will be added in later PRs

Imported from OSS

Differential Revision: D20354879

fbshipit-source-id: 03976f4b86c46dbdc4e45764a1e72f1a3855a404
2020-03-12 14:52:58 -07:00
Elias Ellison
514cba0661 [JIT] remove builtin interpolate functions (#34514)
Summary:
`torch.nn.functional.interpolate` was written as a builtin op when we scripted the standard library, because it has four possible overloads. As a result, whenever we make a change to `interpolate`, we need to make changes in two places, and it also makes it impossible to optimize the interpolate op. The builtin is tech debt.

I talked with ailzhang, and the symbolic script changes are good to remove (I guess that makes a third place where we needed to re-implement interpolate).

I'm trying to get rid of unnecessary builtin operators because we're standardizing mobile bytecode soon, so we should try to get this landed as soon as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34514

Differential Revision: D20391089

Pulled By: eellison

fbshipit-source-id: abc84cdecfac67332bcba6b308fca4db44303121
2020-03-12 09:21:33 -07:00
James Reed
1f834b5c2a [JIT] Torchbind error if python instantiate class that doesnt exist (#34568)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34568

Test Plan: Imported from OSS

Differential Revision: D20378106

Pulled By: jamesr66a

fbshipit-source-id: 395a3b05d23727b9cfd074440b2d0e8ef002ec09
2020-03-11 13:13:08 -07:00
ettiee
2cf576e9ea small typos (#34589)
Summary:
Spotted a couple of small typos 🙏
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34589

Differential Revision: D20387653

Pulled By: ngimel

fbshipit-source-id: 3089fe606ccb8c8ee57cf7a900aba714fd0ce567
2020-03-11 11:01:31 -07:00
Nikolay Korovaiko
e16908cb1f profile block outputs; helps guard elimination (#33889)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33889

Reviewed By: zdevito

Differential Revision: D20294979

Pulled By: Krovatkin

fbshipit-source-id: 2a68710ec8f8f854c99dfe173f49da442a39e498
2020-03-09 17:12:58 -07:00
Nikolay Korovaiko
0a4a558c2c Dictionary Constants (#32869)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32869

Differential Revision: D19909339

Pulled By: Krovatkin

fbshipit-source-id: 6fe2a9b470768f84b957c69cdf9af3a1bd9b1ca9
2020-03-09 16:12:36 -07:00
Jerry Zhang
2e7eef41ac [quant][graphmode] Swap quantized functional linear with aten::linear (#33853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33853

Quant fusion relies on inlining, but inlining will break the CallFunction("linear", ...) into an if block,
and it would be hard to recognize this block and swap it with quantized::linear. In order to
preserve the op, we swap all quantized functional linear calls into aten::linear.
They might produce a different backward graph, but this is done in the step before we get the quantized
model, so it shouldn't affect anything.
We'll integrate this with convert_script later in the new "finalize_quant" API.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20343873

fbshipit-source-id: 423e03bf893b79267d2dc97bc997ee1bfe54ec0f
2020-03-09 15:45:20 -07:00
davidriazati
2c0f3536b6 [jit] Make ModuleLists a sugared value (#34320)
Summary:
Previously when emitting subscripts we only emitted actual values, but
now they may sometimes emit a `ModuleValue`, so the result should stay a
`SugaredValue`. This allows the result of the subscript to be
treated as a real module (i.e. you can just do `self.modlist[1](inputs)`
instead of `self.modlist[1].forward(inputs)`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34320
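A hedged sketch (the module shapes are made up):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.modlist = nn.ModuleList([nn.Linear(4, 4), nn.ReLU()])

    def forward(self, x):
        # The subscripted entry behaves like a real module and can be called.
        return self.modlist[1](self.modlist[0](x))

print(torch.jit.script(M())(torch.randn(2, 4)).shape)
```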

Pulled By: driazati

Differential Revision: D20345642

fbshipit-source-id: 2bedf9a454af747b704422f6bbb8370cbdf4bf61
2020-03-09 15:36:46 -07:00
Jerry Zhang
776d2a1e8f [quant][graphmode] Handling ops doesn't require observation in insertObservers (#33481)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33481

We have to propagate the observed property of values through ops like max_pool2d and flatten, and
avoid inserting duplicated observers.
For example:
```
x1 = self.conv(x)
x2 = maxpool(x1)
x3 = self.conv(x2)
```
If x1 is observed, we should propagate this information through maxpool and
we should consider x2 as observed as well.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20261897

fbshipit-source-id: 7de354a3ccb2b6e1708f5c743d4d9f7272691a93
2020-03-09 13:15:54 -07:00
Adam Paszke
e3d50c4dda Retain the order of parameters while generating ConcreteModuleTypes (#34131)
Summary:
`ConcreteModuleTypeBuilder` used to keep parameters together with all other attributes in an `unordered_map`, often reordering them while building up the type. Parameter order is semantically meaningful, so we need to preserve it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34131

Differential Revision: D20331542

Pulled By: suo

fbshipit-source-id: 5b860025f7902654d6099751d3fb14b12f6f5a67
2020-03-09 10:25:45 -07:00
Shen Li
30680196e4 Revert D20121915: [JIT] Add support for list()
Test Plan: revert-hammer

Differential Revision:
D20121915

Original commit changeset: c6c4ef444dbf

fbshipit-source-id: 829adb58780f4d0f41acebb3e7640a9c68bdbc1b
2020-03-06 07:16:40 -08:00
Elias Ellison
38857734f0 [JIT] fix py35 test (#34350)
Summary:
test_module_interfaces was using syntax only supported in Python >= 3.6
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34350

Reviewed By: mrshenli

Differential Revision: D20298869

Pulled By: eellison

fbshipit-source-id: 22319ca403113cff2eedf57767bb34d9580e6db3
2020-03-05 21:31:19 -08:00
Elias Ellison
78aebbcb88 [JIT] add other module apis (#34106)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34106

Test Plan: Imported from OSS

Differential Revision: D20283996

Pulled By: eellison

fbshipit-source-id: 88e7bc4547e96717d6c8efe0b25ede0d198d9e68
2020-03-05 16:12:29 -08:00
Elias Ellison
f218842f2e [JIT] Add support for list() (#33818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33818

Test Plan: Imported from OSS

Differential Revision: D20121915

Pulled By: eellison

fbshipit-source-id: c6c4ef444dbf1d4134dccb28c13315e225945b64
2020-03-05 14:48:20 -08:00
Elias Ellison
479c3b0aa5 [JIT] add support for torch.norm (#33783)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33783

Fix for https://github.com/pytorch/pytorch/issues/20113
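A minimal sketch:

```python
import torch

@torch.jit.script
def frob(x: torch.Tensor) -> torch.Tensor:
    return torch.norm(x)

print(frob(torch.ones(3, 3)))  # tensor(3.)
```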

Test Plan: Imported from OSS

Differential Revision: D20121917

Pulled By: eellison

fbshipit-source-id: ffedcc40678cd80f5529ff9323088eed544e5158
2020-03-05 14:46:24 -08:00
Jerry Zhang
6f52562e75 [quant][graphmode] Add add_relu pattern in skip values (#32816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32816

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208786

fbshipit-source-id: ef84b77f46f88b192a75c123aabaa203836a7dfb
2020-03-04 09:36:02 -08:00
Jerry Zhang
e5bbd23ca7 [quant][graphmode] Skip quantizing input and output in matched module (#32814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32814

We skip quantization of the intermediate values for patterns like `Conv - ReLU`,
but currently we don't skip quantizing the input/output of the graphs of matched modules.
Since we changed the way we add observers, this also needs to be updated.

Test Plan:
python test/test_jit.py -- 'TestJit.test_insert_observers_skip_values'

Imported from OSS

Differential Revision: D20208785

fbshipit-source-id: ce30f2c4c8ce737500d0b41357c80ec8b33aecf9
2020-03-03 18:38:36 -08:00
Elias Ellison
04378eb618 [JIT] Add modulelist indexing for integer literal (#29236)
Summary:
Allow indexing into modulelists for integer literals.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29236

Differential Revision: D19583935

Pulled By: eellison

fbshipit-source-id: 24d54051422a69769dac5e82f3bf622ded2bd8a6
2020-03-03 14:47:31 -08:00
Jerry Zhang
f26bbb5f86 [fix] flake8 lint error (#34146)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34146

Test Plan:
.

Imported from OSS

Differential Revision: D20228830

fbshipit-source-id: 41de3c27c10256939ae6309d25b0499f708a3dca
2020-03-03 13:15:27 -08:00
Zachary DeVito
358450e02b improved TorchScript traceback (#33834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33834

This changes how we report Tracebacks to make them more clear when
there are both serialized and non-serialized ranges. It now looks like:

```
Traceback (most recent call last):
  File "foo.py", line 25, in <module>
    s2(a, b)
  File "/scratch/zdevito/pytorch/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__.py", line 7, in forward
    x: Tensor,
    y: Tensor) -> Tensor:
    return (self).bar(x, y, )
            ~~~~~~~~~ <--- HERE
  def bar(self: __torch__.Moo,
    x: Tensor,
  File "code/__torch__.py", line 11, in bar
    x: Tensor,
    y: Tensor) -> Tensor:
    _0 = (self).baz(x, y, )
          ~~~~~~~~~ <--- HERE
    _1 = torch.ones([3], dtype=None, layout=None, device=None, pin_memory=None)
    return torch.add(_0, _1, alpha=1)
  File "code/__torch__.py", line 17, in baz
    x: Tensor,
    y: Tensor) -> Tensor:
    return torch.add(x, y, alpha=1)
           ~~~~~~~~~ <--- HERE

Traceback of TorchScript, original code (most recent call last):
  File "foo.py", line 11, in forward
    def forward(self, x, y):
        return self.bar(x, y)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 9, in bar
    def bar(self, x, y):
        return self.baz(x, y) + torch.ones(3)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 7, in baz
    def baz(self, x, y):
        return x + y
               ~~~~~ <--- HERE
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1
```

It follows the Python convention of having the most important information last
and reading from the bottom up.

Changes:
* Moved the error message to the end, to copy Python
* Report original traceback separate from serialized traceback
* Make sure root functions have names in the interpreter trace.

Test Plan: Imported from OSS

Differential Revision: D20126136

Pulled By: zdevito

fbshipit-source-id: fd01f9985e5d74e04c4d064c02e8bc320f4fac13
2020-03-03 12:27:38 -08:00
Jerry Zhang
5b9f1ada30 [quant][graphmode] Observing input/output values in call site (#33277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33277

Currently we insert the observer in the called graph, which is incorrect since graphs can be shared
and the decision of whether to insert an observer or not might depend on where the graph is called.
For example, for a call sequence `self.conv1(self.conv2(x))`, we can't insert observers correctly
if `self.conv1` and `self.conv2` share the same type in the current implementation, because right now we insert
the observer in the graph of the forward method of Conv2d, and this call sequence requires us to insert
only one observer for the output of self.conv2/input of self.conv1.
We'll need to insert observers for the input/output values of the graph at the call site instead.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20208787

fbshipit-source-id: 739e1d877639c0d0ed24e573bbd36211defa6836
2020-03-03 10:53:24 -08:00
davidriazati
11843049d5 [jit] Fix flipped PackedSequence outputs in script (#32955)
Summary:
Stacked PRs
 * **#32955 - [jit] Fix flipped PackedSequence outputs in script**
 * #32953 - [jit] Support properties on `Device`

Fixes #32605

Pull Request resolved: https://github.com/pytorch/pytorch/pull/32955

Pulled By: driazati

Differential Revision: D20165514

fbshipit-source-id: a130c438b40e51ec27d36f021b0dc7869570aa6a
2020-03-02 13:50:36 -08:00
Zino Benaissa
cab8772c6c Freezing Torchscript modules (#32178)
Summary:
This patch enables folding GetAttr nodes with their corresponding
values. The _jit_pass_freeze_module API returns a new TorchScript module
where all function calls and get-attributes are inlined.
Usage:

frozen_model = torch._C._freeze_module(scripted_model._c)
frozen_model.forward(...)

This API currently optimizes the forward method. We will follow up
to preserve and optimize methods and attributes that are annotated as
torch.jit.interface.

Several future improvements to JIT optimizations are required to further
clean up/de-sugar the graph and eliminate redundancies.
Ideally, we want to produce a graph that can easily be lowered to
GLOW and other low-level backends.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32178

Differential Revision: D19419640

Pulled By: bzinodev

fbshipit-source-id: 52baffaba9bca2cd60a8e747baa68d57711ad42b
2020-03-02 11:38:36 -08:00
Basil Hosmer
ad769d74d9 Collapse _like overloads into a single overload. (#33705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33705

The fact that there were two overloads appears to be a historical
artifact that dates back to when goldsborough originally added these
bindings in the first place.  If TensorOptions is made optional,
then you only need one overload, not two, as they are exactly redundant
with each other.  When MemoryFormat was added, it was made a little
harder to do this, as the C++ syntax at::empty_like(t, memory_format) would
not work if you collapsed the overload; but now it works because TensorOptions
supports MemoryFormat.

The upshot is, I can get rid of all the overloads and just have one overload.
Amazingly, this change is backwards compatible, as the test attests.  While
I was at it, I also deleted the overload name from the functions entirely.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D20073355

Pulled By: bhosmer

fbshipit-source-id: c6a8908213b32ccf6737ea864d135e2cce34f56b
2020-03-01 19:40:22 -08:00
Elias Ellison
85b1c45a45 [JIT] fix alias assertion (#33952)
Summary:
This bug has been hit a couple of times recently. We need to handle all bivariant types, not just optional, when asserting mutability/immutability of pointed-to elements in alias analysis.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33952

Differential Revision: D20166025

Pulled By: eellison

fbshipit-source-id: cf3df9897a639641ef8303a08ba2b13523d01ef1
2020-02-28 19:54:29 -08:00
davidriazati
2111c4ff0c [jit] Add missing tensor properties (#33906)
Summary:
Fixes #30775

This adds TorchScript implementations (copied from `python_variable.cpp`) for the remaining `Tensor` properties that were missing from the JIT, in addition to a test that ensures new properties will trigger a failure so we can decide whether we want to add them as well.

For `some_tensor`, adds:

* `some_tensor.T`
* `some_tensor.ndim`
* `some_tensor.is_leaf`
* `some_tensor.name`
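A minimal, hedged sketch of what becomes scriptable (illustrative only, not the PR's test):

```
import torch

@torch.jit.script
def props(t: torch.Tensor):
    # these properties previously failed to compile in TorchScript
    return t.T, t.ndim, t.is_leaf

print(props(torch.randn(2, 3)))
```
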
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33906

Pulled By: driazati

Differential Revision: D20153288

fbshipit-source-id: 2ddc48a14267077bc176065267e5ce52181b3d6b
2020-02-28 19:06:11 -08:00
Michael Suo
bd7e9c490a [jit] stop printing crap in test_jit (#33917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33917

Test Plan: Imported from OSS

Differential Revision: D20150750

Pulled By: suo

fbshipit-source-id: 9a35298a8856d423fb6b9043174853cccf968706
2020-02-27 19:06:43 -08:00
Brian Vaughan
910acafc79 Revert D20124224: [jit] stop printing crap in test_jit
Test Plan: revert-hammer

Differential Revision:
D20124224

Original commit changeset: 9241d21fdf94

fbshipit-source-id: 0680f9db922f9a33a4e859eedd142b87a51bbede
2020-02-27 13:40:34 -08:00
Brian Vaughan
243af17d65 Revert D20103905: [jit] Fix flipped PackedSequence outputs in script
Test Plan: revert-hammer

Differential Revision:
D20103905

Original commit changeset: 84081213ed21

fbshipit-source-id: 2b260654fac87e52fbaf8035018e4ea484928af1
2020-02-27 13:29:35 -08:00
Jerry Zhang
afbd04449e [quant][graphmode] Swap dequantize after inline for ops that doesn't require observation (#33173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33173

How do we deal with ops that are defined for both floating point and quantized Tensors?

Category of ops: the ones that don't require observers, which means the quantization parameters (scale/zero_point) of the output of this op can be inferred from the quantization parameters of its inputs.
For example:
avg_pool, max_pool, flatten, transpose, upsample

A related question is how we deal with ops like adaptive_avg_pool2d that do not require observation and also work on quantized tensors. If we insert quant/dequant for them, even the quant fusion becomes a numerically changing operation, because the scale/zero_point of the input and output are different.

Proposal

We can swap the operator with dequantize whenever we see this pattern.
Let's say aten::general_op is defined for both floating point and quantized tensors:

%r = aten::conv(...)
%q = quantize(%r)
%dq = dequantize(%q)
%f = aten::general_op(%dq)
...

When we detect that all inputs of aten::general_op are produced by dequantize, we first delete the dequantize nodes for the inputs and then insert a dequantize for each use of the output of aten::general_op; note that this should work generally for all the cases we might encounter.

After transformation we’ll have:

%r = aten::conv(...)
%q = quantize(%r)
%x = aten::general_op(%q)
%f = dequantize(%x)
...

1. Multiple inputs
    1. We need to make sure all inputs of the aten::general_op are produced by dequantize before we do this transformation
2. Input used by multiple operators
    1. We already did this by inserting dequantize for each use of the value
3. Output used by multiple operators
    1. We’ll reuse the code that inserts dequantize(might need some refactor)

Note that concat does not belong to this category right now, since it does not inherit quantization parameters from its inputs.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20123590

fbshipit-source-id: de2febe1f37e4079457a23acaeccbc6d9c9e1f8a
2020-02-27 12:42:29 -08:00
Shihao Xu
9733711394 [JIT] Support calling Tensor.element_size() in TorchScript (#33808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33808

# Problem

https://github.com/pytorch/pytorch/issues/33620
ghstack-source-id: 99073701

Test Plan:
```
buck test mode/dev-nosan //caffe2/test:jit -- test_numel

buck test mode/dev-nosan //caffe2/test:jit -- test_element_size

buck build mode/dev-nosan //caffe2/test:jit \
&& buck-out/gen/caffe2/test/jit\#binary.par -r test_numel

buck build mode/dev-nosan //caffe2/test:jit \
&& buck-out/gen/caffe2/test/jit\#binary.par -r test_element_size
```

Compile error

P126667043

Generated code,
```
buck-out/dev/gen/caffe2/generate-code=register_aten_ops_0.cpp/register_aten_ops_0.cpp

buck-out/dev/gen/caffe2/generate-code=register_aten_ops_2.cpp/register_aten_ops_2.cpp
```
P126667064

Differential Revision: D7050644

fbshipit-source-id: 20dbdb9c500b6d7683c23e3049d43ed0ca06d831
2020-02-26 22:30:44 -08:00
davidriazati
cea0cc8ca8 [jit] Unify augmented assign handling (#33578)
Summary:
Stacked PRs
 * **#33578 - [jit] Unify augmented assign handling**
 * #32993 - [jit] Fix aug assign for non-tensor attributes

We handle augmented assignments to `Select` and `Var` statements differently, but the actual in-place update is the same for both, so this PR factors it out into a single method to avoid having two code paths doing the same thing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/33578

Pulled By: driazati

Differential Revision: D20127647

fbshipit-source-id: 94f37acbd2551498de9d2ca09a514508266f7d31
2020-02-26 19:13:15 -08:00
Jerry Zhang
4c33222c51 [quant][graphmode] Replicate dequantize nodes (#33531)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33531

We already insert a dequantize for each use of the value, but there might still be cases where we only
see that the value is used multiple times after inlining. This pass adds support for replicating dequantize
nodes after inlining, to ensure the output of each dequantize is used by only one node, which is necessary to preserve
quantization patterns like `dequant - conv - quant`.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D20123591

fbshipit-source-id: 6edb10a4566538bcf9379d332233f870372b7a63
2020-02-26 18:59:16 -08:00
davidriazati
2b9fa4a756 [jit] Fix flipped PackedSequence outputs in script (#32955)
Summary:
Stacked PRs
 * **#32955 - [jit] Fix flipped PackedSequence outputs in script**
 * #32953 - [jit] Support properties on `Device`

Fixes #32605
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32955

Pulled By: driazati

Differential Revision: D20103905

fbshipit-source-id: 84081213ed214846e563b9f05bcab0210bb1a71b
2020-02-26 18:53:27 -08:00
Michael Suo
150e025be8 [jit] stop printing crap in test_jit (#33779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33779

This should eliminate random warnings and print spew from test_jit.

It also fixes a bug where we weren't properly comparing captured outputs
(!)

Test Plan: Imported from OSS

Differential Revision: D20124224

Pulled By: suo

fbshipit-source-id: 9241d21fdf9470531b0437427b28e325cdf08d3a
2020-02-26 18:46:03 -08:00
Elias Ellison
857eb4145e [JIT] add support for torch.cdist (#33737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33737

Test Plan: Imported from OSS

Differential Revision: D20121916

Pulled By: eellison

fbshipit-source-id: b0427bbfd3ade1f3129c4a95a542fbc32c3abd76
2020-02-26 18:37:37 -08:00
Elias Ellison
f31b1d3453 [JIT] add support for lu_unpack (#33736)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33736

Test Plan: Imported from OSS

Differential Revision: D20121914

Pulled By: eellison

fbshipit-source-id: 1136f4d7678a2233129aefe3e30234af385b8353
2020-02-26 18:37:33 -08:00
Elias Ellison
4543cf4eb1 [JIT] add support for torch.lu to torchscript (#33724)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33724

Fix for https://github.com/pytorch/pytorch/issues/33381, partial fix of https://github.com/pytorch/pytorch/issues/30786

Test Plan: Imported from OSS

Differential Revision: D20077321

Pulled By: eellison

fbshipit-source-id: a1e6a0370712b36c9f66979098ac2f9d500ca5f6
2020-02-26 18:37:28 -08:00
Elias Ellison
fddf73250d [JIT] fix resolving of functions in torch/functional. fix compilation of torch.stft (#33504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33504

Fix resolution of functions that are bound onto torch in torch/functional.py. This does not fix compilation of all of those functions; those will be done in follow-ups. torch.stft is handled as a start.

Fixes #21478
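
A hedged sketch of the kind of function this unblocks (the stft arguments are illustrative and may need adjusting for the installed PyTorch version):

```
import torch

@torch.jit.script
def spec(x: torch.Tensor) -> torch.Tensor:
    # torch.stft is resolved from torch/functional.py and compiled
    return torch.stft(x, n_fft=256, hop_length=128, return_complex=True)
```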

Test Plan: Imported from OSS

Differential Revision: D20014591

Pulled By: eellison

fbshipit-source-id: bb362f1b5479adbb890e72a54111ef716679d127
2020-02-26 18:35:43 -08:00
Elias Ellison
057fd5e10d add support for _modules, reducing special casing of nn.Sequential (#29495)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29495

This PR adds support for `_modules`, making it so we no longer need to special-case support for `nn.Sequential`. I was getting internal errors with the previous approach using `self.define()`, so I am adding this PR as part of the stack.

Fix for https://github.com/pytorch/pytorch/issues/28998
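
A hedged sketch of the behavior this enables (module names are illustrative): an `nn.Sequential` attribute can be iterated in a scripted module without special casing.

```
import torch
import torch.nn as nn

class Stack(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))

    def forward(self, x):
        # iteration goes through the submodules stored in _modules
        for layer in self.layers:
            x = layer(x)
        return x

m = torch.jit.script(Stack())
print(m(torch.randn(1, 4)).shape)
```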

Test Plan: Imported from OSS

Differential Revision: D18412561

Pulled By: eellison

fbshipit-source-id: a8b24ebee39638fccf63b2701f65f8bb0de84faa
2020-02-26 18:07:19 -08:00
David Riazati
51e405743f Revert D20010383: [jit] Unify augmented assign handling
Test Plan: revert-hammer

Differential Revision:
D20010383

Original commit changeset: 52e559ce907e

fbshipit-source-id: 7ca938070d5e98c91e7a7b8485a3c1e790c3ceb2
2020-02-26 14:22:14 -08:00
davidriazati
867990dc17 [jit] Unify augmented assign handling (#33578)
Summary:
Stacked PRs
 * **#33578 - [jit] Unify augmented assign handling**
 * #32993 - [jit] Fix aug assign for non-tensor attributes

We handle augmented assignments to `Select` and `Var` statements differently, but the actual in-place update is the same for both, so this PR factors it out into a single method to avoid having two code paths doing the same thing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33578

Pulled By: driazati

Differential Revision: D20010383

fbshipit-source-id: 52e559ce907e95e5c169ab9d9690d0d235db36f3
2020-02-26 14:09:40 -08:00
Jerry Zhang
479e474a37 [quant][graphmode] FoldConvBatchNorm2d support shared ClassTypes (#32379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32379

Folding Conv2d - BatchNorm2d modules means recalculating the weight and bias of the Conv2d module by incorporating the parameters
of BatchNorm2d, and changing the method calls so that only the forward of the Conv2d module is called. This involves changing both the module
types and the graph, because the bias of Conv2d is a parameter when it has a value and an attribute when it is
None (the JIT code assumes parameters are Tensors in multiple places). Therefore
we need to remove the bias attribute when it is None and add a bias attribute later. Since a ClassType might be shared, we do the
removal and the addition in separate steps and also keep track of processed graphs to avoid modifying a graph and type multiple times.
We also record the slot index of bias so we can replay the slot removal on other instances of the Conv2d module.
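
For reference, a hedged numerical sketch of the standard Conv-BN folding arithmetic described above (names are illustrative; the pass itself operates on the graph and module types):

```
import torch

def fold_conv_bn(conv_w, conv_b, bn_mean, bn_var, bn_gamma, bn_beta, eps=1e-5):
    scale = bn_gamma / torch.sqrt(bn_var + eps)    # one scale per output channel
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)  # fold scale into conv weights
    if conv_b is None:                             # the "bias is None" case
        conv_b = torch.zeros_like(bn_mean)
    fused_b = (conv_b - bn_mean) * scale + bn_beta
    return fused_w, fused_b
```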

Test Plan:
tbd

Imported from OSS

Differential Revision: D20078719

fbshipit-source-id: cee5cf3764f3e0c0a4a2a167b78dbada2e3835cc
2020-02-24 17:29:13 -08:00
Thomas Viehmann
481e7f2e78 catch and propagate warnings for JIT ScriptMethods (#33010)
Summary:
We align it with ScriptFunctions by using the HANDLE_TH_ERRORS/END_HANDLE_TH_ERRORS_PYBIND macros.

Fixes https://github.com/pytorch/pytorch/issues/24155  or https://github.com/pytorch/pytorch/issues/24828 ?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33010

Differential Revision: D20053585

Pulled By: suo

fbshipit-source-id: c8876b54069285ba9638bb2328fd8738b59c396d
2020-02-24 10:28:17 -08:00
Nikolay Korovaiko
a7e22b4c6a add bailout checks to checkScript (#32802)
Summary:
This adds enough infrastructure to run bailout checks in `checkScript`. I'll need to figure out the best way to enable it for nightly builds now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32802

Differential Revision: D19974718

Pulled By: Krovatkin

fbshipit-source-id: 40485503f6d3ae14edcce98e1eec1f0559f3ad08
2020-02-21 21:18:54 -08:00
davidriazati
ee28831341 [jit] Fix aug assign for non-tensor attributes (#32993)
Summary:
Instead of erroring out, this de-sugars augmented assignments to class
members, e.g. from `self.a += 1` to `self.a = self.a + 1`.

Fixes #32973
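
A hedged sketch of code that now compiles (the module is illustrative, not the PR's test case):

```
import torch

class Counter(torch.nn.Module):
    a: int

    def __init__(self):
        super().__init__()
        self.a = 0

    def forward(self, x):
        # de-sugared to `self.a = self.a + 1` during compilation
        self.a += 1
        return x + self.a

m = torch.jit.script(Counter())
```
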
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32993

Pulled By: driazati

Differential Revision: D19737636

fbshipit-source-id: 07307cde88d8c348a7affdafe26db21c74e28ec0
2020-02-21 08:42:35 -08:00
Hong Xu
a6a72ac68f Fix all occurrences of C416. (#33429)
Summary:
C416: Unnecessary (list/set) comprehension - rewrite using list/set().

See https://pypi.org/project/flake8-comprehensions/
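
A generic illustration of the C416 pattern (not a specific occurrence from this PR):

```
items = [1, 2, 3]
copy_bad = [x for x in items]  # flagged by flake8-comprehensions (C416)
copy_ok = list(items)          # preferred rewrite
```
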
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33429

Differential Revision: D19972858

Pulled By: ezyang

fbshipit-source-id: faac042a94c59d737bd5ae983121a0a029346e23
2020-02-21 08:32:22 -08:00
Elias Ellison
faa800eb5b [JIT] remove inline everything jitter skip (#33468)
Summary:
The `not inline_everything` check was causing the jitter check to be skipped whenever we emitted a function. Thanks SplitInfinity for pointing this out.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33468

Differential Revision: D19975934

Pulled By: eellison

fbshipit-source-id: 03faf8d2fd93f148100d8cf49cb67b8e15cf1f04
2020-02-20 15:58:25 -08:00
Michael Suo
416413dec4 [jit] add inlined_graph method to ScriptFunctions (#33508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33508

Ever since we switched to not inlining by default, some users have
complained because they relied on inlining occurring in order to, e.g., process the
graph with some other tool. Add an inlined_graph for convenience in
those cases.
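
A hedged usage sketch (function names are illustrative):

```
import torch

@torch.jit.script
def helper(x: torch.Tensor) -> torch.Tensor:
    return x * 2

@torch.jit.script
def caller(x: torch.Tensor) -> torch.Tensor:
    return helper(x) + 1

print(caller.graph)          # shows a call to helper
print(caller.inlined_graph)  # helper's body is inlined into the graph
```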

Test Plan: Imported from OSS

Differential Revision: D19977638

Pulled By: suo

fbshipit-source-id: fe1fa92ff888959203d5d1995930d488b5f9e24c
2020-02-19 15:41:25 -08:00
Zachary DeVito
83c347ff4a Remove prim::Constant op (#32804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32804

Constants are interpreter primitives so the op was not actually used.
This cleans up some of the logic around it.

This also fixes constant prop such that failures to look up an op
do not silently stop constant propagation. Instead, only errors
inside the op implementation itself will do this.

Test Plan: Imported from OSS

Differential Revision: D19673156

Pulled By: zdevito

fbshipit-source-id: 7beee59a6a67a6c2f8261d86bd505280fefa999e
2020-02-18 15:06:56 -08:00
Owen Anderson
1d743e3154 Add guard elimination support for aten::unsqueeze. (#33371)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33371

Differential Revision: D19920041

Pulled By: resistor

fbshipit-source-id: 906af47676dba014c31eef069a4753207f2efc60
2020-02-18 13:22:58 -08:00
Owen Anderson
d35a4c202e Add support for aten::slice to guard elimination. (#33311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33311

Differential Revision: D19911105

Pulled By: resistor

fbshipit-source-id: 402cfe5f2e03a62b78ed13157e1462cefd9eeafb
2020-02-14 22:54:37 -08:00
Elias Ellison
bf16688538 [JIT] peephole optimize values with NoneType (#33264)
Summary:
If a value has the type None, we can always replace it with a None constant.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33264

Differential Revision: D19878695

Pulled By: eellison

fbshipit-source-id: 5d0e7ffb37c5747997df093fec3183039d8dff4d
2020-02-13 12:03:49 -08:00
davidriazati
f61b45fc89 [jit] Support properties on Device (#32953)
Summary:
Stacked PRs
 * #32955 - [jit] Fix flipped PackedSequence outputs in script
 * **#32953 - [jit] Support properties on `Device`**

PyTorch devices have an `index` and a `type` property. This PR adds support for both to TorchScript.
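
A hedged sketch (illustrative, not the PR's test):

```
import torch

@torch.jit.script
def describe(t: torch.Tensor):
    d = t.device
    return d.type, d.index

print(describe(torch.randn(2)))  # e.g. ('cpu', None) on CPU
```
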
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32953

Pulled By: driazati

Differential Revision: D19849320

fbshipit-source-id: ce845258c6110058dd9ea1f759ef74b7ed2e786e
2020-02-12 18:59:10 -08:00
Zachary DeVito
99349defc1 remove unnecessary Node* ops (#32760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32760

Minor changes to the way ops are implemented to remove incidental use of Node*
in the operator implementation.

Current state for operators that previously took Node:

```
TBD:

USES NODE: prim::DifferentiableGraph(...) -> (...)
USES NODE: prim::profile(...) -> (...)
USES NODE: prim::FusionGroup(...) -> (...)
USES NODE: prim::PythonOp(...) -> (...)
USES NODE: prim::ImplicitTensorToNum(Tensor a) -> Scalar # next PR

Should be made interpreter primitives:

USES NODE: prim::TupleUnpack(...) -> (...)
USES NODE: prim::TupleSlice(...) -> (...)
USES NODE: prim::TupleConstruct(...) -> (...)
USES NODE: prim::ListUnpack(...) -> (...)
USES NODE: prim::ListConstruct(...) -> (...)
USES NODE: prim::DictConstruct(...) -> (...)
USES NODE: prim::Constant() -> (...)
USES NODE: prim::isinstance(...) -> (...)
USES NODE: prim::CreateObject(...) -> (...)
USES NODE: prim::fork(...) -> (...)
USES NODE: aten::warn(str message, *, int stacklevel=2) -> () # need stack level information, so ideally in interpreter so it can look at the stack

Should be made into vararg operators, i.e. the operators last argument should be an IValue
that contains the number of arguments.

USES NODE: prim::FusedConcat(...) -> (...)
USES NODE: prim::MMTreeReduce(...) -> (...)
USES NODE: prim::MMBatchSide(...) -> (...)
USES NODE: prim::ConstantChunk(...) -> (...)
USES NODE: prim::AutogradAnyNonZero(...) -> bool
USES NODE: prim::BroadcastSizes(...) -> (...)
USES NODE: prim::ChunkSizes(...) -> (...)
USES NODE: aten::format(str self, ...) -> str
USES NODE: prim::Print(...) -> (...)

fixed:

USES NODE: aten::extend(Tensor[](a!) self, Tensor [] other) -> ()
USES NODE: aten::copy(Tensor[](a) self) -> Tensor[]
USES NODE: aten::extend(int[](a!) self, int [] other) -> ()
USES NODE: aten::copy(int[](a) self) -> int[]
USES NODE: aten::extend(float[](a!) self, float [] other) -> ()
USES NODE: aten::copy(float[](a) self) -> float[]
USES NODE: aten::extend(bool[](a!) self, bool [] other) -> ()
USES NODE: aten::copy(bool[](a) self) -> bool[]
USES NODE: aten::extend(t[](a!) self, t [] other) -> ()
USES NODE: aten::copy(t[](a) self) -> t[]
USES NODE: aten::keys(Dict(str, t) self) -> str[](*)
USES NODE: aten::values(Dict(str, t) self) -> t[](*)
USES NODE: aten::dict((str, tVal)[] inputs) -> Dict(str, tVal)
USES NODE: aten::keys(Dict(int, t) self) -> int[](*)
USES NODE: aten::values(Dict(int, t) self) -> t[](*)
USES NODE: aten::dict((int, tVal)[] inputs) -> Dict(int, tVal)
USES NODE: aten::keys(Dict(float, t) self) -> float[](*)
USES NODE: aten::values(Dict(float, t) self) -> t[](*)
USES NODE: aten::dict((float, tVal)[] inputs) -> Dict(float, tVal)
USES NODE: aten::keys(Dict(Tensor, t) self) -> Tensor[](*)
USES NODE: aten::values(Dict(Tensor, t) self) -> t[](*)
USES NODE: aten::dict((Tensor, tVal)[] inputs) -> Dict(Tensor, tVal)
USES NODE: aten::test_vartype2(t a, t[] b) -> (t[])
USES NODE: aten::_ncf_unsqueeze(Tensor self, int ndim) -> Tensor
USES NODE: aten::_ncf_view(Tensor self, int[] input_shape, int normalized_ndim) -> Tensor
USES NODE: prim::is_none(int? a) -> bool
USES NODE: aten::__interpolate(Tensor input, int? size = None, float[]? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int[]? size = None, float[]? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int? size = None, float? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::__interpolate(Tensor input, int[]? size = None, float? scale_factor = None, str mode = 'nearest', bool? align_corners = None, bool? recompute_scale_factor = None) -> Tensor
USES NODE: aten::sorted(t[](a) self) -> (t[])
USES NODE: aten::sort(t[](a!) self, bool reverse=False) -> ()
USES NODE: aten::test_vartype(t[] a, t b) -> (t)
USES NODE: prim::unchecked_unwrap_optional(t(a)? optional) -> t(a)
USES NODE: prim::unchecked_cast(...) -> (...)
USES NODE: aten::dict() -> Dict(str, Tensor)
USES NODE: prim::Load(...) -> (...)
USES NODE: prim::Store(...) -> (...)
USES NODE: prim::Drop(...) -> (...)
USES NODE: aten::tensor(t[] data, *, ScalarType? dtype=None, Device? device=None, bool requires_grad=False) -> Tensor
USES NODE: aten::as_tensor(t[] data, *, ScalarType? dtype=None, Device? device=None) -> Tensor
```

Test Plan: Imported from OSS

Differential Revision: D19615387

Pulled By: zdevito

fbshipit-source-id: 95298c3c4249b9f812c332d13f0fb79daeecb662
2020-02-12 14:49:02 -08:00
Hongyu Cai
de27f4261d [jit] remove redundant variables from JIT TestCase
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29091

Differential Revision: D19746083

Pulled By: suo

fbshipit-source-id: 76fd71740fe7a3f52da361d96a7b694ec208de24
2020-02-07 10:42:33 -08:00
Michael Suo
df1d68d52e [jit] fix parser for one-line functions (#32941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32941

The Python grammar allows single-statement, one-line functions, so we
should allow them in the string parser.
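
A hedged sketch using the string frontend (the function is illustrative):

```
import torch

cu = torch.jit.CompilationUnit("def double(x): return x * 2")
print(cu.double(torch.tensor(3)))  # tensor(6)
```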

Test Plan: Imported from OSS

Differential Revision: D19704153

Pulled By: suo

fbshipit-source-id: 8c06cc9c600aa2a9567b484a1ecc0360aad443e3
2020-02-05 13:11:47 -08:00
James Reed
f393adc0ed [JIT] Fix python pickle serialization for torchbind (#32878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32878
ghstack-source-id: 97736045

Test Plan: Imported from OSS

Differential Revision: D19669879

fbshipit-source-id: 23ea91cffe7344d1eed014e2509983c281dd18d3
2020-02-04 19:29:55 -08:00
James Reed
bc4790b3aa [JIT] Trace uses of torchbind classes as module attributes (#32833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32833
ghstack-source-id: 97736046

Test Plan: Imported from OSS

Differential Revision: D19645714

fbshipit-source-id: 10a7271f13c3588aea666b44b916e90ba7b3c666
2020-02-04 19:28:37 -08:00
Elias Ellison
040bc1d0e1 [JIT] make is_scripting a condvalue (#32871)
Summary:
Add `torch.jit.is_scripting` to the list of CondValues, i.e. values for which, if they are the condition of an if statement, we only compile one side of the if. I'm not sure if we actually want this PR.

Pros:
- Makes it easier to add features that are not yet supported in TorchScript (like has_torch_function)
- The current idiom of writing `torch.jit.is_scripting` and factoring out the block to a function annotated with `torch.jit.ignore` is functionally equivalent and much more cumbersome

Cons:
- Makes it easier to add features that are not yet supported in TorchScript
- Perhaps it is confusing for a reader to tell what is being compiled. We could potentially give it an all-caps name or otherwise change the name to make it stand out visually.
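
A hedged sketch of the idiom this enables (helper names are illustrative); only the branch taken under scripting is compiled, so the other side may contain arbitrary eager-only Python:

```
import torch

def eager_only(x):
    # arbitrary Python that TorchScript does not need to understand
    return x.numpy().clip(0).sum()

def run(x: torch.Tensor):
    if torch.jit.is_scripting():
        return x.relu().sum()
    else:
        return eager_only(x)

scripted = torch.jit.script(run)  # only the is_scripting() branch is compiled
```
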
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32871

Differential Revision: D19670383

Pulled By: eellison

fbshipit-source-id: 5257b0bd23c66f199d59a7f2c911e948301e5588
2020-01-31 18:23:42 -08:00
Elias Ellison
10bd21d550 [JIT] fix nested select assign (#32877)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/31902

```
self.sub.a = 1
 ```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32877

Differential Revision: D19670322

Pulled By: eellison

fbshipit-source-id: 6d8f350b4d1169be1d2a56050fccd7c246ad9212
2020-01-31 16:58:26 -08:00
Hong Xu
b16dab8a41 Coding header is better specified in lowercase letters (#32850)
Summary:
The Python document <https://www.python.org/dev/peps/pep-0263/> gives
all examples using lowercase letters. Although it doesn't say so
explicitly, the following paragraph seems to indicate that uppercase
letters aren't legitimate:

> If a source file uses both the UTF-8 BOM mark signature and a magic encoding comment, the only allowed encoding for the comment is 'utf-8'.  Any other encoding will cause an error.

My Emacs also complains about the uppercase letters every time I save
the file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32850

Differential Revision: D19663281

Pulled By: ezyang

fbshipit-source-id: 48127d3c2fd6e22dd732a2766913735136ec2ebc
2020-01-31 10:02:30 -08:00
Michael Suo
3552be1090 [jit] fix the NoneType param/buffer hack (#32745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32745

Some parameters (like `bias` in conv) are optional. To achieve this
previously, you had to add `bias` as a constant, which would invoke some
pretty weird behavior in the frontend, summarized as:
```
if bias is not None:
  add it as a parameter normally
else: # bias is None
  add it as a constant with the value None
```

There are several things bad about this:
1. Bias is not a constant. Marking it `__constants__` is confusing.
2. It basically relies on an implementation detail (the frontend
processes parameters before constants) to work.

Okay, whatever. I don't even know why we did this originally, but
getting rid of it doesn't break anything, so I assume improved NoneType
refinement has made this a non-issue.

Note on perf: this will make no difference; if bias was `None` it's still
folded out today, if bias is a Tensor it would be added as a parameter
both before and after this change
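
A hedged sketch of the optional-bias case (not this diff's test): a Conv2d built with bias=False carries `bias = None`, and scripting it no longer relies on marking bias as a constant.

```
import torch
import torch.nn as nn

m = torch.jit.script(nn.Conv2d(4, 4, 3, bias=False))  # bias is None here
out = m(torch.randn(1, 4, 8, 8))
```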

Test Plan: Imported from OSS

Differential Revision: D19628634

Pulled By: suo

fbshipit-source-id: d9128a09c5d096b938fcf567b8c23b09ac9ab37f
2020-01-29 17:04:39 -08:00
James Reed
465ebd58ba [JIT] pickle serialization for custom bound classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32604

Test Plan: Imported from OSS

Differential Revision: D19566633

fbshipit-source-id: 9387d3ff45cbd6ccde49ce190a52859481cc301c
2020-01-28 11:02:59 -08:00
James Reed
1719da13f9 [JIT] Support for registering C++ lambdas as methods on custom C++ class
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32553

Test Plan: Imported from OSS

Differential Revision: D19543269

Pulled By: jamesr66a

fbshipit-source-id: 7e566650295e9d1c4f2f716470e061308a6210a0
2020-01-28 11:01:07 -08:00
Michael Suo
63170431f9 [jit] fix segfault on missing getstate (#32642)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32642

Previously, if we defined `__setstate__` but not `__getstate__`, we
would segfault. This PR turns that into a comprehensible error message
(and improves another error message as well).

Fixes https://github.com/pytorch/pytorch/issues/25886
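
A hedged sketch of the failure mode (the class is illustrative; the exact point and type of the error may vary by version):

```
import torch

class NeedsBoth(torch.nn.Module):
    def __setstate__(self, state):
        # no matching __getstate__ is defined
        pass

    def forward(self, x):
        return x

try:
    torch.jit.script(NeedsBoth())
except Exception as e:
    print(e)  # a readable error instead of a segfault
```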

Test Plan: Imported from OSS

Differential Revision: D19596463

Pulled By: suo

fbshipit-source-id: dbe76bc36bc747d65fb0223184c009e0e9ba072c
2020-01-28 01:25:37 -08:00
James Reed
d68592a440 [JIT] Fix classes as attributes in recursive scripting
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32594

Test Plan: Imported from OSS

Differential Revision: D19562951

Pulled By: jamesr66a

fbshipit-source-id: 3d5491c1c23456f107390a78be16da687de951e6
2020-01-27 20:37:48 -08:00
Jerry Zhang
91f10a1de1 [quant][graphmode][refactor] Better API for fold_convbn (#32380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32380

We'll clone the module first and then fold conv bn and return a new
module

Test Plan:
.

Imported from OSS

Differential Revision: D19508033

fbshipit-source-id: 328e91a2c9420761c904a7f2b62dab4cfaaa31ac
2020-01-24 15:46:47 -08:00
Nikolay Korovaiko
7d0f0b62de API for testing bailouts (#32518)
Summary:
This API seems to be quite useful to make sure all bailouts in a graph are triggered. I used it for testing torchvision models and I was wondering if this might be something we might actually want to have? zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32518

Differential Revision: D19553147

Pulled By: Krovatkin

fbshipit-source-id: 7542c99051588b622091aec6d041c70731ca5d26
2020-01-24 11:19:41 -08:00
James Reed
6745bfc31c Revert "Remove __torch__ from custom class qualname" (#32514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32514

This reverts commit c7fdf5b251.

Test Plan: Imported from OSS

Differential Revision: D19525532

Pulled By: jamesr66a

fbshipit-source-id: 126f4e87250a2ac739bd7aa161a0f7b39f143d38
2020-01-23 14:56:25 -08:00
James Reed
69f9bf8893 [JIT] Support returning tuple from custom bound C++ method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32477

Test Plan: Imported from OSS

Differential Revision: D19509927

Pulled By: jamesr66a

fbshipit-source-id: 7d407150402cc19344c3ec3b4a27b3d7c464e8ac
2020-01-23 14:56:15 -08:00
James Reed
7e14c420ae [JIT] Test __getstate__ and __setstate__ for custom bound C++ classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32470

Test Plan: Imported from OSS

Differential Revision: D19508250

Pulled By: jamesr66a

fbshipit-source-id: 481299fb3c18fa874c2a1d2993984bb6b3193bac
2020-01-23 14:56:06 -08:00
James Reed
dbd29e5668 [JIT] Passing custom class as arg (#32260)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32260

This makes it so you can actually pass the custom class as an arg to ScriptFunctions

Test Plan: Imported from OSS

Differential Revision: D19424252

Pulled By: jamesr66a

fbshipit-source-id: c3530186619655781dedbea03c2ad321aaff1cb8
2020-01-23 14:54:59 -08:00
Elias Ellison
ef94496b36 [JIT] throw if no self arg on ignored methods (#32503)
Summary:
There was a user who did this and it would seg fault.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32503

Differential Revision: D19538481

Pulled By: eellison

fbshipit-source-id: dc3752028b9eff6ac88c025e8a2b5f8fd44ce32f
2020-01-23 14:27:00 -08:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Yanli Zhao
193ac31441 [jit] Enable IValue to hold a PyObject (#32491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32491

This PR enables IValue to hold a pure PyObject by adding a
new enum tag and a new JIT type to denote PyObject existence in IValue and
the JIT type system. We don't plan to expose this to users.

This is the basic piece that enables IValue to be adopted more broadly, e.g.
making RRef always hold an IValue; it might also simplify some compiler
logic
ghstack-source-id: 97039980

Test Plan: Imported from OSS

Differential Revision: D19502234

fbshipit-source-id: 90be001706d707d376cfbea25980fd82980df84a
2020-01-22 15:48:32 -08:00
Elias Ellison
38d122eca9 implement tuple constants (#31841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31841

Add tuple constants to the JIT. The constraint here is that all elements of a tuple must themselves be insertable as constants. Previously tuples were special-cased in constant propagation, but now that more passes insert constants, such as freezing, we should just have tuples be representable as constants.
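
A hedged sketch (illustrative function; whether the tuple shows up as a single prim::Constant depends on which passes have run):

```
from typing import Tuple

import torch

@torch.jit.script
def shape_hint() -> Tuple[int, int]:
    return (3, 224)

print(shape_hint.graph)
```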

Test Plan: Imported from OSS

Differential Revision: D19439514

Pulled By: eellison

fbshipit-source-id: 3810ba08ee349fa5598f4b53ea64525996637b1a
2020-01-22 12:13:31 -08:00
Elias Ellison
adf0916606 Add str[] float[] constants resubmit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31791

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D19439513

Pulled By: eellison

fbshipit-source-id: a04c7401687b051f0d4fb4794963931ebe004194
2020-01-22 12:11:58 -08:00
peter
b77c25dec0 Fix dll load logic for Python 3.8 on Windows (#32215)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31181 and https://github.com/pytorch/pytorch/pull/31162#discussion_r362495611.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32215

Differential Revision: D19501869

Pulled By: ezyang

fbshipit-source-id: 363824e52d2592ad968ecf1df345aa4c0daff915
2020-01-22 08:33:34 -08:00
Jerry Zhang
44b270d892 insert_quant_dequant pass support shared class types (#31408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31408

We'll error out when a graph is quantized with different QSchemes.
This only occurs when we have two modules of the same type (e.g. two Conv2d modules initialized with
the same arguments) that are quantized with two configs that would produce different quantized graphs, for example
per tensor affine and per channel affine. This is a rare case, so it should be OK to skip for now.
Actual support will come later.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D19162366

fbshipit-source-id: 798f06d0ddef0c8458237ce88b62159cc77eec8b
2020-01-21 22:18:49 -08:00
James Reed
1ecad2bb2b Test passing custom class instance to bound method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32320

Test Plan: Imported from OSS

Differential Revision: D19437335

Pulled By: jamesr66a

fbshipit-source-id: 8f5166dbe6fc5704b12b6224932460b12be0d39b
2020-01-17 23:09:38 -08:00
James Reed
c7078a1ce8 Fix returning instance of custom class from method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32312

Test Plan: Imported from OSS

Differential Revision: D19433511

Pulled By: jamesr66a

fbshipit-source-id: f048d5f60eaba992ee42fea2d318a59b3a156578
2020-01-17 23:09:34 -08:00
Elias Ellison
e7bc1663bd fix unchecked cast alias analysis (#32309)
Summary:
Unchecked cast just refines the type of a value; the value stays the same, so the output should alias the input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32309

Differential Revision: D19439037

Pulled By: eellison

fbshipit-source-id: fe6902d0d9a5a9ef5e9c13e1dbd056576d8c327e
2020-01-17 12:29:28 -08:00
Nikolay Korovaiko
53708e21ed classic fixed-point liveness
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31724

Differential Revision: D19426570

Pulled By: Krovatkin

fbshipit-source-id: 3387dfb25e6e9456d5d0517eac1d2e44e61d6813
2020-01-16 15:13:22 -08:00
Michael Suo
90c65b81c3 Define repr() on IValues (#32232)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32232

Previously, we were using `operator<<` as the default way of printing
IValue constants during serialization. The semantics of `operator<<`
were ill-defined; and this bit us in particular with strings and lack of
quoting.

This PR defines the role of `operator<<`: much like Python `str()`, it
is intended to produce a human-readable-ish representation for
debugging purposes.

This PR also defines a new `repr()` function on IValue that is intended
to produce a valid Python expression that can be used to recreate an
object with the same value. `repr()` is not defined on all IValue kinds
(notably tensors!) for this reason.

Test Plan: Imported from OSS

Differential Revision: D19417036

Pulled By: suo

fbshipit-source-id: c102d509eaf95a28b6a62280bc99ca6f09603de5
2020-01-15 17:35:41 -08:00
Richard Zou
19bbb4fccb Stop building documentation in pytorch_linux_xenial_cuda*_build (#32187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32187

Fixes #32058. Previously we would build documentation during the pytorch
linux cuda build. We don't actually need to do this because we have a
dedicated python_doc_build job that builds the docs. With this change,
the CUDA build should run ~10 minutes faster, giving devs faster signal.

Test Plan: - Check the CUDA (10.1) build on this PR, make sure it doesn't build the docs.

Differential Revision: D19400417

Pulled By: zou3519

fbshipit-source-id: e8fb2b818146f33330e06760377a9afbc18a71ed
2020-01-15 07:48:42 -08:00
Nikolay Korovaiko
02c3493a84 Fix an invalid peephole transformation if input/output values are written to (#28455)
Summary:
fixes https://github.com/pytorch/pytorch/issues/28360
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28455

Differential Revision: D19374601

Pulled By: Krovatkin

fbshipit-source-id: 622f24b40aba03e79e55a6b8d25d88417f7d8bad
2020-01-14 16:28:07 -08:00
davidriazati
61e509b992 Skip un-runnable tests (#31965)
Summary:
`test_init_ops` calls `orthogonal_` which fails without lapack (this test was just missing a skip condition)

The cpp tests would fail with an `undefined symbol` error if run with `BUILD_TESTS=0`, so this PR skips them if that flag is `0`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31965

Pulled By: driazati

Differential Revision: D19320064

fbshipit-source-id: d1dcd36714107688ded25a414e8969abe026bd03
2020-01-14 11:36:52 -08:00
Jerry Zhang
1f34801460 More robust mangling (#31978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31978

Currently we keep a `mangleIndex_` that's internal to the compilation unit and
just increment the index when we find the original name is mangled, but this doesn't
guarantee the new name is not already defined.
This PR fixes the problem by querying whether the new name is defined or not.
Fixes: https://github.com/pytorch/pytorch/issues/31268

Test Plan:
fixes the issue

Imported from OSS

Differential Revision: D19350535

fbshipit-source-id: fe3262b2838d4208ab72e2cd4a5970b3a792ae86
2020-01-13 11:11:50 -08:00
Elias Ellison
8ecd3f783d check for object equality in constant pooling (#31800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31800

If we know that two constants are the same object, we can ignore other constraints and pool them together. This fixes an issue introduced by the other PR where quantization relied on constant pooling happening for correctness.

Test Plan: Imported from OSS

Differential Revision: D19269499

Pulled By: eellison

fbshipit-source-id: 9d4396125aa6899cb081863d463d4f024135cbf4
2020-01-08 16:47:07 -08:00
davidriazati
883fb5434a Use real argument names for Python functions (#29300)
Summary:
This hooks up `inspect` so that Python functions get their parameter
names attached instead of naming them `0, 1, 2, ...`. This also fixes
issue #28537 where `ignore` functions were improperly typing `self`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29300

Pulled By: driazati

Differential Revision: D19256434

fbshipit-source-id: 6a1fe7bd0afab708b8439517798955d0abfeb44c
2020-01-08 15:41:28 -08:00
Artem Volkhin
3a2757c682 Fix tracing for modules with List[Tensor] as output (#31343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31343

Fix an issue in TorchScript tracing for modules with `c10::List<at::Tensor>` as an output. TensorList was not supported properly.

Test Plan: unit tests

Reviewed By: wanchaol

Differential Revision: D18850722

fbshipit-source-id: 87a223104d1361fe754d55deceeb1e8bbcad629b
2020-01-07 11:57:25 -08:00
Jerry Zhang
5579611544 Enable foldbn tests (#29220)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29220

Support for accessing constants was added in previous
PRs; this PR re-enables the foldbn tests.

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D18846848

fbshipit-source-id: 90ceaf42539ffee80b984e0d8b2420da66c263c3
2020-01-04 11:47:01 -08:00
Jerry Zhang
ebe69236d1 Expose class constant through attr and setattr in object (#29219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29219

We added class constants in previous PRs; this PR allows access to
class constants through the object API.

Test Plan:
build/bin/test_jit
python test/test_jit.py

Imported from OSS

Differential Revision: D18846851

fbshipit-source-id: 888a6517d5f747d1f8ced283c0c2c30b2f6c72c6
2020-01-04 11:09:35 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
Lu Fang
cb1af5f61f Revert D19233558: add float[] str[] constants
Test Plan: revert-hammer

Differential Revision:
D19233558

Original commit changeset: 4f7c6d9ddbe7

fbshipit-source-id: a5020a9169e349a5970323471d673e8cd7818c66
2019-12-31 11:57:34 -08:00
Elias Ellison
dd0f2f0c19 add float[] str[] constants (#31503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31503

Add support for float list and string list constants, which enables better constant propagation, constant pooling, and freezing.

Test Plan: Imported from OSS

Differential Revision: D19233558

Pulled By: eellison

fbshipit-source-id: 4f7c6d9ddbe7623757a9a20606ce5f394e14e93d
2019-12-30 11:58:17 -08:00
davidriazati
6064223808 @slowTest some slow tests (#31706)
Summary:
These are all the jit tests that take > 10 seconds according to `pytest test/test_jit.py --durations=15`

```
32.76s call     test/test_jit.py::TestModels::test_super_resolution
32.20s call     test/test_jit.py::TestModels::test_neural_style
30.90s call     test/test_jit.py::TestJit::test_export_batchnorm
25.95s call     test/test_jit.py::TestJit::test_dropout_module_requires_grad
22.24s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Transformer
12.38s call     test/test_jit.py::TestScript::test_fuser_double_float_codegen
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31706

Pulled By: driazati

Differential Revision: D19251567

fbshipit-source-id: 8e76f717506b8bf28d1a63ce302feb0446dc9141
2019-12-30 11:45:24 -08:00
Mingbo Wan
647569e546 get rid of choco install (#30897)
Summary:
7zip and cmake are part of the base image, so there is no need to re-install them. Removing the install step can make build/test more stable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30897

Differential Revision: D19232961

Pulled By: mingbowan

fbshipit-source-id: fa3bbd1325839a2a977bf13fdbd97fda43793b8d
2019-12-27 13:12:04 -08:00
davidriazati
446e9af5b9 Fix parsing of big float literals (#29940)
Summary:
Stacked PRs
 * **#29940 - [jit] Fix parsing of big float literals**
 * #29935 - [jit] Fix hex literal parsing
 * #29931 - [jit] Throw a better error for int too big for int64_t
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29940

Pulled By: driazati

Differential Revision: D19186604

fbshipit-source-id: 6ef66588a5cf956f281e7bd1e5584ef06f5296e9
2019-12-23 17:21:07 -08:00
Gregory Chanan
68e5172382 Support optional float parameters (float?, optional<double>). (#31517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31517

This is going to be used by upsample (which currently uses magic values to represent optionals).

For now, we just introduce a fake function for testing (torch._test_optional_float(x)).

Test Plan: Imported from OSS

Differential Revision: D19198721

Pulled By: gchanan

fbshipit-source-id: 0a1382fde0927c5d277d02d62bfb31fb574b8c74
2019-12-23 08:33:39 -08:00
James Reed
7d630278da Separate torchbind from Python (#30242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/29501

Currently blocked on schema serialization issue

Test Plan: Imported from OSS

Differential Revision: D18463063

Pulled By: jamesr66a

fbshipit-source-id: c12a1b644eb9bf04e68ff93cccf91d6cb3e75359
2019-12-21 22:52:40 -08:00
Martin Yuan
11854bcd38 Add test to torch.jit.export_opnames, make the _C function private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31446

Test Plan: Imported from OSS

Differential Revision: D19172851

Pulled By: iseeyuan

fbshipit-source-id: f06d8766ed73c9abe4ebf41c402ee64880d745be
2019-12-20 13:38:43 -08:00
Nikolay Korovaiko
5375ceae80 run optimizations on pre-profiled graph (#31392)
Summary:
This is the first stab at running profile-insensitive optimizations on pre-profiled graphs. Running those optimizations has the potential to simplify graphs greatly before GuardElimination, so GuardElimination should be able to remove more guards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31392

Differential Revision: D19173639

Pulled By: Krovatkin

fbshipit-source-id: 2485a2a598c10f9b5445efb30b16439ad4551b3f
2019-12-20 10:49:08 -08:00
Zachary DeVito
457286a383 fix missing type check in dictionary literal
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31375

Test Plan: Imported from OSS

Differential Revision: D19145440

Pulled By: zdevito

fbshipit-source-id: 69909089586149ef766b4858d3420864a81b2493
2019-12-19 16:22:36 -08:00
Nikolay Korovaiko
fc3103b116 fixing a naming issue in creating a residual loop node in a bailout graph (#31400)
Summary:
This addresses the issue of differentiating between the `%4` in
`%12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3)` and the `%4` in `%y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24` inside the loop's body of a residual continuation loop, because these should be different values.

```
[DUMP profiling_graph_executor_impl.cpp:124] with prim::BailoutTemplate_0 = graph(%z.1 : int,
[DUMP profiling_graph_executor_impl.cpp:124]       %size.1 : int):
[DUMP profiling_graph_executor_impl.cpp:124]   %2 : Tensor = prim::Constant[value= 1  1 [ CPUDoubleType{2} ]]()
[DUMP profiling_graph_executor_impl.cpp:124]   %3 : Double(2) = prim::BailOut[index=0](%2, %z.1, %size.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %4 : int = prim::Constant[value=0]() # test_jit.py:3772:54
[DUMP profiling_graph_executor_impl.cpp:124]   %5 : None = prim::Constant()
[DUMP profiling_graph_executor_impl.cpp:124]   %6 : bool = prim::Constant[value=1]() # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]   %counters.1 : int[] = prim::ListConstruct()
[DUMP profiling_graph_executor_impl.cpp:124]   %8 : int = prim::Constant[value=8]()
[DUMP profiling_graph_executor_impl.cpp:124]   %9 : int = aten::__round_to_zero_floordiv(%size.1, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %10 : int = aten::mul(%9, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %11 : int = aten::sub(%size.1, %10)
[DUMP profiling_graph_executor_impl.cpp:124]   %12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.2 : int, %15 : int, %y.7 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %17 : Double(2) = prim::BailOut[index=1](%y.7, %z.1, %counters.1, %9, %11, %i.2, %15)
[DUMP profiling_graph_executor_impl.cpp:124]       %18 : int[] = aten::append(%counters.1, %15) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %19 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %20 : Tensor = aten::ones(%19, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %21 : Double(1) = prim::BailOut[index=2](%20, %z.1, %counters.1, %9, %11, %i.2, %15, %17)
[DUMP profiling_graph_executor_impl.cpp:124]       %22 : Tensor[] = prim::ListConstruct(%17, %21)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %24 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %25 : int = aten::add(%15, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %26 : int[] = aten::append(%counters.1, %25) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %27 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %28 : Tensor = aten::ones(%27, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %29 : Double(1) = prim::BailOut[index=3](%28, %z.1, %counters.1, %9, %11, %i.2, %y.5, %25)
[DUMP profiling_graph_executor_impl.cpp:124]       %30 : Tensor[] = prim::ListConstruct(%y.5, %29)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.9 : Double(4) = aten::cat(%30, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %32 : int = aten::add(%25, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %33 : int[] = aten::append(%counters.1, %32) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %34 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %35 : Tensor = aten::ones(%34, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %36 : Double(1) = prim::BailOut[index=4](%35, %z.1, %counters.1, %9, %11, %i.2, %y.9, %32)
[DUMP profiling_graph_executor_impl.cpp:124]       %37 : Tensor[] = prim::ListConstruct(%y.9, %36)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.10 : Double(5) = aten::cat(%37, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %39 : int = aten::add(%32, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %40 : int[] = aten::append(%counters.1, %39) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %41 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %42 : Tensor = aten::ones(%41, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %43 : Double(1) = prim::BailOut[index=5](%42, %z.1, %counters.1, %9, %11, %i.2, %y.10, %39)
[DUMP profiling_graph_executor_impl.cpp:124]       %44 : Tensor[] = prim::ListConstruct(%y.10, %43)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.11 : Double(6) = aten::cat(%44, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %46 : int = aten::add(%39, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %47 : int[] = aten::append(%counters.1, %46) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %48 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %49 : Tensor = aten::ones(%48, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %50 : Double(1) = prim::BailOut[index=6](%49, %z.1, %counters.1, %9, %11, %i.2, %y.11, %46)
[DUMP profiling_graph_executor_impl.cpp:124]       %51 : Tensor[] = prim::ListConstruct(%y.11, %50)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.12 : Double(7) = aten::cat(%51, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %53 : int = aten::add(%46, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %54 : int[] = aten::append(%counters.1, %53) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %55 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %56 : Tensor = aten::ones(%55, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %57 : Double(1) = prim::BailOut[index=7](%56, %z.1, %counters.1, %9, %11, %i.2, %y.12, %53)
[DUMP profiling_graph_executor_impl.cpp:124]       %58 : Tensor[] = prim::ListConstruct(%y.12, %57)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.13 : Double(8) = aten::cat(%58, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %60 : int = aten::add(%53, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %61 : int[] = aten::append(%counters.1, %60) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %62 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %63 : Tensor = aten::ones(%62, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %64 : Double(1) = prim::BailOut[index=8](%63, %z.1, %counters.1, %9, %11, %i.2, %y.13, %60)
[DUMP profiling_graph_executor_impl.cpp:124]       %65 : Tensor[] = prim::ListConstruct(%y.13, %64)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.14 : Double(9) = aten::cat(%65, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %67 : int = aten::add(%60, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %68 : int[] = aten::append(%counters.1, %67) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %69 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %70 : Tensor = aten::ones(%69, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %71 : Double(1) = prim::BailOut[index=9](%70, %z.1, %counters.1, %9, %11, %i.2, %y.14, %67)
[DUMP profiling_graph_executor_impl.cpp:124]       %72 : Tensor[] = prim::ListConstruct(%y.14, %71)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.15 : Tensor = aten::cat(%72, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %74 : int = aten::add(%67, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %74, %y.15)
[DUMP profiling_graph_executor_impl.cpp:124]   %75 : Double(10) = prim::BailOut[index=10](%y.1, %z.1, %counters.1, %11, %12)
[DUMP profiling_graph_executor_impl.cpp:124]   %76 : int, %y : Tensor = prim::Loop(%11, %6, %12, %75) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.1 : int, %79 : int, %y.6 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %81 : Double(*) = prim::BailOut[index=11](%y.6, %z.1, %counters.1, %11, %i.1, %79)
[DUMP profiling_graph_executor_impl.cpp:124]       %82 : int[] = aten::append(%counters.1, %79) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %83 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %84 : Tensor = aten::ones(%83, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %85 : Double(1) = prim::BailOut[index=12](%84, %counters.1, %11, %i.1, %79, %81)
[DUMP profiling_graph_executor_impl.cpp:124]       %86 : Tensor[] = prim::ListConstruct(%81, %85)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.4 : Tensor = aten::cat(%86, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %88 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %89 : int = aten::add(%79, %88)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %89, %y.4)
[DUMP profiling_graph_executor_impl.cpp:124]   %90 : Double(12) = prim::BailOut[index=13](%y, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %91 : (Tensor, int[]) = prim::TupleConstruct(%90, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   return (%91)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31400

Differential Revision: D19172750

Pulled By: Krovatkin

fbshipit-source-id: 85d3aac4e80b65b83b6be3c0bca8075a731a2b7e
2019-12-19 00:34:50 -08:00
Elias Ellison
fb24f7c4ad catch all exceptions in converting default values to ivalues (#31398)
Summary:
Previously we would only catch `py::cast_error` which led to incomprehensible error messages like: `TypeError: 'NoneType' object is not iterable`. We are running arbitrary pybind code here, and not doing anything with the error message, so we should be less restrictive with the types of errors we catch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31398

Differential Revision: D19166655

Pulled By: eellison

fbshipit-source-id: 84db8b3714c718b475913f2f4bb6f19e62f2d9ec
2019-12-18 20:27:46 -08:00
Jerry Zhang
fe707c7849 Use default_observer and default_weight_observer in tests (#31424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31424

att

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D19162368

fbshipit-source-id: 33b95ba643eeeae942283bbc33f7ceda8d14c431
2019-12-18 18:35:07 -08:00
James Reed
a3cdb7eca3 Fix default instantation of dynamic quantized LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31433

Test Plan: Imported from OSS

Differential Revision: D19164539

Pulled By: jamesr66a

fbshipit-source-id: 7045817ab3dfb530c4480a10523c4c6bcdbfc7eb
2019-12-18 16:59:00 -08:00
davidriazati
148bcd3ee5 Add support for builtins as attributes (#31269)
Summary:
Fixes #27495

This adds builtins as another piece of a concrete type. They're separate from normal functions since they represent the `BuiltinFunction` sugared value (which is a direct call to a builtin op). It also moves the builtins related logic from `jit/__init__.py` to `jit/_builtins.py` so it can be used from `jit/_recursive.py` to look up functions in the builtins table.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31269

Pulled By: driazati

Differential Revision: D19149779

fbshipit-source-id: d4e5e5d7d7d528b75a2f503e6004394251a4e82d
2019-12-18 15:24:45 -08:00
davidriazati
7692494c67 Fix hex literal parsing (#29935)
Summary:
Stacked PRs
 * #29940 - [jit] Fix parsing of big float literals
 * **#29935 - [jit] Fix hex literal parsing**
 * #29931 - [jit] Throw a better error for int too big for int64_t

Previously these were all parsed as `0`
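For illustration, a minimal sketch (not from the PR) of a hex literal keeping its value in a scripted function after the parsing fix:

```
import torch

@torch.jit.script
def hex_sum() -> int:
    # With the fix, 0xFF and 0x10 parse to their actual values instead of 0.
    return 0xFF + 0x10

print(hex_sum())  # 271
```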
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29935

Pulled By: driazati

Differential Revision: D19124944

fbshipit-source-id: 1ee0c1dee589933363a5efba069a2cfaf94373c5
2019-12-18 14:00:22 -08:00
davidriazati
1f50cfc24d Throw a better error for int too big for int64_t
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29931

Pulled By: driazati

Differential Revision: D19124934

fbshipit-source-id: 91841d7ba4f2f6142c51fba07b7faa14bb817e3a
2019-12-18 14:00:16 -08:00
Elias Ellison
fb30a48b4e add unsupported section (#31329)
Summary:
Add a section for unsupported ops and modules. Automatically generate the properties and attributes that aren't bound, and for ops that have semantic mismatches set up tests so the docs stay up to date.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31329

Differential Revision: D19164472

Pulled By: eellison

fbshipit-source-id: 46290bb8a64d9de928cfb1eda5ff4558c3799c88
2019-12-18 13:56:02 -08:00
Alexander Stante
f30b14dead Fix handling of type comments in body (#30590)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/30477. Any type comment after `# type: (...) -> ` is ignored.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30590

Differential Revision: D18887351

Pulled By: driazati

fbshipit-source-id: 162c652f6d7610d14609bbcb25aaa27cdd947a76
2019-12-12 18:19:30 -08:00
Elias Ellison
bee6344d4e remove / rewrite weak module tests (#31193)
Summary:
Remove most of the testing for `weak_script`, since we removed it. Refactor a few of the existing tests to use recursive scripting api.

Fix for https://github.com/pytorch/pytorch/issues/23965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31193

Differential Revision: D18966291

Pulled By: eellison

fbshipit-source-id: 6b1e18c293f55017868a14610d87b69be42bde12
2019-12-12 13:33:38 -08:00
Elias Ellison
56de8853da Resubmit overload v2 (#31123)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/30356 and https://github.com/pytorch/pytorch/pull/31014 :'(

The last commit contains the fix. There was an internal fbcode error where the previous `impl_default->second.equal(default_val.second))` line failed to compile. I tried various fixes in C++ internally but couldn't figure anything out. This is a good example of the programming cost of going from Python to C++ for different types of objects, because the conceptual overhead has expanded in scope from (Python) to (Python, C++, pybind).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31123

Differential Revision: D18936128

Pulled By: eellison

fbshipit-source-id: 7d8fd66a6dd4a3e9838f3a0b68c219b6565a9462
2019-12-12 07:54:23 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
davidriazati
679b20b1e4 Unify list elements for all list types (#30777)
Summary:
Previously list elements were only unified for tensor lists.
This improves error messages and expands the unification logic
to include all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30777

Pulled By: driazati

Differential Revision: D18837726

fbshipit-source-id: c4d275562a8429700987569426d694faa8f6002e
2019-12-11 17:00:52 -08:00
David Riazati
1f87e823b8 Make nn.Transformer TorchScript compatible (#28561)
Summary:
This makes `nn.Transformer` usable from TorchScript. It preserves backwards compatibility via `__setstate__` on the encoder/decoder.

Fixes https://github.com/pytorch/pytorch/issues/24173
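For illustration, a minimal sketch (sizes are arbitrary, not from the PR) of scripting the stock module directly:

```
import torch
from torch import nn

model = nn.Transformer(d_model=16, nhead=2, num_encoder_layers=1, num_decoder_layers=1)
scripted = torch.jit.script(model)

src = torch.rand(5, 1, 16)  # (source seq, batch, d_model)
tgt = torch.rand(3, 1, 16)  # (target seq, batch, d_model)
print(scripted(src, tgt).shape)  # torch.Size([3, 1, 16])
```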
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28561

Differential Revision: D18124753

Pulled By: driazati

fbshipit-source-id: 7314843e5aa9c9bf974c4672e4edb24ed8ef4a6f
2019-12-11 10:57:31 -08:00
Alban Desmaison
717274c001 Add useful warnings for t.grad when it won't be populated for known reasons (#30531)
Summary:
Fix https://github.com/pytorch/pytorch/issues/2362 and https://github.com/pytorch/pytorch/issues/19778

To avoid issues with frozen models, we only warn for Tensors that require gradients and are neither leaves nor retaining their gradient.
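A minimal sketch of the situation the warning targets (example code not from the PR):

```
import torch

x = torch.rand(2, requires_grad=True)
y = x * 2                 # non-leaf, requires grad, does not retain grad
y.sum().backward()
print(x.grad)             # populated: x is a leaf
print(y.grad)             # None; accessing it now emits a warning explaining why
```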
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30531

Differential Revision: D18832767

Pulled By: albanD

fbshipit-source-id: 743e863dc14ab57713e66da78b2e4d759dfba0ff
2019-12-11 09:47:18 -08:00
Elias Ellison
9f3fe78239 peephole optimize type refinements (#31024)
Summary:
Peephole optimize out type refinements when they are no longer refining the type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31024

Differential Revision: D18920958

Pulled By: eellison

fbshipit-source-id: 6d05d9812b9f9dcf001de760a78a2042fb832773
2019-12-10 18:32:28 -08:00
Pieter Noordhuis
78a00d72b4 Revert D18899127: resubmit polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18899127

Original commit changeset: 9049b8718926

fbshipit-source-id: c70a8aa4120aa757dce0926a8ab3cc5c92cd6041
2019-12-10 10:51:07 -08:00
Elias Ellison
af4040d808 resubmit polish up overloads on free functions (#31014)
Summary:
Resubmitting https://github.com/pytorch/pytorch/pull/30356

The second commit reintroduces a deleted function which previously caused the revert.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31014

Differential Revision: D18899127

Pulled By: eellison

fbshipit-source-id: 9049b8718926c329d9cb46bb96eac6c278e9b866
2019-12-10 07:57:47 -08:00
Elias Ellison
f48a8901c5 Add floor_divide function (#30493)
Summary:
Adds `torch.floor_divide`, following numpy's `floor_divide` API. I only implemented the out-of-place version; I can add the in-place version if requested.

Also fixes  https://github.com/pytorch/pytorch/issues/27512
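A minimal usage sketch (not from the PR):

```
import torch

a = torch.tensor([5., 9.])
b = torch.tensor([2., 4.])
print(torch.floor_divide(a, b))  # tensor([2., 2.]), i.e. elementwise floored division
print(a // b)                    # the operator form computes the same thing
```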
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30493

Differential Revision: D18896211

Pulled By: eellison

fbshipit-source-id: ee401c96ab23a62fc114ed3bb9791b8ec150ecbd
2019-12-10 07:51:39 -08:00
Wanchao Liang
73dd8c005a Revert D18864774: polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18864774

Original commit changeset: 6c566738bd6f

fbshipit-source-id: 669192605a3bc1a6ba06bbb5cae54f61637a45ae
2019-12-09 15:41:45 -08:00
Elias Ellison
446488960a polish up overloads on free functions (#30356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30356

This finishes up the `torch.jit.overload` api for free-functions.
- defaults now required on the implementation function itself
- fully follows [overload spec](https://mypy.readthedocs.io/en/latest/more_types.html#function-overloading) such that the following is supported

```
@overload
def mouse_event(x1: int, y1: int) -> ClickEvent: ...
def mouse_event(x1: int,
                y1: int,
                x2: Optional[int] = None,
                y2: Optional[int] = None): ...
```

Note: `jit.overload` isn't supported yet for UDTs, but is supported for modules. This PR doesn't make the same changes for modules; if reviewers think I should include them then I could do so in a follow-up PR or wait to land this. Since that's still an internal API I think it's fine, and the changes here would allow us to expose `torch.jit.overload` on free functions.

Test Plan: Imported from OSS

Differential Revision: D18864774

Pulled By: eellison

fbshipit-source-id: 6c566738bd6f0551a000a9ea8d56e403636b7856
2019-12-09 15:12:18 -08:00
Elias Ellison
82268bf300 handle reassignment to inf and nan (#30877)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30877

Previously, when the environment tried to reassign variables which had been assigned to "inf" or "nan", it would fail because they are not simple values. Constant prop exposed this; a test was failing internally because of it.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D18861016

Pulled By: eellison

fbshipit-source-id: b9b72978a26a0b00b13bf8ea7685825551f5a541
2019-12-09 14:20:17 -08:00
Elias Ellison
3eefc06feb add constant prop for immutable types (#30544)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30544

Run Constant Propagation upon compilation only on ops with non-aliasing inputs and outputs. This speeds up the first run of `torchvision.models.resnet18` by over 50% and speeds up compilation by about 25% (although the effects didn't seem additive with https://github.com/pytorch/pytorch/pull/30503, so I'm going to land this PR first and then see if caching still has a sizable impact).

Running constant prop only with non-aliasing types does a lot of graph cleanup by removing constant ifs and a bunch of other smaller ops. It also avoids all the jitter problems we had when we tried running full constant prop previously. Because it is idempotent it doesn't jitter, and it doesn't jitter graphs constructed from tracing because tracing doesn't emit any ops that only involve non-aliasing inputs.

Full constant prop isn't idempotent because what ops are run depends on the state of mutation in the alias db, which will often change upon successive iterations of constant propagation, and because it affects graphs constructed from tracing.

Edit: if we were okay with running constant propagation on graphs constructed from tracing (potentially making them hard to debug), an alternative would be to run constant propagation until the graph reaches a fixed point.

Test Plan: Imported from OSS

Differential Revision: D18833607

Pulled By: eellison

fbshipit-source-id: 92a0adb4882d67ed5a0db5c279f5e122aeeba54a
2019-12-09 14:20:12 -08:00
Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
Edward Yang
11b3065323 Run method_tests on CUDA. (#30821)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30821

While investigating why our tests didn't catch #30704, I noticed that none
of our tests in method_tests() were being run on CUDA.  This diff moves
those tests into the new device-generic test framework so that we also get
CUDA coverage.  For expediency, I blacklisted all tests which didn't work
on CUDA (rather than fix them); that's something we can leave for future PRs.
This is done by way of a new expectedFailure gadget.

Note that all occurrences of skipIfNoLapack needed to be replaced with
skipCPUIfNoLapack.

I punted for test_jit; it's possible those tests should also run CUDA but a JIT
expert should take a look here.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18840089

Pulled By: ezyang

fbshipit-source-id: 66b613b5024c91d3e391c456bb642be7e73d4785
2019-12-06 07:24:27 -08:00
Jerry Zhang
f1755d9aea Insert GetAttr for quantization parameters instead of Constant (#30551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30551

To enable quantizing with shared types, we need to insert GetAttr nodes for
quantization parameters since the code might be shared by multiple module instances
and we'd like to make quantized module instances also share the same code but with
different values of attributes.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D18818652

fbshipit-source-id: fc95623cac59dcedd9e3f95397524eae515e7a11
2019-12-05 22:52:45 -08:00
Edward Yang
2ced81f289 Revert "Default to not build Caffe2 operators on Windows. (#29061)" (#30740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30740

This reverts commit 7102aceaf8.

Test Plan: Imported from OSS

Differential Revision: D18834315

Pulled By: ezyang

fbshipit-source-id: 2dbd1cf686864b9840365083182cd6188a285399
2019-12-05 14:01:59 -08:00
Jerry Zhang
c4c2e23385 Supporting making submodules unique (#30037)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30037

Support quantization for modules with reused submodules, e.g. relu (automatically make unique)
We first do a pass on the graph to find all duplicate uses of the same module and record the `Value`s of the
module instance; for each of these values we create a new module and change the access to that module.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18821483

fbshipit-source-id: 1698b981e9e9f0c728d9f03fcbcfbd260151f679
2019-12-04 19:26:56 -08:00
Elias Ellison
d38f9117fd Cache compilation of free functions (#30503)
Summary:
We don't have to recompile free functions if we've already compiled them.

Improved compilation of resnet18 by 27%.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30503

Differential Revision: D18796501

Pulled By: eellison

fbshipit-source-id: 2dee0fc5fcf9adc5b92213f8cb813730d71b376f
2019-12-04 12:45:35 -08:00
Jerry Zhang
f73cd28082 InsertObservers for shared class types (#30548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30548

ClassTypes can be shared among different module instances, but previously we assumed
they would be unique; this PR enables the insert_observers pass to work with shared class types.

Test Plan:
python test/test_jit.py
python test/test_quantization.py

Imported from OSS

Differential Revision: D18802465

fbshipit-source-id: b782e71e44a043af45577ac2b5c83e695155bb8b
2019-12-04 09:34:47 -08:00
Nikolay Korovaiko
d4c25add45 make sure the counter stays correct in between bailout transitions (#30186)
Summary:
This fixes the second issue reported in https://github.com/pytorch/pytorch/issues/29909 namely, a loop counter is assigned the wrong values after transitioning to a bailout graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30186

Differential Revision: D18646845

Pulled By: Krovatkin

fbshipit-source-id: 1f7c601dd9f35892979385ffa132fb0886a4f203
2019-12-03 14:59:08 -08:00
davidriazati
9c02b88791 Add pickler support for Device (#30131)
Summary:
This PR adds (un)pickling support for `c10::Device`. It also adds `torch.device` as a type annotation for device attributes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30131

Pulled By: driazati

Differential Revision: D18664421

fbshipit-source-id: 64378fb42b2d1bbe2bd86259e5ed10f24b5d1e49
2019-12-02 17:43:08 -08:00
Jerry Zhang
fec903ce00 Fix test case after get_qparams refactor (#30470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30470

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18710775

fbshipit-source-id: b1c7c0afbc538ff1d3e19c5d3d6bd425e4f94f06
2019-11-26 12:16:29 -08:00
Jerry Zhang
0b71e7e1fd Refactor QAT Conv module for better extensibility (#30362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30362

Right now the QAT modules (qat.ConvBn2d, qat.ConvBnReLU2d, qat.Conv2d)
are not convenient for supporting other dimensions of Conv; this PR refactors
these modules so that we can support Conv1d/Conv3d better.

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18691152

fbshipit-source-id: 5b561e6b054eadd31b98cabdf1ac67a61ee9b805
2019-11-26 06:53:12 -08:00
Lingyi Liu
b8f50d9cc8 Support to add dequant for each use of Value (#30145)
Summary:
In this PR, we mainly handle the case where there are multiple usages of a Value when inserting the quant-dequant pair. This change will add one dequant for each usage of the Value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30145

Differential Revision: D18671600

Pulled By: lly-zero-one

fbshipit-source-id: 61324a98861da85b80dcf7e930381311118ae53b
2019-11-25 14:52:58 -08:00
David Riazati
8c6f0c0587 Detect TorchScript archives in torch.load (#29339)
Summary:
This PR looks for a `constants.pkl` file at the top level in a zip file
in `torch.load`. If found, it calls `torch.jit.load` instead and issues
a warning to call `torch.jit.load` directly
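A minimal sketch of the new behavior (the file name is just an example, not from the PR):

```
import torch

m = torch.jit.script(torch.nn.Linear(4, 2))
m.save("scripted.pt")

# torch.load notices the top-level constants.pkl entry, warns, and returns the
# result of torch.jit.load instead of failing on the archive.
loaded = torch.load("scripted.pt")
print(type(loaded))
```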
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29339

Differential Revision: D18611095

Pulled By: driazati

fbshipit-source-id: f070a02f6b5509054fc3876b3e8356bbbcc183e1
2019-11-22 12:30:30 -08:00
James Reed
97fae401f0 Use LinearPackedParams everywhere
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30198

Test Plan: Imported from OSS

Differential Revision: D18628003

Pulled By: jamesr66a

fbshipit-source-id: 76ff0248fd859e805a15cde555d26dd2138636fa
2019-11-22 11:31:17 -08:00
Nikolay Korovaiko
e3334723b2 fix a crash due in nested bailouts (#30097)
Summary:
A prim::BailOut also needs to capture max trip counts as for some graphs they aren't constants and they are used in continuation graphs to figure out the remaining number of iterations to run.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30097

Differential Revision: D18624446

Pulled By: Krovatkin

fbshipit-source-id: 085d25981c6669f65848996cd2d50066cc252048
2019-11-21 09:53:12 -08:00
Wanchao Liang
f7b12a9858 fix aten::grad to return optional list (#29577)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29577

`torch.autograd.grad` can return None if one of the inputs is not in the
autograd graph or does not require grad; this fixes it so that it returns a
list of optional tensors instead of a list of tensors.

This might unfortunately be a BC issue, but I think it's rare both
internally and externally (only training uses it, and most training
uses backward instead of autograd.grad), so whitelist it.
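A minimal sketch of the case that motivates the Optional element type (example code not from the PR):

```
import torch

x = torch.rand(3, requires_grad=True)
y = torch.rand(3, requires_grad=True)
out = (x * 2).sum()   # y does not participate in the graph

gx, gy = torch.autograd.grad(out, (x, y), allow_unused=True)
print(gx)             # tensor([2., 2., 2.])
print(gy)             # None, hence List[Optional[Tensor]] rather than List[Tensor]
```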

Test Plan: Imported from OSS

Differential Revision: D18491642

fbshipit-source-id: d32b2b3446cf9e8b9a98f6d203a21a75643d8991
2019-11-20 22:19:10 -08:00
James Reed
1eb9f49cc6 Fix test_jit under pytest
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30212

Test Plan: Imported from OSS

Differential Revision: D18632004

Pulled By: jamesr66a

fbshipit-source-id: d5cfd351890140c604535744598d0f6ad8989450
2019-11-20 20:44:28 -08:00
James Reed
449828378d Serialize ClassType as its qualname
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30058

Test Plan: Imported from OSS

Differential Revision: D18584269

Pulled By: jamesr66a

fbshipit-source-id: 5f1d0142bd7cd94eecbd2ed9250a0de47639040b
2019-11-20 16:17:26 -08:00
Jerry Zhang
f2b851a9e5 Returning axis from calculate_qparams (#29494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29494

`calculate_qparams` of per-channel quantization should return the axis; this
PR adds that and also adds corresponding support in graph mode.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18580905

fbshipit-source-id: f9691c1f043f8bca39f81716a4d0b10f60a65396
2019-11-20 11:06:48 -08:00
Jerry Zhang
64817a43d2 Test for per channel graph mode quantization (#29493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29493

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18580907

fbshipit-source-id: 05218e012c0322bb88714670d5dbe9332252f2ee
2019-11-20 11:06:44 -08:00
Mikhail Zolotukhin
2c8dce915c Show full call stack in TorchScript exception even when calls were inlined.
Summary:
This uses the newly added InlinedCallStack to print the original call stack
even if the real call stack is shallower because of inlining.
This change also makes TorchScript stack traces look like Python ones.

Example:
```
@torch.jit.script
def baz(c, b):
    return c + b

@torch.jit.script
def foo(c, b):
    return baz(c, b)

@torch.jit.script
def bar(c, b):
    return foo(c, b)

bar(torch.rand(10), torch.rand(9))
```

Output before:
```
Traceback (most recent call last):
  File "fail.py", line 25, in <module>
    bar(torch.rand(10), torch.rand(9))
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
The above operation failed in interpreter, with the following stack trace:
at fail.py:15:11
@torch.jit.script
def baz(c, b):
    return c + b
           ~~~~~ <--- HERE
```

Output after:
```
Traceback (most recent call last):
  File "fail.py", line 41, in <module>
    bar(torch.rand(10), torch.rand(9))
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
The above operation failed in interpreter.
Traceback (most recent call last):
  File "fail.py", line 33
@torch.jit.script
def bar(c, b):
    return foo(c, b)
           ~~~ <--- HERE
  File "fail.py", line 29, in foo
@torch.jit.script
def foo(c, b):
    return baz(c, b)
           ~~~ <--- HERE
  File "fail.py", line 25, in baz
@torch.jit.script
def baz(c, b):
    return c + b
           ~~~~~ <--- HERE
```

Output of non-scripted python code:
```
Traceback (most recent call last):
  File "fail.py", line 36, in <module>
    bar(torch.rand(10), torch.rand(9))
  File "fail.py", line 21, in bar
    return foo(c, b)
  File "fail.py", line 18, in foo
    return baz(c, b)
  File "fail.py", line 15, in baz
    return c + b
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
```

Differential Revision: D18532812

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: e7e5ba5e4a8f1c7086406271d0f1685d9db8541a
2019-11-19 17:58:55 -08:00
Jerry Zhang
c2e576e74b Per channel quantization support in insert_prepack_unpack (#29701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29701

att

Test Plan:
python test/test_jit.py 'TestJit.test_insert_prepack_unpack'

Imported from OSS

Differential Revision: D18580908

fbshipit-source-id: 2d1ce9b6279586198cb53a7fd2a35325fa20bf20
2019-11-19 15:53:04 -08:00
David Riazati
dca123e76d Add zipfile serialization (#29232)
Summary:
Stacked PRs
 * https://github.com/pytorch/pytorch/issues/29244 - Use custom CRC
 * **https://github.com/pytorch/pytorch/issues/29232 - Add zipfile serialization**

This adds a serialization method that uses a zipfile (https://github.com/pytorch/pytorch/issues/26567). Right now it is
guarded behind a flag `_use_new_zipfile_serialization`. In release mode it seems to have performance about the same / slightly better than the current serialization in some simple benchmarks for large/small tensors.
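A minimal opt-in sketch while the flag still defaults off (not from the PR):

```
import torch

t = torch.rand(10)
torch.save(t, "tensor.pt", _use_new_zipfile_serialization=True)
print(torch.load("tensor.pt").shape)  # torch.Size([10])
```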

Follow ups:
* Flip the `_use_new_zipfile_serialization` flag
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29232

Differential Revision: D18332036

Pulled By: driazati

fbshipit-source-id: 1bac0847c4d599612cba905f2cac8248783be2f4
2019-11-19 10:17:32 -08:00
Vitaly Fedyunin
5f510374e7 Add torch.memory_format support to the TorchScript
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28544

Test Plan: Imported from OSS

Differential Revision: D18093801

Pulled By: VitalyFedyunin

fbshipit-source-id: 2c82a1508da50a24825b44939434d86546cf1e19
2019-11-18 05:35:49 -08:00
Elias Ellison
902c1f9ef1 Check for mutable default parameters (#29833)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/21545

We were silently giving wrong semantics previously:

Python behavior:
```
def test(x=[]):
   x.append(1)
   return len(x)

print(test()) # 1
print(test()) # 2
```

By checking at the python layer, we prevent any new models from serializing this behavior but do not break existing serialized models.
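A minimal sketch of the usual workaround once mutable defaults are rejected (not from the PR): default to None and build the list inside the body.

```
import torch
from typing import List, Optional

@torch.jit.script
def test(x: Optional[List[int]] = None) -> int:
    if x is None:
        xs: List[int] = []   # a fresh list per call, unlike a mutable default
    else:
        xs = x
    xs.append(1)
    return len(xs)

print(test())  # 1
print(test())  # still 1
```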
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29833

Differential Revision: D18513168

Pulled By: eellison

fbshipit-source-id: 6fe73f28e1f9d39dedeaf67a04718089d14401a1
2019-11-14 18:28:48 -08:00
James Reed
90ac35b7bd Fix tracing of autograd functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29791

Test Plan: Imported from OSS

Differential Revision: D18499142

Pulled By: jamesr66a

fbshipit-source-id: 6c2864dfbfa0419c8c888d55e082a619d058b3ee
2019-11-14 11:18:07 -08:00
Nikolay Korovaiko
78bd0069d3 enable back 2 tests for simple exec
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29661

Differential Revision: D18456143

Pulled By: Krovatkin

fbshipit-source-id: 9e4ae3ae681e3c9a81ada1e8b39da1e1342ce394
2019-11-13 14:22:19 -08:00
Ailing Zhang
8875120b54 Make dropout condition on training.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29436

Reviewed By: bddppq

Differential Revision: D18438288

Pulled By: ailzhang

fbshipit-source-id: d9c6fe4bd734dc87b2154b0ccd80efcb61740ec9
2019-11-12 16:32:02 -08:00
Jerry Zhang
fd8f74e688 Remove observer module after insert_quant_dequant (#29622)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29622

Remove the observer module in the quantized model

Test Plan: python test/test_jit.py 'TestJit.test_insert_quant_dequant'

Differential Revision: D18442888

Pulled By: jerryzh168

fbshipit-source-id: 22c777569af0e814661fe51f76341b39600fae0d
2019-11-12 14:48:40 -08:00
Elias Ellison
fbe90b65fa Cleanup special handling of Containers, allowing custom forwards (#28988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28988

Make ModuleList, Sequential, ModuleDict go through the same pathway as other modules, cleaning up a bunch of code and allowing them to define custom forwards and other methods.

EDIT: Previously, we would ignore an nn.Sequential attribute if it was not in `__constants__` ("did you forget to add it to Constants"). This PR scripts it even if it is not in `__constants__`. Is that what we want?

Test Plan: Imported from OSS

Differential Revision: D18402821

Pulled By: eellison

fbshipit-source-id: dd4f28fb0df0d1ba4ad1b3bc34ba141959a433f7
2019-11-12 14:10:38 -08:00
Junjie Bai
949d6ae184 Fix jit tracing namedtuple (#29477)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29477

When passing in a namedtuple as tracing input, __clone_inputs will call into `torch.autograd.function._nested_map` and https://github.com/pytorch/pytorch/blob/593bb14/torch/autograd/function.py#L256 will run into an error (because namedtuple doesn't support this style of constructor).
ghstack-source-id: 93586773

Differential Revision: D18405504

fbshipit-source-id: 8d0135cff0bdaaabcf6e06fac63df0f75c0c50b9
2019-11-12 10:38:20 -08:00
Jianyu Huang
bbff06ee96 Convert conv_prepack to conv2d_prepack and conv_unpack to conv2d_unpack (#29529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29529

Pull Request resolved: https://github.com/pytorch/glow/pull/3771

We would like to replace `conv_prepack` with `conv2d_prepack` and  `conv_unpack` with `conv2d_unpack`.

This makes the naming consistent between 2D and 3D conv:
```
torch.ops.quantized.conv2d_prepack
torch.ops.quantized.conv2d_unpack
torch.ops.quantized.conv2d
torch.ops.quantized.conv3d_prepack
torch.ops.quantized.conv3d_unpack
torch.ops.quantized.conv3d
```

We should do this sooner rather than later, before we have more users of the quantized conv2d ops, for better engineering.

The replacement bash command is as the follows:
```
find ./ -type f -exec sed -i -e 's/quantized::conv_prepack/quantized::conv2d_prepack/g' {} \;
find ./ -type f -exec sed -i -e 's/quantized::conv_unpack/quantized::conv2d_unpack/g' {} \;
find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_prepack/torch.ops.quantized.conv2d_prepack/g' {} \;
find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_unpack/torch.ops.quantized.conv2d_unpack/g' {} \;
```
ghstack-source-id: 93661879

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D18421079

fbshipit-source-id: 17ae8b1ee79223bd2c5d4bbccd57af6580c4ab12
2019-11-11 21:54:10 -08:00
Jerry Zhang
70f886ffa4 Revert D18253777: Remove observer module after insert_quant_dequant
Test Plan: revert-hammer

Differential Revision:
D18253777

Original commit changeset: 26081c4c3fd3

fbshipit-source-id: 88f330c34976030c9310e7982fa6ae74e093ebbf
2019-11-11 17:09:58 -08:00
Jerry Zhang
587996ef04 Remove observer module after insert_quant_dequant (#28985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28985

Remove the observer module in the quantized model

Test Plan:
python test/test_jit.py 'TestJit.test_insert_quant_dequant'

Imported from OSS

Differential Revision: D18253777

fbshipit-source-id: 26081c4c3fd3dc049cafa8c0383219bc4c233589
2019-11-11 16:31:01 -08:00
Zachary DeVito
4e4e29a511 Simplify ScriptModule bindings. (#29432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29432

This removes a lot of the private methods on torch._C.ScriptModule,
and instead uses slot_dict_impl views to implement
_parameters, _buffers, and _modules in nn.Module.

A followup PR should also remove the _register_attribute,
_register_module, and _register_parameter methods, but this requires
more refactoring of the way tracing creates modules and replication
for data parallel works.

Test Plan: Imported from OSS

Differential Revision: D18387963

Pulled By: zdevito

fbshipit-source-id: f10d47afeb30c1e05d704ae5ac4166830933125c
2019-11-11 13:52:36 -08:00
Nikolay Korovaiko
5b702ab52b switching to a simple/full executor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29230

Differential Revision: D18402229

Pulled By: Krovatkin

fbshipit-source-id: 62f4bc9bc89c0c7369359bba1359c22a2fa80f46
2019-11-11 13:41:35 -08:00
eellison
e01fc56ecb move type inference for arange into c++ (#27629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/17662

I'm not sure if `arange` needs to be in python_arg_parser at all, given the schemas in native_functions.yaml. In any case this at least fixes the dtype mismatch.

In follow up PRs I will try to handle some of the other ops that do type inference at the python level, like randint.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27629

Differential Revision: D17885939

Pulled By: eellison

fbshipit-source-id: f97a8bc722b7ab77de1c42a992e49a4a3175ad60
2019-11-11 11:26:21 -08:00
Elias Ellison
91e1f07967 Check for unrolled loop in break & continue (#29474)
Summary:
For the same reason we don't allow iteration over heterogeneous types (modulelists/tuples) with types that don't have a static length, we also can't break/continue within them - we need to statically know all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29474

Differential Revision: D18406097

Pulled By: eellison

fbshipit-source-id: 70ed3fc4947b6237cdd6703135a988a5c13ce786
2019-11-08 15:51:13 -08:00
Michael Suo
52456b2eba add hasattr() (#29332)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29332

Even though we're statically typed, this can be useful, e.g. as
shorthand when iterating through a module list.
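A rough sketch of that shorthand use (example code not from the PR):

```
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.ReLU()])

    def forward(self, x):
        for layer in self.layers:
            # hasattr() is evaluated statically per unrolled element
            if hasattr(layer, "weight"):
                x = x + layer.weight.sum()
            x = layer(x)
        return x

scripted = torch.jit.script(Net())
print(scripted(torch.rand(1, 4)).shape)  # torch.Size([1, 4])
```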

Test Plan: Imported from OSS

Differential Revision: D18393097

Pulled By: suo

fbshipit-source-id: aa42e955f88d1b8a876d0727055eb596453b9839
2019-11-08 13:58:14 -08:00
Edward Yang
4e21157e01 Revert "Revert D18171156: Merge Tensor and Variable." (#29299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29299

This reverts commit 9c43b16df9, but also
with the changes from D18348622.  Comments there:

thpp-compatibility is used by admarket/adreview/service:adreviewservice and
libtorch is too big for the service to deal with.

thpp-compatibility doesn't support autograd, so we hack around dispatching
variables by using AutoNonVariableTypeMode everywhere we call into ATen,
so we never attempt to call into Variable stubs.  If you get it wrong,
you'll get an error like:

```
what():  Could not run 'aten::empty' with arguments from the 'VariableTensorId' backend. 'aten::empty' is only available for these backends: [SparseCPUTensorId, CPUTensorId, MkldnnCPUTensorId]. (lookup_ at caffe2/aten/src/ATen/core/dispatch/DispatchTable.h:298)
```

Test Plan:
Imported from OSS

```
buck test //thpp-compatibility/...
buck build mode/opt-clang admarket/adreview/service:adreviewservice
```

adreviewservice canary: https://our.intern.facebook.com/intern/ads/canary/422290029716387895 (comparing against parent comment due to current breakage) ==> experiment store https://our.intern.facebook.com/intern/experiment_store/experiment/43990006/
adfinder canary: https://our.intern.facebook.com/intern/ads/canary/422268535840333934
adindexer canary: https://our.intern.facebook.com/intern/ads/canary/422268550559034675

adreview second canary:  https://our.intern.facebook.com/intern/ads/canary/422307863515591925

canary without thpp-compat fixups https://our.intern.facebook.com/intern/ads/canary/422308951649168772

Reviewed By: dreiss

Differential Revision: D18353504

Pulled By: ezyang

fbshipit-source-id: 65feaba39fa07bb66762810909aeb38868668a30
2019-11-08 09:11:20 -08:00
Elias Ellison
19d3a7ad02 fix negative string indexing (#22700)
Summary:
Strings allow negative indexing in Python.
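A minimal sketch (not from the PR):

```
import torch

@torch.jit.script
def last_char(s: str) -> str:
    return s[-1]   # now matches Python's negative-indexing semantics

print(last_char("hello"))  # 'o'
```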
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22700

Differential Revision: D18382382

Pulled By: eellison

fbshipit-source-id: 05c3fa0890be6234ee1467da0e65697f51236523
2019-11-07 17:28:16 -08:00
James Reed
782e80e6e7 Make jit.trace_module reentrant (#29411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29411

Fixes https://github.com/pytorch/pytorch/issues/29367

Test Plan: Imported from OSS

Differential Revision: D18380559

Pulled By: jamesr66a

fbshipit-source-id: 5caf606ccbc5dc79dac14e3c28cc02dec19ce695
2019-11-07 16:29:06 -08:00
Jerry Zhang
de9a54466d clone should preserve the type of attribute (#29269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29269

Hit this bug when I have an attribute of type `Optional[Tensor]` which
is initialized to None and reassigned later to some tensor.

Test Plan:
.

Imported from OSS

Differential Revision: D18364338

fbshipit-source-id: d8e1277a84ab7d80331cba83f5639469d398632e
2019-11-07 15:25:20 -08:00
James Reed
1dd3c8e539 Skip flaky test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29403

Test Plan: Imported from OSS

Differential Revision: D18377162

Pulled By: jamesr66a

fbshipit-source-id: 69052a7466d03468146e99da45f1ee2c9e85dfa8
2019-11-07 12:52:47 -08:00
Alban Desmaison
b14c5943d4 Handle warning in torchscript (#27154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27154

Fix for #25859

* #28283 Fix clang-tidy errors in csrc/Module.cpp

Test Plan: Imported from OSS

Differential Revision: D18249631

Pulled By: albanD

fbshipit-source-id: 4e9bbad07cc39e7c7f0546ef7587bd4ab2dd644e
2019-11-07 08:35:16 -08:00
Alban Desmaison
9b875e1256 Buffer python warning to avoid deadlocks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26613

Test Plan: Imported from OSS

Differential Revision: D18249633

Pulled By: albanD

fbshipit-source-id: 863f52400e1b97943a67a9e1abb09ae8d045e7f0
2019-11-07 08:35:06 -08:00
Zachary DeVito
796363147f Implement more of of the nn.Module API (#28828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28828

This updates torch::script::Module to more closely match the behavior
of nn.Module. In particular, it implements the (optionally recursive)
iterators that retrieve submodules, parameters, and buffers and makes
their names match the python versions.

This also removes the individual accessors for Parameter, Module, Buffer, etc.
and replaces them with a single `attr` function which is equivalent to
writing `a.foo` in Python (`setattr` emulates `a.foo = v`).
As we build out the user-facing API for TorchScript values this will end
up matching how an  attribute is accessed on general objects.

This PR preserves the Python bindings for script::Module by emulating the
old API at the binding level. A followup will clean up the usage to more
directly match the C++ API.

Test Plan: Imported from OSS

Differential Revision: D18197611

Pulled By: zdevito

fbshipit-source-id: 7ee4dcbb258605d1c988314b05d938423f1ccee5
2019-11-06 22:58:25 -08:00
James Reed
309b28ee3a Trace module calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29261

Test Plan: Imported from OSS

Differential Revision: D18343363

Pulled By: jamesr66a

fbshipit-source-id: 0c6394205e2c0ea8708028d20df83fe17b466ff4
2019-11-06 15:05:49 -08:00
Michael Suo
cc457ca30f split remaining "easy" tests (#29249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29249

This splits out all the tests that are "easy", leaving `TestJit`,
`TestScript`, the autogenerated tests, and a small docs test.

Splitting those into reasonable chunks is more effort which is less
mechanical.

Differential Revision: D18339007

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 69164b9f9a2c379fe8923a846c98dd3c37ccb70e
2019-11-06 13:23:01 -08:00
Edward Yang
9c43b16df9 Revert D18171156: Merge Tensor and Variable.
Test Plan: revert-hammer

Differential Revision:
D18171156

Original commit changeset: 5b6a045beba3

fbshipit-source-id: f5581d902c2305018ea49f8473592be2a465560b
2019-11-06 10:57:00 -08:00
James Reed
6e38c3b89e Make get_trace_graph private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29149

Test Plan: Imported from OSS

Differential Revision: D18307559

Pulled By: jamesr66a

fbshipit-source-id: 0b6aec2a1d10810d4e7f6b30b256cca79fc4e854
2019-11-05 17:04:36 -08:00
Elias Ellison
a5aeb37493 Don't throw when type is used in TorchScript (#28053)
Summary:
Type objects in python have an attribute `__abstractmethods__` that throws when it is accessed, so we were failing with an AttributeError whenever a type was used in TorchScript.

This PR prevents that error from happening. We can't just throw when a type is used because it could be used to access a static method: https://github.com/pytorch/pytorch/pull/27163
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28053

Differential Revision: D18332347

Pulled By: eellison

fbshipit-source-id: 9c7f2220f92674ad4d903621d9762cecc566ab0d
2019-11-05 15:15:12 -08:00
Edward Yang
25261a4776 Merge Tensor and Variable. (#28620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28620

All Tensors are Variables now, they just happen to have requires_grad=False. Tensors ALWAYS have `VariableTensorId` in their type set.

When constructing this patch, I had to make decisions about what I would fix in this patch, and what I would leave for follow up PRs. Here is the cleanup that happens in this patch:

- The `is_variable` property is removed from TensorOptions. I removed this immediately because unlike Tensor::is_variable, TensorOptions::is_variable doesn't respect our VariableTensorId thread-local state. This means that there were a bunch of places where TensorOptions::is_variable was false, which is obviously bogus in the world when tensor and variable are merged. Instead of keeping the method as a function that always returns true, I just opted to remove it entirely (it's not public API.) All places we set `is_variable` are deleted.
  - Knock on effect: there is no longer a separate DeprecatedTypeProperties for the variable and non-variable versions of type.
  - Knock on effect: instead of asserting on TensorOptions::is_variable, instead we just test `at::impl::variable_is_excluded()`
- There is now only one copy of the cuDNN RNN dropout cache, not two (I'm not sure why we had two to begin with)

Some cleanup that doesn't happen in this patch:
- Eliminating unnecessary uses of `make_variable`
- Eliminating `Tensor::is_variable`

The most subtle part of this patch is retaining tracing behavior: the fact that everything is a Variable means that more code gets routed to VariableType than before; this can change traces. I identified two places where we didn't appropriately turn off VariableType, mostly factory functions:

- `torch.tensor` must turn off VariableType before invoking `at::empty` to construct the tensor, as it subsequently does direct data access
- `tensor_slow` (invoked when you pass a Python scalar to a tensor argument) must turn off VariableType before calling `scalar_to_tensor` so the scalar gets traced as constant, rather than as a call to `scalar_to_tensor`.

Honestly, these are all giant hacks, and should be replaced with a more specialized guard that just toggles tracing.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D18171156

Pulled By: ezyang

fbshipit-source-id: 5b6a045beba37492647e350190f495114e86504d
2019-11-04 14:59:57 -08:00
Elias Ellison
60cb56d128 Refactor iterables (#29138)
Summary:
Refactor list comprehensions so they go through the same path as other for loops, making list comprehensions work with ModuleLists and also fixing https://github.com/pytorch/pytorch/issues/27255

Replacing https://github.com/pytorch/pytorch/pull/28296 which was gh-poisoned and previously accepted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29138

Differential Revision: D18303432

Pulled By: eellison

fbshipit-source-id: 8e4c0ba6f800142d5c4d921d56917cfae0c74655
2019-11-04 14:39:22 -08:00
Edward Yang
7102aceaf8 Default to not build Caffe2 operators on Windows. (#29061)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29061

It looks like we are too close to the maximum library size on
Windows.  Kill Caffe2 operators to get us lower again.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D18281083

Pulled By: ezyang

fbshipit-source-id: 8a11f9059dbf330f659bd96cc0cc2abc947723a8
2019-11-04 14:32:47 -08:00
Elias Ellison
fdeef45852 Add Support For Module Containers as Iterables (#28255)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28255

Add support for treating Sequentials, ModuleLists, and ModuleDicts as iterables.

As before, when emitting a for loop over a module container we unroll the for loop over all elements. We require that any SugaredValue in an iterable alongside a module container have a statically determinable length.

Otherwise, if you zipped over a list of varying length and an nn.Sequential that alternated between returning a Tensor and a Dictionary, the output type would change based on the length of the list.
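A minimal sketch of direct iteration over a container (not from the PR; zipping with other statically sized iterables follows the same unrolling rule):

```
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.Linear(4, 2)])

    def forward(self, x):
        for layer in self.layers:   # unrolled: the length is statically known
            x = layer(x)
        return x

scripted = torch.jit.script(Net())
print(scripted(torch.rand(1, 4)).shape)  # torch.Size([1, 2])
```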

Fix for #17179
And https://github.com/pytorch/pytorch/issues/27401
and https://github.com/pytorch/pytorch/issues/27506

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D18278124

Pulled By: eellison

fbshipit-source-id: aca336a5b8da89c756b1f0884883649510cbde3c
2019-11-04 09:19:40 -08:00
Wanchao Liang
1e904049ca guard against inheritance on torchscript classes (#28407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28407

Given that we do not have support for inheritance or any polymorphism
strategy yet, we should guard against users using it until we get
full support, so that users won't be confused by the weird behaviors.

Test Plan: Imported from OSS

Differential Revision: D18284310

fbshipit-source-id: f55a224f4190d57926d91ed98f6168d787387eb8
2019-11-02 16:38:56 -07:00
Jerry Zhang
5ac3df7712 Minor fix and turn off fold_convbn (#27403)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27403

In the fold_convbn pass, we need to recompute the parameters (weight, bias) for
conv, update the attributes of conv, and update the access of bias in conv:
if the original conv has no bias, the `self.bias` access will be
inlined and replaced by the Constant node `None = prim::Constant()`, and we need to
update this to use `GetAttr[name="bias"]` to make it work. But there is
also some work going on to handle constants, so we'll fix this pass after
that is done.

Test Plan:
.

Imported from OSS

Differential Revision: D18182918

fbshipit-source-id: bba510bc41ab58e0eb76f7b77335b6e3ffe2862d
2019-11-01 12:15:38 -07:00
Vitaly Fedyunin
4bfe2f0900 Fix jit outplace tracing and reapply changes to *_like operators. (#28839)
Summary:
Reapply the reverted changes and fix the files `gen_variable_type.py` and `test_jit.py`.

https://github.com/pytorch/pytorch/issues/27891 Cleanup testing of _like operators
https://github.com/pytorch/pytorch/issues/27890 Add memory format support to randn_like operator
https://github.com/pytorch/pytorch/issues/27889 Add memory format support to randint_like operator
https://github.com/pytorch/pytorch/issues/27562 Add memory format support to zeros_like operator
https://github.com/pytorch/pytorch/issues/27561 Add memory format support to rand_like operator
https://github.com/pytorch/pytorch/issues/27270 Add memory format support to ones_like operator
https://github.com/pytorch/pytorch/issues/27262 Add memory format support to full_like operator
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28839

Test Plan:
Imported from GitHub, without a `Test Plan:` line.

buck test mode/dev //language_technology/neural_mt/os/pytorch_translate/test:test_onnx -- 'test_forced_decoder_export_vocab_reduction \(language_technology\.neural_mt\.os\.pytorch_translate\.test\.test_onnx\.TestONNX\)'

Differential Revision: D18203397

Pulled By: VitalyFedyunin

fbshipit-source-id: eea41cbd4c232cf5a54172b1e1b16b173798f298
2019-10-31 13:23:08 -07:00
Jerry Zhang
6b5bfd4cfc Make inserted child module names unique (#27237)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27237

Making inserted observer module and wrapper module names unique

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D18182917

fbshipit-source-id: 77aa5997fbf024c73085866550372b5e68ad9ae1
2019-10-29 12:30:49 -07:00
Nikolay Korovaiko
47faee2fae Switching tests to ProfilingExecutor (rebased)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28535

Differential Revision: D18197932

Pulled By: Krovatkin

fbshipit-source-id: 2639b205e899f800787ee57c157447d54e4669c3
2019-10-29 11:41:42 -07:00
James Reed
f782500ee0 Abstract tracer::enter and tracer::exit into a function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28473

Test Plan: Imported from OSS

Differential Revision: D18121007

Pulled By: jamesr66a

fbshipit-source-id: 4c4a4344ad9bcc4630b945d2a645a0b05928933c
2019-10-26 18:41:14 -07:00
davidriazati
dbf1996f79 Support MultiheadedAttention module (#28555)
Summary:
This makes MultiheadedAttention TorchScript compatible

It also breaks BC-compatibility for old models that do not have `_qkv_same_embed_dim` as an attribute.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28555

Pulled By: driazati

Differential Revision: D18124746

fbshipit-source-id: 5c5042fc6fc0e557db859a8ae05174cba5fce6a9
2019-10-25 17:28:53 -07:00
Jerry Zhang
e280f93e31 Prepack folding for conv2d (#27119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27119

att

Test Plan:
python test/test_jit.py 'TestJit.test_fold_prepack'

Imported from OSS

Differential Revision: D17717636

fbshipit-source-id: 97e9f8d927f7eacedf09f47b8ae1bf8216b8cad4
2019-10-23 09:03:14 -07:00
neginraoof
d2eb08d17b Fix tracing slice/select with dynamic inputs (#26549)
Summary:
Fix Slice/Select trace arguments. This PR stashes arguments to functions in order to avoid tracing them as constants.
This PR depends on a fix for select op in PR:
https://github.com/pytorch/pytorch/pull/25273
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26549

Reviewed By: hl475

Differential Revision: D17623851

Pulled By: houseroad

fbshipit-source-id: ae314004266688d2c25c5bada2dcedbfc4f39c5b
2019-10-22 17:09:40 -07:00
Michael Suo
4e033b0040 split TestLogging, TestDict, TestList
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28038

Test Plan: Imported from OSS

Differential Revision: D17954441

Pulled By: suo

fbshipit-source-id: 4703fb577adea3aa00fabb13c577b055e9ab4d7c
2019-10-21 17:15:15 -07:00
Michael Suo
f36497e687 split test_type_sharing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28037

Test Plan: Imported from OSS

Differential Revision: D17954442

Pulled By: suo

fbshipit-source-id: 6edee4d7dee0e52b58e71d3b520c0503fb7bd0ed
2019-10-21 17:15:11 -07:00
Zachary DeVito
fb4517132f Allow 'Any' to appear as a type argument. (#26572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572

Combined with isinstance specialization, this allows a degree of polymorphism
in functions without needing to use our weirder overload hacks.

We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.

Test Plan: Imported from OSS

Differential Revision: D17530643

Pulled By: zdevito

fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
2019-10-16 11:07:08 -07:00
Hiroshi Ogawa
97b39a296f Fix error report highlight for unmatched type annotation (#27195)
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/25801 (see there for my verbose analysis).

As an example, for the following code:

```
import torch

@torch.jit.script
def f1(x):
    # type: (int, int) -> None
    pass
```

this PR will change error message from this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
# type: (int, int) -> None
```

to this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
at __scratch__/example.py:4:0
@torch.jit.script
def f1(x):
~~~~~~~~ <--- HERE
    # type: (int, int) -> None
    pass
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27195

Differential Revision: D17910902

Pulled By: driazati

fbshipit-source-id: af5c6353069d005752d6c7f0bd6a0c6db8437e55
2019-10-16 10:39:36 -07:00
davidriazati
8cdc262063 Add support for @staticmethod (#27163)
Summary:
Resolve static methods as functions

Fixes #26792
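A minimal sketch of what now compiles (not from the PR):

```
import torch

class MyModule(torch.nn.Module):
    @staticmethod
    def scale(x: torch.Tensor) -> torch.Tensor:
        return x * 2

    def forward(self, x):
        # the static method is resolved as a plain function under scripting
        return MyModule.scale(x)

scripted = torch.jit.script(MyModule())
print(scripted(torch.ones(2)))  # tensor([2., 2.])
```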
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27163

Pulled By: driazati

Differential Revision: D17695094

fbshipit-source-id: 4671cae1a92526a35c83b8d9c12a50aa5442412b
2019-10-16 10:36:38 -07:00
Zachary DeVito
cf43aa3e16 add type refinements for isinstance checks (#27772)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27772

This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
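For illustration, a minimal sketch of the kind of refinement this enables (not from the PR):

```
import torch
from typing import Optional

@torch.jit.script
def add_one(x: Optional[torch.Tensor]) -> torch.Tensor:
    if isinstance(x, torch.Tensor):
        # inside this branch x is refined to Tensor (via unchecked_cast)
        return x + 1
    return torch.zeros(1)
```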

Test Plan: Imported from OSS

Differential Revision: D17885424

Pulled By: zdevito

fbshipit-source-id: ce81077d6fbeaf2a802a2e0b17349aca85670466
2019-10-15 16:00:42 -07:00
Zachary DeVito
30d9316f35 refactor tryMatchSchema (#26499) (#27773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27773

We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:

* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
  outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)

* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema

Test Plan: Imported from OSS

Differential Revision: D17885425

Pulled By: zdevito

fbshipit-source-id: 064bc9fa4bd57b2e5366fff9f3c6ab9b9945e08b
2019-10-14 20:45:25 -07:00
Michael Suo
a4a5b6fcaa Revert D17913708: [pytorch][PR] [JIT] throw on custom forward for module containers
Test Plan: revert-hammer

Differential Revision:
D17913708

Original commit changeset: 1cc2a8a4b573

fbshipit-source-id: 19ad68a1b0fd8e0f17e1b7ab92879106517e13d2
2019-10-14 17:48:31 -07:00
Michael Suo
aaedf1b38b break out test_recursive_script (#27819)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27819

The idea here is to preserve the fact that `test_jit.py` contains all the JIT tests. So we import `JitTestCase`s from `jit/` into `test_jit.py` so that the test loader will find and run them when you do `python test_jit.py`. This also means that things like `-k` will work as expected.

The individual test files in `jit/` will throw if run directly, to prevent cases where the CI accidentally runs multiple versions of the same test.

Differential Revision: D17898105

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 0cd6f8421c86c90a6e1bae33a3fdbe998f570e07
2019-10-14 16:00:35 -07:00
Michael Suo
151483e25d move import_class_test files around (#26722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26722

Put them in a directory under jit/ to prep for test splitting

Test Plan: Imported from OSS

Differential Revision: D17550582

Pulled By: suo

fbshipit-source-id: a592b671ffe808f02d0a597d441bd98a18c9109e
2019-10-14 16:00:31 -07:00
James Reed
fdea0cbe40 s/TestEndToEndHybridFrontendModels/TestModels/
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27877

Test Plan: Imported from OSS

Differential Revision: D17909137

Pulled By: jamesr66a

fbshipit-source-id: d8d730eed562b0f08caed7be302dd122af61e877
2019-10-14 13:13:30 -07:00
Elias Ellison
cd6b37afa7 throw on custom forward for module containers (#27763)
Summary:
Custom forwards of containers would silently not be compiled previously. Throw an error now instead.

Fix for https://github.com/pytorch/pytorch/issues/26671
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27763

Differential Revision: D17913708

Pulled By: eellison

fbshipit-source-id: 1cc2a8a4b57356ba7f007a95ede0a31e5d61aa82
2019-10-14 13:08:10 -07:00
Mike Ruberry
f6bda1e07b Removes @default_floating_dtype decorator (#27628)
Summary:
One fewer legacy decorator cluttering the test suite.

Functions relying on this decorator were updated; in the case of test_sparse, the test suite was put back on double by default.

Note: this PR is blocked on https://github.com/pytorch/pytorch/issues/27599.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27628

Differential Revision: D17896254

Pulled By: mruberry

fbshipit-source-id: 13d460301f50ef4af7a660372432108164c0de1f
2019-10-12 12:39:34 -07:00
Michael Suo
341262754f module dedupe (#26666)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666

Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
  cache, and as the source of truth for `ModuleValue::attr` queries. It needs
  to do both jobs because that's how we ensure correctness (if the types are
  different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
  pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
  `ScriptModule`) are now rewritten to go through `create_script_module`, so
  that we have only a single place where construction happens.

Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
  recursively scripted if possible, matching recursive scripting semantics.
  This makes it hard to keep something from being scripted (for example, a
  Python submodule). Possibly we'll need an `ignore()` type thing for
  attributes. In particular, this adds `self.training` to *every* ScriptModule, since
  it's present on every `nn.Module`.
- I believe this change to be transparent to existing users of the inheritance API: if you had an unscriptable attribute that you never used, there is still no error. In some cases we will create new attributes (even if they are unused), which will increase serialized model size compared to before. A minimal sketch of the new inheritance behavior follows below.
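A minimal sketch, assuming standard `torch.jit.ScriptModule` / `script_method` usage (illustrative, not code from this PR):

```python
import torch
import torch.nn as nn

class MyScriptModule(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        # Under the unified create_script_module path, attributes and
        # submodules assigned here are recursively scripted where possible,
        # matching the recursive scripting semantics of torch.jit.script.
        self.scale = 2.0
        self.linear = nn.Linear(4, 4)

    @torch.jit.script_method
    def forward(self, x):
        return self.linear(x) * self.scale

m = MyScriptModule()
print(m.training)                 # `training` is now present on every ScriptModule
print(m(torch.ones(1, 4)).shape)  # the nn.Linear submodule was compiled recursively
```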

Test Plan: Imported from OSS

Differential Revision: D17551196

Pulled By: suo

fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
2019-10-12 09:51:57 -07:00
Michael Suo
759c99c2e3 [jit] Python None should have its type inferred as NoneType (#26665)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26665

This is actually useful. For example, in batchnorm.py all the tracked
stats are either `nn.Parameter` or `None`. We should register them as
parameters if they are set, or as attributes of type NoneType if they are
not.
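A minimal sketch of the pattern this enables (illustrative module, not the actual batchnorm code):

```python
import torch
import torch.nn as nn

class Stats(nn.Module):
    def __init__(self, track: bool = False):
        super().__init__()
        # Tracked stats are either parameters or None; when None, the
        # attribute is now registered with type NoneType instead of
        # failing (or being dropped) during type inference.
        if track:
            self.running_mean = nn.Parameter(torch.zeros(4))
        else:
            self.running_mean = None

    def forward(self, x):
        return x

scripted = torch.jit.script(Stats())
print(scripted.running_mean)  # None, carried as a NoneType attribute
```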

Test Plan: Imported from OSS

Reviewed By: shannonzhu

Differential Revision: D17551197

Pulled By: suo

fbshipit-source-id: 8d6f6d76d4dab0d524c4ffdfe0c1dd465771cd00
2019-10-12 09:51:49 -07:00
Edward Yang
7135f7c263 Revert D17412856: [JIT] add type refinements for isinstance checks
Test Plan: revert-hammer

Differential Revision:
D17412856

Original commit changeset: ded47eb086c4

fbshipit-source-id: 854a6c8f322435c3f3416dbedcb642cb2d2902b1
2019-10-11 13:02:30 -07:00
Edward Yang
07fc7d05ce Revert D17488297: [jit] refactor tryMatchSchema
Test Plan: revert-hammer

Differential Revision:
D17488297

Original commit changeset: a32d838ce355

fbshipit-source-id: 2bd319d9554d81d09231bf1e34c8417bff468940
2019-10-10 17:39:48 -07:00
Zachary DeVito
51656eefb0 refactor tryMatchSchema (#26499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26499

We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:

* tryMatchSchemas was added since the overload logic got copy/pasted
  into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
  outside of tryMatchSchemas.
* emitBuiltinFunction no longer needs a requires argument (it was always true).
* Argument order for all the schema matching stuff now puts the 'self'
  builtin override last. This is only rarely used and was inconsistent with
  matchSchema.

Test Plan: Imported from OSS

Differential Revision: D17488297

Pulled By: zdevito

fbshipit-source-id: a32d838ce35544972fa8767557acc22149081b55
2019-10-09 22:11:24 -07:00
Zachary DeVito
d44b9cd4bb add type refinements for isinstance checks (#26271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26271

This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
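A minimal sketch of the refinement this enables (not code from this PR):

```python
import torch
from typing import Optional

@torch.jit.script
def add_one(x: Optional[torch.Tensor]) -> torch.Tensor:
    if isinstance(x, torch.Tensor):
        # the isinstance check refines x from Optional[Tensor] to Tensor;
        # in the generated IR this is an unchecked_cast that explicitly
        # records the refined type
        return x + 1
    return torch.zeros(1)

print(add_one(torch.ones(2)))
print(add_one(None))
```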

Test Plan: Imported from OSS

Differential Revision: D17412856

Pulled By: zdevito

fbshipit-source-id: ded47eb086c4610998ad92bb1174225af00220f7
2019-10-09 22:11:19 -07:00
Zachary DeVito
eb9000be4e always use the closure to resolve variable names (#27515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27515

Resolving variable names using the local activation frames does not work
when using recursive scripting, but our current code tries to do it
(incorrectly) anyway. It only appears to work because the script
call is in the same local frame as the definition. This will not be
true in practice and makes it seem like the API works in more cases
than it really does. This change forces us to always use closure-based
annotations, documents that behavior, and fixes the tests so that they still pass.
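A minimal sketch of closure-based resolution (illustrative names, not code from this PR):

```python
import torch

SCALE = 3  # free variables in a scripted function resolve through the
           # function's globals/closure, not the caller's activation frame

def helper(x: torch.Tensor) -> torch.Tensor:
    return x * SCALE

def build():
    # Scripting may happen far from where `helper` and `SCALE` were defined;
    # name resolution still goes through helper's closure.
    return torch.jit.script(helper)

print(build()(torch.ones(2)))
```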

Test Plan: Imported from OSS

Differential Revision: D17803403

Pulled By: zdevito

fbshipit-source-id: e172559c655b05f0acf96c34f5bdc849f4e09ce2
2019-10-09 12:16:15 -07:00
James Reed
e63bfb7877 Use orig source range in Node::print
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27524

Test Plan: Imported from OSS

Differential Revision: D17806454

Pulled By: jamesr66a

fbshipit-source-id: 5e3edb87fc79ad8dd1aed0b7d4a2153e7e0429ab
2019-10-08 10:30:56 -07:00
davidriazati
725810f42c Set existing attributes under recursive script (#27514)
Summary:
This is related to #27109: `training` was being skipped since modules
have it as an attribute by default, but it should be copied anyway.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27514

Pulled By: driazati

Differential Revision: D17802544

fbshipit-source-id: 9e8f068903b67073c509c2c598b27622fcada2d7
2019-10-08 10:12:04 -07:00
Mike Ruberry
7f183a978f Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) (#27444)
Summary:
This PR stops common_utils.py from setting the default tensor type when it is imported. See issue https://github.com/pytorch/pytorch/issues/27355. This has been a frequent source of confusion for test writers.

Many tests relied on this setting (whether they knew it or not), and this PR also updates the test suite to pass without common_utils.py setting the default tensor type. Some larger test files now set the default floating dtype themselves, however. These test files are:

- test_autograd.py
- test_distributions.py
- test_jit.py
- test_nn.py

This is still a significant improvement from today, however. First, these files set the default floating dtype much more explicitly than importing it from common_utils did. Second, the rest of the test suite no longer sets this globally. Third, this PR is a springboard to updating those tests, too. In particular, as tests are made generic they can be moved away from relying on this global setting.
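A minimal sketch of what setting the default floating dtype explicitly in a test file looks like (plain unittest shown here; the real files use the suite's own TestCase and runner):

```python
import unittest
import torch

# Test files that still want double as the default floating type now opt in
# explicitly at module level instead of inheriting it from common_utils.
torch.set_default_dtype(torch.double)

class TestDefaultDtype(unittest.TestCase):
    def test_default_dtype_is_double(self):
        self.assertEqual(torch.empty(1).dtype, torch.double)

if __name__ == "__main__":
    unittest.main()
```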

Notable technical changes in this PR are:

- Significant updates to test_torch.py to make it pass without setting the default floating dtype globally.
- The default_floating_dtype decorator is now defined in common_utils, a couple versions of this operator were defined in test files previously.
- test_torch-specific parts of common_utils were refactored into test_torch.
- tensor creation methods in common_utils were updated to accept an optional dtype and device.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27444

Differential Revision: D17795235

Pulled By: mruberry

fbshipit-source-id: 7f77271c0c836e69f183ad9057a2c4b29f09d2e1
2019-10-08 09:52:44 -07:00
davidriazati
0046092178 Reduce special casing around 'training' (#27109)
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.
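A minimal sketch of the resulting behavior (not code from this PR):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        # `training` is an ordinary bool attribute on every module, handled by
        # the regular attribute machinery rather than by special cases.
        if self.training:
            return x * 2
        return x

m = torch.jit.script(M())
print(m.training)          # True by default
m.eval()
print(m(torch.ones(1)))    # tensor([1.]) once training is False
```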

Fixes #26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109

Pulled By: driazati

Differential Revision: D17728129

fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
2019-10-07 13:52:59 -07:00
Wanchao Liang
b05ec828ad Add interface/object serialization as module attribute (#26770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26770

This PR adds interface/object serialization as a module attribute, to
allow initializing an object as an interface type during Python
initialization. Because an interface type can be backed by any class object
that implements that interface, if we declare it in
python/module.__init__ we need to collect the run-time types of the
value and serialize them to ensure complete code information.
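A minimal sketch of an interface-typed module attribute (illustrative; the exact syntax available at the time of this commit may differ):

```python
import torch
import torch.nn as nn

@torch.jit.interface
class TransformInterface(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pass

class AddOne(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

class Wrapper(nn.Module):
    # The attribute is declared with the interface type, but it can be backed
    # by any module implementing it, so the concrete run-time type must also
    # be serialized for the code to be fully reconstructed.
    impl: TransformInterface

    def __init__(self):
        super().__init__()
        self.impl = AddOne()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.impl.forward(x)

m = torch.jit.script(Wrapper())
print(m(torch.zeros(2)))
```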

Test Plan: Imported from OSS

Differential Revision: D17742707

fbshipit-source-id: 7f614ad4f982996d320a0e2dd3515bf47370e730
2019-10-04 17:12:08 -07:00
Zachary DeVito
9ade1e6944 improve error messages when a method or attribute is missing (#27110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27110

Previously, errors about missing methods on some types, such as tensors, would talk about
'builtins', which are only a concept inside of the compiler. Furthermore,
the error would only surface when the builtin was applied and it was discovered
that no builtin existed. This changes the error message so that a method that
does not exist on one of our builtin types is reported at attribute lookup.
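A minimal sketch of the kind of failure the new message targets (the method name is made up):

```python
import torch

def bad(x: torch.Tensor) -> torch.Tensor:
    return x.flatten_everything()  # no such Tensor method

try:
    torch.jit.script(bad)
except RuntimeError as e:
    # The compiler now reports the missing method at the attribute lookup on
    # the builtin Tensor type, rather than mentioning internal 'builtins'.
    print(e)
```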

Test Plan: Imported from OSS

Differential Revision: D17677616

Pulled By: zdevito

fbshipit-source-id: 2f7cf6c6093a9c832569c44f4b1044a2e56fe205
2019-10-03 21:25:01 -07:00
davidriazati
8fe5dcf699 Skip tests that use numpy if it's not present
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27165
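A hedged sketch of the skipping pattern (plain unittest shown; the actual decorator used in the test suite may differ):

```python
import unittest

try:
    import numpy  # noqa: F401
    HAS_NUMPY = True
except ImportError:
    HAS_NUMPY = False

class TestNumpyInterop(unittest.TestCase):
    @unittest.skipIf(not HAS_NUMPY, "numpy not found")
    def test_roundtrip(self):
        import torch
        t = torch.arange(4)
        self.assertEqual(torch.from_numpy(t.numpy()).tolist(), [0, 1, 2, 3])

if __name__ == "__main__":
    unittest.main()
```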

Pulled By: driazati

Differential Revision: D17695078

fbshipit-source-id: d25c920f4c43285028537f88761d47a2c9db7b8f
2019-10-03 17:18:41 -07:00