Commit Graph

353 Commits

Author SHA1 Message Date
Karl Ostmo
4894cba572 Revert D19775659: [WIP] Move profiler to a dispatch wrapper
Test Plan: revert-hammer

Differential Revision:
D19775659

Original commit changeset: 5cbe5f736660

fbshipit-source-id: dcb41d2433697c5d521044a9dbc12c79f31e0929
2020-04-16 14:18:51 -07:00
James Reed
a85c835196 [WIP] Move profiler to a dispatch wrapper (#33057)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33057

Test Plan: Imported from OSS

Differential Revision: D19775659

Pulled By: jamesr66a

fbshipit-source-id: 5cbe5f736660c8543764ef62b16550638d9ceb72
2020-04-16 13:36:37 -07:00
Christian Sarofeen
f11c4f90c2 New CUDA Fuser: Unrolling support, interface refactor (#36435)
Summary:
Unrolling support has been added in a way that generates well-performing code on GPUs. Not sure how long this link will last, but an example of a generated unrolled kernel is:
https://godbolt.org/z/i0uAv3

What can be seen there is multiple calls of "ld.global.f32" without "st.global.f32" in between them (and vice versa). This means that we are launching multiple loads that can run in parallel, as well as multiple stores that can run in parallel. This can be a crucial optimization for memory-bound kernels. This was generally a point of concern in TVM; an attempt at a similar kernel in TVM produces https://godbolt.org/z/Vu97vG, which surrounds load/store pairs in conditional branches, preventing the benefits of unrolling.
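For orientation, a minimal sketch (illustrative only; not code from this PR) of the kind of element-wise TorchScript function such a fuser targets — the unrolling described above happens when a graph like this is lowered to a CUDA kernel, not in the Python code itself:

```
import torch

# A purely element-wise expression; kernels generated for graphs like this
# are the ones the fuser unrolls so several global loads can be in flight
# at once before the corresponding stores.
@torch.jit.script
def pointwise(a, b, c):
    return a * b + c

print(pointwise.graph)
```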
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36435

Reviewed By: ZolotukhinM

Differential Revision: D21024011

Pulled By: soumith

fbshipit-source-id: e852e282fa7a304aba962e1926f756098c011fe0
2020-04-16 09:20:24 -07:00
Elias Ellison
9cbeb0faed [JIT] Dont optimize shape peepholes on inline (#36404)
Summary:
With https://github.com/pytorch/pytorch/pull/35562, we are running peephole optimization on inlining to reduce the number of nodes that are copied.

The tracer encodes the sizes in the graph like:
```
graph(%0 : Double(7)):
  %1 : Function = prim::Constant[name="tensor_size"]()
  %2 : Tensor = prim::CallFunction(%1, %0)
  return (%2)
```

However, people would like to reuse the traced graph with different input shapes, so folding these size calls into constants would invalidate that reuse. Long term it might be better for the tracer to not include shape information, but there are downstream users of it.

Separates out FuseAddMM from the peephole pass so that there is now a single `disable_size_optimizations` parameter, and ONNX explicitly invokes FuseAddMM.
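As a rough illustration (assuming the standard torch.jit.trace API; not code from this PR), this is how a traced graph ends up carrying shape calls that must not be folded to constants:

```
import torch

def f(x):
    # size() is recorded into the traced graph as a call on the input
    return x + x.size(0)

traced = torch.jit.trace(f, torch.ones(7))
print(traced.graph)
# Folding x.size(0) down to the constant 7 would pin this graph to
# inputs of length 7, which is exactly the reuse this change preserves.
```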
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36404

Differential Revision: D20968974

Pulled By: eellison

fbshipit-source-id: 56f8f1699e3b0adeeccdfd5a67bb975fd41a2913
2020-04-15 17:49:48 -07:00
Christian Sarofeen
e551bfc8de New CUDA Fuser code lowering refactor (#36199)
Summary:
This PR completely refactors the code lowering process from our IR to CUDA. Before, we had one giant step that went from a relatively high-level IR straight to CUDA; now we first lower into concepts like ForLoop, IfThenElse, TensorIndex, and Allocate. This lowering will allow us to do more complex code generation like reductions and unrolling. Unrolling will quickly follow this PR.
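A toy Python model (hypothetical names; the real classes are C++ and live in the codegen sources) of what lowering into ForLoop/IfThenElse/Allocate before printing CUDA means:

```
from dataclasses import dataclass, field
from typing import List, Union

# Hypothetical stand-ins for the lowered-IR concepts named above.
@dataclass
class Allocate:
    buffer: str
    size: str

@dataclass
class IfThenElse:
    cond: str
    body: List["Stmt"] = field(default_factory=list)

@dataclass
class ForLoop:
    index: str
    extent: str
    body: List["Stmt"] = field(default_factory=list)

Stmt = Union[Allocate, ForLoop, IfThenElse, str]

# A 1-D pointwise kernel expressed as structured statements rather than
# as one opaque CUDA string.
lowered = [
    Allocate("T2", "N"),
    ForLoop("i", "N", [IfThenElse("i < N", ["T2[i] = T0[i] + T1[i];"])]),
]
print(lowered)
```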
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36199

Reviewed By: dzhulgakov

Differential Revision: D20925220

Pulled By: soumith

fbshipit-source-id: 8f621c694c68a1aad8653e625d7287fe2d8b35dc
2020-04-09 14:27:05 -07:00
Martin Yuan
82087ee7f6 Add DICT_CONSTRUCT and NAMED_TUPLE_CONSTRUCT to lite interpreter (#36015)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36015

Test Plan: Imported from OSS

Reviewed By: linbinyu

Differential Revision: D20853995

Pulled By: iseeyuan

fbshipit-source-id: 153f76d223f9ffc71e2259b741a7e5d78ae63f22
2020-04-04 09:52:58 -07:00
Nikita Shulga
03a4a4887d Fix clang-format (#35969)
Summary:
Just run `./tools/clang_format.py  --verbose` and `git commit --all`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35969

Test Plan: CI

Differential Revision: D20845626

Pulled By: malfet

fbshipit-source-id: 0ae9a91dfa33417a021e7e9d233baba4188daf81
2020-04-03 14:36:20 -07:00
davidriazati
596153cad1 [jit] Enable type tags in serialization (#35741)
Summary:
This enables the serialization part of this change (the deserialization part already landed in #33255)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35741

Pulled By: driazati

Differential Revision: D20758124

fbshipit-source-id: e2cdefa99c3bec991491e5e967e7f1661ca7ffd9
2020-04-03 11:59:42 -07:00
Song Zhou
dabeff33b9 [pytorch] Fix fblearner flow compiling errors (#35902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35902

Move operator registration to anonymous namespace to avoid collision.

Reviewed By: soumith

Differential Revision: D20822382

fbshipit-source-id: 1ab00871491668b8b85e803ac877d96477f1688b
2020-04-02 14:52:48 -07:00
Christian Sarofeen
6d24f8fe21 Infrastructure for a new CUDA Fuser (#34785)
Summary:
**Summary:** This PR contains the infrastructure of a new CUDA fuser. This CUDA fuser is based on many of the same principles as TensorExpressions and Halide, but the implementation is ground-up. The fusion pass itself is similar to the default CUDA fuser; however, it has undergone some refactoring and is using the new code generation infrastructure. For those interested in how the code generation in this PR works, I would recommend reviewing _test/cpp/jit/test_gpu_fusion.cpp_ as well as the long comment section at the beginning of _torch/csrc/jit/codegen/cuda/transform_replay.h_.

One of the largest differences between our approach and that of TVM/Halide is the concept of "TensorView". From a high level, a TensorView should be thought of similarly to how we think of working with Tensors in PyTorch: it's an N-D object which can undergo transformations that change its dimensionality. Dimensionality changes are done through the operations split/merge/reorder/computeAt. These transformations are similar to split/fuse/reorder/compute_at in TVM; they modify how a tensor is iterated over to generate GPU code. Interestingly, in our scheme these transformations are applied to tensors and only impact how that tensor is generated.
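A toy Python sketch (not the real C++ TensorView API) of how split/merge/reorder rewrite a tensor's iteration domain without touching its data:

```
class ToyTensorView:
    """Toy model of a TensorView's iteration domain (a list of axis extents)."""

    def __init__(self, axes):
        self.axes = list(axes)

    def split(self, dim, factor):
        n = self.axes[dim]
        self.axes[dim:dim + 1] = [n // factor, factor]   # one axis becomes two
        return self

    def merge(self, dim):
        self.axes[dim:dim + 2] = [self.axes[dim] * self.axes[dim + 1]]
        return self

    def reorder(self, perm):
        self.axes = [self.axes[i] for i in perm]
        return self

tv = ToyTensorView([128, 1024]).split(1, 256).reorder([1, 0, 2])
print(tv.axes)  # [4, 128, 256]
```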

**Warning:** This PR is purposefully not feature complete with the current fuser. We wanted to separate out the infrastructure from the fusion capabilities. Once in, smaller incremental PRs will be submitted to expand capabilities of the fuser.

**Short term goals:**

Parity with current CUDA fuser (including performance):
- Dynamic shapes (no recompilation)
- Implicit handling of broadcast (broadcasted tensors are treated as tensors of the broadcast size in the generated code)
- Dropout

**Mid-term goals:**

- Transposes fused with pointwise operations where transpose involves only 2 axes (across the fused operation).
- 1-D reductions fused with pointwise operations
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34785

Reviewed By: ZolotukhinM

Differential Revision: D20650977

Pulled By: soumith

fbshipit-source-id: ee39c95a880e1b9822e874ed4cc180971572bf63
2020-04-02 09:22:42 -07:00
Ilia Cherniavskii
bc6bd0bb1a Debug Information Guard
Summary: This diff fixes issues with the current handling of debug information passed along during the execution of a model (for example, multiple calls to the debug guard may override each other).

Test Plan: CI test/cpp/jit

Reviewed By: dzhulgakov

Differential Revision: D20602775

fbshipit-source-id: 4683957954028af81a1a0f1f12b243650230c9bb
2020-04-01 01:55:29 -07:00
Ilia Cherniavskii
800d5617c0 Recording of TorchScript functions (#34710)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34710

Extending RecordFunction API to support new recording scopes (such as TorchScript functions), as well as giving more flexibility to set sampling rate.
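From Python, the user-facing counterpart of these recording scopes is the profiler's record_function context manager; a minimal sketch (not code from this PR):

```
import torch
from torch.autograd import profiler

def work(x):
    # Opens a RecordFunction scope visible to any registered callbacks.
    with profiler.record_function("my_scope"):
        return torch.mm(x, x)

with profiler.profile() as prof:
    work(torch.randn(8, 8))
print(prof.key_averages().table())
```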

Test Plan: unit test (test_misc.cpp/testRecordFunction)

Reviewed By: gdankel, dzhulgakov

Differential Revision: D20158523

fbshipit-source-id: a9e0819d21cc06f4952d92d43246587c36137582
2020-03-31 00:33:23 -07:00
Nikolay Korovaiko
9e22d15f14 Enable tensorexpr cpp tests in CI. try #2 (#35454)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35454

Differential Revision: D20665160

Pulled By: Krovatkin

fbshipit-source-id: e04cbe92b2ee5a3288f3c4e5c83533bfea85bf85
2020-03-27 12:09:55 -07:00
Meghan Lele
6384c2d81b [JIT] clang-format JIT code (#35115)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35115

This commit runs the newly added tools/clang_format.py on the JIT
codebase and includes all of the formatting changes thus produced.

Testing:
Ran the script, CI.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D20568523

Pulled By: SplitInfinity

fbshipit-source-id: e09bdb982ccf090eecfb7c7b461b8d0681eef82b
2020-03-26 11:24:51 -07:00
Suraj Menon
aa01a95c6d Revert D20630760: [pytorch][PR] Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP]
Test Plan: revert-hammer

Differential Revision:
D20630760

Original commit changeset: 7d2f27aca6b1

fbshipit-source-id: 28ac92b3390651a4a67061d6ebf208515b9b9463
2020-03-25 20:34:46 -07:00
Nikolay Korovaiko
f3a5081bd4 Enable NNC tests vol. i. add test_tensorexpr.py tests [WIP] (#34897)
Summary:
This PR adds tensorexpr cpp tests to test_jit.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34897

Differential Revision: D20630760

Pulled By: Krovatkin

fbshipit-source-id: 7d2f27aca6b1e23e3ffed1c765d8f590688118e3
2020-03-25 17:23:48 -07:00
Nikita Shulga
512bcf68be [Formatting] if ( -> if( in CMakeLists.txt (#35343)
Summary:
Same for `else`, `endif`, and `elseif`.
Also prefer lowercase commands over uppercase ones.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35343

Test Plan: None at all

Differential Revision: D20638789

Pulled By: malfet

fbshipit-source-id: 8058075693185e66f5dda7b825b725e139d0d000
2020-03-25 13:48:42 -07:00
James Reed
618c6214aa [reapply][JIT] Namespaces for TorchBind (#35254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35254

Reapply D20541090 with some BC fixes
ghstack-source-id: 100733987

Test Plan: buck test mode/dev-nosan //caffe2/torch/fb/predictor/model_repo/tests:ai_infra_representative_model_shard_6_test -- 'RepresentativeModelTest\/ShardedRepresentativeModelTest\.RunModel\/0'

Reviewed By: zdevito

Differential Revision: D20607111

fbshipit-source-id: 80f148d860571208c93e9308128cd480ff089f74
2020-03-24 00:39:48 -07:00
Nikita Shulga
c46c28a7cb Fix JitTest.ADFormulas intermittent failures (#35196)
Summary:
Clamp input tensor values to [-3, 3] to limit how small the `tanh` gradient can get
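The motivation, roughly: tanh's gradient decays toward zero for large inputs, which presumably makes the AD-vs-numeric gradient comparison in this test flaky. A quick sketch:

```
import torch

for v in (3.0, 10.0):
    x = torch.tensor(v, requires_grad=True)
    torch.tanh(x).backward()
    print(v, x.grad.item())   # ~1e-2 at 3.0, ~8e-9 at 10.0
```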
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35196

Test Plan: CI + `bin/test_jit --gtest_filter=JitTest.ADFormulas --gtest_repeat=60000 --gtest_break_on_failure`

Differential Revision: D20611256

Pulled By: malfet

fbshipit-source-id: 8640faa5d8567d6c6df8cc5df80c2e65407116eb
2020-03-23 22:21:30 -07:00
Lu Fang
a100cf5146 Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
Test Plan: revert-hammer

Differential Revision:
D20541090

Original commit changeset: ce3d9391dd3c

fbshipit-source-id: acc1d660fbda611941381315507dfe594c385db1
2020-03-21 12:20:44 -07:00
James Reed
e0496a70fc [JIT][torchbind] Namespaces for torchbind classes (#35054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35054

Test Plan: Imported from OSS

Differential Revision: D20541090

Pulled By: jamesr66a

fbshipit-source-id: ce3d9391dd3cdf619042b8f6ba2645f4c1fc875c
2020-03-20 20:07:02 -07:00
Michael Suo
8210b2054e Move ivalue tests to aten (#34985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34985

IValue is part of the overall runtime system, not just the JIT. So it
should be tested in the ATen tests.

The real motivation though is so that I can use gtest directly, not the
hacked-up version the JIT uses.

Test Plan: Imported from OSS

Differential Revision: D20537902

Pulled By: suo

fbshipit-source-id: 09897e015ecde24aa8996babeaa08d98db90ef0d
2020-03-19 17:56:37 -07:00
James Reed
09a7788a2f [torchbind] Improve IValue custom class API and remove most Capsule stuff (#34848)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34848

Test Plan: Imported from OSS

Differential Revision: D20480514

Pulled By: jamesr66a

fbshipit-source-id: 1c595faf34e00aab0a6202a8902426bd310551c3
2020-03-17 20:39:34 -07:00
James Reed
089a0a2117 [torchbind] Test moving custom classes to/from IValue (#34847)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34847

Test Plan: Imported from OSS

Differential Revision: D20480512

Pulled By: jamesr66a

fbshipit-source-id: 87f5f8ea8764e26d383b17e4f72538166ddd0655
2020-03-16 23:57:42 -07:00
peter
24c9e61e79 Enable JIT tests on Windows (#27029)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27029

Reviewed By: eellison

Differential Revision: D20458664

Pulled By: jamesr66a

fbshipit-source-id: 22be918543703869f471e89b3478423198351bf3
2020-03-16 11:26:21 -07:00
Martin Yuan
d4f182d06b Add overloaded name to prim operators (#34280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34280

To make prim ops searchable for the lite interpreter, overloaded names need to be added for operators with the same name but different schemas. For example, aten::add in register_prim_ops.cpp; the differences are a combination of argument and output types.
`"aten::add(str a, str b) ->str"`
`"aten::add(int a, int b) ->int"`
`"aten::add(float a, float b) ->float"`
`"aten::add(int a, float b) ->float"`
`"aten::add(float a, int b) ->float"`
`"aten::add(Scalar a, Scalar b) ->Scalar"`

Solution:
Use the argument type and/or output type (consistent with the existing overloaded names). The overloaded name should be as short as possible while still differentiating the operators. For other operators, please look at the source code change for details.

`"aten::add.str(str a, str b) ->str"`
`"aten::add.int(int a, int b) ->int"`
`"aten::add.float(float a, float b) ->float"`
`"aten::add.int_float(int a, float b) ->float"`
`"aten::add.float_int(float a, int b) ->float"`
`"aten::add.Scalar_Scalar(Scalar a, Scalar b) ->Scalar"`
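A small scripted example (not part of this change) showing two of these overloads being selected by argument types:

```
import torch

@torch.jit.script
def add_ints(a: int, b: int) -> int:
    return a + b        # resolves to the aten::add.int overload above

@torch.jit.script
def add_mixed(a: int, b: float) -> float:
    return a + b        # resolves to aten::add.int_float

print(add_ints(2, 3), add_mixed(2, 0.5))   # 5 2.5
```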

Test Plan: Imported from OSS

Differential Revision: D20456997

Pulled By: iseeyuan

fbshipit-source-id: 2c3dc324b4a4e045559f62c6cc2a10fbb9a72dcf
2020-03-15 17:05:54 -07:00
James Reed
fb20621b3b Move torchbind out of jit namespace (#34745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34745

Test Plan: Imported from OSS

Differential Revision: D20450239

Pulled By: jamesr66a

fbshipit-source-id: 3f5597626f21d7b5e329b57da358c76b531bf806
2020-03-13 23:03:14 -07:00
James Reed
ab76a8206f [JIT][mobile] Support built-in Function call in lite interpreter (#34676)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34676

Test Plan: Imported from OSS

Differential Revision: D20427938

Pulled By: jamesr66a

fbshipit-source-id: 79eebfa858776f26da55ffd49d3f78fa7ae0df9b
2020-03-13 18:24:18 -07:00
eellison
44256199a9 [JIT] remove specialized list ops (#34520)
Summary:
Now that lists are no longer specialized, we can register only one operator for list ops that are generic to their element type.
This PR reorgs lists into three sets of ops:
- CREATE_GENERIC_LIST_OPS
- CREATE_SPECIALIZED_LIST_OPS
- CREATE_COMPARATOR_LIST_OPS_SPECIALIZED (we didn't bind certain specialized ops to Tensor)

This is important to land quickly because mobile is finalizing its bytecode soon, after which we could not remove these ops.
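For illustration (a sketch, not code from this PR), the same generic ops now back list operations regardless of element type:

```
import torch
from typing import List

@torch.jit.script
def rotate(xs: List[str]) -> List[str]:
    xs.append(xs[0])    # append, indexing, and slicing all hit the
    return xs[1:]       # generic list ops rather than per-type copies

print(rotate(["a", "b", "c"]))   # ['b', 'c', 'a']
```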
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34520

Reviewed By: iseeyuan

Differential Revision: D20429775

Pulled By: eellison

fbshipit-source-id: ae6519f9b0f731eaa2bf4ac20736317d0a66b8a0
2020-03-12 17:49:23 -07:00
Elias Ellison
787c307e63 Revert D20368543: [pytorch][PR] [JIT] remove specialized list ops
Test Plan: revert-hammer

Differential Revision:
D20368543

Original commit changeset: ad0c6d70d2a6

fbshipit-source-id: b8b1a64ac830d5f544567714b940c57274194d3f
2020-03-12 12:55:49 -07:00
eellison
f9f8424386 [JIT] remove specialized list ops (#34520)
Summary:
Now that lists are no longer specialized, we can register only one operator for list ops that are generic to their element type.
This PR reorgs lists into three sets of ops:
- CREATE_GENERIC_LIST_OPS
- CREATE_SPECIALIZED_LIST_OPS
- CREATE_COMPARATOR_LIST_OPS_SPECIALIZED (we didn't bind certain specialized ops to Tensor)

This is important to land quickly because mobile is finalizing its bytecode soon, after which we could not remove these ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34520

Differential Revision: D20368543

Pulled By: eellison

fbshipit-source-id: ad0c6d70d2a6be6ff0e948d6786052167fc43e27
2020-03-12 10:48:14 -07:00
Michael Suo
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
Edward Yang
cf8b728255 Delete OperatorOptions, absorb AliasAnalysisKind into FunctionSchema. (#34588)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34588

I constructed the patch by deleting OperatorOptions and then rerouting
all queries for AliasAnalysisKind to FunctionSchema.  Some of the
behavior is kind of bogus: we really shouldn't be mutating FunctionSchema
after the fact, but that won't get fixed until we actually switch to
true schema merging.

Reland of https://github.com/pytorch/pytorch/pull/34160

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D20387079

Pulled By: ezyang

fbshipit-source-id: d189f7a6ad8cd186b88b6fbfa3f189994eea14e8
2020-03-11 20:59:46 -07:00
Edward Yang
6f8a8e4e47 Revert D20282846: Delete OperatorOptions, absorb AliasAnalysisKind into FunctionSchema.
Test Plan: revert-hammer

Differential Revision:
D20282846

Original commit changeset: ba7bca6e8adc

fbshipit-source-id: b9e15d2b2c3d1dbc6e971ab3c0bdf380e769dcf1
2020-03-11 07:50:29 -07:00
Edward Yang
9d42177a31 Delete OperatorOptions, absorb AliasAnalysisKind into FunctionSchema. (#34160)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34160

I constructed the patch by deleting OperatorOptions and then rerouting
all queries for AliasAnalysisKind to FunctionSchema.  Some of the
behavior is kind of bogus: we really shouldn't be mutating FunctionSchema
after the fact, but that won't get fixed until we actually switch to
true schema merging.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D20282846

Pulled By: ezyang

fbshipit-source-id: ba7bca6e8adc3365789639b88e54c4e881b1692e
2020-03-11 07:15:18 -07:00
davidriazati
23b2fba79a [jit] Add type tags to lists/dicts in pickle (#33255)
Summary:
Stacked PRs
 * #33474 - [jit] Remove list specializations from pickler
 * **#33255 - [jit] Add type tags to lists/dicts in pickle**

This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
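A minimal sketch of the round trip this enables (assuming the usual torch.jit.save/load APIs; not code from this PR):

```
import torch
from typing import Dict

class Lookup(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.table: Dict[str, int] = {"a": 1, "b": 2}

    def forward(self, key: str) -> int:
        return self.table[key]

m = torch.jit.script(Lookup())
torch.jit.save(m, "lookup.pt")            # the pickled dict now carries its type tag
print(torch.jit.load("lookup.pt")("b"))   # 2
```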
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255

Pulled By: driazati

Differential Revision: D20346780

fbshipit-source-id: c8534954ef4adb2e3c880401acbee30cd284f3db
2020-03-10 19:17:01 -07:00
Meghan Lele
903ad90325 [JIT] Introduce a fake Tensor creation node for IR unit tests (#34334)
Summary:
**Summary**
There is often a need to create a Tensor when writing IR by hand for JIT
optimisation pass unit tests. The only options for this today are real
Tensor creation functions like `aten::ones`. Any test that uses these functions
must also use the same default arguments as the Python/C++ API, which means
that all of the tests have to be updated when the API is updated. This commit
introduces a new primitive, `prim::MakeTestTensor` with schema `() -> Tensor` that
should be used in unit tests instead of real Tensor creation functions. This new
primitive has no public-facing API, so the maintenance burden is much lower.
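For context, hand-written IR in tests typically goes through the IR parser; a small sketch (assuming torch._C.parse_ir is available; prim::MakeTestTensor itself has no Python-facing API, as noted above):

```
import torch

ir = """
graph(%a : Tensor, %b : Tensor):
  %one : int = prim::Constant[value=1]()
  %c : Tensor = aten::add(%a, %b, %one)
  return (%c)
"""
print(torch._C.parse_ir(ir))
```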

**Testing**
This commit updates the alias analysis and DCE tests to use `prim::MakeTestTensor` instead of
`aten::rand`, `aten::ones`, and `aten::zeros`.

```
$ ./bin/test_jit
CUDA not available. Disabling CUDA and MultiCUDA tests
Note: Google Test filter = *-*_CUDA:*_MultiCUDA
[==========] Running 75 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 75 tests from JitTest
[ RUN      ] JitTest.ADFormulas
[       OK ] JitTest.ADFormulas (82 ms)
[ RUN      ] JitTest.Attributes
[       OK ] JitTest.Attributes (0 ms)
...
...
...
[ RUN      ] JitTest.LiteInterpreterPrim
[       OK ] JitTest.LiteInterpreterPrim (0 ms)
[ RUN      ] JitTest.LiteInterpreterLoadOrigJit
[       OK ] JitTest.LiteInterpreterLoadOrigJit (2 ms)
[----------] 75 tests from JitTest (150 ms total)

[----------] Global test environment tear-down
[==========] 75 tests from 1 test case ran. (150 ms total)
[  PASSED  ] 75 tests.
```

**Fixes**
This pull request fixes https://github.com/pytorch/pytorch/issues/33500.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34334

Differential Revision: D20296437

Pulled By: SplitInfinity

fbshipit-source-id: df4e7b0881ae4913424e5a409bfa171a61c3e568
2020-03-10 16:12:45 -07:00
Michael Suo
965146b818 [jit] delete netdef converter (#33807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33807

afaik this is unused, so removing it from the source tree. RIP :(

Test Plan: Imported from OSS

Differential Revision: D20122118

Pulled By: suo

fbshipit-source-id: cb45943f5b9f969482301a2f9fe540326dbc78f2
2020-03-09 22:25:16 -07:00
James Reed
45a504dd2d [JIT] Introduce BuiltinOpFunction and integrate into torchbind (#34098)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34098

* #33900 [JIT] Move stuff out of class_type.cpp

Test Plan: Imported from OSS

Differential Revision: D20229166

Pulled By: jamesr66a

fbshipit-source-id: d658a63a5d6e372e675f35b8456adc8de82b49f3
2020-03-07 10:03:56 -08:00
Leah Dickstein
c5e822b7bb Back out "[jit] Add type tags to lists/dicts in pickle" (#34406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34406

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34405

Original commit changeset: 2f1826e6679a

Test Plan: reverting, see S197156

Reviewed By: akyrola, volkhin

Differential Revision: D20317456

fbshipit-source-id: 89298a9c022edba1d54bcdc7541804cb919e33f5
2020-03-06 20:02:16 -08:00
Jeremy Lilley
5f641f93f1 [aten] Don't deadlock in IValue::Future impl, tests. (#34099)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34099

This change applies to IValue's Future impl a few fixes we discovered
when using the torch::utils::Future<T> impl.

The parallel impls should probably eventually be merged, but until then:

  - Don't hold the lock when invoking the callbacks. Holding it made it
    effectively impossible (deadlocks) to call value() to get the value
    from inside a callback. (A toy sketch of this rule follows the list below.)

  - We discovered that it was slightly cleaner in practice to
    notify condition variables prior to invoking callbacks
    (best to unblock paused threads ASAP, before spawning new work).

  - Fix some var naming inconsistency.
  - Add some caffe2 cpp test coverage.
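A toy Python model of the first bullet (not c10::ivalue::Future, just the locking discipline): callbacks are copied out and invoked after the lock is released, and waiters are notified before callbacks run.

```
import threading

class ToyFuture:
    def __init__(self):
        self._lock = threading.Lock()
        self._cv = threading.Condition(self._lock)
        self._done = False
        self._value = None
        self._callbacks = []

    def add_callback(self, fn):
        with self._lock:
            if not self._done:
                self._callbacks.append(fn)
                return
        fn(self._value)                # already completed: run outside the lock

    def mark_completed(self, value):
        with self._lock:
            self._value, self._done = value, True
            callbacks, self._callbacks = self._callbacks, []
            self._cv.notify_all()      # unblock waiters before running callbacks
        for fn in callbacks:           # lock released: callbacks may call wait()
            fn(value)

    def wait(self):
        with self._lock:
            self._cv.wait_for(lambda: self._done)
            return self._value

f = ToyFuture()
f.add_callback(lambda v: print("callback sees", f.wait()))   # no deadlock
f.mark_completed(42)
```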
ghstack-source-id: 99336569

Test Plan:
```
buck test mode/dev //caffe2/test/cpp/jit:jit -- 'JitTest\.IValueFuture'

```

Differential Revision: D20203278

fbshipit-source-id: 6e805ba547899dab9aab458e4b23049db31f930e
2020-03-06 12:34:50 -08:00
Ilia Cherniavskii
b50825e011 Make RecordFunction more robust for async use cases (#34122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34122

Earlier work added support for async RPC cases where RecordFunction's end
callbacks might be called in a different thread; in addition, some extra
care was needed to handle the pointer to the parent function.

This PR makes RecordFunction aware of potentially multiple threads in use,
removes the unused parent() call, and restricts current() RecordFunction to
scope-based record functions (the RECORD_FUNCTION macro).

Test Plan: unit tests

Differential Revision: D20297709

Pulled By: ilia-cher

fbshipit-source-id: 46a59e1b2eea0bbd8a59630385e193b38d30f9d1
2020-03-05 22:28:53 -08:00
Meghan Lele
5500c3de0a Revert D20150304: [pytorch][PR] [JIT] Introduce a fake Tensor creation node for IR unit tests
Test Plan: revert-hammer

Differential Revision:
D20150304

Original commit changeset: c88f5289055a

fbshipit-source-id: 14ac0e46145e9fb4f200c6318b63edd541380aeb
2020-03-05 16:25:08 -08:00
Martin Yuan
ccf4d69b75 [Lite Interpreter] Enable __setstate__ (#33294)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33294

1. Serialize the bytecode of __setstate__ and run it when loading the model (a sketch of the corresponding module pattern follows below).
2. One use case is quantization. To test this use case, a few operators are registered temporarily for the lite interpreter. The "_" prefix registration will be removed when the operators are all migrated to mobile.
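A rough sketch of the module-side pattern this makes loadable on the lite interpreter (mirroring how scripted quantized modules use exported __getstate__/__setstate__; details here are illustrative, not taken from this PR):

```
import torch
from typing import Tuple

class Scaler(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = 2.0

    @torch.jit.export
    def __getstate__(self):
        return (self.scale, self.training)

    @torch.jit.export
    def __setstate__(self, state: Tuple[float, bool]):
        self.scale = state[0]
        self.training = state[1]

    def forward(self, x):
        return x * self.scale

m = torch.jit.script(Scaler())
torch.jit.save(m, "scaler.pt")   # __setstate__ runs again when this is loaded
print(torch.jit.load("scaler.pt")(torch.ones(2)))
```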

Test Plan: Imported from OSS

Differential Revision: D20162898

Pulled By: iseeyuan

fbshipit-source-id: 7a3180807bf38fbce594d86993896861f12bb58c
2020-03-05 15:24:21 -08:00
Meghan Lele
9ce833879f [JIT] Introduce a fake Tensor creation node for IR unit tests (#33914)
Summary:
**Summary**
There is often a need to create a Tensor when writing IR by hand for JIT
optimisation pass unit tests. The only options for this today are real
Tensor creation functions like `aten::ones`. Any test that uses these functions
must also use the same default arguments as the Python/C++ API, which means
that all of the tests have to be updated when the API is updated. This commit
introduces a new primitive, `prim::MakeTestTensor` with schema `() -> Tensor` that
should be used in unit tests instead of real Tensor creation functions. This new
primitive has no public-facing API, so the maintenance burden is much lower.

**Testing**
This commit updates the alias analysis and DCE tests to use `prim::MakeTestTensor` instead of
`aten::rand`, `aten::ones`, and `aten::zeros`.

```
$ ./bin/test_jit
CUDA not available. Disabling CUDA and MultiCUDA tests
Note: Google Test filter = *-*_CUDA:*_MultiCUDA
[==========] Running 75 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 75 tests from JitTest
[ RUN      ] JitTest.ADFormulas
[       OK ] JitTest.ADFormulas (82 ms)
[ RUN      ] JitTest.Attributes
[       OK ] JitTest.Attributes (0 ms)
...
...
...
[ RUN      ] JitTest.LiteInterpreterPrim
[       OK ] JitTest.LiteInterpreterPrim (0 ms)
[ RUN      ] JitTest.LiteInterpreterLoadOrigJit
[       OK ] JitTest.LiteInterpreterLoadOrigJit (2 ms)
[----------] 75 tests from JitTest (150 ms total)

[----------] Global test environment tear-down
[==========] 75 tests from 1 test case ran. (150 ms total)
[  PASSED  ] 75 tests.
```

**Fixes**
This pull request fixes https://github.com/pytorch/pytorch/issues/33500.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33914

Differential Revision: D20150304

Pulled By: SplitInfinity

fbshipit-source-id: c88f5289055a02dc20b7a5dcdf87469f9816d020
2020-03-05 12:42:42 -08:00
Martin Yuan
f097ca503d Add and test training in lite interpreter. (#32359)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32359

Test Plan: Imported from OSS

Differential Revision: D19450614

Pulled By: iseeyuan

fbshipit-source-id: 6bafff39d7880a5b7fb9cd70c33a4e584812be12
2020-03-03 23:33:43 -08:00
davidriazati
99e211e661 [jit] Add type tags to lists/dicts in pickle (#33255)
Summary:
Stacked PRs
 * #33474 - [jit] Remove list specializations from pickler
 * **#33255 - [jit] Add type tags to lists/dicts in pickle**

This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255

Pulled By: driazati

Reviewed By: xman1979, Tianshu-Bao

Differential Revision: D19868637

fbshipit-source-id: 2f1826e6679a786ca209198690269f399a542c04
2020-03-03 16:48:21 -08:00
Zachary DeVito
358450e02b improved TorchScript traceback (#33834)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33834

This changes how we report Tracebacks to make them more clear when
there are both serialized and non-serialized ranges. It now looks like:

```
Traceback (most recent call last):
  File "foo.py", line 25, in <module>
    s2(a, b)
  File "/scratch/zdevito/pytorch/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__.py", line 7, in forward
    x: Tensor,
    y: Tensor) -> Tensor:
    return (self).bar(x, y, )
            ~~~~~~~~~ <--- HERE
  def bar(self: __torch__.Moo,
    x: Tensor,
  File "code/__torch__.py", line 11, in bar
    x: Tensor,
    y: Tensor) -> Tensor:
    _0 = (self).baz(x, y, )
          ~~~~~~~~~ <--- HERE
    _1 = torch.ones([3], dtype=None, layout=None, device=None, pin_memory=None)
    return torch.add(_0, _1, alpha=1)
  File "code/__torch__.py", line 17, in baz
    x: Tensor,
    y: Tensor) -> Tensor:
    return torch.add(x, y, alpha=1)
           ~~~~~~~~~ <--- HERE

Traceback of TorchScript, original code (most recent call last):
  File "foo.py", line 11, in forward
    def forward(self, x, y):
        return self.bar(x, y)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 9, in bar
    def bar(self, x, y):
        return self.baz(x, y) + torch.ones(3)
               ~~~~~~~~ <--- HERE
  File "foo.py", line 7, in baz
    def baz(self, x, y):
        return x + y
               ~~~~~ <--- HERE
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 1
```

It follows the Python convention of putting the most important information last
and reading from the bottom up.

Changes:
* Moved the error message to the end, to copy Python
* Report original traceback separate from serialized traceback
* Make sure root functions have names in the interpreter trace.

Test Plan: Imported from OSS

Differential Revision: D20126136

Pulled By: zdevito

fbshipit-source-id: fd01f9985e5d74e04c4d064c02e8bc320f4fac13
2020-03-03 12:27:38 -08:00
Martin Yuan
55b44f6746 Throw an exception when method cannot be found from mobile module. (#33972)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33972

Test Plan: Imported from OSS

Differential Revision: D20168965

Pulled By: iseeyuan

fbshipit-source-id: 2efe5dcb1fb80407cd88a47c50cb382ecd8aa275
2020-02-28 18:28:09 -08:00
Meghan Lele
5eacdfb21f Revert D20127441: [pytorch][PR] [JIT] Introduce a fake Tensor creation node for IR unit tests
Test Plan: revert-hammer

Differential Revision:
D20127441

Original commit changeset: 56da4f23ac46

fbshipit-source-id: 7d4602e5011bec6f6871eab16af05a3198694e5d
2020-02-27 13:48:31 -08:00