Commit Graph

256 Commits

Author SHA1 Message Date
Zachary DeVito
c38c7b0ec5 Support Kwargs in C++ Function/Method calls (#19086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19086
ghimport-source-id: 7790a5cc6e32f6f72e92add0b9f76dfa49ad9859

Reviewed By: jamesr66a

Differential Revision: D14875729

Pulled By: zdevito

fbshipit-source-id: ad1e4542381d9c33722155459e794f1ba4660dbb
2019-04-13 08:42:11 -07:00
Bram Wasti
b1539412db Add pass registration mechanism (#18587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18587
ghimport-source-id: 80d753f7046a2a719e0c076684f44fa2059a0921

Differential Revision: D14901227

Pulled By: bwasti

fbshipit-source-id: 56511d0313419b63945a36b80e9ea51abdef2bd4
2019-04-12 15:32:00 -07:00
Roy Li
422b01e788 Replace more usages of Type with DeprecatedTypeProperties (#19093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19093
ghimport-source-id: a82e3dce912a173b42a6a7e35eb1302d9f334e03

Differential Revision: D14865520

Pulled By: li-roy

fbshipit-source-id: b1a8bf32f87920ce8d82f990d670477bc79d0ca7
2019-04-11 17:02:05 -07:00
Zachary DeVito
ef406ee925 First class modules in the compiler, round 2 (#19167)
Summary:
This PR propagates the use of first-class module objects into the compiler. This creates a transitionary state where:

* compiler.cpp creates Graphs where `self` is a Module class and attributes/parameters/buffers/submodules are looked up with `prim::GetAttr`
* GraphExecutor still runs "lowered graphs" where the self object has been removed by a compiler pass `lower_first_class_method`.
* Tracing still creates "lowered graphs", and a pass "lift_lowered_method" creates the corresponding first-class method graphs.

* This PR separates out Method and Function. A script::Function is a pure Graph with no `self` bound.  Similar to Python, a script::Method is just a bound `self` and its underlying `script::Function`.
* This PR also separates CompilationUnit from Module. A CompilationUnit is just a list of named script::Functions.  Class's have a CompilationUnit holding the class methods, and Modules also have a CompilationUnit holding their Methods. This avoids the weird circular case Module --has a-> Class -> has a -> Module ...

Details:
* In this transitionary state, we maintain two copies of a Graph: first-class module and lowered. The first-class one has a `self` argument that is the module's class type. The lowered one is the lowered graph that uses the initial_ivalues inputs.
* When defining lowered methods using `_defined_lowered` we immediately create the first-class equivalent. The reverse is done lazily, creating lowered_methods on demand from the class.
* The two-way conversions will be deleted in a future PR when the executor itself runs first-class objects. However, this requires more changes to (1) the traces, (2) the python bindings, and (3) the onnx export pass, and would make this PR way too large.
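The Method/Function split described above can be sketched roughly as follows (hypothetical toy types, not the real `script::Function`/`script::Method` classes):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Toy stand-ins for the split described above: a Function is a self-free
// callable with an explicit `self` parameter; a Method is just a bound `self`
// plus its underlying Function, mirroring Python's bound methods.
struct Module {
    int weight = 0;  // stands in for the module's state
};

struct Function {
    std::string name;
    std::function<int(Module&, int)> body;  // explicit `self` parameter
    int call(Module& self, int x) const { return body(self, x); }
};

struct Method {
    Module* self;              // the bound module
    const Function* function;  // the shared, self-free Function
    int operator()(int x) const { return function->call(*self, x); }
};
```

With this layout, a CompilationUnit would simply own a list of named Functions, and each Module's Methods bind those Functions to a particular `self`.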
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19167

Differential Revision: D14891966

Pulled By: zdevito

fbshipit-source-id: 0b5f03118aa65448a15c7a7818e64089ec93d7ea
2019-04-11 13:55:48 -07:00
Zachary DeVito
f5165ade5b Revert D14842057: Compiler uses first-class modules**
Differential Revision:
D14842057

Original commit changeset: ca6e7b5a4380

fbshipit-source-id: e8f1862a59bf20d5f78648b2fdc53a8b3750ead3
2019-04-11 06:17:01 -07:00
Zachary DeVito
5e1f0b2a07 Compiler uses first-class modules** (#19043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19043
ghimport-source-id: 0c9e80d5f35654af6d472abd5643bff3e9eb9ddf

Differential Revision: D14842057

Pulled By: zdevito

fbshipit-source-id: ca6e7b5a43805240f40b84d30e54495061067dc0
2019-04-11 00:00:48 -07:00
Omegastick
31ff0ecd2b Fix torch::nn::init::orthogonal_ with CNNs (#18915)
Summary:
Fixes #18518

I changed the C++ API torch::nn::init::orthogonal_ implementation to match the Python implementation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18915

Differential Revision: D14851833

Pulled By: ezyang

fbshipit-source-id: 45b5e9741582777c203e9ebed564ab3ac1f94baf
2019-04-09 10:39:15 -07:00
Michael Suo
fefa6d305e fix side-effects and aliasing for custom ops (#18711)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18711
ghimport-source-id: c9caedc0660b2b7ba3730cd0e1a2e0e9c3cf422b

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18711 [jit] fix side-effects and aliasing for custom ops**

Previously we didn't track aliasing, mutation, or side effects for
custom ops. This PR adds in guards with the most conservative
assumptions possible: the op will
1) have side effects,
2) write to everything
3) produce a wildcard.

In order to tell whether a given operator is a custom op, this PR introduces
the concept of a "reserved" namespace (basically all our builtin namespaces).
Custom ops live in non-reserved namespaces, so a check on the namespace
is sufficient to tell whether a schema/node is "custom" or not.

This is just to get things correct for now. Follow-ups to this:
- Users should be able to specify aliasing/mutability without having to learn
the whole alias annotation schema.
- Relax assumptions a bit. In particular, outputs can only alias input tensors; they don't have to be wildcards.

Fixes #18490
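The namespace check described above might look roughly like this (the reserved-namespace set here is an illustrative subset, not the actual list):

```cpp
#include <cassert>
#include <set>
#include <string>

// Sketch of the "reserved namespace" idea: builtin ops live in reserved
// namespaces, so anything outside them is treated as a custom op and gets
// the conservative aliasing assumptions. The set below is illustrative only.
static const std::set<std::string> kReservedNamespaces = {
    "aten", "prim", "onnx"};

bool isCustomOp(const std::string& qualifiedName) {
    auto pos = qualifiedName.find("::");
    const std::string ns =
        pos == std::string::npos ? std::string() : qualifiedName.substr(0, pos);
    return kReservedNamespaces.count(ns) == 0;
}
```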

Differential Revision: D14730978

fbshipit-source-id: 540b47a24ccf24145051609bdcc99c97e46e0fe0
2019-04-05 10:48:14 -07:00
Michael Suo
0a4117a36e run cpp tests for non-cuda builds in test_jit.py (#18826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18826
ghimport-source-id: 7ffa3bc7ef7402a6d6eb6ba5849e197019d77bf8

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18826 [jit] run cpp tests for non-cuda builds in test_jit.py**

We did all the work of nicely separating our cpp tests that don't require
CUDA, but they aren't run from test_jit.py if CUDA is missing.

Reviewed By: ZolotukhinM

Differential Revision: D14766287

fbshipit-source-id: 9326b3a5c90f6c20fc8cfaf1a1885a363b91f30a
2019-04-03 22:23:58 -07:00
Soumith Chintala
b5d8844bbe push magma init into lazyInitCUDA (#18527)
Summary:
Tries to fix C++ API's usage of MAGMA-based functions.

Attempts to Fix https://github.com/pytorch/pytorch/issues/18074
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18527

Differential Revision: D14691694

Pulled By: soumith

fbshipit-source-id: dd04e74418e486d73ea4a92193ddf79352ed71ba
2019-04-03 12:47:34 -07:00
Zachary DeVito
2d07993bcb Add ability to specialize class types to ArgumentSpec (#18314)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18314
ghimport-source-id: 8cecb768d476ab19c9460f39c8f94a764e4cb052

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18314 Add ability to specialize class types to ArgumentSpec**
* #18226 Add Slot type to abstract the raw pointers being used for slots.

Differential Revision: D14574395

fbshipit-source-id: cc3af6e56e9ae52990f4a1ad56ecceaa2d493577
2019-04-02 17:35:57 -07:00
eellison
af9335436d Re-land Parsing file check (#18570)
Summary:
The last time I tried to land it there was a merge race with the docs coverage test lol. Re-landing with the fix.

Re-land of https://github.com/pytorch/pytorch/pull/18304
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18570

Reviewed By: driazati

Differential Revision: D14707285

Pulled By: eellison

fbshipit-source-id: 3a0265928aa8cad78961723d8bf0fbf871fdb71d
2019-04-01 11:56:32 -07:00
David Riazati
e2fd1d966f Revert D14668859: [pytorch][PR] Re-land Parsing file check
Differential Revision:
D14668859

Original commit changeset: 3825a35ddc61

fbshipit-source-id: f3343ec6b63fe8f1f04959adfac4331865990047
2019-03-29 17:14:00 -07:00
eellison
393731ab24 Re-land Parsing file check (#18570)
Summary:
The last time I tried to land it there was a merge race with the docs coverage test lol. Re-landing with the fix.

Re-land of https://github.com/pytorch/pytorch/pull/18304
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18570

Differential Revision: D14668859

Pulled By: eellison

fbshipit-source-id: 3825a35ddc6179a0d433d70d22b5c1a96c20b21a
2019-03-29 15:46:59 -07:00
Will Feng
6ebfbdf4c6 Add named submodule support to nn::Sequential (#17552)
Summary:
Previously, we were not able to assign names to `nn::Sequential`'s submodules. This PR adds this feature to match the Python API. Example use:
```cpp
Sequential sequential(named_submodule({
      {"linear", Linear(10, 3)},
      {"conv2d", Conv2d(1, 2, 3)},
      {"dropout", Dropout(0.5)},
      {"batchnorm", BatchNorm(5)},
      {"embedding", Embedding(4, 10)},
      {"lstm", LSTM(4, 5)}
}));
```

It also enables loading parameters of Python `nn.Sequential` module with custom submodules names into C++ frontend, unblocking https://github.com/pytorch/vision/pull/728#issuecomment-466661344.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17552

Differential Revision: D14246834

Pulled By: yf225

fbshipit-source-id: 3030b5c5d68f6dd5d3e37ac4b4f98dc6d6d9ba72
2019-03-29 13:06:29 -07:00
Ilia Cherniavskii
600eeecbf4 Add external callbacks into RecordFunction (#17844)
Summary:
Add a way to insert external callbacks into PT's RecordFunction
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17844

Differential Revision: D14399664

Pulled By: ilia-cher

fbshipit-source-id: 76654799811fefd3ffed4abfb46ed95b492cebab
2019-03-28 17:48:45 -07:00
Elias Ellison
ffc7158bf2 Revert D14652372: [pytorch][PR] Add parsing to file check
Differential Revision:
D14652372

Original commit changeset: 7430b9d1dc2b

fbshipit-source-id: fa3d0f68515fe53447746469844d2db20c1292e0
2019-03-28 00:12:47 -07:00
Elias Ellison
0daafe0209 Add parsing to file check (#18304)
Summary:
This allows you to embed checks in IR, making the test more readable.

E.g.
```
graph_str = '''graph(%0 : Double(5, 5)):
          # CHECK: aten::relu
          %1 : Double(5, 5) = aten::relu(%0)
          return (%1)'''
FileCheck().run(graph_str, parseIR(graph_str))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18304

Differential Revision: D14652372

Pulled By: eellison

fbshipit-source-id: 7430b9d1dc2b7584704375aac02d7392ecec76a0
2019-03-27 18:16:05 -07:00
eellison
dc6b5b2a52 Optimize boolean expressions & unwraps (#18259)
Summary:
Simplify or eliminate boolean and/or expressions, optimize unwrapping a value that cannot be None, and optimize using `is` with a None and a non-None value.

Since the peephole optimization pass now introduces constants, I added another constant-propagation pass after running it.

Previously I had a PR that did this and optimized shape ops; I will add the shape optimizations in a separate PR.
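As a rough illustration of the kind of peephole rule involved (a toy sketch, not the actual JIT pass): an `and` node folds completely when both operands are known constants, and a known-`false` operand short-circuits the whole expression.

```cpp
#include <cassert>
#include <optional>

// Toy constant-folding rule for boolean `and` (illustrative only): fold when
// both operands are known, or when either known operand is false; otherwise
// report that the node cannot be folded to a constant.
std::optional<bool> foldAnd(std::optional<bool> a, std::optional<bool> b) {
    if (a && b) return *a && *b;                 // both constant: fold fully
    if ((a && !*a) || (b && !*b)) return false;  // `false and x` -> false
    return std::nullopt;                         // not foldable to a constant
}
```

Note that `true and x` reduces to `x` rather than to a constant, which is why the unknown case returns `nullopt` here; the real pass rewrites the node instead.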
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18259

Differential Revision: D14602749

Pulled By: eellison

fbshipit-source-id: 1c3f5a67067d8dfdf55d7b78dcb616472ea8a267
2019-03-25 21:50:57 -07:00
Elias Ellison
3baf99bea7 Breakup test misc pt2 (#18191)
Summary:
Further break up test_misc.h. The remaining tests don't directly map to a jit file, so I left them in test_misc.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18191

Differential Revision: D14533442

Pulled By: eellison

fbshipit-source-id: 7f538ce0aea208b6b55a4716dfcf039548305041
2019-03-19 19:41:22 -07:00
Michael Suo
794c631e23 fix bug in alias analysis (#18146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18146
ghimport-source-id: 4b061c27c5c44ef0d06066490ed16cab3d0c7a64

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18146 [jit] fix bug in alias analysis**

We handled hasWriters() incorrectly in the case of wildcards. There's
even a comment describing the correct behavior. Sad!

Much thanks to t-vi for tracking this down and suggesting the fix!

Differential Revision: D14524208

fbshipit-source-id: 8010b54257241bd64013a0d0a8b6e7d22d8c70af
2019-03-19 11:35:28 -07:00
Michael Suo
ed36fd30c8 fix double free in test_jit (#18121)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18121
ghimport-source-id: 70c273bfbcb68f7b25cf87f5614c662960864758

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18121 [jit] fix double free in test_jit**

These definitions used to be in anonymous namespace so they weren't exported from the translation unit. #18071 put those in a `test` namespace so I guess they were getting their destructors called twice on exit somehow. Making them static again fixes the problem.

Reviewed By: ezyang

Differential Revision: D14498349

fbshipit-source-id: f969781695dcbebdfcfce667fce5b986222a373e
2019-03-18 09:59:13 -07:00
Elias Ellison
f3806094d5 Breakup Test Misc (batch 1/2) (#18071)
Summary:
Break up test_misc so that the tests for a file live in test_filename. I think we might want to wait on moving test files into the source directory, since that would involve moving some tests over to the C10 folder, and this goes 99% of the way for test discoverability IMO anyway.

I added a file test_utils for common functions invoked in the tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18071

Differential Revision: D14485787

Pulled By: eellison

fbshipit-source-id: dcb20d1978d490999d435ea20c1d0503413a5c80
2019-03-15 13:56:19 -07:00
Duc Ngo
5cbc1981f3 JIT IR - Make valueMapPtr optional in convertNetDefToIR (#17942)
Summary:
Make valueMapPtr optional in convertNetDefToIR, and add tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17942

Differential Revision: D14429687

Pulled By: duc0

fbshipit-source-id: 3a5a72bbb5acc1bfd7144a987688c599016fbf7a
2019-03-14 12:22:49 -07:00
Michael Suo
9a946c4072 unify cpp tests (#17947)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17947

Instead of having a gtest and a no-gtest file that you have to remember to register tests in, add a single registration point and use some macro magic to make it work for both gtest and non-gtest builds
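The "macro magic" might be sketched like this (`USE_GTEST` and `JIT_TEST` are illustrative names; the real macro in the test suite differs):

```cpp
#include <cassert>

// One registration point that works for both build modes: under gtest the
// macro expands to TEST(...), otherwise to a plain function that a manual
// test runner can call.
#if defined(USE_GTEST)
#define JIT_TEST(name) TEST(JitTest, name)
#else
#define JIT_TEST(name) void test_##name()
#endif

static bool g_example_ran = false;

JIT_TEST(Example) {  // expands to `void test_Example()` without gtest
    g_example_ran = true;
}
```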

Reviewed By: eellison

Differential Revision: D14431302

fbshipit-source-id: e1abac135992577a943eaa7abcc81a6ed31fa6e5
2019-03-12 21:35:40 -07:00
Duc Ngo
552f903c63 JIT IR - Add option to remove prefix string when converting from JIT IR to NetDef (#17931)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17931

When converting from NetDef to IR and back, the prefix string should be removed so the operator types are preserved in caffe2.
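The round-trip idea can be sketched as follows (the prefix string here is a placeholder; the actual prefix used by the converter may differ):

```cpp
#include <cassert>
#include <string>

// When converting JIT IR back to a NetDef, strip the namespace-style prefix
// that was prepended on the way in, so operator types round-trip unchanged.
// "caffe2::" is a stand-in for whatever prefix the converter actually uses.
std::string stripPrefix(const std::string& name, const std::string& prefix) {
    if (name.rfind(prefix, 0) == 0) {  // name starts with prefix
        return name.substr(prefix.size());
    }
    return name;  // no prefix: leave untouched
}
```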

Reviewed By: ZolotukhinM

Differential Revision: D14425954

fbshipit-source-id: 2807e7337b0f804f126970768b1250a4a8c5f35c
2019-03-12 17:02:26 -07:00
Elias Ellison
4d2f6f1bbe Remove remaining test jit expects redux (#17924)
Summary:
Trying to reland https://github.com/pytorch/pytorch/pull/17886 since it broke a build and I reverted it
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17924

Differential Revision: D14423842

Pulled By: eellison

fbshipit-source-id: f219e786bd07f7da3b7f9e866981199f5ccf6318
2019-03-12 11:33:34 -07:00
Michael Suo
496a3339dc add support for parsing class defs to the string frontend (#17628)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17628

This is not hooked up anywhere yet, just adding support.
This shares the same restrictions as the python frontend—namely, that the only exprs allowed right now are method defs.

Reviewed By: shannonzhu

Differential Revision: D14291654

fbshipit-source-id: 7798e5ff412a52ef8803c7bae8f439e50968a73a
2019-03-11 19:13:55 -07:00
Michael Suo
f9820e55af initializing class value (#17585)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17585

Create a sugared value that represents a class during initialization. This is
so that assignments to attributes correctly define attributes in __init__ but
raise an error elsewhere.

Reviewed By: shannonzhu

Differential Revision: D14263403

fbshipit-source-id: 09b2feeb272302f00a79c2a0302fbdf5483aed6a
2019-03-11 19:13:52 -07:00
Elias Ellison
f540536dfd Revert D14414435: [pytorch][PR] Remove remaining IR Expect files
Differential Revision:
D14414435

Original commit changeset: 0bfd7ce66ac2

fbshipit-source-id: 02de1814f3c4e581d3798059cee752517b176ed9
2019-03-11 17:36:44 -07:00
Elias Ellison
fd67f6b463 Remove remaining IR Expect files (#17886)
Summary:
Last batch of IR expect files removed. Includes some removal of expect files that are no longer used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17886

Differential Revision: D14414435

Pulled By: eellison

fbshipit-source-id: 0bfd7ce66ac2f72a57f15f45ebd60b95e80b6c16
2019-03-11 17:32:19 -07:00
Roy Li
7aae51cded Replace tensor.type().scalarType() calls with tensor.scalar_type()
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17515

Reviewed By: ezyang

Differential Revision: D14233250

fbshipit-source-id: 6c7af8d2291c0c2b148001b30cf03834f34366c0
2019-03-08 14:08:18 -08:00
Mikhail Zolotukhin
7bcc2301ee Cleanup testFusion/testOne: there are unused arguments.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17737

Differential Revision: D14366584

Pulled By: ZolotukhinM

fbshipit-source-id: 3c2dd2aabfecca475909e4eec4a077d900795da9
2019-03-07 11:19:24 -08:00
Elias Ellison
10ea02facf fix tuple matching (#17687)
Summary:
Check for tuple matching in isSubvalueOf, since tuples may contain container types that need to be recursed into within isSubvalueOf.

Fix for https://github.com/pytorch/pytorch/issues/17650
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17687

Differential Revision: D14324642

Pulled By: eellison

fbshipit-source-id: 7f1e019875286b2640a3b9c003d1635dda8cf543
2019-03-06 11:25:36 -08:00
Wanchao Liang
ab95b5c6cc Rename prim::Undefined to prim::AutogradZero (#17611)
Summary:
supersedes #17245
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17611

Differential Revision: D14283581

Pulled By: wanchaol

fbshipit-source-id: 8022d02b8a021ea2fee9a18a2c8920eb123200c5
2019-03-01 15:13:18 -08:00
Michael Suo
830ca665f5 alias analysis refactor take 2 (#17594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17594

The original version of this broke things because a concurrent change raced with it in CI.

Reviewed By: ezyang

Differential Revision: D14266663

fbshipit-source-id: e8ac5dfcb7349b4f2c425d9f0eabbfc964314063
2019-03-01 10:08:22 -08:00
Michael Suo
1046593509 Revert D14231251: [jit] alias_analysis refactor
Differential Revision:
D14231251

Original commit changeset: 6cd98ae6fced

fbshipit-source-id: 96189f47daf7cc4cf4ef5cd343022d56a2296b39
2019-02-28 12:56:17 -08:00
Michael Suo
54c5b10934 alias_analysis refactor (#17511)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17511

AliasTracker was doing bookkeeping for three concepts: the points-to graph,
writes, and wildcards.

This PR makes AliasTracker's job clearer: it keeps track of the points-to
graph. Thus it has been renamed MemoryDAG. Write and wildcard information were
pulled back into AliasDb as part of this—I may decide to pull them into their
own little modules since I don't want the alias analysis stuff to get too
bloated.

This refactor is necessary because we want to start tracking information for
aliasing elements that _aren't_ first-class IR Values (e.g. the "stuff" inside
a list). So MemoryDAG can't know too much about Values
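A points-to graph of the kind described above supports a "may alias" query roughly like this (a toy sketch, not the real MemoryDAG):

```cpp
#include <cassert>
#include <map>
#include <set>

// Toy points-to bookkeeping: each element maps to the set of memory locations
// it may point to, and two elements may alias iff those sets intersect.
struct ToyMemoryDAG {
    std::map<int, std::set<int>> pointsTo;  // element id -> memory locations

    bool mayAlias(int a, int b) const {
        for (int loc : pointsTo.at(a)) {
            if (pointsTo.at(b).count(loc)) return true;
        }
        return false;
    }
};
```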

Reviewed By: houseroad

Differential Revision: D14231251

fbshipit-source-id: 6cd98ae6fced8d6c1522c2454da77c3c1b2b0504
2019-02-28 12:00:36 -08:00
Michael Suo
f9d3f1dca5 allow "before" and "after" alias annotations (#17480)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17480

This was always part of our "spec" but not implemented

Reviewed By: houseroad

Differential Revision: D14214301

fbshipit-source-id: 118db320b43ec099dc3e730c67d39487474c23ea
2019-02-28 12:00:34 -08:00
Jaliya Ekanayake
bb3a2d99ac Jaliyae/chunk buffer fix (#17409)
Summary:
The chunk buffer could hang when no data was read and the buffer size was smaller than the chunk size. We detected this while running with a larger dataset, hence the fix. I added a test that mimics the situation and validated that the fix works. Thank you Xueyun for finding this issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17409

Differential Revision: D14198546

Pulled By: soumith

fbshipit-source-id: b8ca43b0400deaae2ebb6601fdc65b47f32b0554
2019-02-23 08:48:53 -08:00
Mikhail Zolotukhin
6d744f8fbf Preserve names when converting to/from NetDef.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17378

Differential Revision: D14176515

Pulled By: ZolotukhinM

fbshipit-source-id: da9ea28310250ab3ca3a99cdc210fd8d1fbbc82b
2019-02-22 15:25:52 -08:00
Ailing Zhang
9aae82bc2c Improvements for current AD (#17187)
Summary:
This PR removes a few sizes of `self` that were passed from the forward pass to the backward pass when `self` is already required in the backward pass. This could be the reason for the potential slowdown in #16689. I will attach a few perf numbers (still a bit volatile across runs) in the comment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17187

Differential Revision: D14179512

Pulled By: ailzhang

fbshipit-source-id: 5f3b1f6f26a3fef6dec15623b940380cc13656fa
2019-02-22 14:34:14 -08:00
Elias Ellison
81b43202ae Refactor Type Parser b/w Schemas & IRParser into a type common parser (#17383)
Summary:
Creates a new type parser shared between the IR parser and the schema parser.

Also adds parsing of CompleteTensorType and DimensionedTensorType, and feature-gates that for the IRParser.

Renames the existing type parser for Python annotations to python_type_parser, and names the new one jit_type_parser.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17383

Differential Revision: D14186438

Pulled By: eellison

fbshipit-source-id: bbd5e337917d8862c7c6fa0a0006efa101c76afe
2019-02-22 13:43:55 -08:00
Will Feng
be6ad7ddde Rename BatchNorm running_variance to running_var (#17371)
Summary:
Currently there is a mismatch in naming between Python BatchNorm `running_var` and C++ BatchNorm `running_variance`, which causes JIT model parameters loading to fail (https://github.com/pytorch/vision/pull/728#issuecomment-466067138):
```
terminate called after throwing an instance of 'c10::Error'
  what():  No such serialized tensor 'running_variance' (read at /home/shahriar/Build/pytorch/torch/csrc/api/src/serialize/input-archive.cpp:27)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x85 (0x7f2d92d32f95 in /usr/local/lib/libc10.so)
frame #1: torch::serialize::InputArchive::read(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, at::Tensor&, bool) + 0xdeb (0x7f2d938551ab in /usr/local/lib/libtorch.so.1)
frame #2: torch::nn::Module::load(torch::serialize::InputArchive&) + 0x98 (0x7f2d9381cd08 in /usr/local/lib/libtorch.so.1)
frame #3: torch::nn::Module::load(torch::serialize::InputArchive&) + 0xf9 (0x7f2d9381cd69 in /usr/local/lib/libtorch.so.1)
frame #4: torch::nn::Module::load(torch::serialize::InputArchive&) + 0xf9 (0x7f2d9381cd69 in /usr/local/lib/libtorch.so.1)
frame #5: torch::nn::operator>>(torch::serialize::InputArchive&, std::shared_ptr<torch::nn::Module> const&) + 0x32 (0x7f2d9381c7b2 in /usr/local/lib/libtorch.so.1)
frame #6: <unknown function> + 0x2b16c (0x5645f4d1916c in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #7: <unknown function> + 0x27a3c (0x5645f4d15a3c in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #8: <unknown function> + 0x2165c (0x5645f4d0f65c in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #9: <unknown function> + 0x1540b (0x5645f4d0340b in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #10: __libc_start_main + 0xf3 (0x7f2d051dd223 in /usr/lib/libc.so.6)
frame #11: <unknown function> + 0x1381e (0x5645f4d0181e in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
```
Renaming C++ BatchNorm `running_variance` to `running_var` should fix this problem.

This is a BC-breaking change, but it should be easy for end user to rename `running_variance` to `running_var` in their call sites.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17371

Reviewed By: goldsborough

Differential Revision: D14172775

Pulled By: yf225

fbshipit-source-id: b9d3729ec79272a8084269756f28a8f7c4dd16b6
2019-02-22 08:00:25 -08:00
eellison
82aa511146 move prim::None to prim::Constant (again) (#17186)
Summary:
Trying to land again: make prim::None into a case of prim::Constant. The previous landing was reverted because it broke an important onnx export test.

https://github.com/pytorch/pytorch/pull/16160
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17186

Differential Revision: D14115304

Pulled By: eellison

fbshipit-source-id: 161435fc30460b4e116cdd62c7b2e5b94581dcb7
2019-02-19 11:45:50 -08:00
Jaliya Ekanayake
9477c143c6 C++ Frontend: adding two distributed samples (Random and Sequential) (#16910)
Summary:
Adding two distributed samplers, Random and Sequential, to the mix. Similar to the Python counterpart, DistributedSampler introduces a new method `set_epoch(size_t epoch)` which can be used to shuffle data deterministically between distributed processes.
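The deterministic-shuffle idea behind `set_epoch` can be sketched like this (a toy sampler, not the actual torch::data implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <random>
#include <vector>

// Seeding the shuffle with the epoch makes every process generate the same
// permutation; each rank then takes a disjoint strided slice of it.
struct ToyDistributedSampler {
    size_t size;
    size_t rank;
    size_t num_replicas;
    size_t epoch = 0;

    ToyDistributedSampler(size_t size_, size_t rank_, size_t num_replicas_)
        : size(size_), rank(rank_), num_replicas(num_replicas_) {}

    void set_epoch(size_t e) { epoch = e; }

    std::vector<size_t> indices() const {
        std::vector<size_t> all(size);
        std::iota(all.begin(), all.end(), 0);
        std::mt19937 gen(static_cast<std::mt19937::result_type>(epoch));
        std::shuffle(all.begin(), all.end(), gen);  // same order on every rank
        std::vector<size_t> mine;
        for (size_t i = rank; i < all.size(); i += num_replicas) {
            mine.push_back(all[i]);
        }
        return mine;
    }
};
```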
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16910

Differential Revision: D14130980

Pulled By: soumith

fbshipit-source-id: ec08b7130c01e2fc6dc3693f7ac622a0a6d60f10
2019-02-19 05:40:37 -08:00
Mikhail Zolotukhin
3a01a45f06 Implement IRParser. (#16987)
Summary:
It might need some cleaning up and might be missing some features, but it should already work for most cases.

This PR is based on top of PR16986 (so please review only the last commit here).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16987

Differential Revision: D14074577

Pulled By: ZolotukhinM

fbshipit-source-id: 712b598f423265655f574bb9903e2066628eaad3
2019-02-16 20:23:50 -08:00
David Riazati
b3d8c569d3 Remove templates for GenericDict
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17175

Differential Revision: D14113022

Pulled By: driazati

fbshipit-source-id: 5183e131cc8ccb58525875f76fa03133570a59ea
2019-02-15 21:35:19 -08:00
Mikhail Zolotukhin
6c06b32558 Implement NetDef <--> JIT IR converters. Try 2. (#17123)
Summary:
Currently the converters are very straightforward, i.e. there is no code that tries to
preserve semantics; we purely perform conversion from one format to another.

Two things that we might want to add/change:

1. Add semantic conversion as well (but probably it would be a good idea to keep
   it separate as a temporary thing).
2. Make sure we don't mess with value names, as they are crucial for current
   uses of NetDefs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17123

Differential Revision: D14090244

Pulled By: ZolotukhinM

fbshipit-source-id: 07175fa9235582e1d1da5f10a42a5c1280b1b394
2019-02-15 20:39:30 -08:00
Elias Ellison
91c1d728ac Revert D14109636: [pytorch][PR] move prim::None to a case in prim::Constant
Differential Revision:
D14109636

Original commit changeset: d26fd3839761

fbshipit-source-id: c8c8113e2bff49ea93235732603e6ebc89356533
2019-02-15 16:38:12 -08:00