Commit Graph

202 Commits

Author SHA1 Message Date
James Reed
18bdf97dbb Factor Module into Object and Module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29500

Test Plan: Imported from OSS

Differential Revision: D18463064

Pulled By: jamesr66a

fbshipit-source-id: d37bef242a8626593d4b8754042152cfc0f0acb2
2019-11-17 22:58:50 -08:00
Martin Yuan
3003c5f91b OPN ops TupleConstruct/Unpack and format. (#29635)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29635

TupleConstruct/Unpack as OPN ops.

Test Plan: Imported from OSS

Differential Revision: D18499602

fbshipit-source-id: 389b21d3ea532ef6fa729d67ce34214d86700cd2
2019-11-15 16:22:42 -08:00
Martin Yuan
a4b872b65e Inline graph before writing the bytecode file. (#29421)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29421

Inline graph before writing the bytecode file, so that all the instructions are emitted from the top-level methods.

Test Plan: Imported from OSS

Differential Revision: D18404180

fbshipit-source-id: 4759474a8dba3813616ebce8253bea09941f6bbb
2019-11-08 13:23:32 -08:00
Jeremy Lilley
78039627ae Minor followup on stringstream cleanups (#28300)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28300

  - Remove trivial stringstream from ScriptModuleSerializer::writeCode;
    I didn't include this in earlier changes to avoid a merge conflict
    with an earlier change.
  - Remove underscore from QualifiedName var ref; no difference in
    current use, but more correct.
ghstack-source-id: 92206909

Test Plan:
Benchmark: buck build mode/opt experimental/jeremyl/c2:
   Correctness: buck test mode/dev-nosan caffe2/test/...

Differential Revision: D18012511

fbshipit-source-id: 7db057d77741cf69c4f2fed560771c3201da19ed
2019-10-24 13:05:46 -07:00
Jeremy Lilley
3d745508eb String optimizations related to serialization. (#28230)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28230

This change improves the pickling small data benchmark by roughly 30%.
(25.8usec -> 18.05usec).

One of the main issues was that we were spending 25%+ of the cpu profile
time in std::[o]stringstream constructors alone.

Two main parts
 - Change some std::stringstream to std::ostringstream, when they
   showed up on hot-ish paths, and it was trivial to convert them.
   Roughly 27% of the std::stringstream constructor time is spent
   building the constituent std::basic_istream. If the istream isn't
   needed, don't construct it.

 - For a couple of very hot paths (e.g. Pickler::pushGlobal), just
   convert to traditional string::append(). std::ostringstream is
   convenient, but not particularly efficient.
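
As an illustration of the two points above, here is a minimal C++ sketch (not the actual Pickler code) comparing an std::ostringstream-based builder against plain std::string::append for a small two-field record:

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Sketch only: building a small "<module>\n<name>\n" record.
// The stream version pays the std::ostringstream constructor cost on every
// call; the append version writes straight into a pre-reserved std::string.
std::string buildWithStream(const std::string& module, const std::string& name) {
  std::ostringstream ss;  // ostringstream, not stringstream: no unused istream half
  ss << module << '\n' << name << '\n';
  return ss.str();
}

std::string buildWithAppend(const std::string& module, const std::string& name) {
  std::string out;
  out.reserve(module.size() + name.size() + 2);
  out.append(module).append("\n").append(name).append("\n");
  return out;
}

int main() {
  std::cout << buildWithStream("__main__", "MyClass")
            << buildWithAppend("__main__", "MyClass");
  return 0;
}
```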
ghstack-source-id: 92153103

Test Plan:
Benchmarking: buck build mode/opt experimental/jeremyl/c2:SerializationBench
  Correctness: buck test mode/dev-nosan caffe2/test/...

Differential Revision: D17982181

fbshipit-source-id: 7fd4d267293231244c10c1e5b8f4951a7a3d852f
2019-10-18 07:39:30 -07:00
Jeremy Lilley
d7ff34c0f8 In torch::save() avoid zip compressing small header records. (#28180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28180

ScriptModuleSerializer::writeCode() is the only place during torch::save()
serialization where we attempt to zip compress records.

This change avoids compressing these string records if they are
sufficiently small - e.g. in the example I looked at:
  - the strings were 123 and 28 bytes, respectively.
  - the cost in the compression routines was 16.5% of the torch::save() cost.
    (we're building a huffman table for a 28 byte string).

We'd save time and not significantly affect the size by adding these
1-line conditional compressions, rather than compressing unconditionally.
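
A minimal sketch of that conditional, assuming a hypothetical deflate() stand-in for the archive's compression routine and an illustrative size threshold:

```cpp
#include <cstddef>
#include <string>

// Stand-in for the archive writer's zip-deflate path; the real serializer
// would call the actual compression routine here.
std::string deflate(const std::string& data) {
  return data;  // placeholder: real code returns compressed bytes
}

// Skip the compressor entirely for tiny records (e.g. the 28- and 123-byte
// code strings above), where building a Huffman table costs more than it saves.
constexpr std::size_t kMinSizeToCompress = 256;  // illustrative threshold

std::string maybeCompress(const std::string& record) {
  return record.size() < kMinSizeToCompress ? record : deflate(record);
}
```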
ghstack-source-id: 92104517

Test Plan:
Benchmark: experimental/jeremyl/c2:SerializationBench
  Correctness: normal buck mode/dev-nosan caffe2/test/...

Differential Revision: D17967995

fbshipit-source-id: 7ff934388533645dc987e105c814ffe6324f4596
2019-10-17 21:10:07 -07:00
Zachary DeVito
58ed8ca9e1 clean up exported source format (#28129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28129

The previous PR in the stack removed the need to order classes/functions
or have correct import statements. This resolved circular dependency issues
that can arise when class constructors like ModuleList put new instances
of themselves in a common namespace.

This PR changes our export format to no longer produce this information.
By doing so we can make the logic significantly simpler, since we just
keep track of an individual PythonPrint object per file.

Notes:
* PythonPrint was changed to manage its own stream/list of ranges. It
was doing this anyway internally, this just makes the API more clear.
* Since we are changing the serialization format, I also removed op_version_set.
It is now replaced with the VERSION number that is written in the zip archive.
This further simplifies the code emission process.
* A test of op_version_set was removed since there is no longer any behavior
to test.

Test Plan: Imported from OSS

Differential Revision: D17961610

Pulled By: zdevito

fbshipit-source-id: ada362c4ca34d05393a1a7e799c94785ab9d9825
2019-10-16 22:47:24 -07:00
Jeremy Lilley
2e0294cb39 Make JIT Serialization support arbitrary std::function<> IO (#28039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28039

Right now, torch::save() uses std::ostream, which results in unnecessary
data copies in practice. Similar for torch::load().

Adding a std::function<size_t(const void*, size_t)> as an output option,
parallel to the existing filename and std::ostream apis, gives users the
flexibility to emit directly to a backing store.

For a simple case of appending the output to a std::string, we observe
significant benchmark savings (on order of -50%), even with the
minor std::function<> dispatch overhead. The main reason is that
std::ostringstream effectively requires 2 extra copies of the data
beyond a simple string.append lambda.

We also provide a parallel api for the load(), though this one is
slightly more complex due to the need to do arbitrary position reads.
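
A hedged usage sketch of the writer-callback option described above, appending the serialized bytes to a std::string (the exact overloads may differ from the actual codebase):

```cpp
#include <cstddef>
#include <string>
#include <torch/torch.h>

int main() {
  torch::Tensor t = torch::rand({2, 3});

  // Append-to-string writer: the serializer hands us (data, size) chunks and
  // we copy them straight into `out`, avoiding the extra copies that an
  // std::ostringstream-backed sink would incur.
  std::string out;
  auto writer = [&out](const void* data, size_t size) -> size_t {
    out.append(static_cast<const char*>(data), size);
    return size;
  };

  // Per the description above, torch::save accepts this callback in parallel
  // with the filename and std::ostream APIs.
  torch::save(t, writer);

  // `out` now holds the serialized archive, ready for any backing store.
  return 0;
}
```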

Test Plan:
buck test mode/dev-nosan caffe2/test/...
      (Basic serialization test in caffe2/test/cpp/api/serialize.cpp)
      Benchmark in experimental/jeremyl/c2/SerializationBench.cpp, with D17823443
        (1M time goes from 90ms -> 40ms, albeit with crc patch applied)

Differential Revision: D17939034

fbshipit-source-id: 344cce46f74b6438cb638a8cfbeccf4e1aa882d7
2019-10-15 22:12:04 -07:00
Zachary DeVito
3de34744b3 Make PythonPrint a class (#26787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26787

A follow-up PR will remove the need to issue import statements or write
classes in order, since they are no longer needed. This change allows the
same PythonPrint class to be used for an entire file, which will be needed
in that patch.

Test Plan: Imported from OSS

Differential Revision: D17566440

Pulled By: zdevito

fbshipit-source-id: 1ee896da0cdfe6a003298e1d4b0238403b9ed6dd
2019-10-15 16:00:34 -07:00
Will Feng
964d3d8b38 Revert D17822962: [pytorch][PR] Make JIT Serialization support arbitrary std::function<> IO
Test Plan: revert-hammer

Differential Revision:
D17822962

Original commit changeset: d344a7e59707

fbshipit-source-id: ba153a2110faf91d103bd0f8dea4e9613bd6b0da
2019-10-15 13:55:11 -07:00
Jeremy Lilley
cbe5ab1109 Make JIT Serialization support arbitrary std::function<> IO (#27586)
Summary:
Right now, torch::save() uses std::ostream, which results in unnecessary
data copies in practice. Similar for torch::load().

Adding a std::function<size_t(const void*, size_t)> as an output option,
parallel to the existing filename and std::ostream apis, gives users the
flexibility to emit directly to a backing store.

For a simple case of appending the output to a std::string, we observe
significant benchmark savings (on order of -50%), even with the
minor std::function<> dispatch overhead. The main reason is that
std::ostringstream effectively requires 2 extra copies of the data
beyond a simple string.append lambda.

We also provide a parallel api for the load(), though this one is
slightly more complex due to the need to do arbitrary position reads.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27586

Test Plan:
buck test mode/dev-nosan caffe2/test/...
      (Basic serialization test in caffe2/test/cpp/api/serialize.cpp)
      Benchmark in experimental/jeremyl/c2/SerializationBench.cpp, with D17823443
        (1M time goes from 90ms -> 40ms, albeit with crc patch applied)

Differential Revision: D17822962

Pulled By: jjlilley

fbshipit-source-id: d344a7e59707f3b30d42280fbab78f87399e4d10
2019-10-15 12:39:58 -07:00
Wanchao Liang
b05ec828ad Add interface/object serialization as module attribute (#26770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26770

This PR adds interface/object serialization as a module attribute, to allow
initializing an object as an interface type during Python initialization.
Because an interface type can be backed by any class object that implements
that interface, if we declare it in python/module.__init__ we need to collect
the runtime types of the value and serialize them to ensure complete code
information.

Test Plan: Imported from OSS

Differential Revision: D17742707

fbshipit-source-id: 7f614ad4f982996d320a0e2dd3515bf47370e730
2019-10-04 17:12:08 -07:00
Martin Yuan
19ab5381c3 Add OPN instruction and vararg operator table (#27104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27104

* The use case here is to replace prim::ListConstruct, which requires Node, but Node is not available in the mobile lite interpreter.
* In (OPN, X, N), X is the index into the vararg operator-name and operator tables, and N is the number of inputs. For the ListConstruct example, the operator name can be "aten::listconstruct" and the overload name is the output type ("int", "float", "bool", "tensor" or "generic").
* A vararg operator table is built from void(int input_size, Stack& stack) functions (see the sketch below).
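
An illustrative sketch of such a table, using a simplified stack type rather than torch::jit::Stack so it stays self-contained; the names and the ListConstruct stand-in here are assumptions, not the actual lite-interpreter code:

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Simplified stack of plain integers; the real interpreter uses a stack of IValues.
using Stack = std::vector<int64_t>;
using VarargOp = std::function<void(int /*input_size*/, Stack&)>;

// Stand-in for "aten::listconstruct.int": pop N inputs and push one result.
// A real implementation would push a single list IValue; this sketch pushes
// the element count instead.
void listConstructInt(int input_size, Stack& stack) {
  std::vector<int64_t> elems(stack.end() - input_size, stack.end());
  stack.erase(stack.end() - input_size, stack.end());
  stack.push_back(static_cast<int64_t>(elems.size()));
}

// Table keyed by "<operator name>.<overload name>"; an (OPN, X, N) instruction
// resolves X through the operator-name table into one of these entries and
// calls it with N as the input count.
const std::unordered_map<std::string, VarargOp>& varargOpTable() {
  static const std::unordered_map<std::string, VarargOp> table = {
      {"aten::listconstruct.int", listConstructInt},
  };
  return table;
}
```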
## Unit test
LiteInterpreterConv covers OPN instruction and conv operator.

Test Plan: Imported from OSS

Differential Revision: D17762853

fbshipit-source-id: 475aa0c6678e3760cec805862a78510913a89c83
2019-10-04 09:35:53 -07:00
Lu Fang
a173bea425 Resubmit [pytorch][PR] [ONNX] Updating producer_version in exported O… (#27004)
Summary:
Fix more expect files.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27004

Reviewed By: hl475

Differential Revision: D17641034

Pulled By: houseroad

fbshipit-source-id: 8130397d1af28d33b98ad146146d3b3fa16c15e3
2019-09-27 23:23:31 -07:00
Karl Ostmo
55a358546f Revert D17631902: [pytorch][PR] [ONNX] Updating producer_version in exported ONNX models to PyTorch 1.3.
Test Plan: revert-hammer

Differential Revision:
D17631902

Original commit changeset: 6d5896465740

fbshipit-source-id: ebf9e5e1c582027dbba2db68328ea4136a974c6b
2019-09-27 15:49:36 -07:00
Spandan Tiwari
6b3c0c1f22 Updating producer_version in exported ONNX models to PyTorch 1.3. (#26976)
Summary:
Bumping up the `producer_version` in exported ONNX models in view of the next release. Updating tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26976

Reviewed By: hl475

Differential Revision: D17631902

Pulled By: houseroad

fbshipit-source-id: 6d58964657402ac23963c49c07fcc813386aabf0
2019-09-27 13:50:24 -07:00
Zachary DeVito
0e3389dced Fix circular deps in loading (#26758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26758

This PR changes the order in which we import classes and functions so
that it is no longer necessary for them to be defined in order in a file,
or for there to be proper import statements in the exported file.

Actually importing a function/class now is driven by the need to resolve
the entity during unpickling, type resolution, or value resolution.

While this should allow significant simplification to the code that
serializes classes, this work has not been done yet in order to avoid
inevitable forward compat issues in the transition period.

Notes:
* Individual functions have been replaced with a SourceImporter object
  that exposes a resolveType method. This method loads the type if
  it has not been loaded yet, potentially parsing  (but not loading)
  the file it exists in if that file hasn't been parsed yet.
* Some legacy functionality needed to be added as a method to this object
  since the old format still used some of this logic for class resolution.
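
A minimal sketch of the lazy resolution pattern described in the first note, with placeholder types rather than the actual SourceImporter API:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

// Placeholder for a loaded class/function type.
struct Type {
  std::string qualified_name;
};

class LazyTypeResolver {
 public:
  using Loader = std::function<std::shared_ptr<Type>(const std::string&)>;

  explicit LazyTypeResolver(Loader loader) : loader_(std::move(loader)) {}

  // Return the cached type if it was already loaded; otherwise load it on
  // demand, which may in turn trigger resolveType() for its dependencies.
  std::shared_ptr<Type> resolveType(const std::string& qualified_name) {
    auto it = cache_.find(qualified_name);
    if (it != cache_.end()) {
      return it->second;
    }
    auto type = loader_(qualified_name);
    cache_.emplace(qualified_name, type);
    return type;
  }

 private:
  Loader loader_;
  std::unordered_map<std::string, std::shared_ptr<Type>> cache_;
};
```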

Test Plan: Imported from OSS

Differential Revision: D17558989

Pulled By: zdevito

fbshipit-source-id: 7eae3470bcbd388c4de463e3462d527776ed46c6
2019-09-26 11:39:16 -07:00
Martin Yuan
7fc06ea541 Bytecode export flow (#25187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25187

The bytecode export flow: dump the bytecode format for the lightweight interpreter.
* The bytecode is generated without input spec optimization. It would be more generic (input independent) with no obvious performance degradation (to be tested).
* Main API: torch::jit::script::Module::save(filename, extra_files, bool *bytecode_format* = false).
* Both bytecode and module object are exported in pickle format.
    * The module object (in data.pkl) is the same as the original JIT model.
    * The serializer is dependent on pickle only (no protobuf or Json).
    * The major functionality is forked in ScriptModuleSerializer2::serialize().
    * The test loader is test_bc_export.cpp.
* Simple APIs are added in Code and its implementation to get necessary information (instructions, operators and constants).
* Since there's no dependency on graph/node, GetAttr is promoted from an operator to a first-class instruction (https://github.com/pytorch/pytorch/pull/25151).
* Some definitions (instructions, writeArchive, etc) that are shared by full JIT and bytecode are pulled out of the local namespace (https://github.com/pytorch/pytorch/pull/25148).

The output layout looks like:

* folders of methods.
    * In each method folder (for example, forward/):
        * bytecode.pkl: instructions and operators
        * constants{.pkl,/}: constant list in constants.pkl. If there are tensors in constants, the binary tensor files in constants/ folder.
* data{.pkl,/}: the module object, with binary tensor files in data/ folder. The same as in torchscript.
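
A hedged usage sketch based on the Main API described above; the exact overload and defaults may differ in the actual codebase:

```cpp
#include <torch/script.h>

int main() {
  // Load an existing TorchScript module.
  torch::jit::script::Module module = torch::jit::load("model.pt");

  // Export in bytecode format for the lite interpreter, per the API sketched
  // in this commit message (assumed signature: filename, extra_files,
  // bytecode_format).
  module.save("model_bytecode.pt", /*extra_files=*/{}, /*bytecode_format=*/true);
  return 0;
}
```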

Test Plan: Imported from OSS

Differential Revision: D17076411

fbshipit-source-id: 46eb298e7320d1e585b0101effc0fcfd09219046
2019-09-25 16:35:45 -07:00
Michael Suo
0c6ee947b6 Remove forward compat code for serialization format (#25440)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25440

See the comments deleted for what this PR is all about

Test Plan: Imported from OSS

Differential Revision: D17125690

Pulled By: suo

fbshipit-source-id: a4a2f541a3e161f9c15b51df475130e7bf683cf8
2019-09-04 12:22:31 -07:00
Zachary DeVito
e2ccccee9a Load tensors directly from pickle archive
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23281

Test Plan: Imported from OSS

Differential Revision: D16452815

Pulled By: zdevito

fbshipit-source-id: 918eef3ad444b598ab655c39037e4baafdcb51e1
2019-08-22 11:48:09 -07:00
Zachary DeVito
bdc57d3833 Merge ProfiledTensorType and TensorType (#24284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24284

This PR finishes the unification of all Tensor types into a single object.
ProfiledTensorType is renamed to TensorType and the old TensorType is
deleted.

Notes:
* Fixes a bug in the merge logic for VaryingShape by changing its representation to an
 optional list of optional ints (see the sketch after this list).
* Removes ProfiledTensorType::create(type) invocations that can now
  simply be expect calls on tensor type.
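
An illustrative sketch (not the actual c10 class) of the "optional list of optional ints" representation mentioned in the first note, with a merge that keeps only the dimensions both sides agree on:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Sketch: a shape where both the rank and each individual dimension may be
// unknown, matching the "optional list of optional ints" representation.
struct VaryingShapeSketch {
  std::optional<std::vector<std::optional<int64_t>>> dims;

  // Merging keeps a dimension only if both shapes know it and agree on it.
  static VaryingShapeSketch merge(const VaryingShapeSketch& a,
                                  const VaryingShapeSketch& b) {
    if (!a.dims || !b.dims || a.dims->size() != b.dims->size()) {
      return {};  // unknown rank
    }
    std::vector<std::optional<int64_t>> out(a.dims->size());
    for (std::size_t i = 0; i < out.size(); ++i) {
      const auto& x = (*a.dims)[i];
      const auto& y = (*b.dims)[i];
      out[i] = (x && y && *x == *y) ? x : std::nullopt;
    }
    return {out};
  }
};
```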

Test Plan: Imported from OSS

Differential Revision: D16794034

Pulled By: zdevito

fbshipit-source-id: 10362398d0bb166d0d385d74801e95d9b87d9dfc
2019-08-20 13:01:28 -07:00
Zachary DeVito
0cbd7fa46f remove CompleteTensorType
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24169

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D16765329

Pulled By: zdevito

fbshipit-source-id: 88560cefba635c3d586a3e4dee67f9b1d901a642
2019-08-15 13:31:34 -07:00
Michael Suo
8a7e57c416 clean up import_source (#24282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24282

This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.

Test Plan: Imported from OSS

Differential Revision: D16800562

Pulled By: suo

fbshipit-source-id: ebc29bb81f4fb2538081fa309ead1739980f1093
2019-08-14 11:26:26 -07:00
Michael Suo
c158848abe class_table_ to deps_table_ (#24281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24281

These are not just classes anymore, rename

Test Plan: Imported from OSS

Differential Revision: D16800564

Pulled By: suo

fbshipit-source-id: 8b8d508944c26a8916fc7642df43f22583dfcf82
2019-08-14 11:26:22 -07:00
Michael Suo
5839a59ae3 simplify NamedType interface (#24278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24278

We had a lot of redundant methods. Killing them.

Test Plan: Imported from OSS

Differential Revision: D16800561

Pulled By: suo

fbshipit-source-id: 60acc1d5b0f34130a1f66a1e5bc7df364a5feb57
2019-08-14 11:26:10 -07:00
Michael Suo
0f8d1fbe96 Revert D16611883: [jit] simplify NamedType interface
Differential Revision:
D16611883

Original commit changeset: a32c0a8b8b7e

fbshipit-source-id: c0829ec8432a32b0174c26a2cd18f85c0e7f8a3f
2019-08-13 14:07:04 -07:00
Edward Yang
f36c3e9e4a Revert D16684391: [jit] class_table_ to deps_table_
Differential Revision:
D16684391

Original commit changeset: af0024c0b7fb

fbshipit-source-id: c9b98ac60b460963dc50f4837100909ff8f6c3ea
2019-08-13 13:27:03 -07:00
Edward Yang
94aae71ba9 Revert D16684390: [jit] clean up import_source
Differential Revision:
D16684390

Original commit changeset: fca81ca14d1a

fbshipit-source-id: eb229097560ab1ead43756175e552764c8a14703
2019-08-13 13:26:59 -07:00
Michael Suo
bb4f4e4d03 clean up import_source (#23846)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23846

This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.

Test Plan: Imported from OSS

Differential Revision: D16684390

Pulled By: suo

fbshipit-source-id: fca81ca14d1ac9e4d6b47ae5eecaa42b38d69147
2019-08-12 20:30:06 -07:00
Michael Suo
2dbd36b384 class_table_ to deps_table_ (#23845)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23845

These are not just classes anymore, rename

Test Plan: Imported from OSS

Differential Revision: D16684391

Pulled By: suo

fbshipit-source-id: af0024c0b7fbcca68785ec3fc6dc288ec46a1b84
2019-08-12 20:30:01 -07:00
Michael Suo
a0836cb8da simplify NamedType interface (#23691)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23691

We had a lot of redundant methods. Killing them.

Test Plan: Imported from OSS

Differential Revision: D16611883

Pulled By: suo

fbshipit-source-id: a32c0a8b8b7e909b386a70abb0827c26cbd37e20
2019-08-12 20:29:49 -07:00
davidriazati
75c1419b46 Add Pickler C++ API (#23241)
Summary:
This PR adds functions to wrap the Pickler and exposes them to the C++ API

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23241

Pulled By: driazati

Differential Revision: D16746451

fbshipit-source-id: 25ea5db4174006ce41e2e8989c8a345b82f637a7
2019-08-12 14:43:31 -07:00
Spandan Tiwari
7583519b87 Provide argument in ONNX export to exclude intializers from graph inputs. (#23284)
Summary:
Starting with ONNX IR version 4, initializers in the ONNX graph no longer have to be inputs of the graph. This constraint, which existed in IR version 3 and earlier, was relaxed in IR version 4. This PR provides an API-level argument to allow ONNX export under the relaxed constraint of IR version 4, i.e. the option to not include initializers as graph inputs. This lets backends/runtimes perform certain optimizations, such as constant folding, more effectively.

*Edit*: After discussion with houseroad we have the following behavior. For any OperatorExportType except OperatorExportTypes.ONNX, the current export behavior is maintained by default in this PR; the user can override it by setting the `keep_initializers_as_inputs` argument to the export API. When the OperatorExportType is OperatorExportTypes.ONNX, however, the behavior changes: by default the initializers are NOT part of the graph inputs. Again, the default can be overridden by setting the `keep_initializers_as_inputs` argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23284

Differential Revision: D16459961

Pulled By: bddppq

fbshipit-source-id: b8f0270dfaba47cdb8e04bd4cc2d6294f1cb39cf
2019-08-12 14:17:25 -07:00
Michael Suo
77c08aa46c serialize modules as classes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23098

Test Plan: Imported from OSS

Differential Revision: D16383328

Pulled By: suo

fbshipit-source-id: 36389b8e45c3febb7f224cd9c630fe643fa90bef
2019-08-11 15:50:29 -07:00
David Riazati
3c1270a730 Revert D16675418: [jit] Add Pickler C++ API
Differential Revision:
D16675418

Original commit changeset: 76543c81ac67

fbshipit-source-id: f0249d16d363c4ecbceecd1bf610dc280e659cc0
2019-08-09 13:13:15 -07:00
davidriazati
01d98c7cfb Add Pickler C++ API (#23241)
Summary:
This PR adds functions to wrap the Pickler and exposes them to the C++ API
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23241

Pulled By: driazati

Differential Revision: D16675418

fbshipit-source-id: 76543c81ac67c3e20a75ebc2073191bcbd6573bf
2019-08-09 12:25:30 -07:00
Owen Anderson
d9ec37adc4 Compress all non-Tensor components of a serialized TorchScript model. (#23723)
Summary:
This saves about 69KB off the FaceBlaze model, bringing the total size down from 388KB to 319KB.
 See https://github.com/pytorch/pytorch/issues/23582
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23723

Differential Revision: D16623693

fbshipit-source-id: 66267f87635c502c804293054fd5716d291389c0
2019-08-02 12:39:20 -07:00
Owen Anderson
d1e0a3dd15 Compress debug symbols when serializing TorchScript models.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23659

Differential Revision: D16603775

fbshipit-source-id: f2912048bdee36b3bcaa779e801c61bfbb5f30e5
2019-08-01 22:30:27 -07:00
Supriya Rao
9223fa1c46 Add support to serialize qtensor in JIT. (#23356)
Summary:
Adds qtensor-specific fields to the proto file so that they get serialized into model.json.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23356
ghstack-source-id: 87263428

Differential Revision: D16473237

fbshipit-source-id: bf5b51d0863d036d30a1644a3c3b74516468224b
2019-07-26 15:52:15 -07:00
Michael Suo
711be82951 Make optimize a thread_local flag
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23170

Test Plan: Imported from OSS

Differential Revision: D16441912

Pulled By: suo

fbshipit-source-id: a33485178a329d54e41e364c4f14950f88481c55
2019-07-24 23:09:21 -07:00
Zachary DeVito
93da1030df Fix pickler bug where it would not load if no tensors were saved
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23263

Test Plan: Imported from OSS

Differential Revision: D16446928

Pulled By: zdevito

fbshipit-source-id: f70f86b28c3901a97b65b4d7654e39dc6e1aab6a
2019-07-24 17:13:46 -07:00
Zachary DeVito
7922b5057d Memoize storages in pickler
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23262

Test Plan: Imported from OSS

Differential Revision: D16446927

Pulled By: zdevito

fbshipit-source-id: 92d26f64ff6269b1deef821edae31745158b5137
2019-07-24 17:13:42 -07:00
Zachary DeVito
e0f632c58b pickler.cpp: respect __getstate__/__setstate__
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23190

Test Plan: Imported from OSS

Differential Revision: D16431553

Pulled By: zdevito

fbshipit-source-id: 680ea1507c12727fd17aedb3067f522cf490e306
2019-07-23 14:27:51 -07:00
Spandan Tiwari
27031dccb2 Updating producer_version in exported ONNX models to pytorch 1.2. (#23120)
Summary:
Bumping up the producer_version in exported ONNX models in view of the next release. Updating tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23120

Reviewed By: zrphercule

Differential Revision: D16420917

Pulled By: houseroad

fbshipit-source-id: 6686b10523c102e924ecaf96fd3231240b4219a9
2019-07-22 13:45:39 -07:00
davidriazati
fcdfc35d1c Support get/setstate with no args (#23119)
Summary:
`pickle` supports this, and a lot of the quantized use cases for get/setstate
follow this pattern.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23119

Pulled By: driazati

Differential Revision: D16391234

fbshipit-source-id: 9f63e0a1679daa61b17aa64b5995e2be23b07b50
2019-07-22 12:32:29 -07:00
Michael Suo
eaee0c6cd9 Make classtypes hold a weak_ptr to their CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22902

Test Plan: Imported from OSS

Differential Revision: D16278159

Pulled By: suo

fbshipit-source-id: 6aa682e347847e808b44218d38ff1dae66945a07
2019-07-16 12:04:20 -07:00
Will Feng
a326aad816 Revert D16197608: [jit] Make classtypes hold a weak_ptr to their CU
Differential Revision:
D16197608

Original commit changeset: 22250d6f0d24

fbshipit-source-id: 47a8cdeb62b1033252070ecb92906358014b551a
2019-07-15 19:49:41 -07:00
Michael Suo
260b0e8476 Make classtypes hold a weak_ptr to their CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22726

Differential Revision: D16197608

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 22250d6f0d249f61f269afb4fe8e7d1af0be1205
2019-07-15 13:13:16 -07:00
Michael Suo
22d70e0d4b Give functions qualified names
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22721

Test Plan: Imported from OSS

Differential Revision: D16197606

Pulled By: suo

fbshipit-source-id: 94718fcdb0d3b651f16674af3cfd6249ed4533ae
2019-07-11 14:55:34 -07:00
Karl Ostmo
1ecc945ab2 Revert D15998762: [jit] Give functions qualified names
Differential Revision:
D15998762

Original commit changeset: bc2b734f626a

fbshipit-source-id: a118cc4e9a34233279e8380529a8d8120a25839d
2019-07-10 16:10:28 -07:00