Avik Chaudhuri
68c7aac809
[export][reland] non-strict export with dynamic shapes ( #116048 )
...
Reland of https://github.com/pytorch/pytorch/pull/115862
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116048
Approved by: https://github.com/ydwu4
2023-12-19 23:57:22 +00:00
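A minimal sketch of what the non-strict export path with dynamic shapes from the commit above looks like from the user side; the module, names, and shapes here are illustrative, not taken from the PR:
```python
import torch
from torch.export import Dim, export


class MatVec(torch.nn.Module):  # illustrative module, not from the PR
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return self.linear(x)


x = torch.randn(8, 16)
batch = Dim("batch")  # symbolic batch dimension

# strict=False selects the non-strict (no TorchDynamo) export path;
# dynamic_shapes marks dim 0 of `x` as symbolic.
ep = export(MatVec(), (x,), dynamic_shapes={"x": {0: batch}}, strict=False)
print(ep.graph_module.graph)
```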
angelayi
e43d33f4f7
[export] Support torch.sym* ops ( #115854 )
...
Fixes https://github.com/pytorch/pytorch/issues/108830 and https://github.com/pytorch/executorch/issues/1379#issuecomment-1853322866
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115854
Approved by: https://github.com/zhxchen17
2023-12-18 17:48:47 +00:00
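As a hedged illustration of the commit above (the module below is made up), this enables exporting programs that call `torch.sym_*` helpers on symbolic sizes:
```python
import torch
from torch.export import Dim, export


class SymMaxScale(torch.nn.Module):  # hypothetical example module
    def forward(self, x):
        # torch.sym_max operates on SymInts while the graph is being traced
        n = torch.sym_max(x.shape[0], x.shape[1])
        return x.sum() * n


x = torch.randn(5, 3)
ep = export(SymMaxScale(), (x,), dynamic_shapes={"x": {0: Dim("n")}})
print(ep.graph_module.graph)
```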
angelayi
1f3bdf40ad
[export] Update schema version ( #115712 )
...
Since the PyTorch 2.1 release we've made some BC-breaking changes to the serialized schema. We should update the schema version in time for the 2.2 release. Some of the changes include:
* https://github.com/pytorch/pytorch/pull/114371 - custom class objects / pybinded objects are no longer saved directly to the `ExportedProgram` structure. Instead, the name is serialized inside the program, and the actual bytes are stored in a separate location from the exported program, allowing them to be saved to a different location.
* https://github.com/pytorch/pytorch/pull/111204 - `GraphSignature` structure changed and `call_spec` is removed from the `GraphModule` schema
* https://github.com/pytorch/pytorch/pull/111407 - `loss_outout` -> `loss_output`
* https://github.com/pytorch/pytorch/pull/113075 - `example_inputs` removed from the `ExportedProgram` structure (this originally did not store anything), `dialect` added to the `ExportedProgram` structure.
* https://github.com/pytorch/pytorch/pull/113689 - tensor constants are now lifted as inputs to the graph, and their locations are stored in the `GraphSignature`
* https://github.com/pytorch/pytorch/pull/114172 - removed `equality_constraints` and added a `SymExprHint` for all symbolic expressions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115712
Approved by: https://github.com/gmagogsfm
2023-12-15 21:43:03 +00:00
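For context, the serialized schema is what `torch.export.save` / `torch.export.load` produce and consume; a minimal round trip (module and file name are illustrative) looks like:
```python
import torch
from torch.export import export


class TwoLayer(torch.nn.Module):  # illustrative module
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))


ep = export(TwoLayer(), (torch.randn(2, 4),))
torch.export.save(ep, "two_layer.pt2")       # serializes using the versioned schema
loaded = torch.export.load("two_layer.pt2")  # deserialization checks the schema version
print(loaded.graph_module.graph)
```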
PyTorch MergeBot
50c9665f92
Revert "[export] Support torch.sym* ops ( #115854 )"
...
This reverts commit 347cb91946 .
Reverted https://github.com/pytorch/pytorch/pull/115854 on behalf of https://github.com/atalman due to OSSCI oncall, broke multiple jobs ([comment](https://github.com/pytorch/pytorch/pull/115854#issuecomment-1858486796 ))
2023-12-15 21:07:52 +00:00
PyTorch MergeBot
80a9625d9f
Revert "non-strict export with dynamic shapes ( #115862 )"
...
This reverts commit 1bb0d0fc1f .
Reverted https://github.com/pytorch/pytorch/pull/115862 on behalf of https://github.com/atalman due to OSSCI oncall, failing trunk / macos-12-py3-arm64 / test ([comment](https://github.com/pytorch/pytorch/pull/115862#issuecomment-1858482486 ))
2023-12-15 21:04:12 +00:00
Avik Chaudhuri
1bb0d0fc1f
non-strict export with dynamic shapes ( #115862 )
...
Differential Revision: [D52175048](https://our.internmc.facebook.com/intern/diff/D52175048/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115862
Approved by: https://github.com/zhxchen17
2023-12-15 20:11:30 +00:00
angelayi
347cb91946
[export] Support torch.sym* ops ( #115854 )
...
Fixes https://github.com/pytorch/pytorch/issues/108830 and https://github.com/pytorch/executorch/issues/1379#issuecomment-1853322866
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115854
Approved by: https://github.com/zhxchen17
2023-12-15 20:08:04 +00:00
PyTorch MergeBot
1b506e7469
Revert "non-strict export with dynamic shapes ( #115862 )"
...
This reverts commit f54bb1ed56 .
Reverted https://github.com/pytorch/pytorch/pull/115862 on behalf of https://github.com/atalman due to OSSCI oncall, failing trunk / macos-12-py3-arm64 / test ([comment](https://github.com/pytorch/pytorch/pull/115862#issuecomment-1858197497 ))
2023-12-15 17:03:42 +00:00
Avik Chaudhuri
f54bb1ed56
non-strict export with dynamic shapes ( #115862 )
...
Differential Revision: [D52175048](https://our.internmc.facebook.com/intern/diff/D52175048/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115862
Approved by: https://github.com/zhxchen17
2023-12-15 16:38:45 +00:00
Angela Yi
8e2d63cbc3
[export][reland] Remove runtime assertion pass ( #115597 )
...
Summary:
Reland of https://github.com/pytorch/pytorch/pull/115196
D52054112 to fix internal failures.
Test Plan: CI
Differential Revision: D52054110
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115597
Approved by: https://github.com/ydwu4 , https://github.com/zhxchen17
2023-12-15 03:22:03 +00:00
Zhengxu Chen
ef6a0faf89
[export] Fix canonicalization. ( #115830 )
...
Summary: Add the missed layout argument branch.
Test Plan: buck2 test 'fbcode//mode/dev-nosan' fbcode//sigmoid/inference/test_gpu:export_package_sparse_toy_test
Differential Revision: D52166501
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115830
Approved by: https://github.com/angelayi
2023-12-14 22:48:26 +00:00
angelayi
dd42201cb8
[export] Preserve FQN in export_to_torch_ir ( #115462 )
...
AOTInductor currently relies on export_to_torch_ir to generate a graph, and passes it to Inductor to generate the .so. They would like the FQNs to be consistent so that they can easily find/update the weights in the .so.
Note that since export flattens all modules into a single computational graph, we will change the FQNs in the original module by replacing all periods with underscores. For example, `foo.child1param`, which points to a parameter named `child1param` on a submodule named `foo`, will be renamed to `foo_child1param` since we no longer have the submodule `foo`. This is done simply via `name.replace(".", "_")`.
Generated AOTInductor C++ code: https://www.internalfb.com/phabricator/paste/view/P900120950?lines=377-355%2C354
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115462
Approved by: https://github.com/tugsbayasgalan
2023-12-13 04:58:47 +00:00
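A tiny sketch of the renaming rule described above (the FQNs are hypothetical):
```python
# Flattened-graph parameter names are derived from the original FQNs
# by replacing module-path dots with underscores.
original_fqns = ["foo.child1param", "foo.bar.weight", "baz.bias"]
flattened = {fqn: fqn.replace(".", "_") for fqn in original_fqns}
print(flattened)
# {'foo.child1param': 'foo_child1param', 'foo.bar.weight': 'foo_bar_weight', 'baz.bias': 'baz_bias'}
```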
zhxchen17
f78f23d753
[export] Turn off output value from sources for export. ( #115442 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115442
Approved by: https://github.com/tugsbayasgalan
2023-12-12 22:41:23 +00:00
zhxchen17
d5286d7ea8
[export] Add canonical form for differentiating IR ( #115589 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115589
Approved by: https://github.com/suo
2023-12-12 16:21:57 +00:00
angelayi
92fd3927b0
[export][reland] Add math.* ops to pass base ( #115559 )
...
Reland of https://github.com/pytorch/pytorch/pull/115271/
Fixes https://github.com/pytorch/pytorch/issues/115209
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115559
Approved by: https://github.com/zhxchen17 , https://github.com/atalman
ghstack dependencies: #115556 , #115557 , #115558
2023-12-12 10:46:41 +00:00
angelayi
b6a4866330
[export][reland][refactor][3/n] Move unlift to separate file ( #115558 )
...
Reland of https://github.com/pytorch/pytorch/pull/114787
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115558
Approved by: https://github.com/zhxchen17 , https://github.com/atalman
ghstack dependencies: #115556 , #115557
2023-12-12 05:37:07 +00:00
angelayi
36199747f3
[export][reland][refactor][2/n] Move tracing logic ( #115557 )
...
Reland of https://github.com/pytorch/pytorch/pull/114768
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115557
Approved by: https://github.com/zhxchen17
ghstack dependencies: #115556
2023-12-12 05:37:07 +00:00
angelayi
dd9a989b83
[export][reland][refactor][1/n] Split dynamic shapes ( #115556 )
...
Reland of https://github.com/pytorch/pytorch/pull/114764
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115556
Approved by: https://github.com/zhxchen17
2023-12-12 05:36:41 +00:00
suo
ccd5bde6a3
[export] Reintroduce InterpreterModule to unflatten ( #115436 )
...
InterpreterModule is better than GraphModule codegen; it's more debuggable and has better stack traces. The only reason we don't use it today is that torch.compile doesn't work with it.
I work around this by constructing a GraphModule separately for use during dynamo tracing, but otherwise using torch.fx.Interpreter.
Differential Revision: [D51971661](https://our.internmc.facebook.com/intern/diff/D51971661/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115436
Approved by: https://github.com/zhxchen17
ghstack dependencies: #115408
2023-12-11 22:15:32 +00:00
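A rough sketch of the distinction (not the actual unflattener code): a GraphModule executes Python source generated from the graph, while `torch.fx.Interpreter` walks the graph node by node, which keeps per-node stack traces:
```python
import torch
import torch.fx as fx


def f(x):
    return torch.relu(x) + 1


gm = fx.symbolic_trace(f)               # GraphModule: runs generated forward() source
x = torch.randn(3)

out_codegen = gm(x)                     # codegen path
out_interp = fx.Interpreter(gm).run(x)  # interpreter path: executes node by node

assert torch.allclose(out_codegen, out_interp)
```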
suo
c137335b5c
[export] make UnflattenedModule not inherit from GraphModule ( #115408 )
...
UnflattenedModule doesn't really behave like a graph module; we customize `__call__` to do something completely different from what GraphModule does. So, things that test `isinstance(unflattened_module, GraphModule)` and then do something with the GraphModule are often broken.
This change makes UnflattenedModule its own thing.
Differential Revision: [D51959097](https://our.internmc.facebook.com/intern/diff/D51959097/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115408
Approved by: https://github.com/zhxchen17
2023-12-11 22:15:21 +00:00
atalman
b88be1686d
Revert "[export][refactor][1/n] Move dynamic shapes logic ( #114764 )" ( #115508 )
...
GitHub first oncall.
This reverts commit 53bf8cfcf9 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115508
Approved by: https://github.com/malfet , https://github.com/angelayi
2023-12-11 14:54:51 +00:00
atalman
24a463c46c
Revert "[export][refactor][2/n] Move tracing logic ( #114768 )" ( #115503 )
...
Github first oncall.
This reverts commit 0ab57ee7ea .
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115503
Approved by: https://github.com/angelayi , https://github.com/kit1980
2023-12-10 19:30:15 +00:00
atalman
749f0c90e1
Revert "[export][refactor][3/n] Move unlift to separate file ( #114787 )" ( #115457 )
...
Github First Oncall: This reverts commit 967863d91d .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115457
Approved by: https://github.com/osalpekar
2023-12-08 22:33:28 +00:00
PyTorch MergeBot
3e47e3f441
Revert "[export] Fix graph output mismatch issue with constant outputs. ( #115280 )"
...
This reverts commit 622688fab9 .
Reverted https://github.com/pytorch/pytorch/pull/115280 on behalf of https://github.com/atalman due to ghfirst issue when importing, will reland this PR ([comment](https://github.com/pytorch/pytorch/pull/115280#issuecomment-1847903624 ))
2023-12-08 22:10:03 +00:00
PyTorch MergeBot
af925a56a1
Revert "[export] Add math.* ops to pass base ( #115271 )"
...
This reverts commit 6c0a4ced53 .
Reverted https://github.com/pytorch/pytorch/pull/115271 on behalf of https://github.com/atalman due to ghfirst issue when importing, will reland this PR ([comment](https://github.com/pytorch/pytorch/pull/115271#issuecomment-1847852211 ))
2023-12-08 21:17:56 +00:00
PyTorch MergeBot
4186932bac
Revert "[export] Remove runtime assertion pass ( #115196 )"
...
This reverts commit c163b3c035 .
Reverted https://github.com/pytorch/pytorch/pull/115196 on behalf of https://github.com/atalman due to Broke internal test ([comment](https://github.com/pytorch/pytorch/pull/115196#issuecomment-1847778344 ))
2023-12-08 20:07:04 +00:00
suo
3d999d2f2c
[export] optimize unflattener ( #115364 )
...
Unflattening was slow on the APS FM model (which has thousands of nn.EmbeddingBag modules).
A quick glance at the profile shows that 75% of the time in unflattening was spent copying this node list, which is immutable and globally shared. Simply passing it around as a tuple yields a 4x speedup.
Differential Revision: [D51929775](https://our.internmc.facebook.com/intern/diff/D51929775/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115364
Approved by: https://github.com/zhxchen17
2023-12-08 19:32:01 +00:00
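An illustrative micro-benchmark of the idea (hypothetical, not the unflattener code): convert the shared, immutable node list to a tuple once and pass that around, instead of defensively copying the list at each use:
```python
import timeit

shared_nodes = list(range(100_000))  # stands in for the shared, immutable node list

def old_way(nodes):
    # old pattern: copy the shared list before handing it around
    return list(nodes)

def new_way(nodes):
    # new pattern: pass the already-built tuple; no per-call copy needed
    return nodes

nodes_tuple = tuple(shared_nodes)
print("copy each time:", timeit.timeit(lambda: old_way(shared_nodes), number=1_000))
print("pass tuple:    ", timeit.timeit(lambda: new_way(nodes_tuple), number=1_000))
```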
zhxchen17
622688fab9
[export] Fix graph output mismatch issue with constant outputs. ( #115280 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115280
Approved by: https://github.com/tugsbayasgalan
2023-12-07 06:11:08 +00:00
angelayi
6c0a4ced53
[export] Add math.* ops to pass base ( #115271 )
...
Fixes https://github.com/pytorch/pytorch/issues/115209
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115271
Approved by: https://github.com/ydwu4
2023-12-07 02:47:04 +00:00
angelayi
c163b3c035
[export] Remove runtime assertion pass ( #115196 )
...
Reland of https://github.com/pytorch/pytorch/pull/111949/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115196
Approved by: https://github.com/avikchaudhuri
2023-12-07 01:44:11 +00:00
angelayi
967863d91d
[export][refactor][3/n] Move unlift to separate file ( #114787 )
...
Differential Revision: [D51823960](https://our.internmc.facebook.com/intern/diff/D51823960 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114787
Approved by: https://github.com/ydwu4
ghstack dependencies: #114764 , #114768
2023-12-06 16:46:47 +00:00
angelayi
0ab57ee7ea
[export][refactor][2/n] Move tracing logic ( #114768 )
...
2/n of refactoring export code:
* Moved tracing logic in torch/_export/__init__.py to torch/export/_tracer.py
Differential Revision: [D51823961](https://our.internmc.facebook.com/intern/diff/D51823961 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114768
Approved by: https://github.com/ydwu4
ghstack dependencies: #114764
2023-12-06 16:46:47 +00:00
angelayi
53bf8cfcf9
[export][refactor][1/n] Move dynamic shapes logic ( #114764 )
...
1/n of refactoring export code:
* Moved dynamic shapes/constraints/dynamic_dims logic in torch/_export/__init__.py and torch/export/__init__.py to torch/export/dynamic_shapes.py
Differential Revision: [D51823962](https://our.internmc.facebook.com/intern/diff/D51823962 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114764
Approved by: https://github.com/ydwu4
2023-12-06 16:46:38 +00:00
Wei Wei
fcf6a76108
[aot_inductor][pass] fuse parallel linear based on pre grad aten IR ( #114776 )
...
Summary:
This work is for PT2 inference. Since the IR from Export will change to pre-grad ATen IR in a few months, we need to start this work now. Here is what I do in this diff:
1) Copy the fuse parallel linear pass to the fb folder and adapt it to ATen IR. We still want to keep the original `group_batch_fusion.py` because it is still used in training. In the future, when PT2 training decides to retire the torch-IR-based group_batch_fusion, we can remove it. But right now, it's better to keep the torch IR and ATen IR versions separate.
Our plan is to gradually transform the existing and important pre-grad passes into ATen-IR-based passes.
Differential Revision: D51017854
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114776
Approved by: https://github.com/zhxchen17
2023-12-06 05:48:20 +00:00
Xuehai Pan
55064a4ef9
[BE] add parentheses to kwargs unpacking func(*args, **(kwargs or {})) ( #115026 )
...
This PR adds parentheses to kwargs unpacking `func(*args, **(kwargs or {}))` for better code readability.
The forms with and without the parentheses are semantically equivalent because they produce the same bytecode.
```console
$ echo "func(*args, **kwargs or {})" | python3 -m dis -
0 0 RESUME 0
1 2 PUSH_NULL
4 LOAD_NAME 0 (func)
6 LOAD_NAME 1 (args)
8 BUILD_MAP 0
10 LOAD_NAME 2 (kwargs)
12 JUMP_IF_TRUE_OR_POP 1 (to 16)
14 BUILD_MAP 0
>> 16 DICT_MERGE 1
18 CALL_FUNCTION_EX 1
20 POP_TOP
22 LOAD_CONST 0 (None)
24 RETURN_VALUE
$ echo "func(*args, **(kwargs or {}))" | python3 -m dis -
0 0 RESUME 0
1 2 PUSH_NULL
4 LOAD_NAME 0 (func)
6 LOAD_NAME 1 (args)
8 BUILD_MAP 0
10 LOAD_NAME 2 (kwargs)
12 JUMP_IF_TRUE_OR_POP 1 (to 16)
14 BUILD_MAP 0
>> 16 DICT_MERGE 1
18 CALL_FUNCTION_EX 1
20 POP_TOP
22 LOAD_CONST 0 (None)
24 RETURN_VALUE
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115026
Approved by: https://github.com/Skylion007
2023-12-03 20:03:26 +00:00
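A small, hedged illustration of why the `or {}` guard exists at all (the functions are made up): unpacking `**None` raises, so call sites that may receive `kwargs=None` need the fallback.
```python
def call_with(func, args, kwargs=None):
    # `**(kwargs or {})` tolerates kwargs=None; `**kwargs` alone would raise
    # "argument after ** must be a mapping" when kwargs is None.
    return func(*args, **(kwargs or {}))


def greet(name, punctuation="!"):
    return f"hello {name}{punctuation}"


print(call_with(greet, ("world",)))                        # hello world!
print(call_with(greet, ("world",), {"punctuation": "?"}))  # hello world?
```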
Xuehai Pan
2a3d8e50fb
[pytree] test aligned API signature for C++ and Python pytree ( #112485 )
...
Add tests to ensure the C++ and Python pytree provide the same APIs with identical signatures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112485
Approved by: https://github.com/zou3519
2023-11-30 17:50:06 +00:00
Zhengxu Chen
e6b3a8ce5f
[export] Refactor export() and separate the non-strict part. ( #114697 )
...
Summary: Refactor torch.export to separate the strict part and the non-strict part, and add an option to torch.export called `strict` (defaulting to `True`).
Test Plan: buck2 test mode/opt caffe2/test:test_export -- -r non_strict
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114697
Approved by: https://github.com/ydwu4 , https://github.com/tugsbayasgalan
2023-11-30 16:47:50 +00:00
Angela Yi
f1fe0b685c
[export] Remove combine_args_kwargs ( #114782 )
...
Test Plan: CI
Differential Revision: D51676479
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114782
Approved by: https://github.com/zhxchen17
2023-11-30 02:49:21 +00:00
Angela Yi
f0cc6364ed
[export] Remove convert_to_cpu flag ( #114775 )
...
Test Plan: CI
Differential Revision: D51674158
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114775
Approved by: https://github.com/zhxchen17 , https://github.com/SherlockNoMad
2023-11-30 01:59:52 +00:00
angelayi
c10893654e
[export] Fix run_decomps to work with fake mode ( #114714 )
...
Fixes https://github.com/pytorch/pytorch/issues/114711
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114714
Approved by: https://github.com/ydwu4 , https://github.com/zhxchen17
2023-11-29 06:52:13 +00:00
Angela Yi
05f071d922
[export] Fix state dict device serialization ( #114695 )
...
Summary:
Fixes https://github.com/pytorch/pytorch/issues/114000
Will check with SherlockNoMad on why we need to convert to cpu after his PTO
Test Plan: CI
Differential Revision: D51629068
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114695
Approved by: https://github.com/ydwu4
2023-11-29 05:05:22 +00:00
Xuehai Pan
89a1fe6966
[pytree] register pytree node type in both C++ pytree and Python pytree ( #112111 )
...
Changes:
1. Add `_private_register_pytree_node` API in both C++ and Python pytree. In C++ pytree, the API will only register pytree node for C++ pytree. In Python pytree, the API will only register pytree node for Python pytree.
2. Do not allow registering a type as pytree node twice in the Python pytree.
3. Add thread lock to the Python pytree node register API.
4. The old `_register_pytree_node` API will call the `_private_register_pytree_node` API and raise a deprecation warning.
5. Add a new `register_pytree_node` API to register node type in both C++ and Python implementations.
6. Add tests to ensure a warning will be raised when the old private function is called.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112111
Approved by: https://github.com/zou3519
2023-11-28 11:41:38 +00:00
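A hedged sketch of registering a custom container type (the `Pair` class and its flatten/unflatten functions are made up), assuming the unified entry point is `torch.utils._pytree.register_pytree_node`:
```python
from dataclasses import dataclass

import torch
import torch.utils._pytree as pytree


@dataclass
class Pair:  # made-up container type
    first: torch.Tensor
    second: torch.Tensor


# flatten_fn returns (children, context); unflatten_fn rebuilds from (children, context)
pytree.register_pytree_node(
    Pair,
    lambda p: ([p.first, p.second], None),
    lambda children, context: Pair(*children),
)

p = Pair(torch.ones(2), torch.zeros(2))
leaves, spec = pytree.tree_flatten(p)
print(leaves)
print(pytree.tree_unflatten(leaves, spec))
```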
Jacob Szwejbka
304ea761f5
[executorch][be] update test_emit to use export ( #114294 )
...
Summary: exir.capture is deprecated. Switch to the blessed path.
Test Plan: fbsource/fbcode/executorch/exir/emit/test (c40a7a0d2)]$ buck test :
Differential Revision: D51503120
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114294
Approved by: https://github.com/zhxchen17
2023-11-28 01:25:46 +00:00
Zhengxu Chen
e0d2a24967
Reland "[export] Support user input mutation. [1/2]" ( #114496 ) ( #114596 )
...
Summary:
Serialization not implemented yet. Will do in the next diff.
Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142
Test Plan:
onnx doc test
```
python -m xdoctest /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py ONNXProgram.model_signature:0
```
Differential Revision: D51588558
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114596
Approved by: https://github.com/angelayi
2023-11-27 20:19:04 +00:00
PyTorch MergeBot
fa1ccc34c4
Revert "[export] Support user input mutation. [1/2] ( #114496 )"
...
This reverts commit b62c0d96bc .
Reverted https://github.com/pytorch/pytorch/pull/114496 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/114496#issuecomment-1827289635 ))
2023-11-27 07:52:21 +00:00
Zhengxu Chen
b62c0d96bc
[export] Support user input mutation. [1/2] ( #114496 )
...
Summary:
Serialization not implemented yet. Will do in the next diff.
Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142
Test Plan:
buck2 run mode/opt caffe2/test:test_export -- -r test_export_input_mutation
Differential Revision: D51556962
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114496
Approved by: https://github.com/tugsbayasgalan
2023-11-27 04:53:38 +00:00
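A hedged sketch of the kind of program this aims to enable (the module is made up): a forward that mutates its input in place, with the mutation reflected in the graph signature.
```python
import torch
from torch.export import export


class AddOneInPlace(torch.nn.Module):  # made-up example module
    def forward(self, x):
        x.add_(1)        # in-place mutation of a user input
        return x + 2


ep = export(AddOneInPlace(), (torch.randn(3),))
# The graph signature records which user inputs are mutated.
print(ep.graph_signature)
```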
Tobias Ringwald
a28876832c
Fixed an export problem when moving tensors to CPU during torch.export.save ( #114029 )
...
For whatever reason, calling `.cpu()` on an `nn.Parameter` wrapping a CUDA tensor will return a plain (non-parameter) tensor. This PR fixes the symptom in the linked issue, but not the underlying issue.
Fixes #113999 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114029
Approved by: https://github.com/zhxchen17
2023-11-23 21:17:43 +00:00
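A small repro of the underlying behavior described above, guarded on CUDA availability:
```python
import torch
from torch import nn

if torch.cuda.is_available():
    p = nn.Parameter(torch.randn(2, device="cuda"))
    moved = p.cpu()
    # .cpu() on a CUDA-backed Parameter returns a plain Tensor, not a Parameter
    print(type(p).__name__, type(moved).__name__)  # Parameter Tensor
```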
PyTorch MergeBot
01366efcc9
Revert "[pytree] register pytree node type in both C++ pytree and Python pytree ( #112111 )"
...
This reverts commit 4e4a6ad6ec .
Reverted https://github.com/pytorch/pytorch/pull/112111 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/112111#issuecomment-1824099658 ))
2023-11-23 09:59:32 +00:00
Yidi Wu
b27565ad7d
Forward fix D51468211 ( #114381 )
...
Summary:
Forward fix test failures caused by D51468211.
The root cause is that when converting the param_buffer into a fake tensor, we didn't set static_shapes=True, which causes the shape_env to have more symbols than expected. The current status is that we assume all params and buffers have constant sizes.
Test Plan: buck2 test 'fbcode//mode/opt' fbcode//aps_models/ads/icvr/tests:export_test_cpu -- --exact 'aps_models/ads/icvr/tests:export_test_cpu - test_20x_icvr_export (aps_models.ads.icvr.tests.export_test.ExportTest)'
Reviewed By: hongtansun-meta
Differential Revision: D51531279
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114381
Approved by: https://github.com/angelayi
2023-11-23 02:58:52 +00:00
Angela Yi
f961bda939
[export] Move serialized custom class objs to toplevel ( #114371 )
...
Summary:
Move the serialized CustomClassHolder objects to the toplevel SerializedArtifact instead of embedding the bytes in the graph.
Currently the CustomClassHolder objects are embedded in the graph instead of being lifted to the ExportedProgram, so there's some logic introduced to lift them to the higher level of the serialized ExportedProgram. However, once the CustomClassHolder objects get lifted, we can remove the TODOs I added.
Test Plan: CI
Reviewed By: zhxchen17
Differential Revision: D51479125
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114371
Approved by: https://github.com/ydwu4
2023-11-22 23:44:20 +00:00