Commit Graph

166 Commits

Author SHA1 Message Date
atalman
24a463c46c Revert "[export][refactor][2/n] Move tracing logic (#114768)" (#115503)
Github first oncall.
This reverts commit 0ab57ee7ea.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115503
Approved by: https://github.com/angelayi, https://github.com/kit1980
2023-12-10 19:30:15 +00:00
angelayi
0ab57ee7ea [export][refactor][2/n] Move tracing logic (#114768)
2/n of refactoring export code:

* Moved tracing logic in torch/_export/__init__.py to torch/export/_tracer.py

Differential Revision: [D51823961](https://our.internmc.facebook.com/intern/diff/D51823961)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114768
Approved by: https://github.com/ydwu4
ghstack dependencies: #114764
2023-12-06 16:46:47 +00:00
Angela Yi
05f071d922 [export] Fix state dict device serialization (#114695)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/114000
Will check with SherlockNoMad on why we need to convert to cpu after his PTO

Test Plan: CI

Differential Revision: D51629068

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114695
Approved by: https://github.com/ydwu4
2023-11-29 05:05:22 +00:00
Zhengxu Chen
e0d2a24967 Reland "[export] Support user input mutation. [1/2]" (#114496) (#114596)
Summary:

Serialization not implemented yet. Will do in the next diff.

Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142

Test Plan:
onnx doc test
```
python -m xdoctest /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py ONNXProgram.model_signature:0
```

Differential Revision: D51588558

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114596
Approved by: https://github.com/angelayi
2023-11-27 20:19:04 +00:00
PyTorch MergeBot
fa1ccc34c4 Revert "[export] Support user input mutation. [1/2] (#114496)"
This reverts commit b62c0d96bc.

Reverted https://github.com/pytorch/pytorch/pull/114496 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/114496#issuecomment-1827289635))
2023-11-27 07:52:21 +00:00
Zhengxu Chen
b62c0d96bc [export] Support user input mutation. [1/2] (#114496)
Summary:
Serialization not implemented yet. Will do in the next diff.

Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142

Test Plan:
buck2 run mode/opt caffe2/test:test_export -- -r test_export_input_mutation

Differential Revision: D51556962

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114496
Approved by: https://github.com/tugsbayasgalan
2023-11-27 04:53:38 +00:00
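A minimal sketch of what input-mutation support enables, assuming the current `torch.export` entry point (the module and shapes here are illustrative, not from the PR):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        x.add_(1)  # in-place mutation of a user input
        return x * 2

# Before this change, export rejected programs that mutate their inputs;
# with it, the mutation is recorded in the graph signature instead.
ep = torch.export.export(M(), (torch.ones(3),))
print(ep.graph_signature)  # the mutated input shows up among the outputs
```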
Angela Yi
9fcf1f9632 [export] Update schema (#114172)
Summary: Will update CustomClassHolder in a followup

Test Plan: CI

Differential Revision: D51343522

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114172
Approved by: https://github.com/zhxchen17
2023-11-22 16:43:43 +00:00
Angela Yi
50101d59ba [export][retry] Move lifted tensors out of state_dict (#113689)
Test Plan: CI

Differential Revision: D51321532

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113689
Approved by: https://github.com/zhxchen17
2023-11-15 09:24:49 +00:00
rzou
c88a36ebce Grandfather in some more pytorch ops to be pt2_compliant (#113050)
We're not directly testing these, but in general the policy is to assume
that PyTorch ops inside the pytorch repo are compliant.

Test Plan:
- existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113050
Approved by: https://github.com/ezyang
2023-11-09 02:35:33 +00:00
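For out-of-repo ops the same effect comes from tagging explicitly; a sketch assuming the `tags=` parameter of `torch.library.Library.define` and the `torch.Tag.pt2_compliant_tag` value in current PyTorch (the `mylib::mysin` op is made up):

```python
import torch

# Define a custom op and tag it as pt2_compliant, mirroring the
# grandfathering applied to in-repo ops in this commit.
lib = torch.library.Library("mylib", "DEF")
lib.define("mysin(Tensor x) -> Tensor", tags=(torch.Tag.pt2_compliant_tag,))
lib.impl("mysin", torch.sin, "CompositeExplicitAutograd")
```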
PyTorch MergeBot
e49b9492c6 Revert "Grandfather in some more pytorch ops to be pt2_compliant (#113050)"
This reverts commit 85832c0b9b.

Reverted https://github.com/pytorch/pytorch/pull/113050 on behalf of https://github.com/PaliC due to breaking internal tests - contacted author with errors ([comment](https://github.com/pytorch/pytorch/pull/113050#issuecomment-1802524046))
2023-11-08 19:33:15 +00:00
rzou
85832c0b9b Grandfather in some more pytorch ops to be pt2_compliant (#113050)
We're not directly testing these, but in general the policy is to assume
that PyTorch ops inside the pytorch repo are compliant.

Test Plan:
- existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113050
Approved by: https://github.com/ezyang
ghstack dependencies: #113049
2023-11-07 12:55:16 +00:00
PyTorch MergeBot
19dbd8aca3 Revert "Grandfather in some more pytorch ops to be pt2_compliant (#113050)"
This reverts commit efae8449a8.

Reverted https://github.com/pytorch/pytorch/pull/113050 on behalf of https://github.com/clee2000 due to something in the stack broke distributed and inductor, pretty sure its the c10 one ([comment](https://github.com/pytorch/pytorch/pull/113050#issuecomment-1797279756))
2023-11-07 02:30:42 +00:00
rzou
efae8449a8 Grandfather in some more pytorch ops to be pt2_compliant (#113050)
We're not directly testing these, but in general the policy is to assume
that PyTorch ops inside the pytorch repo are compliant.

Test Plan:
- existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113050
Approved by: https://github.com/ezyang
ghstack dependencies: #113036, #113049
2023-11-06 23:43:31 +00:00
Kazuaki Ishizaki
9089242048 Fix typo under test directory (#112346)
This PR fixes typos in comments and messages under the `test` directory. It also fixes related typos in messages under the `torch` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112346
Approved by: https://github.com/kit1980, https://github.com/ezyang
2023-11-03 07:53:33 +00:00
Janet Yang
ef1f08c5a0 State_dict serialization for meta tensors (#112213)
Summary: Add cases for serializing meta tensors from state_dict

Test Plan: sandcastle

Differential Revision: D50718161

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112213
Approved by: https://github.com/zhxchen17, https://github.com/houseroad
2023-11-01 01:07:09 +00:00
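For context, a meta tensor carries shape/dtype but no storage, which is why serialization needs a dedicated case; a tiny illustration:

```python
import torch

# A state_dict entry on the meta device has metadata but no bytes to write
sd = {"weight": torch.empty(3, 4, device="meta")}
print(sd["weight"].shape, sd["weight"].device)  # torch.Size([3, 4]) meta
```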
Tugsbayasgalan Manlaibaatar
cf3aa985a9 Don't rewrite assert in pytest (#112436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112436
Approved by: https://github.com/angelayi
2023-10-30 23:20:02 +00:00
Peter Bell
bbd5b935e4 Use pytree.tree_leaves everywhere (#112324)
This changes all the instances I could find of `tree_flatten(...)[0]` or
`x, _ = tree_flatten` to use `tree_leaves`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323
2023-10-30 03:39:04 +00:00
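A small before/after sketch of the pattern being replaced:

```python
import torch.utils._pytree as pytree

tree = {"a": [1, 2], "b": (3, {"c": 4})}

# Before: flatten and throw away the spec
leaves, _ = pytree.tree_flatten(tree)

# After: ask for the leaves directly
assert pytree.tree_leaves(tree) == leaves  # [1, 2, 3, 4]
```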
ydwu4
8bc0b382fa [HigherOrderOp] Move map_impl to torch.ops.higher_order (#111404)
The purpose of this PR is as titled. Because of some misuse of ghstack, ghimport, and export-to-GitHub from internal, the stack of https://github.com/pytorch/pytorch/pull/111092 is a mess. I'll try to land the changes one by one. This is a replacement for #111092 and #111400.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111404
Approved by: https://github.com/tugsbayasgalan, https://github.com/zou3519
2023-10-26 16:59:10 +00:00
Zhengxu Chen
f2a0bef35a [export] Upstream support of (tensor, tensor list) in op returns. (#111857)
Summary:
Upstreaming from internal to oss.
Diff: D49710320

Test Plan: buck2 build mode/opt sigmoid/inference/test_gpu:package_gen

Differential Revision: D50577490

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111857
Approved by: https://github.com/SherlockNoMad
2023-10-25 21:38:12 +00:00
Sherlock Huang
4d45c21c3f [Export] Don't serialize missing args with default value (#111715)
Summary: Per https://docs.google.com/document/d/1FzWm-sHYwmRi3x_g036kOxd99KaYquUsA-L5JwOn8ys/edit

I wonder if this would break executorch? @larryliu0820
I see exir/serialize.py using export's GraphModuleSerializer.

Test Plan: Existing CIs

Differential Revision: D50519217

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111715
Approved by: https://github.com/zhxchen17
2023-10-23 21:09:15 +00:00
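A hypothetical sketch of the rule (the `has_default_value`/`default_value` attributes are assumptions for illustration, not the serializer's actual fields):

```python
def serialize_args(schema_args, concrete_args):
    # Walk from the end, dropping trailing args that equal their declared
    # default; earlier args must stay so positions remain unambiguous.
    out = list(concrete_args)
    for schema_arg in reversed(schema_args[: len(out)]):
        if schema_arg.has_default_value and out[-1] == schema_arg.default_value:
            out.pop()
        else:
            break
    return out
```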
Zhengxu Chen
ba7b9211ee [export] Update serialization schema to input/output specs. (#845) (#111204)
Summary: Pull Request resolved: https://github.com/pytorch/executorch/pull/845

Test Plan: CI

Differential Revision: D50191531

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111204
Approved by: https://github.com/angelayi
2023-10-13 22:19:56 +00:00
Tugsbayasgalan Manlaibaatar
5614023f5e Move export.constrain_as_* to torch._constrain_as_* (#110757)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110757
Approved by: https://github.com/avikchaudhuri
ghstack dependencies: #109859
2023-10-12 05:37:44 +00:00
PyTorch MergeBot
6ce3a38050 Revert "Move export.constrain_as_* to torch._constrain_as_* (#110757)"
This reverts commit 5aee22e0e0.

Reverted https://github.com/pytorch/pytorch/pull/110757 on behalf of https://github.com/kit1980 due to Depends on https://github.com/pytorch/pytorch/pull/109859 that needs to be reverted ([comment](https://github.com/pytorch/pytorch/pull/110757#issuecomment-1758908371))
2023-10-12 04:53:29 +00:00
Angela Yi
6d8e0c4b5a [export] Get export APIs ready for PTC (reland) (#111030)
Summary:
https://docs.google.com/document/d/1QJJEGnj2nHGPODlw38BEG3KLLCOTfdOVjPrNQbz_LM8/edit#bookmark=id.lp80wfshq130
Changes:
* `torch.export` will return a functional ATen graph that is not lowered to core ATen decompositions (CompositeImplicitAutograd decomps still run)
* `exported_program.run_decompositions(decomposition_table)` will optionally take a decomposition table, and run decompositions on the exported program, returning a new exported program. By default we will run the Core ATen decomposition table.

Calling convention for Executorch stays the same:
```
pre_autograd_graph = capture_pre_autograd_graph(f, args, ...)
aten_graph_no_decomps = torch.export.export(pre_autograd_graph, args, ...)
# Within to_edge we decompose to core aten and then convert to edge
edge_graph = exir.to_edge(aten_graph_no_decomps)
```

Test Plan: CI

Differential Revision: D50172210

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111030
Approved by: https://github.com/ydwu4
2023-10-11 20:48:24 +00:00
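A minimal sketch of the resulting two-step flow (module and inputs are illustrative; `core_aten_decompositions` is the default table mentioned above):

```python
import torch
from torch._decomp import core_aten_decompositions

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.silu(x)

# Step 1: export produces a functional ATen graph without core decomps
ep = torch.export.export(M(), (torch.randn(4),))

# Step 2: lower to the Core ATen opset on demand
ep_core = ep.run_decompositions(core_aten_decompositions())
```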
PyTorch MergeBot
98c329b19e Revert "[core ATen IR] Add decompositions for max, min, var_mean (#110906)"
This reverts commit 9606cda64e.

Reverted https://github.com/pytorch/pytorch/pull/110906 on behalf of https://github.com/SS-JIA due to Breaks internal CI ([comment](https://github.com/pytorch/pytorch/pull/110906#issuecomment-1757490740))
2023-10-11 11:41:21 +00:00
PyTorch MergeBot
0821868110 Revert "[export] Get export APIs ready for PTC (#110410)"
This reverts commit b96ea9f361.

Reverted https://github.com/pytorch/pytorch/pull/110410 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/110410#issuecomment-1757017249))
2023-10-11 07:31:51 +00:00
Angela Yi
b96ea9f361 [export] Get export APIs ready for PTC (#110410)
Summary:
https://docs.google.com/document/d/1QJJEGnj2nHGPODlw38BEG3KLLCOTfdOVjPrNQbz_LM8/edit#bookmark=id.lp80wfshq130
Changes:
* `torch.export` will return a functional ATen graph w/o decompositions
* `exported_program.run_decompositions(decomposition_table)` will optionally take a decomposition table, and run decompositions on the exported program, returning a new exported program. By default we will run the Core ATen decomposition table.

Calling convention for Executorch stays the same:
```
pre_autograd_graph = capture_pre_autograd_graph(f, args, ...)
aten_graph_no_decomps = torch.export.export(pre_autograd_graph, args, ...)
# Within to_edge we decompose to core aten and then convert to edge
edge_graph = exir.to_edge(aten_graph_no_decomps)
```

Test Plan: CI

Differential Revision: D49742989

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110410
Approved by: https://github.com/ydwu4
2023-10-11 06:10:07 +00:00
Tugsbayasgalan Manlaibaatar
5aee22e0e0 Move export.constrain_as_* to torch._constrain_as_* (#110757)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110757
Approved by: https://github.com/avikchaudhuri
ghstack dependencies: #109859
2023-10-11 02:37:55 +00:00
SS-JIA
9606cda64e [core ATen IR] Add decompositions for max, min, var_mean (#110906)
## Context

Add decompositions for `aten.max`, `aten.min`, and `aten.var_mean`. These operators follow a pattern of returning a tuple of outputs from two component operators:

```
aten.max(x) -> return aten.amax(x), aten.argmax(x)
aten.min(x) -> return aten.amin(x), aten.argmin(x)
aten.var_mean(x) -> return aten.var(x), aten.mean(x)
```

For `var_mean`, the `refs` implementation was doing something similar, so I changed it to call `torch.` ops instead, as was done for other `refs` implementations previously. cc: @peterbell10 @lezcano

Note that Inductor lowers all these directly, so they are excluded from the Inductor decomp table.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110906
Approved by: https://github.com/manuelcandales
2023-10-11 00:06:24 +00:00
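A hedged sketch of the `var_mean` pattern (signature simplified; the in-tree decomposition handles more cases):

```python
import torch

def var_mean_decomp(x, dim, *, correction=1, keepdim=False):
    # One tuple-returning op expressed via its two component ops
    return (
        torch.var(x, dim, correction=correction, keepdim=keepdim),
        torch.mean(x, dim, keepdim=keepdim),
    )

x = torch.randn(4, 5)
expected = torch.var_mean(x, dim=1)
actual = var_mean_decomp(x, dim=1)
assert torch.allclose(expected[0], actual[0])
assert torch.allclose(expected[1], actual[1])
```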
Stephen Jia
ff96f6d04f [core IR][reland] Add split.Tensor and unbind decompositions to core ATen decomp table (#110323)
Summary:
This is a reland of [GitHub PR #110102](https://github.com/pytorch/pytorch/pull/110102).

The original PR had to be unlanded due to internal CI failures. This diff applies some small fixes to the failing tests to adjust to the new decompositions.

Note that `lift_fresh` will not be decomposed for now, since it was found that [constant propagation looks specifically for `lift_fresh`](13af952f94/torch/fx/experimental/proxy_tensor.py (L381-L386)). Therefore decomposing `lift_fresh` will interfere with constant propagation during export.

Test Plan: Github CI and internal CI

Differential Revision: D49761321

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110323
Approved by: https://github.com/jansel
2023-10-03 14:35:04 +00:00
PyTorch MergeBot
e0b035c220 Revert "[core IR] Add lift_fresh, split.Tensor, and unbind decompositions to core ATen decomp table (#110102)"
This reverts commit 22e706f768.

Reverted https://github.com/pytorch/pytorch/pull/110102 on behalf of https://github.com/atalman due to Breaks internal CI ([comment](https://github.com/pytorch/pytorch/pull/110102#issuecomment-1739856671))
2023-09-28 19:03:25 +00:00
ydwu4
5f7eff0adb Replace node.meta source_fn with source_fn_stack (#108595)
A resubmit of https://github.com/pytorch/pytorch/pull/108447, copying over the description:

This is a follow-up of the discussion in https://github.com/pytorch/pytorch/pull/108356, where we want to replace source_fn with source_fn_stack.

Before this PR, for the following example:
```python
backend = EagerAndRecordGraphs()

@torch.compile(backend=backend, fullgraph=True)
def cond_f(pred, pred2, x, y):
    def true_fn(pred2, x, y):
        return x + y

    def false_fn(pred2, x, y):
        def true_fn2(x, y):
            return x.sin() - y.cos()

        def false_fn2(x, y):
            return x.cos() - y.sin()

        return control_flow.cond(pred2, true_fn2, false_fn2, (x, y))

    return control_flow.cond(pred, true_fn, false_fn, (pred2, x, y))
```
The graph captured is shown below:
```python
class GraphModule(torch.nn.Module):
    def forward(self, L_pred_ : torch.Tensor, L_pred2_ : torch.Tensor, L_x_ : torch.Tensor, L_y_ : torch.Tensor):
        l_pred_ = L_pred_
        l_pred2_ = L_pred2_
        l_x_ = L_x_
        l_y_ = L_y_

        cond_true_1 = self.cond_true_1
        cond_false_1 = self.cond_false_1
        cond = torch.ops.higher_order.cond(l_pred_, cond_true_1, cond_false_1, [l_pred2_, l_x_, l_y_]);  l_pred_ = cond_true_1 = cond_false_1 = l_pred2_ = l_x_ = l_y_ = None
        return (cond,)

    class GraphModule(torch.nn.Module):
        def forward(self, l_pred2_, l_x_, l_y_):
            add = l_x_ + l_y_;  l_x_ = l_y_ = None
            return add

    class GraphModule(torch.nn.Module):
        def forward(self, l_pred2_, l_x_, l_y_):
            cond_true_0 = self.cond_true_0
            cond_false_0 = self.cond_false_0
            cond = torch.ops.higher_order.cond(l_pred2_, cond_true_0, cond_false_0, [l_x_, l_y_]);  l_pred2_ = cond_true_0 = cond_false_0 = l_x_ = l_y_ = None
            return cond

        class GraphModule(torch.nn.Module):
            def forward(self, l_x_, l_y_):
                sin = l_x_.sin();  l_x_ = None
                cos = l_y_.cos();  l_y_ = None
                sub = sin - cos;  sin = cos = None
                return sub

        class GraphModule(torch.nn.Module):
            def forward(self, l_x_, l_y_):
                cos = l_x_.cos();  l_x_ = None
                sin = l_y_.sin();  l_y_ = None
                sub = cos - sin;  cos = sin = None
                return sub
```
the source_fn for inner cond, sin, cos will be a (name, target) tuple:
```
('cond', <torch._ops.HigherOrderOperator object at xxx>)
('sin', 'sin')
('cos', 'cos')
('sub', <built-in function sub>)
```

After this PR, the source_fn_stack will be a list of (name, target) tuples. The bottom of the stack is the end of the list.
```
[('cond', <torch._ops.HigherOrderOperator object at xxx>), ('cond', <torch._ops.HigherOrderOperator object at xxx>)],
[('cond', <torch._ops.HigherOrderOperator object at xxx>), ('cond', <torch._ops.HigherOrderOperator object at xxx>), ('sin', 'sin')],
[('cond', <torch._ops.HigherOrderOperator object at xxx>), ('cond', <torch._ops.HigherOrderOperator object at xxx>), ('cos', 'cos')],
[('cond', <torch._ops.HigherOrderOperator object at xxx>), ('cond', <torch._ops.HigherOrderOperator object at xxx>), ('sub', <built-in function sub>)]
```

Test Plan:
See added tests in test_higher_order_ops.py and modified existing tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108595
Approved by: https://github.com/angelayi, https://github.com/zou3519
2023-09-28 18:18:36 +00:00
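To inspect the new metadata, one can run `cond_f` once and walk the recorded graph; a sketch continuing the example above (assumes `EagerAndRecordGraphs` from `torch._dynamo.testing`, which stores captured graphs in a `graphs` list):

```python
import torch

pred, pred2 = torch.tensor(True), torch.tensor(False)
x, y = torch.randn(3), torch.randn(3)
cond_f(pred, pred2, x, y)  # run once so the backend records the graph

for node in backend.graphs[0].graph.nodes:
    stack = node.meta.get("source_fn_stack")
    if stack:
        print(node.name, [name for name, _ in stack])
```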
Avik Chaudhuri
5da5e068f3 deprecate constraints in favor of dynamic_shapes (#110143)
Recently we updated the `export` API to take an experimental `dynamic_shapes` argument that was meant to subsume the existing `constraints` argument.

This PR deprecates `constraints` (with a warning on its use, but without actually removing it). Simultaneously it replaces all uses of `constraints` in docs, examples, and tests with corresponding uses of `dynamic_shapes` (preserving behavior). This exercise fortunately revealed some minor bugs in the implementation which have also been fixed in this PR.

Some uses of `constraints` still remain, e.g., when `torch._dynamo.export` is called directly. (Meta-internal uses will be updated in a separate diff.)

Differential Revision: D49676049

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110143
Approved by: https://github.com/tugsbayasgalan
2023-09-28 10:26:21 +00:00
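A minimal sketch of the new style (module and dim names are illustrative):

```python
import torch
from torch.export import Dim

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

batch = Dim("batch")
# New style: pass dynamic_shapes instead of the deprecated constraints
ep = torch.export.export(
    M(), (torch.randn(4, 3),), dynamic_shapes={"x": {0: batch}}
)
```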
SS-JIA
22e706f768 [core IR] Add lift_fresh, split.Tensor, and unbind decompositions to core ATen decomp table (#110102)
## Context

Add existing decomps for `lift_fresh`, `split.Tensor`, and `unbind` to the core ATen decomposition table. These are not added to the Inductor decomp table, since Inductor currently lowers these ops directly.

One note though is that `lift_fresh`'s decomposition has a note saying it's not correct under autograd. However, my understanding is that these decompositions are registered to the `"post_autograd"` decomposition table, meaning autograd wouldn't be a factor. I would like some confirmation that this premise is correct.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110102
Approved by: https://github.com/jansel
2023-09-28 01:21:45 +00:00
Zhengxu Chen
138fafe72d [export] Fix torch.export() issues for server use cases. (#108275)
Test Plan: In D48788843

Differential Revision: D48811793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108275
Approved by: https://github.com/tugsbayasgalan
2023-08-31 07:19:18 +00:00
angelayi
a432f37e49 Serialize pytree to json string (#106116)
Fixes https://github.com/pytorch/pytorch/pull/102577#issuecomment-1650905536

Serializing to JSON is more stable. Also renamed the API:

```
# Takes in a treespec and returns the serialized treespec as a string. Also optionally takes in a protocol version number.
def treespec_dumps(treespec: TreeSpec, protocol: Optional[int] = None) -> str:
# Takes in a serialized treespec and outputs a TreeSpec
def treespec_loads(data: str) -> TreeSpec:
```

If users want to register their own serialization format for a given pytree, they can go through the `_register_treespec_serializer` API which optionally takes in a `getstate` and `setstate` function.
```
_register_treespec_serializer(type_, *, getstate, setstate)
# Takes in the context, and outputs a json-dumpable context
def getstate(context: Context) -> DumpableContext:
# Takes in a json-dumpable context, and reconstructs the original context
def setstate(dumpable_context: DumpableContext) -> Context:
```

We will serialize to the following dataclass, and then json.dump it to a string.
```
class TreeSpec:
    type: Optional[str]  # a string name of the type; None for a LeafSpec
    context: Optional[Any]  # optional, a json-dumpable format of the context
    children_specs: List[TreeSpec]
```

If no getstate/setstate function is registered, we will by default serialize the context using `json.dumps/loads`. We will also serialize the type through `f"{typ.__module__}.{typ.__name__}"`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106116
Approved by: https://github.com/zou3519
2023-08-27 14:34:49 +00:00
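A quick usage sketch of the renamed API (same private `torch.utils._pytree` module as in the PR):

```python
import torch.utils._pytree as pytree

_, spec = pytree.tree_flatten({"a": 1, "b": (2, 3)})

s = pytree.treespec_dumps(spec)        # JSON string, stable across processes
assert pytree.treespec_loads(s) == spec
```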
zhxchen17
162109f6c2 [export] Don't save example_inputs for now. (#107978)

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107978
Approved by: https://github.com/angelayi
2023-08-26 14:36:56 +00:00
angelayi
4e9d7f878b [export] Serialize getattr nodes (#107924)
Turns out some graphs will result in getattr nodes...so let's serialize them
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107924
Approved by: https://github.com/zhxchen17, https://github.com/avikchaudhuri
2023-08-26 02:41:49 +00:00
angelayi
4b44b1861d [export] Store the arguments used to trace the exported program in itself (#107906)
A proper fix would be to do something like https://github.com/pytorch/pytorch/pull/107877, but since that depends on internal changes and it would take too long for the diff train to land, we will first just make OSS work using torch.save.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107906
Approved by: https://github.com/gmagogsfm
2023-08-25 16:04:58 +00:00
angelayi
1e71c51350 [export] Serialize map correctly (#107837)
Summary: Previously, serializing graphs that use map would error
because map returns a singleton tensor list rather than a
single tensor. This diff adds support for higher order operators
that return a list of tensors as output.

We also run into an issue with roundtripping the source_fn on
map nodes/subgraphs. The source_fn originally is
<functorch.experimental._map.MapWrapper object at 0x7f80a0549930>, which
serializes to `functorch.experimental._map.map`. However, we are unable
to construct the function from this string. This should be fixed once
map becomes a fully supported operator like
torch.ops.higher_order.cond.


Differential Revision: [D48631302](https://our.internmc.facebook.com/intern/diff/D48631302)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107837
Approved by: https://github.com/zhxchen17
ghstack dependencies: #107818
2023-08-24 06:47:50 +00:00
angelayi
1166f9a02c [export] Custom object serialization (#107666)
Some Nvidia TensorRT folks were asking for a way to integrate the serialization of custom objects with export's serialization. After some discussion (more background [here](https://docs.google.com/document/d/1lJfxakmgeoEt50inWZ53MdUtOSa_0ihwCuPy_Ak--wc/edit)), we settled on a way for users to register their custom object's serializer/deserializer functions.

Since TorchScript's `.def_pickle` already exists for [registering custom classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html), and `tensorrt.ICudaEngine` already contains a `.def_pickle` implementation, we'll start off by reusing the existing framework and integrating it with export's serialization.

TorchScript's `.def_pickle` requires users to register two functions, which end up being the `__getstate__` and `__setstate__` methods on the class. The semantics of `__getstate__` and `__setstate__` in TorchScript are equivalent to that of Python pickle modules. This is then registered using pybind's `py::pickle` function [here](https://www.internalfb.com/code/fbsource/[f44e048145e4697bccfaec300798fce7daefb858]/fbcode/caffe2/torch/csrc/jit/python/script_init.cpp?lines=861-916) to be used with Python's pickle to initialize a ScriptObject with the original class, and set the state back to what it used to be.

I attempted to call `__getstate__` and `__setstate__` directly, but I couldn't figure out how to initialize the object on which `__setstate__` would be called in Python. One option would be to create a `torch._C.ScriptObject` and then set the class and call `__setstate__`, but there is no constructor exposed for ScriptObjects. Another option would be to construct an instance of the serialized class itself, but if the class constructor required arguments, I wouldn't know what to initialize it with. ScriptObject's `py::pickle` registration directly creates the object [here](https://www.internalfb.com/code/fbsource/[f44e048145e4697bccfaec300798fce7daefb858]/fbcode/caffe2/torch/csrc/jit/python/script_init.cpp?lines=892-906), which is why I was thinking that just directly using Python's `pickle` will be OK since it is handled there.

So, what I did is that I check if the object is pickle-able, meaning it contains `__getstate__` and `__setstate__` methods, and if so, I serialize it with Python's pickle. TorchScript does have its own implementation of [pickle/unpickle](https://www.internalfb.com/code/fbsource/[59cbc569ccbcaae0db9ae100c96cf0bae701be9a][history]/fbcode/caffe2/torch/csrc/jit/serialization/pickle.h?lines=19%2C82), but it doesn't seem to have pybinded functions callable from python.

A question is -- is it ok to combine this pickle + json serialization?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107666
Approved by: https://github.com/gmagogsfm
2023-08-24 06:36:23 +00:00
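A rough sketch of the pickle-ability check described above (the function name, error message, and exact check are illustrative assumptions, not the PR's code):

```python
import pickle

def serialize_custom_object(obj) -> bytes:
    # Simplified check: classes registered via TorchScript's .def_pickle
    # expose __getstate__/__setstate__, so Python's pickle can handle them.
    # (On Python 3.11+ every object has a default __getstate__, so a real
    # check must verify the methods come from the scripted class itself.)
    if hasattr(obj, "__getstate__") and hasattr(obj, "__setstate__"):
        return pickle.dumps(obj)
    raise RuntimeError(
        f"{type(obj)} is not serializable; register .def_pickle on the class"
    )
```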
angelayi
7bab98f161 [export] Serialize cond submodules (#107818)
Cond submodules only return a single tensor, which was not supported by the serializer: the serializer assumes that the graph always returns a list. This is true for the top-level graph from dynamo, but not for the subgraphs.

Differential Revision: [D48622687](https://our.internmc.facebook.com/intern/diff/D48622687)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107818
Approved by: https://github.com/avikchaudhuri
2023-08-24 02:29:26 +00:00
angelayi
431d25a141 [export] Add save/load function (#107309)
Added the following APIs:

```
def save(
    ep: ExportedProgram,
    f: Union[str, pathlib.Path, io.BytesIO],
    extra_files: Optional[Dict[str, Any]] = None,
    opset_version: Optional[Dict[str, int]] = None,
) -> None:
    """
    Saves a version of the given exported program for use in a separate process.
    Args:
        ep (ExportedProgram): The exported program to save.
        f (str): A file-like object (has to implement write and flush)
            or a string containing a file name.
        extra_files (Optional[Dict[str, Any]]): Map from filename to contents
            which will be stored as part of f.
        opset_version (Optional[Dict[str, int]]): A map of opset names
            to the version of this opset
    """

def load(
    f: Union[str, pathlib.Path, io.BytesIO],
    extra_files: Optional[Dict[str, Any]] = None,
    expected_opset_version: Optional[Dict[str, int]] = None,
) -> ExportedProgram:
    """
    Loads an ExportedProgram previously saved with torch._export.save
    Args:
        f (str): A file-like object (has to implement read and seek)
            or a string containing a file name.
        extra_files (Optional[Dict[str, Any]]): The extra filenames given in
            this map would be loaded and their content would be stored in the
            provided map.
        expected_opset_version (Optional[Dict[str, int]]): A map of opset names
            to expected opset versions
    Returns:
        An ExportedProgram object
    """
```

Example usage:
```
# With buffer
buffer = io.BytesIO()
torch._export.save(ep, buffer)
buffer.seek(0)
loaded_ep = torch._export.load(buffer)

# With file
with tempfile.NamedTemporaryFile() as f:
    torch._export.save(ep, f)
    f.seek(0)
    loaded_ep = torch._export.load(f)

# With Path
with TemporaryFileName() as fname:
    path = pathlib.Path(fname)
    torch._export.save(ep, path)
    loaded_ep = torch._export.load(path)

# Saving with extra files
buffer = io.BytesIO()
save_extra_files = {"extra.txt": "moo"}
torch._export.save(ep, buffer, save_extra_files)
buffer.seek(0)
load_extra_files = {"extra.txt": ""}
loaded_ep = torch._export.load(buffer, load_extra_files)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107309
Approved by: https://github.com/avikchaudhuri, https://github.com/gmagogsfm, https://github.com/tugsbayasgalan
2023-08-22 08:25:19 +00:00
angelayi
a5efb5eb84 [export] Serialize constrain_as_size ops (#107386)
Since constrain_as_size has been fixed, I tried serializing it, but ran into some issues.
Notably, after each `.transform` call, I added a helper `_get_updated_range_constraints` to update the range constraints. This is because when we retrace in a pass, the symbolic values being used change, so we need to update this mapping.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107386
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-08-21 15:24:11 +00:00
angelayi
63e9b5481d [export] Add schema version to serializer/deserializer (#107420)
Added a version number to the schema for BC issues. We will add this number to the serialized ExportedProgram, and when deserializing, if the number does not match up with the existing deserializer, we will error. We should update the number if there are any major changes to the schema.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107420
Approved by: https://github.com/zhxchen17
2023-08-21 06:56:46 +00:00
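A hypothetical sketch of the version check (constant and function names are illustrative):

```python
SCHEMA_VERSION = 1  # bump on any breaking schema change

def check_schema_version(serialized_version: int) -> None:
    # Deserialization refuses artifacts written under a different schema
    if serialized_version != SCHEMA_VERSION:
        raise RuntimeError(
            f"Serialized schema version {serialized_version} does not "
            f"match expected version {SCHEMA_VERSION}"
        )
```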
Zhengxu Chen
2dbadd1eae [export] Remove experimental runtime assertion configs from export API. (#105043)
Test Plan: CI

Differential Revision: D47390794

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105043
Approved by: https://github.com/larryliu0820
2023-07-26 16:21:29 +00:00
Angela Yi
8bf253ecce [export] Remove eliminate_dead_code (#105875)
Summary: Sometimes the graph that is being serialized contains nodes with side effects + no users (ex. out variants of operators), so we don't want to eliminate those when deserializing.

Test Plan: CI

Differential Revision: D47735009

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105875
Approved by: https://github.com/ydwu4
2023-07-25 05:37:44 +00:00
angelayi
fed8d3608d Update core aten decomp table (#105673)
Updated the decomposition table based on the existing [Core ATen IR](https://pytorch.org/docs/stable/ir.html) list, and moved the rest of the decompositions to Inductor's decomposition table.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105673
Approved by: https://github.com/SherlockNoMad
2023-07-21 02:45:37 +00:00
ydwu4
6abb8c382c [export] add kwargs support for export. (#105337)
Solving #105242.

During export, the exported function's signature changes multiple times. Suppose we'd like to export f as shown in the following example:
```python
def f(arg1, arg2, kw1, kw2):
  pass

args = (arg1, arg2)
kwargs =  {"kw2":arg3, "kw1":arg4}

torch.export(f, args, kwargs)
```
The signature changes multiple times during the export process in the following order:
1. **gm_torch_level = dynamo.export(f, *args, \*\*kwargs)**. In this step, we turn all kinds of parameters, such as **positional_only**, **var_positional**, **kw_only**, and **var_kwargs**, into **positional_or_kw**. It also preserves the positional and keyword argument names of the original function (i.e. f in this example) [here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/export.py#L546C13-L546C27). The order of kwargs will be the **key order** of kwargs (after Python 3.6, this is the insertion order of keys) instead of the order in the original function signature, and this order is baked into an _orig_args variable of gm_torch_level's pytree info. So we'll have:
```python
def gm_torch_level(arg1, arg2, kw2, kw1)
```
This difference is acceptable as it's transparent to users of export.

2. **gm_aot_export = aot_export_module(gm_torch_level, pos_or_kw_args)**. In this step, we need to turn kwargs into positional args in the order that gm_torch_level expects, which is stored in _orig_args. The returned gm_aot_export has the graph signature of flat_args, in_spec = pytree.tree_flatten(pos_or_kw_args):
``` python
flat_args, _ = pytree.tree_flatten(pos_or_kw_args)
def gm_aot_export(*flat_args)
```

3. **exported_program(*args, \*\*kwargs)**. The exported artifact is exported_program, which is a wrapper over gm_aot_export and has the same calling convention as the original function "f". To do this, we need to (1) specialize the order of kwargs into pos_or_kw_args and (2) flatten pos_or_kw_args into what gm_aot_export expects. We can combine the two steps into one with:
```python
_, in_spec = pytree.tree_flatten((args, kwargs))

# Then during exported_program.__call__(*args, **kwargs)
flat_args  = fx_pytree.tree_flatten_spec((args, kwargs), in_spec)
```
Here kwargs is treated as a normal pytree whose key order is preserved in in_spec.

Implementation-wise, we treat _orig_args in the dynamo-exported graph module as the single source of truth, and kwargs are ordered following it.

Test plan:
See added tests in test_export.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105337
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2023-07-20 19:53:08 +00:00
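A usage sketch against the current public API (module and kwarg names are illustrative; `ExportedProgram.module()` postdates this PR):

```python
import torch

class F(torch.nn.Module):
    def forward(self, x, *, kw1=1.0, kw2=2.0):
        return x * kw1 + kw2

args = (torch.randn(3),)
kwargs = {"kw2": 3.0, "kw1": 4.0}  # insertion order differs from signature

ep = torch.export.export(F(), args, kwargs)
out = ep.module()(*args, **kwargs)  # same calling convention as F
```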
Justin Chu
8a688277a2 [BE] Enable ruff's UP rules and autoformat dynamo / functorch and refs (#105432)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105432
Approved by: https://github.com/ezyang
2023-07-19 13:48:44 +00:00
Tugsbayasgalan Manlaibaatar
1d02106e03 Preserve source_fn or nn_module_stack in the lifted params (#105017)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105017
Approved by: https://github.com/angelayi
2023-07-13 06:03:28 +00:00
zhxchen17
100aff9d4f [export] Deserialize subgraphs. (#103991)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103991
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-26 18:17:44 +00:00
Michael Voznesensky
ec24f1e4cc Simulate treespec flattening/unflattening (#101896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101896
Approved by: https://github.com/jansel, https://github.com/anijain2305
2023-06-23 10:53:15 +00:00
Michael Voznesensky
e5e9d563c2 Lift user defined attributes into inputs for certain cases (user defined types and tensors) (#103386)
(1) Lazy (converts to dynamo variable on access only)
(2) Uses existing side effect/reconstruct tech
(3) not tensor opinionated

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103386
Approved by: https://github.com/jansel
2023-06-20 23:45:19 +00:00
Mengwei Liu
a52b6f086d [export][serde] Add validator to compare deserializer opset version with model opset version (#103691)
This PR adds a validator to compare the model opset version with the deserializer opset version. It currently raises an exception if any of the versions don't match.

Note: the validator only prints a warning if an op namespace in the model is missing from the deserializer.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103691
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-16 01:36:43 +00:00
Angela Yi
90ef8d58cf [export] Serialize metadata (#103274)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103274
Approved by: https://github.com/zhxchen17
2023-06-15 17:34:12 +00:00
Angela Yi
8dc6001057 [export] Serialize symbolic values (#103273)
* Modified the SymInt schema to also store the hint of the SymInt if it is represented as a symbol so that when we reconstruct the SymInt, the hint will also exist on the node.
* GraphModuleDeserializer.deserialize now also optionally takes a map of symbol names to ranges.

ReplaceSymSizeOpPass should not be needed after https://github.com/pytorch/pytorch/pull/103107 lands

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103273
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-13 20:29:47 +00:00
Angela Yi
e930c0fc35 [export] Initial deserialization v2 (#102716)
v2 of https://github.com/pytorch/pytorch/pull/102126. Mentally stacked on top of https://github.com/pytorch/pytorch/pull/102707

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102716
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-07 16:02:35 +00:00
Angela Yi
3a385656b5 [export] Initial serialization v2 (#102707)
v2 of https://github.com/pytorch/pytorch/pull/102125 because of git issues
corresponding deserialization diff: https://github.com/pytorch/pytorch/pull/102716

Implementing serialization of the exported program to a Python dataclass, and then from that dataclass to JSON. This is split into a couple of sections:
- `serialize(ep: ep.ExportedProgram, opset_version: Dict[str, int]) -> Tuple[bytes, bytes]` -- takes an exported program object, a dictionary mapping opset namespaces to versions, and returns the serialized exported program in bytes, and separately the state dict serialized in bytes
- `GraphModuleSerializer` class that serializes torch.fx.GraphModule to the schema.GraphModule dataclass
- `ExportedProgramSerializer` class that serializes torch._export.exported_program.ExportedProgram to the schema.ExportedProgram dataclass

Serialization TODOs:
- [x] pytree spec: https://github.com/pytorch/pytorch/pull/102577
- [ ] higher order ops
- [ ] node metadata (specifically nn_module_stack/source_fn)
- [ ] constraints
- [ ] graph module metadata

The tests are not super comprehensive, but that's because I think it'll be better tested + easier to test once deserialization is implemented.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102707
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-06 05:12:49 +00:00
Angela Yi
c4028de462 [export] ExportedProgram (#102259)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102259
Approved by: https://github.com/ydwu4, https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan, https://github.com/zhxchen17
2023-05-26 23:36:38 +00:00
Zhengxu Chen
351c2ea2fb [export] Prototype on serialization schema. (#101899)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101899
Approved by: https://github.com/angelayi
2023-05-21 06:31:53 +00:00
Angela Yi
ba47a2b227 [export] Pickle of ExportGraphModule (#100924)
Try 2 of the reland of https://github.com/pytorch/pytorch/pull/100620 because of a merge conflict 😭...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100924
Approved by: https://github.com/tugsbayasgalan
2023-05-09 16:58:24 +00:00
PyTorch MergeBot
f42eae4755 Revert "[export] Pickle of ExportGraphModule (#100620)"
This reverts commit d4975a5fe0.

Reverted https://github.com/pytorch/pytorch/pull/100620 on behalf of https://github.com/clee2000 due to broke export/test_serialize.py::TestSerialize::test_pickle_dynamic across various jobs d4975a5fe0, i think you hit another landrace? ([comment](https://github.com/pytorch/pytorch/pull/100620#issuecomment-1536643519))
2023-05-05 18:52:48 +00:00
Angela Yi
d4975a5fe0 [export] Pickle of ExportGraphModule (#100620)
Reland of https://github.com/pytorch/pytorch/pull/100423 because of a merge conflict...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100620
Approved by: https://github.com/mergennachin
2023-05-05 18:21:39 +00:00
PyTorch MergeBot
c4fd76e7b4 Revert "[export] Pickle result of export (#100423)"
This reverts commit 7226dbcbce.

Reverted https://github.com/pytorch/pytorch/pull/100423 on behalf of https://github.com/angelayi due to merge conflict ([comment](https://github.com/pytorch/pytorch/pull/100423#issuecomment-1534163373))
2023-05-04 06:41:06 +00:00
Angela Yi
7226dbcbce [export] Pickle result of export (#100423)
Pickles the metadata["val"] into a TensorMetadata struct so that it'll be retained when we unpickle.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100423
Approved by: https://github.com/mergennachin
2023-05-04 06:37:16 +00:00
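In today's terms, plain `torch.fx` graph modules already round-trip through pickle; a minimal sketch (using `torch.fx` here, since ExportGraphModule has since been absorbed into ExportedProgram):

```python
import pickle

import torch
import torch.fx

def f(x):
    return x + 1

gm = torch.fx.symbolic_trace(f)
gm2 = pickle.loads(pickle.dumps(gm))  # GraphModule supports pickling
assert torch.equal(gm2(torch.ones(2)), gm(torch.ones(2)))
```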