Commit Graph

51 Commits

Angela Yi
ad22bd2fa1 [export][refactor][6/n] Remove equality_constraints (#116979)
Through the new dynamic_shapes API and using torch.export.Dim, dimensions that are equal will now be represented by the same symbol, so we no longer need to store `equality_constraints`.
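The mechanism can be sketched with a toy interning table (the `Dim` class below is a stand-in for illustration, not the actual `torch.export.Dim` or export internals):

```python
# Toy model of symbol sharing: annotating two dimensions with the same Dim
# object makes them resolve to one symbol, so equality between dimensions is
# structural rather than a separate equality_constraints list.
class Dim:
    def __init__(self, name):
        self.name = name

_symbols = {}

def symbol_for(dim):
    # The same Dim object always maps to the same symbol.
    return _symbols.setdefault(id(dim), f"s{len(_symbols)}")

batch = Dim("batch")
other = Dim("other")
assert symbol_for(batch) == symbol_for(batch)  # equal dims share one symbol
assert symbol_for(batch) != symbol_for(other)
```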

Differential Revision: D52351705

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116979
Approved by: https://github.com/avikchaudhuri
2024-01-09 19:04:47 +00:00
Zhengxu Chen
9519c8afd4 [export] Remove hacks for passing pinned version test. (#116871)
Summary: nature will heal itself.

Test Plan: CI

Reviewed By: angelayi

Differential Revision: D52566227

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116871
Approved by: https://github.com/angelayi
2024-01-06 18:09:27 +00:00
Angela Yi
6413511713 [export][refactor][4/n] Make equality_constraints optional (#116233)
Summary: needed to remove equality_constraints eventually :P

Test Plan: CI

Differential Revision: D52351709

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116233
Approved by: https://github.com/tugsbayasgalan
2024-01-05 00:50:52 +00:00
Zhengxu Chen
43fb1b671c [export] Improve verifier to not specialize on dialect. (#116705)
Summary:
Currently we have a very ugly specialization on edge dialect in verifier like the following:
```
# TODO Remove this branch.
if ep.dialect == "EDGE":  # !!! Don't change this allowlist. !!!
    pass
else:
    raise e
```
In this diff we do some additional work to make signature checking also work in exir. We decouple the transformation stack in torch export and exir so that different layers of the stack can evolve in their own fashion and the team can divide and conquer them separately.

Test Plan: CI

Differential Revision: D52499225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116705
Approved by: https://github.com/tugsbayasgalan
2024-01-04 17:17:23 +00:00
angelayi
70eb53505b [export] Update range constraints to runtime_var_to_range (#115427)
Updated range_constraints to be the union of shape_env.var_to_range and shape_env.runtime_var_to_range, with shape_env.runtime_var_to_range taking priority.

Due to 0/1 specialization, if we bound an unbacked symint to be less than 5, the range of possible values for this symint is actually recorded as [2, 5] in shape_env.var_to_range. To fix this so that users will be able to see a more understandable range of [0, 5], shape_env.runtime_var_to_range was created to store the range of [0, 5]. Since range_constraints is a user-facing attribute to query the ranges of certain symints, we want to use shape_env.runtime_var_to_range to get the unbacked symints ranges, rather than shape_env.var_to_range.
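The union described above can be sketched as a dict merge (symbol names and ranges here are illustrative, not PyTorch internals):

```python
# Hypothetical ranges: u0 is an unbacked symint bounded above by 5.
var_to_range = {"u0": (2, 5), "s0": (0, 128)}  # 0/1-specialized internal view
runtime_var_to_range = {"u0": (0, 5)}          # user-facing view
# runtime_var_to_range takes priority in the merge:
range_constraints = {**var_to_range, **runtime_var_to_range}
assert range_constraints == {"u0": (0, 5), "s0": (0, 128)}
```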

Additionally, run_decompositions() has an issue where it will always add assertions to the graph, even if a previous run has already added them. So, I added a part to the AddRuntimeAssertionsForInlineConstraints pass which stores which assertions have already been added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115427
Approved by: https://github.com/zhxchen17
2024-01-03 16:55:04 +00:00
Arun Ranganathan
ef98987017 Fix user input mutations for run_decompositions (#116382)
Fixes #115106

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116382
Approved by: https://github.com/angelayi
2024-01-03 05:04:22 +00:00
Tugsbayasgalan Manlaibaatar
dfc898ede4 Don't decompose functional ops in predispatch functionalization (#116383)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116383
Approved by: https://github.com/bdhirsh
ghstack dependencies: #115188, #115210
2023-12-28 11:54:04 +00:00
PyTorch MergeBot
85628c0e57 Revert "[export] Update range constraints to runtime_var_to_range (#115427)"
This reverts commit f8ad664cf2.

Reverted https://github.com/pytorch/pytorch/pull/115427 on behalf of https://github.com/angelayi due to failing internal tests ([comment](https://github.com/pytorch/pytorch/pull/115427#issuecomment-1870671728))
2023-12-27 22:44:45 +00:00
suo
bc3ef1684e [export] refactor unflatten.py to be a top-level API (#115466)
This is in preparation for the merging of the internal and external versions of
the unflattener. Unflatten needs to be its own API because we are adding more
options to it in forthcoming diffs.

Differential Revision: [D52001133](https://our.internmc.facebook.com/intern/diff/D52001133/)

@diff-train-skip-merge
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115466
Approved by: https://github.com/zhxchen17
2023-12-21 20:52:29 +00:00
angelayi
f8ad664cf2 [export] Update range constraints to runtime_var_to_range (#115427)
Updated range_constraints to be the union of shape_env.var_to_range and shape_env.runtime_var_to_range, with shape_env.runtime_var_to_range taking priority.

Due to 0/1 specialization, if we bound an unbacked symint to be less than 5, the range of possible values for this symint is actually recorded as [2, 5] in shape_env.var_to_range. To fix this so that users will be able to see a more understandable range of [0, 5], shape_env.runtime_var_to_range was created to store the range of [0, 5]. Since range_constraints is a user-facing attribute to query the ranges of certain symints, we want to use shape_env.runtime_var_to_range to get the unbacked symints ranges, rather than shape_env.var_to_range.

Additionally, run_decompositions() has an issue where it will always add assertions to the graph, even if a previous run has already added them. So, I added a part to the AddRuntimeAssertionsForInlineConstraints pass which stores which assertions have already been added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115427
Approved by: https://github.com/zhxchen17
2023-12-20 20:00:41 +00:00
Angela Yi
8e2d63cbc3 [export][reland] Remove runtime assertion pass (#115597)
Summary:
Reland of https://github.com/pytorch/pytorch/pull/115196
D52054112 to fix internal failures.

Test Plan: CI

Differential Revision: D52054110

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115597
Approved by: https://github.com/ydwu4, https://github.com/zhxchen17
2023-12-15 03:22:03 +00:00
angelayi
17c104ac18 [export] Do not copy state_dict in run_decomp (#115269)
Fixes https://github.com/pytorch/pytorch/issues/114628

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115269
Approved by: https://github.com/thiagocrepaldi, https://github.com/ydwu4
2023-12-13 01:21:21 +00:00
angelayi
b6a4866330 [export][reland][refactor][3/n] Move unlift to separate file (#115558)
Reland of https://github.com/pytorch/pytorch/pull/114787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115558
Approved by: https://github.com/zhxchen17, https://github.com/atalman
ghstack dependencies: #115556, #115557
2023-12-12 05:37:07 +00:00
atalman
749f0c90e1 Revert "[export][refactor][3/n] Move unlift to separate file (#114787)" (#115457)
Github First Oncall: This reverts commit 967863d91d.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115457
Approved by: https://github.com/osalpekar
2023-12-08 22:33:28 +00:00
PyTorch MergeBot
4186932bac Revert "[export] Remove runtime assertion pass (#115196)"
This reverts commit c163b3c035.

Reverted https://github.com/pytorch/pytorch/pull/115196 on behalf of https://github.com/atalman due to Broke internal test ([comment](https://github.com/pytorch/pytorch/pull/115196#issuecomment-1847778344))
2023-12-08 20:07:04 +00:00
angelayi
c163b3c035 [export] Remove runtime assertion pass (#115196)
Reland of https://github.com/pytorch/pytorch/pull/111949/

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115196
Approved by: https://github.com/avikchaudhuri
2023-12-07 01:44:11 +00:00
angelayi
967863d91d [export][refactor][3/n] Move unlift to separate file (#114787)
Differential Revision: [D51823960](https://our.internmc.facebook.com/intern/diff/D51823960)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114787
Approved by: https://github.com/ydwu4
ghstack dependencies: #114764, #114768
2023-12-06 16:46:47 +00:00
Zhengxu Chen
e6b3a8ce5f [export] Refactor export() and separate the non-strict part. (#114697)
Summary: Refactor torch.export to separate the strict part and the non-strict part. Adding an option to torch.export called `strict=True`.

Test Plan: buck2 test mode/opt caffe2/test:test_export -- -r non_strict

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114697
Approved by: https://github.com/ydwu4, https://github.com/tugsbayasgalan
2023-11-30 16:47:50 +00:00
Angela Yi
f1fe0b685c [export] Remove combine_args_kwargs (#114782)
Test Plan: CI

Differential Revision: D51676479

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114782
Approved by: https://github.com/zhxchen17
2023-11-30 02:49:21 +00:00
angelayi
c10893654e [export] Fix run_decomps to work with fake mode (#114714)
Fixes https://github.com/pytorch/pytorch/issues/114711
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114714
Approved by: https://github.com/ydwu4, https://github.com/zhxchen17
2023-11-29 06:52:13 +00:00
Zhengxu Chen
e0d2a24967 Reland "[export] Support user input mutation. [1/2]" (#114496) (#114596)
Summary:

Serialization not implemented yet. Will do in the next diff.

Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142

Test Plan:
onnx doc test
```
python -m xdoctest /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py ONNXProgram.model_signature:0
```

Differential Revision: D51588558

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114596
Approved by: https://github.com/angelayi
2023-11-27 20:19:04 +00:00
PyTorch MergeBot
fa1ccc34c4 Revert "[export] Support user input mutation. [1/2] (#114496)"
This reverts commit b62c0d96bc.

Reverted https://github.com/pytorch/pytorch/pull/114496 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/114496#issuecomment-1827289635))
2023-11-27 07:52:21 +00:00
Zhengxu Chen
b62c0d96bc [export] Support user input mutation. [1/2] (#114496)
Summary:
Serialization not implemented yet. Will do in the next diff.

Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142

Test Plan:
buck2 run mode/opt caffe2/test:test_export -- -r test_export_input_mutation

Differential Revision: D51556962

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114496
Approved by: https://github.com/tugsbayasgalan
2023-11-27 04:53:38 +00:00
Angela Yi
50101d59ba [export][retry] Move lifted tensors out of state_dict (#113689)
Test Plan: CI

Differential Revision: D51321532

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113689
Approved by: https://github.com/zhxchen17
2023-11-15 09:24:49 +00:00
Tugsbayasgalan Manlaibaatar
a7b75f586a [RELAND] Disallow skipping dynamo (#110222)
Previous discussion: https://github.com/pytorch/pytorch/pull/109476

In this PR, I made following additions to the original PR:
1) Unlifted graph module now runs the runtime assertions in its forward call.
2) When we retrace, we run the assertions to make sure the user is tracing the module with inputs that are correct with respect to the assumptions we made during the first tracing. The way I do this is by creating a new graph module type with a modified call method. The runtime assertions happen under torchdynamo.disable so that they just run directly in eager mode, since we don't want them to be a traced part of the graph.
3) Both ep.module and capture_pre_autograd now return _UnliftedGraphModule.
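A toy sketch of point (2), with hypothetical names (this is not the real _UnliftedGraphModule): the module's call method re-checks, eagerly, the assumptions recorded at first trace.

```python
# Stand-in showing the re-check idea; the real assertions run under
# torchdynamo.disable so they never become part of the traced graph.
class ToyUnliftedModule:
    def __init__(self, fn, expected_ndim):
        self.fn = fn
        self.expected_ndim = expected_ndim  # assumption recorded at first trace

    def __call__(self, shape, data):
        # Runtime assertion runs eagerly before the actual computation.
        if len(shape) != self.expected_ndim:
            raise RuntimeError(
                f"expected {self.expected_ndim}-D input, got {len(shape)}-D")
        return self.fn(data)

m = ToyUnliftedModule(sum, expected_ndim=2)
assert m((2, 3), [1, 2, 3]) == 6
```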

Differential Revision: [D51078056](https://our.internmc.facebook.com/intern/diff/D51078056)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110222
Approved by: https://github.com/zhxchen17
2023-11-14 16:02:01 +00:00
Zhengxu Chen
aa376e31fd [export] Enable verifier [2/n] (#113075)
Summary: Turn on verifier check for exported program ctor. Note that this effectively detects a large surface of spec violations, so we also spent some time fixing them one by one in this diff.

Test Plan: CI

Differential Revision: D51014944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113075
Approved by: https://github.com/angelayi
2023-11-08 03:32:11 +00:00
Aaron Gokaslan
8219bf051b [BE]: Apply RUF015 to torch folder (#113025)
Removes unnecessary allocations of iterators. There is a small chance this may have side effects as the entire iterator is no longer consumed, but this is a way more efficient method for retrieving the first element.
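The RUF015 rewrite in a nutshell (illustrative example, not from the PR itself):

```python
def squares():
    for i in range(10**6):
        yield i * i

# Before: list(squares())[0] materializes a million-element list for one value.
# After:  next(iter(...)) pulls only the first element; the rest of the
# iterator is never consumed (the possible side effect mentioned above).
first = next(iter(squares()))
assert first == 0
```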

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113025
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-11-07 00:48:15 +00:00
Zhengxu Chen
50767a075a [export] Clean up verifier [1/n]. (#112505)
Summary: Some adjustments to verifier so that it's easier to use it correctly. We will enable verifier later, so the current diff is no-op.

Test Plan: CI

Differential Revision: D50839295

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112505
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2023-11-02 19:36:06 +00:00
angelayi
131e0f1b75 [export] Separate out graph signature (#112412)
Differential Revision: [D50800524](https://our.internmc.facebook.com/intern/diff/D50800524)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112412
Approved by: https://github.com/zhxchen17
2023-11-02 00:18:28 +00:00
Zhengxu Chen
da90c31593 [export] Upstream unflattener. (#112189)
Summary: Provide a way for users to get the original module structure back after exporting.
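The core idea can be sketched as rebuilding a nested hierarchy from flat qualified names (a toy sketch, not the actual torch.export unflattener):

```python
# Rebuild a module-like tree from dot-separated qualified names.
def unflatten(flat):
    tree = {}
    for qualname, leaf in flat.items():
        node = tree
        *path, last = qualname.split(".")
        for part in path:
            node = node.setdefault(part, {})
        node[last] = leaf
    return tree

flat = {"encoder.linear.weight": "w0", "encoder.linear.bias": "b0",
        "head.weight": "w1"}
assert unflatten(flat) == {
    "encoder": {"linear": {"weight": "w0", "bias": "b0"}},
    "head": {"weight": "w1"},
}
```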

Test Plan: caffe2/test:test_export -- -r unflatten

Differential Revision: D50708490

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112189
Approved by: https://github.com/suo, https://github.com/angelayi
2023-10-30 21:27:11 +00:00
Peter Bell
bbd5b935e4 Use pytree.tree_leaves everywhere (#112324)
This changes all the instances I could find of `tree_flatten(...)[0]` or
`x, _ = tree_flatten` to use `tree_leaves`.
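The refactor pattern, with a minimal stand-in for the pytree helpers (not `torch.utils._pytree` itself):

```python
# Tiny pytree stand-in: flatten returns (leaves, spec); leaves returns only leaves.
def tree_flatten(tree):
    if isinstance(tree, (list, tuple)):
        leaves = [leaf for sub in tree for leaf in tree_flatten(sub)[0]]
        return leaves, type(tree).__name__
    return [tree], "leaf"

def tree_leaves(tree):
    # The new helper: same leaves as tree_flatten, without the unused spec.
    return tree_flatten(tree)[0]

# Before: leaves, _ = tree_flatten(x)   or   leaves = tree_flatten(x)[0]
# After:  leaves = tree_leaves(x)
assert tree_leaves([1, (2, 3), [4]]) == [1, 2, 3, 4]
```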

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323
2023-10-30 03:39:04 +00:00
lezcano
c8a5bb451e Do not import sympy within torch._prims_common (#112034)
This is the first of a few PRs that avoid importing SymPy at import time.
The pitch here is that we (almost!) do not have SymPy on our API, so
this should be feasible.

This should speed up torch imports by a good 15% as per
https://dev-discuss.pytorch.org/t/delving-into-what-happens-when-you-import-torch/1589

In this PR we just move a few global imports into local imports.
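The pattern, sketched with a stdlib module standing in for SymPy so the example is self-contained:

```python
def expensive_check(x):
    # Local import: the module is loaded only on first call, so importing
    # the enclosing package at startup stays cheap. (`fractions` is swapped
    # in here for sympy just to keep the sketch runnable.)
    from fractions import Fraction
    return Fraction(x).limit_denominator(10)

assert str(expensive_check(0.5)) == "1/2"
```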
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112034
Approved by: https://github.com/ezyang
2023-10-26 12:53:25 +00:00
Aaron Gokaslan
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes from enabling TRY200. Most of these seem to be oversights rather than intentional. The proper way to silence intentional errors is with `from None` to note that you thought about whether it should contain the cause and decided against it.
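The distinction in a small illustrative example (function and message are made up for the sketch):

```python
def parse_port(raw):
    try:
        return int(raw)
    except ValueError as err:
        # TRY200-style fix: attach the cause so tracebacks show the original
        # error. Intentional suppression would use `from None` instead.
        raise RuntimeError(f"bad port: {raw!r}") from err

try:
    parse_port("abc")
except RuntimeError as exc:
    assert isinstance(exc.__cause__, ValueError)
```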

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
Zhengxu Chen
17002d25c5 [export] Remove call_spec argument from ExportedProgram ctor. (#111407)
Summary: call_spec arg is not used anymore.

Test Plan: CI

Reviewed By: SherlockNoMad, tugsbayasgalan

Differential Revision: D50335365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111407
Approved by: https://github.com/izaitsevfb
2023-10-17 21:01:37 +00:00
PyTorch MergeBot
7a740e2b85 Revert "direct runtime assertions (#111262)"
This reverts commit e6d9350d7f.

Reverted https://github.com/pytorch/pytorch/pull/111262 on behalf of https://github.com/jeanschmidt due to Breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/111262#issuecomment-1765881675))
2023-10-17 08:04:36 +00:00
Avik Chaudhuri
e6d9350d7f direct runtime assertions (#111262)
Previously we were generating a graph to add runtime assertions on inputs and then running that graph to check input constraints. This PR checks input constraints directly.
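A toy contrast of the "direct" approach (plain Python checks, not the actual PyTorch pass):

```python
# Check input ranges directly instead of building and executing an
# assertion graph. `ranges` maps a dimension index to (lo, hi) bounds.
def check_input_constraints(shape, ranges):
    for dim, (lo, hi) in ranges.items():
        actual = shape[dim]
        if not (lo <= actual <= hi):
            raise RuntimeError(f"dim {dim} = {actual} is outside [{lo}, {hi}]")

check_input_constraints((4, 3), {0: (1, 8)})  # within range: no error

caught = False
try:
    check_input_constraints((16, 3), {0: (1, 8)})
except RuntimeError:
    caught = True
assert caught
```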

Differential Revision: D50289970

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111262
Approved by: https://github.com/zhxchen17
2023-10-15 05:15:09 +00:00
Zhengxu Chen
11ac4ace5f [export] Use meta val from the old nodes in run_decompositions(). (#111225)
Summary: fall back to the old nodes when meta val is missing.

Test Plan: buck2 run //executorch/examples/portable/scripts:export -- --model_name=emformer_predict

Differential Revision: D50278439

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111225
Approved by: https://github.com/larryliu0820
2023-10-14 02:08:49 +00:00
Zhengxu Chen
ba7b9211ee [export] Update serialization schema to input/output specs. (#845) (#111204)
Summary: Pull Request resolved: https://github.com/pytorch/executorch/pull/845

Test Plan: CI

Differential Revision: D50191531

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111204
Approved by: https://github.com/angelayi
2023-10-13 22:19:56 +00:00
Zhengxu Chen
168bad5f23 [export] Reland "Fix graph signature data model to list of specs." (#111136)
Summary: reland D49876258

Test Plan: CI

Differential Revision: D50224384

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111136
Approved by: https://github.com/angelayi
2023-10-13 02:04:29 +00:00
Avik Chaudhuri
1208a44799 [docs] export full aten opset (#111161)
Differential Revision: [D50240459](https://our.internmc.facebook.com/intern/diff/D50240459/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111161
Approved by: https://github.com/tugsbayasgalan
2023-10-13 00:28:35 +00:00
PyTorch MergeBot
42b89aea4b Revert "[export] Fix graph signature data model to list of specs. (#111017)"
This reverts commit 33b69509d3.

Reverted https://github.com/pytorch/pytorch/pull/111017 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/111017#issuecomment-1759292161))
2023-10-12 09:52:33 +00:00
Zhengxu Chen
33b69509d3 [export] Fix graph signature data model to list of specs. (#111017)
Summary:
Previously we designed the GraphSignature format as a bunch of input and output node names. After a discussion in the design meeting we decided to change the format to make the signature more self-contained. Now the signature format looks like the following:
```
[
    InputSpec(
        kind=InputKind.USER_INPUT,
        arg=TensorArgument(name="arg0_1"),
        target=None,
    ),
    ...
]
```

Test Plan: CI

Reviewed By: angelayi

Differential Revision: D49876258

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111017
Approved by: https://github.com/angelayi
2023-10-12 03:39:04 +00:00
Tugsbayasgalan Manlaibaatar
cd275dc24f Remove RangeConstraints in favor of ValueRanges (#109859)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109859
Approved by: https://github.com/avikchaudhuri
2023-10-10 22:22:05 +00:00
Avik Chaudhuri
0d4a360fa2 remove replaced symbols from range_constraints (#110644)
While the `range_constraints` that is initially derived by processing of constraints only contains symbols that appear in the graph module, eventually the `range_constraints` that are in the exported program seem to contain more symbols than those that appear in the graph module. Clearly this is a regression, because the example of "Expressing Dynamism" in our public docs (https://pytorch.org/docs/stable/export.html#expressing-dynamism) does not show the extra symbols in `range_constraints`, but running the example does.

The problem seems to arise when we are running `_transform` passes, where we regenerate the `range_constraints` from the `shape_env`. However, as a rule, symbols that have `replacements` are actually replaced (by other expressions, including constants or other symbols), so they should never appear in the graph module. Thus we can filter such symbols out from `range_constraints` as well.
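The filtering rule can be sketched as follows (symbol names and ranges are illustrative):

```python
# Symbols with replacements never appear in the graph module after
# substitution, so drop them when regenerating range_constraints.
var_to_range = {"s0": (2, 1024), "s1": (2, 1024)}
replacements = {"s1": "s0"}  # s1 was replaced by s0 everywhere
range_constraints = {s: r for s, r in var_to_range.items()
                     if s not in replacements}
assert range_constraints == {"s0": (2, 1024)}
```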

Differential Revision: [D49969620](https://our.internmc.facebook.com/intern/diff/D49969620/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110644
Approved by: https://github.com/zhxchen17
2023-10-06 21:13:55 +00:00
Zhengxu Chen
be5dc3a00d [export] Update ArgumentSpec definition. (#110612)
Summary: Changing ArgumentSpec into a true union type in Python without changing the serialization format.
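A sketch of what "a true union type in Python" with an unchanged tagged serialized form might look like (class and key names here are assumptions, not the actual schema):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class TensorArgument:
    name: str

@dataclass
class ConstantArgument:
    value: object

# The union itself: plain Python classes instead of one struct with
# optional fields.
ArgumentSpec = Union[TensorArgument, ConstantArgument]

def as_json(arg: ArgumentSpec) -> dict:
    # Serialization keeps a tagged-dict shape, so the wire format is stable.
    if isinstance(arg, TensorArgument):
        return {"as_tensor": {"name": arg.name}}
    return {"as_constant": {"value": arg.value}}

assert as_json(TensorArgument("arg0_1")) == {"as_tensor": {"name": "arg0_1"}}
```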

Test Plan: CI

Differential Revision: D49871088

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110612
Approved by: https://github.com/angelayi
2023-10-06 03:14:45 +00:00
Angela Yi
13af952f94 [export] Add run_decomposition() function to ExportedProgram (#110236)
Summary:
https://docs.google.com/document/d/1QJJEGnj2nHGPODlw38BEG3KLLCOTfdOVjPrNQbz_LM8/edit#bookmark=id.lp80wfshq130

`exported_program.run_decompositions(decomposition_table)` will optionally take a decomposition table, and run decompositions on the exported program, returning a new exported program. By default we will run the Core ATen decomposition table.
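The decomposition-table idea can be sketched in plain Python (toy graph and op names, not the real IR or decomposition tables):

```python
# A decomposition table maps composite ops to primitive-op expansions.
decomp_table = {"addmm": lambda b, m1, m2: ("add", b, ("mm", m1, m2))}

def run_decompositions(graph, table):
    # Replace each op that has a table entry; leave the rest untouched.
    return [table[op](*args) if op in table else (op, *args)
            for op, *args in graph]

out = run_decompositions([("addmm", "b", "x", "w"), ("relu", "t")], decomp_table)
assert out == [("add", "b", ("mm", "x", "w")), ("relu", "t")]
```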

Splitting up this diff with the following one (D49742989) to make migrating Executorch easier:
1. Land this diff
2. Wait for a pytorch nightly to include this diff
3. Update executorch's pytorch nightly
4. Land the following diff to have export() return no decomps

Test Plan: Tested in following diff

Differential Revision: D49743208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110236
Approved by: https://github.com/gmagogsfm
2023-10-01 18:18:27 +00:00
Angela Yi
a7409695bb [export] Verifier for exported program (#109519)
Summary:
X-link: https://github.com/pytorch/executorch/pull/292

Added a verifier for the graph signature in an exported program

Test Plan: CI

Differential Revision: D48926643

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109519
Approved by: https://github.com/zhxchen17
2023-09-26 18:47:43 +00:00
zhxchen17
ac967e9dad [export] Fix tree spec matching behavior. (#109679)
Test Plan: Internal test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109679
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2023-09-21 14:24:09 +00:00
zhxchen17
7a04ae6fba [export] Remove redundant no_grad() for exported program execution. (#109686)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109686
Approved by: https://github.com/angelayi
2023-09-21 01:20:54 +00:00
Angela Yi
e8ab8c877d [exir] Add lift constant tensors passes after aten_to_edge (#109382)
Summary:
X-link: https://github.com/pytorch/executorch/pull/359

When exporting using enable_aot (through the torch.export path), we want to lift all constant tensors as buffers to the exported program. The ScalarToTensor pass in EXIR's aten_to_edge passes will create some constant tensors in the graph, so we will need to run a lift_constant_tensors pass afterwards.

Note that this only needs to be applied when exporting using the torch.export path because in the original path, nothing is lifted.
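The lifting step can be sketched as follows (toy graph representation; buffer names are hypothetical, not the pass's actual naming scheme):

```python
# Constants found in the graph become named buffers, and their uses are
# rewritten to read from the buffer instead.
def lift_constants(graph, buffers):
    lifted = []
    for op, arg in graph:
        if op == "constant":
            name = f"_lifted_constant_{len(buffers)}"
            buffers[name] = arg
            lifted.append(("get_buffer", name))
        else:
            lifted.append((op, arg))
    return lifted

buffers = {}
g = lift_constants([("constant", 3.0), ("add", "x")], buffers)
assert g == [("get_buffer", "_lifted_constant_0"), ("add", "x")]
assert buffers == {"_lifted_constant_0": 3.0}
```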

Test Plan: CI

Differential Revision: D49207492

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109382
Approved by: https://github.com/cccclai
2023-09-19 01:34:58 +00:00