Commit Graph

72 Commits

Author SHA1 Message Date
Angela Yi
4b49bc19e8 [export][reland] Disable exported_program.__call__ (#120019)
Summary: Reland of D53075378 / https://github.com/pytorch/pytorch/pull/119466

Test Plan: CI

Differential Revision: D53827930

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120019
Approved by: https://github.com/ydwu4
2024-03-05 05:29:46 +00:00
Michael Suo
12f724c779 [export] preserve constant fqn (#120664)
Summary:
Previously we were renaming constants to `lifted_constant_tensor0` or equivalent. This PR changes things so that the constants retain the same FQN as in the original eager module.

Actually, `symbolic_trace` is already supposed to do this, but that code path is not triggered when it is used from `make_fx`, since we don't pass an actual `nn.Module` instance to `trace()`, but rather a multiply-wrapped-functionalized-lambda-thing.

So, I reproduced the essential logic outside of make_fx, at the export layer.
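
A small illustration of the intended behavior; exactly where the retained FQN surfaces (e.g. in the graph signature) is an assumption here, and the module is illustrative:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A plain tensor attribute gets lifted as a constant during export.
        self.scale = torch.tensor(2.0)

    def forward(self, x):
        return x * self.scale

ep = torch.export.export(M(), (torch.randn(3),))
# With this change the lifted constant should keep the FQN "scale" rather than a
# generated name like "lifted_constant_tensor0".
print(ep.graph_signature)
```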

Test Plan: added a unit test

Differential Revision: D54221616

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120664
Approved by: https://github.com/SherlockNoMad
2024-02-27 06:35:51 +00:00
PyTorch MergeBot
65fd8b6730 Revert "[export] Disable exported_program.__call__ (#119466)"
This reverts commit c26884f063.

Reverted https://github.com/pytorch/pytorch/pull/119466 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/119466#issuecomment-1947384298))
2024-02-15 21:42:32 +00:00
Angela Yi
c26884f063 [export] Disable exported_program.__call__ (#119466)
Summary: `ExportedProgram` is an artifact produced by torch.export, containing the graph that is exported, along with other attributes about the original program such as the graph signature, state dict, and constants. One slightly confusing thing that users run into is that they treat the `ExportedProgram` as a `torch.nn.Module`, since the object is callable. However, as we do not plan to support all features that `torch.nn.Module`s have, like hooks, we want to create a distinction between it and the `ExportedProgram` by removing the `__call__` method. Instead users can create a proper `torch.nn.Module` through `exported_program.module()` and use that as a callable.
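
A minimal sketch of the user-facing change (module and inputs are illustrative):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

ep = torch.export.export(M(), (torch.randn(3),))

# Previously ep(torch.randn(3)) worked; with __call__ disabled, materialize a
# proper nn.Module first and call that instead.
mod = ep.module()
out = mod(torch.randn(3))
```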

Test Plan: CI

Differential Revision: D53075378

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119466
Approved by: https://github.com/zhxchen17, https://github.com/thiagocrepaldi
2024-02-15 08:49:34 +00:00
Han Qi
757201c213 Refactor ExportedProgram to expose the functions for pre and postprocessing (#119513)
Reason:
Consumers of ExportedProgram might choose to further lower exported_program.graph_module to something else. They will then need to set up the calling convention to call it.

This refactor concentrates that calling convention in one place so it can be reused.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119513
Approved by: https://github.com/zhxchen17
2024-02-12 17:22:27 +00:00
Angela Yi
c3e0836084 [export] Remove CallSpec (#117671)
Summary: This is not really being used anywhere

Test Plan: CI

Differential Revision: D52842563

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117671
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2024-02-08 17:19:03 +00:00
Edward Z. Yang
3f0fd36835 Introduce size oblivious guards (#118579)
Fixes https://github.com/pytorch/pytorch/issues/117361

The implementation here slightly diverges from what was proposed in the issue, so I will recap what this PR is doing here. Today, when doing computations involving size-like unbacked SymInts, we assume for all operations that the compile time range of the integer is `[2, inf]`, even though at runtime we also accept zero and one.

This PR removes the carte blanche assumption, and instead does the analysis in a much more limited and controlled fashion: only for guards which we have designated as "size oblivious" are we willing to do the analysis under the assumption that the range of all size-like unbacked SymInts is `[2, inf]`; otherwise, we will faithfully only do analysis with `[0, inf]` (or whatever the user provided) bounds.

The infra pieces of this PR are:

* Remove runtime_var_to_range from torch/fx/experimental/symbolic_shapes.py; modify `_constrain_range_for_size` to refine the range without clamping min to 2, and instead add the symbol to a `size_like` set in the ShapeEnv
* When evaluating an expression, if the expression is requested to be evaluated in a `size_oblivious` way, we attempt to statically compute the value of the expression with the assumption that all symbols in `size_like` are updated to assume that they are `>= 2`.
* Add Python and C++ APIs for guarding on a SymBool in a size-oblivious way. In C++, I also need to add some helpers for performing symbolic comparisons, since the stock comparisons immediately specialize in the "normal" way.

The rest of the changes of the PR are marking various spots in PyTorch framework code as size oblivious, based on what our current test suite exercises.

As you review the places where we have marked things as size oblivious, it may become clear why I ended up not opting for the "designate a branch as the default branch when it's not statically obvious which way to go": for some of the conditions, this answer is rather non-obvious. I think potentially there is another refinement on top of this PR, which is something like "I don't care if you can't figure it out with ValueRange analysis, go down this path anyway if there are unbacked sizes involved." But even if we add this API, I think we are obligated to attempt the ValueRange analysis first, since it can lead to better outcomes sometimes (e.g., we are able to figure out that something is contiguous no matter what the unbacked size is.)

When is it permissible to mark something as size oblivious? Heuristically, it is OK anywhere in framework code if it gets you past a guard on unbacked SymInt problem. It is somewhat difficult to provide a true semantic answer, however. In particular, these annotations don't have any observational equivalence guarantee; for example, if I have `torch.empty(u0, 1).squeeze()`, we will always produce a `[u0]` size tensor, even though if `u0 == 1` PyTorch will actually produce a `[]` size tensor. The argument that I gave to Lezcano is that we are in fact defining an alternate semantics for a "special" size = 0, 1, for which we have these alternate eager mode semantics. In particular, suppose that we have a constant `special1` which semantically denotes 1, but triggers alternate handling rules. We would define `torch.empty(special1, 1).squeeze()` to always produce a `[special1]` size tensor, making its semantics coincide with unbacked SymInt semantics. In this model, the decision to designate guards as size oblivious is simply a user API question: you put them where ever you need some handling for special1! As we conservatively error out whenever it is not obvious what `special1` semantics should be, it is always valid to expand these semantics to cover more cases (although you can always choose the wrong semantics!)
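
As a rough illustration of how such a guard might be used in framework code, assuming the Python-side helper is exposed as `guard_size_oblivious` in `torch.fx.experimental.symbolic_shapes` (a sketch, not code from this PR):

```python
import torch
from torch.fx.experimental.symbolic_shapes import guard_size_oblivious

def maybe_squeeze_dim0(t: torch.Tensor) -> torch.Tensor:
    # Under a size-oblivious guard, size-like unbacked SymInts are assumed to be
    # >= 2, so this condition can often be resolved statically instead of raising
    # a data-dependent guard error; a normal guard keeps the full [0, inf] range.
    if guard_size_oblivious(t.shape[0] == 1):
        return t.squeeze(0)
    return t
```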

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118579
Approved by: https://github.com/eellison, https://github.com/lezcano
2024-02-06 19:45:32 +00:00
Michael Suo
bf4e171539 [export] support non-persistent buffers (#118969)
Summary:
X-link: https://github.com/pytorch/executorch/pull/1817

Basic support for non-persistent buffers, which are buffers that do not show up in the state dict.
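
A minimal sketch of the behavior described here; the final assert assumes the exported program's state dict mirrors the eager module's after this change:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=False keeps this buffer out of the state dict.
        self.register_buffer("cache", torch.zeros(4), persistent=False)

    def forward(self, x):
        return x + self.cache

ep = torch.export.export(M(), (torch.randn(4),))
assert "cache" not in M().state_dict()
assert "cache" not in ep.state_dict
```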

One weird twist is that most of our other systems (FX, aot_export, dynamo) have completely buggy handling of non-persistent buffers. I tried to go on a wild goose chase to fix them all, but it got to be too much. So I introduced some sad rewrite passes in `_export` to make the final state dict correctly align with the original module's state dict.

This exposed some bugs/ambiguous handling of parameters/buffers in existing test code. For example, `TestSaveLoad.test_save_buffer` traced over a module that was not in the root module hierarchy and caused some weird behavior. I think we should error explicitly on use cases like this: https://github.com/pytorch/pytorch/issues/118410. For now I just rewrote the tests or skipped them.

As a side effect, this diff tightened up quite a few sloppy behaviors around state dict handling:
- Tensor attributes were getting promoted to be buffers—bad!
- Tracing through a module not in the children of the root module would add its parameters/buffers to the state dict—bad!

This behavior is unlikely to show up in user code since the model would be totally broken, but did show up in a bunch of tests.

#buildmore

Test Plan:
unit tests
sandcastle

Differential Revision: D53340041

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118969
Approved by: https://github.com/guangy10, https://github.com/huydhn, https://github.com/titaiwangms
2024-02-02 19:16:08 +00:00
Angela Yi
53da422582 [export] Move _create_graph_module_for_export to torch/export (#118893)
Summary: I have to keep the torch/_export one to not break executorch...

Test Plan: CI

Reviewed By: avikchaudhuri

Differential Revision: D52842750

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118893
Approved by: https://github.com/zhxchen17
2024-02-02 16:40:01 +00:00
PyTorch MergeBot
221747507d Revert "[export] support non-persistent buffers (#118612) (#118722)"
This reverts commit a43c28368c.

Reverted https://github.com/pytorch/pytorch/pull/118722 on behalf of https://github.com/atalman due to broke linux-jammy-py3-clang12-executorch ([comment](https://github.com/pytorch/pytorch/pull/118722#issuecomment-1921484565))
2024-02-01 14:39:29 +00:00
Michael Suo
a43c28368c [export] support non-persistent buffers (#118612) (#118722)
Summary:
X-link: https://github.com/pytorch/executorch/pull/1769

Basic support for non-persistent buffers, which are buffers that do not show up in the state dict.

One weird twist is that most of our other systems (FX, aot_export, dynamo) have completely buggy handling of non-persistent buffers. I tried to go on a wild goose chase to fix them all, but it got to be too much. So I introduced some sad rewrite passes in `_export` to make the final state dict correctly align with the original module's state dict.

This exposed some bugs/ambiguous handling of parameters/buffers in existing test code. For example, `TestSaveLoad.test_save_buffer` traced over a module that was not in the root module hierarchy and caused some weird behavior. I think we should error explicitly on use cases like this: https://github.com/pytorch/pytorch/issues/118410. For now I just rewrote the tests or skipped them.

Test Plan: added a unit test

Differential Revision: D53253905

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118722
Approved by: https://github.com/SherlockNoMad, https://github.com/angelayi
2024-02-01 00:36:09 +00:00
suo
2f79a7bf9e [export] make spec comparison indifferent to fx collections (#118718)
Treat immutable_dict as dict and immutable_list as list. This behavior was tripped up by some executorch tests.

Differential Revision: [D53252679](https://our.internmc.facebook.com/intern/diff/D53252679/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118718
Approved by: https://github.com/zhxchen17
2024-02-01 00:10:49 +00:00
Catherine Lee
4f5785b6b3 Enable possibly-undefined error code (#118533)
Fixes https://github.com/pytorch/pytorch/issues/118129

Suppressions automatically added with

```
import re

with open("error_file.txt", "r") as f:
    errors = f.readlines()

error_lines = {}
for error in errors:
    match = re.match(r"(.*):(\d+):\d+: error:.*\[(.*)\]", error)
    if match:
        file_path, line_number, error_type = match.groups()
        if file_path not in error_lines:
            error_lines[file_path] = {}
        error_lines[file_path][int(line_number)] = error_type

for file_path, lines in error_lines.items():
    with open(file_path, "r") as f:
        code = f.readlines()
    for line_number, error_type in sorted(lines.items(), key=lambda x: x[0], reverse=True):
        code[line_number - 1] = code[line_number - 1].rstrip() + f"  # type: ignore[{error_type}]\n"
    with open(file_path, "w") as f:
        f.writelines(code)
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Co-authored-by: Catherine Lee <csl@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118533
Approved by: https://github.com/Skylion007, https://github.com/zou3519
2024-01-30 21:07:01 +00:00
suo
4ee8aa6028 [export] adopt KeyPath API in nonstrict mode (#118609)
This PR rewrites two paths to use the newly-added keypaths API in pytree:
First: we were hand-rolling a tree_map during fakification because we wanted to track sources. This PR uses keypaths instead, which can do the same thing without needing custom code.

Second: our constraint error formatting was referencing placeholder names in error messages. These placeholder names are not otherwise user-visible, so they are super confusing to users (e.g. "which input does arg1_3 correspond to?"). This diff uses the `keystr` API to format the error message.

This necessitated some small refactors—generating the keystr is expensive so doing it in an f-string was very bad.

It can also be further improved—we can inspect the signature so that instead of `*args[0]` we can give people the actual argument name, which would be the ideal UX. But leaving that for later.
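
A rough sketch of the kind of message formatting this enables, assuming the keypath helpers are exposed as `tree_flatten_with_path` and `keystr` in `torch.utils._pytree` (only `keystr` is named in this commit):

```python
import torch
import torch.utils._pytree as pytree

example_inputs = ((torch.randn(2),), {"mask": torch.ones(2, dtype=torch.bool)})
flat_with_paths, _spec = pytree.tree_flatten_with_path(example_inputs)
for key_path, leaf in flat_with_paths:
    # keystr renders a readable path such as "[0][0]" or "[1]['mask']", which is
    # far clearer in a constraint error than a placeholder name like arg1_3.
    print(pytree.keystr(key_path), tuple(leaf.shape))
```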

Differential Revision: [D53139358](https://our.internmc.facebook.com/intern/diff/D53139358/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118609
Approved by: https://github.com/zhxchen17
ghstack dependencies: #118607, #118608
2024-01-30 19:14:11 +00:00
suo
ca090b2c77 [export] do not use tree_flatten_spec (#118608)
tree_flatten_spec is bad; it isn't synced up with `register_pytree_node` so it will not handle arbitrary custom pytrees. It's also not really maintained.

We only use it for two purposes:
- To retain kwarg ordering stability, so that if the user passes in kwargs in a different order things will still work.
- To do "structural" checks that ignore types.

In both cases, tree_flatten_spec is probably *not* the ideal way to implement the desired behavior.

## kwargs ordering
- tree_flatten_spec overwrites the behavior of ALL dictionaries, not just kwargs. This is not correct: dictionary ordering is meaningful in Python, and it's pretty trivial to write a program that relies on dict ordering.
- For kwargs, we do sort of expect that the order in which arguments are passed shouldn't matter. BUT there is one exception: `**kwargs`. In fact, [PEP 468](https://peps.python.org/pep-0468/) was introduced specifically to clarify that ordering does matter when the function being called uses `**kwargs`.

In this diff I introduce a utility function that *only* reorders kwargs. This gets us most of the way to correct—dicts are no longer reordered, but kwargs can be passed in any order.

A "fully correct" solution would need fix the corner case from PEP468. We could detect whether the top-level fn being traced uses `**kwargs` (via `inspect`), then serialize a flag for it. In ExportedProgram, we would check that flag and only re-order if `**kwargs` was unused; otherwise error if the key order doesn't match. This is a super corner case though, so I'll file it as a followup task.

## structural equivalence checking

This is another use case where `tree_flatten_spec` is too broad. Generally we want to treat two specific types as equivalent, not override comparison behavior across the board. So I introduce an `is_equivalent` util for this purpose.

Differential Revision: [D53168420](https://our.internmc.facebook.com/intern/diff/D53168420/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118608
Approved by: https://github.com/zhxchen17
ghstack dependencies: #118607
2024-01-30 19:14:04 +00:00
PyTorch MergeBot
40ece2e579 Revert "Enable possibly-undefined error code (#118533)"
This reverts commit 4f13f69a45.

Reverted https://github.com/pytorch/pytorch/pull/118533 on behalf of https://github.com/clee2000 due to sorry i'm trying to figure out a codev merge conflict, if this works i'll be back to rebase and merge ([comment](https://github.com/pytorch/pytorch/pull/118533#issuecomment-1917695185))
2024-01-30 19:00:34 +00:00
Edward Z. Yang
4f13f69a45 Enable possibly-undefined error code (#118533)
Fixes https://github.com/pytorch/pytorch/issues/118129

Suppressions automatically added with

```
import re

with open("error_file.txt", "r") as f:
    errors = f.readlines()

error_lines = {}
for error in errors:
    match = re.match(r"(.*):(\d+):\d+: error:.*\[(.*)\]", error)
    if match:
        file_path, line_number, error_type = match.groups()
        if file_path not in error_lines:
            error_lines[file_path] = {}
        error_lines[file_path][int(line_number)] = error_type

for file_path, lines in error_lines.items():
    with open(file_path, "r") as f:
        code = f.readlines()
    for line_number, error_type in sorted(lines.items(), key=lambda x: x[0], reverse=True):
        code[line_number - 1] = code[line_number - 1].rstrip() + f"  # type: ignore[{error_type}]\n"
    with open(file_path, "w") as f:
        f.writelines(code)
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118533
Approved by: https://github.com/Skylion007, https://github.com/zou3519
2024-01-30 05:08:10 +00:00
Zhengxu Chen
0ed24cb1af [export] comments about runtime_var_to_range. (#118539)
Summary: Add some comments in case we forget what runtime_var_to_range means

Test Plan: eyes

Differential Revision: D53186114

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118539
Approved by: https://github.com/suo
2024-01-30 02:07:34 +00:00
Sherlock Huang
bb3db079b1 [Export] Introduce class_fqn into CustomObjArgument (#118158)
Summary:
The class FQN is needed when unpacking a CustomObj instance.
For all other Arguments, e.g. Tensor, TensorList, SymInt, we always know the exact type; CustomObjArgument, however, had an opaque type.
Adding this field also helps reveal the type of this opaque object.

Test Plan: CI

Differential Revision: D53029847

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118158
Approved by: https://github.com/zhxchen17
2024-01-25 18:44:25 +00:00
Zhengxu Chen
abd759d50d [fx] Add hooks to intercept node replacements. (#117825)
Summary: Adding an experimental API to FX graph modules to place "hooks" that run every time we change or replace nodes in a graph, so that we can properly update the new name in the graph signature and potentially other places.

Test Plan:
buck test mode/opt  -c fbcode.enable_gpu_sections=true caffe2/test/distributed/_tensor/experimental:tp_transform

buck test mode/opt caffe2/test:test_export -- -r test_replace_hook

Differential Revision: D52896531

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117825
Approved by: https://github.com/avikchaudhuri
2024-01-23 22:28:40 +00:00
Angela Yi
92d718aed1 [export] Add lifted constant obj to input (#116985)
Test Plan: wip

Differential Revision: D52556070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116985
Approved by: https://github.com/suo
2024-01-18 22:10:53 +00:00
Angela Yi
ad22bd2fa1 [export][refactor][6/n] Remove equality_constraints (#116979)
Through the new dynamic_shapes API and using torch.export.Dim, dimensions that are equal will now be represented by the same symbol, so we no longer need to store `equality_constraints`.
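
For example, with the `dynamic_shapes` API, reusing one `Dim` ties the two dimensions to the same symbol (module and shapes here are illustrative):

```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

batch = Dim("batch")
# Reusing the same Dim object makes both leading dimensions share one symbol,
# which previously required an explicit equality constraint.
ep = export(M(), (torch.randn(4, 3), torch.randn(4, 3)),
            dynamic_shapes={"x": {0: batch}, "y": {0: batch}})
```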

Differential Revision: D52351705

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116979
Approved by: https://github.com/avikchaudhuri
2024-01-09 19:04:47 +00:00
Zhengxu Chen
9519c8afd4 [export] Remove hacks for passing pinned version test. (#116871)
Summary: nature will heal itself.

Test Plan: CI

Reviewed By: angelayi

Differential Revision: D52566227

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116871
Approved by: https://github.com/angelayi
2024-01-06 18:09:27 +00:00
Angela Yi
6413511713 [export][refactor][4/n] Make equality_constraints optional (#116233)
Summary: needed to remove equality_constraints eventually :P

Test Plan: CI

Differential Revision: D52351709

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116233
Approved by: https://github.com/tugsbayasgalan
2024-01-05 00:50:52 +00:00
Zhengxu Chen
43fb1b671c [export] Improve verifier to not specialize on dialect. (#116705)
Summary:
Currently we have a very ugly specialization on edge dialect in verifier like the following:
```
# TODO Remove this branch.
if ep.dialect == "EDGE":  # !!! Don't change this allowlist. !!!
    pass
else:
    raise e
```
In this diff we do some additional work to make signature checking also work in exir. We decouple the transformation stack in torch export and exir so that different layers of the stack can evolve in their own fashion and the team can divide and conquer them separately.

Test Plan: CI

Differential Revision: D52499225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116705
Approved by: https://github.com/tugsbayasgalan
2024-01-04 17:17:23 +00:00
angelayi
70eb53505b [export] Update range constraints to runtime_var_to_range (#115427)
Updated range_constraints to be the union of shape_env.var_to_range and shape_env.runtime_var_to_range, with shape_env.runtime_var_to_range taking priority.

Due to 0/1 specialization, if we bound an unbacked symint to be less than 5, the range of possible values for this symint is actually recorded as [2, 5] in shape_env.var_to_range. To fix this so that users will be able to see a more understandable range of [0, 5], shape_env.runtime_var_to_range was created to store the range of [0, 5]. Since range_constraints is a user-facing attribute to query the ranges of certain symints, we want to use shape_env.runtime_var_to_range to get the unbacked symints ranges, rather than shape_env.var_to_range.

Additionally, run_decompositions() has an issue where it will always add assertions to the graph, even if a previous run has already added the assertions. So, I added a part to the AddRuntimeAssertionsForInlineConstraints which will store which assertions have already been added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115427
Approved by: https://github.com/zhxchen17
2024-01-03 16:55:04 +00:00
Arun Ranganathan
ef98987017 Fix user input mutations for run_decompositions (#116382)
Fixes #115106

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116382
Approved by: https://github.com/angelayi
2024-01-03 05:04:22 +00:00
Tugsbayasgalan Manlaibaatar
dfc898ede4 Don't decompose functional ops in predispatch functionalization (#116383)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116383
Approved by: https://github.com/bdhirsh
ghstack dependencies: #115188, #115210
2023-12-28 11:54:04 +00:00
PyTorch MergeBot
85628c0e57 Revert "[export] Update range constraints to runtime_var_to_range (#115427)"
This reverts commit f8ad664cf2.

Reverted https://github.com/pytorch/pytorch/pull/115427 on behalf of https://github.com/angelayi due to failing internal tests ([comment](https://github.com/pytorch/pytorch/pull/115427#issuecomment-1870671728))
2023-12-27 22:44:45 +00:00
suo
bc3ef1684e [export] refactor unflatten.py to be a top-level API (#115466)
This is in preparation for the merging of the internal and external versions of
the unflattener. Unflatten needs to be its own API because we are adding more
options to it in forthcoming diffs.

Differential Revision: [D52001133](https://our.internmc.facebook.com/intern/diff/D52001133/)

@diff-train-skip-merge
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115466
Approved by: https://github.com/zhxchen17
2023-12-21 20:52:29 +00:00
angelayi
f8ad664cf2 [export] Update range constraints to runtime_var_to_range (#115427)
Updated range_constraints to be the union of shape_env.var_to_range and shape_env.runtime_var_to_range, with shape_env.runtime_var_to_range taking priority.

Due to 0/1 specialization, if we bound an unbacked symint to be less than 5, the range of possible values for this symint is actually recorded as [2, 5] in shape_env.var_to_range. To fix this so that users will be able to see a more understandable range of [0, 5], shape_env.runtime_var_to_range was created to store the range of [0, 5]. Since range_constraints is a user-facing attribute to query the ranges of certain symints, we want to use shape_env.runtime_var_to_range to get the unbacked symints ranges, rather than shape_env.var_to_range.

Additionally, run_decompositions() has an issue where it will always add assertions to the graph, even if a previous run has already added the assertions. So, I added a part to the AddRuntimeAssertionsForInlineConstraints which will store which assertions have already been added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115427
Approved by: https://github.com/zhxchen17
2023-12-20 20:00:41 +00:00
Angela Yi
8e2d63cbc3 [export][reland] Remove runtime assertion pass (#115597)
Summary:
Reland of https://github.com/pytorch/pytorch/pull/115196
D52054112 to fix internal failures.

Test Plan: CI

Differential Revision: D52054110

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115597
Approved by: https://github.com/ydwu4, https://github.com/zhxchen17
2023-12-15 03:22:03 +00:00
angelayi
17c104ac18 [export] Do not copy state_dict in run_decomp (#115269)
Fixes https://github.com/pytorch/pytorch/issues/114628

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115269
Approved by: https://github.com/thiagocrepaldi, https://github.com/ydwu4
2023-12-13 01:21:21 +00:00
angelayi
b6a4866330 [export][reland][refactor][3/n] Move unlift to separate file (#115558)
Reland of https://github.com/pytorch/pytorch/pull/114787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115558
Approved by: https://github.com/zhxchen17, https://github.com/atalman
ghstack dependencies: #115556, #115557
2023-12-12 05:37:07 +00:00
atalman
749f0c90e1 Revert "[export][refactor][3/n] Move unlift to separate file (#114787)" (#115457)
Github First Oncall: This reverts commit 967863d91d.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115457
Approved by: https://github.com/osalpekar
2023-12-08 22:33:28 +00:00
PyTorch MergeBot
4186932bac Revert "[export] Remove runtime assertion pass (#115196)"
This reverts commit c163b3c035.

Reverted https://github.com/pytorch/pytorch/pull/115196 on behalf of https://github.com/atalman due to Broke internal test ([comment](https://github.com/pytorch/pytorch/pull/115196#issuecomment-1847778344))
2023-12-08 20:07:04 +00:00
angelayi
c163b3c035 [export] Remove runtime assertion pass (#115196)
Reland of https://github.com/pytorch/pytorch/pull/111949/

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115196
Approved by: https://github.com/avikchaudhuri
2023-12-07 01:44:11 +00:00
angelayi
967863d91d [export][refactor][3/n] Move unlift to separate file (#114787)
Differential Revision: [D51823960](https://our.internmc.facebook.com/intern/diff/D51823960)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114787
Approved by: https://github.com/ydwu4
ghstack dependencies: #114764, #114768
2023-12-06 16:46:47 +00:00
Zhengxu Chen
e6b3a8ce5f [export] Refactor export() and separate the non-strict part. (#114697)
Summary: Refactor torch.export to separate strict part and non strict part. Adding an option to torch.export called `strict=True`.
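
A minimal usage sketch of the new flag (module and inputs are illustrative):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.cos()

# strict=True (the default) traces through TorchDynamo; strict=False takes the
# newly separated non-strict path.
ep_strict = torch.export.export(M(), (torch.randn(3),), strict=True)
ep_nonstrict = torch.export.export(M(), (torch.randn(3),), strict=False)
```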

Test Plan: buck2 test mode/opt caffe2/test:test_export -- -r non_strict

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114697
Approved by: https://github.com/ydwu4, https://github.com/tugsbayasgalan
2023-11-30 16:47:50 +00:00
Angela Yi
f1fe0b685c [export] Remove combine_args_kwargs (#114782)
Test Plan: CI

Differential Revision: D51676479

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114782
Approved by: https://github.com/zhxchen17
2023-11-30 02:49:21 +00:00
angelayi
c10893654e [export] Fix run_decomps to work with fake mode (#114714)
Fixes https://github.com/pytorch/pytorch/issues/114711
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114714
Approved by: https://github.com/ydwu4, https://github.com/zhxchen17
2023-11-29 06:52:13 +00:00
Zhengxu Chen
e0d2a24967 Reland "[export] Support user input mutation. [1/2]" (#114496) (#114596)
Summary:

Serialization not implemented yet. Will do in the next diff.

Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142

Test Plan:
onnx doc test
```
python -m xdoctest /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/onnx/_internal/exporter.py ONNXProgram.model_signature:0
```

Differential Revision: D51588558

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114596
Approved by: https://github.com/angelayi
2023-11-27 20:19:04 +00:00
PyTorch MergeBot
fa1ccc34c4 Revert "[export] Support user input mutation. [1/2] (#114496)"
This reverts commit b62c0d96bc.

Reverted https://github.com/pytorch/pytorch/pull/114496 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/114496#issuecomment-1827289635))
2023-11-27 07:52:21 +00:00
Zhengxu Chen
b62c0d96bc [export] Support user input mutation. [1/2] (#114496)
Summary:
Serialization not implemented yet. Will do in the next diff.

Resolving Github issues:
https://github.com/pytorch/pytorch/issues/112429
https://github.com/pytorch/pytorch/issues/114142

Test Plan:
buck2 run mode/opt caffe2/test:test_export -- -r test_export_input_mutation

Differential Revision: D51556962

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114496
Approved by: https://github.com/tugsbayasgalan
2023-11-27 04:53:38 +00:00
Angela Yi
50101d59ba [export][retry] Move lifted tensors out of state_dict (#113689)
Test Plan: CI

Differential Revision: D51321532

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113689
Approved by: https://github.com/zhxchen17
2023-11-15 09:24:49 +00:00
Tugsbayasgalan Manlaibaatar
a7b75f586a [RELAND] Disallow skipping dynamo (#110222)
Previous discussion: https://github.com/pytorch/pytorch/pull/109476

In this PR, I made following additions to the original PR:
1) The unlifted graph module now runs the runtime assertions in its forward call.
2) When we retrace, we run the assertions to make sure the user is tracing the module with inputs that satisfy the assumptions we made during the first tracing. The way I do this is by creating a new graph module type with a modified call method. The runtime assertions run under torchdynamo.disable so that they execute eagerly; we don't want them to become a traced part of the graph.
3) Both ep.module and capture_pre_autograd now return an _UnliftedGraphModule.

Differential Revision: [D51078056](https://our.internmc.facebook.com/intern/diff/D51078056)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110222
Approved by: https://github.com/zhxchen17
2023-11-14 16:02:01 +00:00
Zhengxu Chen
aa376e31fd [export] Enable verifier [2/n] (#113075)
Summary: Turn on the verifier check in the exported program ctor. Note that this effectively detects a large surface of spec violations, so we also spent some time fixing them one by one in this diff.

Test Plan: CI

Differential Revision: D51014944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113075
Approved by: https://github.com/angelayi
2023-11-08 03:32:11 +00:00
Aaron Gokaslan
8219bf051b [BE]: Apply RUF015 to torch folder (#113025)
Removes unnecessary allocations of iterators. There is a small chance this may have side effects as the entire iterator is no longer consumed, but this is a way more efficient method for retrieving the first element.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113025
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-11-07 00:48:15 +00:00
Zhengxu Chen
50767a075a [export] Clean up verifier [1/n]. (#112505)
Summary: Some adjustments to the verifier so that it's easier to use correctly. We will enable the verifier later, so the current diff is a no-op.

Test Plan: CI

Differential Revision: D50839295

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112505
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2023-11-02 19:36:06 +00:00
angelayi
131e0f1b75 [export] Separate out graph signature (#112412)
Differential Revision: [D50800524](https://our.internmc.facebook.com/intern/diff/D50800524)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112412
Approved by: https://github.com/zhxchen17
2023-11-02 00:18:28 +00:00