Commit Graph

36 Commits

Author SHA1 Message Date
Tugsbayasgalan Manlaibaatar
cd275dc24f Remove RangeConstraints in favor of ValueRanges (#109859)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109859
Approved by: https://github.com/avikchaudhuri
2023-10-10 22:22:05 +00:00
Kazuaki Ishizaki
bff28ec568 Fix typo under torch/_export directory (#110808)
This PR fixes typos in comments and messages in files under the `torch/_export` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110808
Approved by: https://github.com/gmagogsfm
2023-10-08 11:47:51 +00:00
Adnan Akhundov
f74937741e Remove runtime assertions between export and AOT compilation (#110710)
Summary: The runtime assertions inserted into the `torch._export.export` output by the `_AddRuntimeAssertionsForInlineConstraintsPass` lead to errors in AOT Inductor like #109884. In `torch._export.aot_compile`, export and AOT compilation are run consecutively, which triggers the above issue if any assertions are inserted.

In this PR, we're adding a new parameter / flag to `torch._export.aot_compile`, `remove_runtime_assertions`, to remove the assertions inserted during export before AOT compilation. The flag defaults to `False` for BC.

Additionally, we remove the flag `add_runtime_assertions_for_inline_constraints` recently added to `torch._dynamo.config`, as it can lead to undesirable `torch._export` behavior and is no longer required for AOT Inductor testing purposes.
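
A hedged usage sketch of the new flag; only `remove_runtime_assertions` comes from this PR, while the module, inputs, and return value are assumptions for illustration:

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Drop the export-inserted assertions before AOT Inductor runs.
so_path = torch._export.aot_compile(
    M(),
    (torch.randn(4, 8),),
    remove_runtime_assertions=True,
)
```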

Test Plan: CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110710
Approved by: https://github.com/zhxchen17, https://github.com/chenyang78
2023-10-06 21:09:35 +00:00
Zhengxu Chen
be5dc3a00d [export] Update ArgumentSpec definition. (#110612)
Summary: Changing ArgumentSpec into a true union type in Python without changing the serialization format.

Test Plan: CI

Differential Revision: D49871088

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110612
Approved by: https://github.com/angelayi
2023-10-06 03:14:45 +00:00
Angela Yi
13af952f94 [export] Add run_decomposition() function to ExportedProgram (#110236)
Summary:
https://docs.google.com/document/d/1QJJEGnj2nHGPODlw38BEG3KLLCOTfdOVjPrNQbz_LM8/edit#bookmark=id.lp80wfshq130

`exported_program.run_decompositions(decomposition_table)` will optionally take a decomposition table, and run decompositions on the exported program, returning a new exported program. By default we will run the Core ATen decomposition table.
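
A hedged usage sketch based on the description above; the entry point and decomposition-table helper are assumptions, not part of this diff:

```
import torch
from torch._decomp import core_aten_decompositions

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x)

ep = torch.export.export(M(), (torch.randn(3),))

# No argument: runs the Core ATen decomposition table by default.
core_ep = ep.run_decompositions()

# Or pass an explicit table (a mapping of OpOverloads to decomposition functions).
custom_ep = ep.run_decompositions(core_aten_decompositions())
```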

Splitting this diff from the following one (D49742989) to make migrating Executorch easier:
1. Land this diff
2. Wait for a pytorch nightly to include this diff
3. Update executorch's pytorch nightly
4. Land the following diff to have export() return no decomps

Test Plan: Tested in the following diff

Differential Revision: D49743208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110236
Approved by: https://github.com/gmagogsfm
2023-10-01 18:18:27 +00:00
Angela Yi
e8ab8c877d [exir] Add lift constant tensors passes after aten_to_edge (#109382)
Summary:
X-link: https://github.com/pytorch/executorch/pull/359

When exporting using enable_aot (through the torch.export path), we want to lift all constant tensors as buffers to the exported program. The ScalarToTensor pass in EXIR's aten_to_edge passes will create some constant tensors in the graph, so we will need to run a lift_constant_tensors pass afterwards.

Note that this only needs to be applied when exporting using the torch.export path because in the original path, nothing is lifted.

Test Plan: CI

Differential Revision: D49207492

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109382
Approved by: https://github.com/cccclai
2023-09-19 01:34:58 +00:00
zhxchen17
6f4b9cc9ab [export] Skip noop runtime assertion pass. (#109395)
Summary:
If there are no inline constraints added, just return the original graph.
We want to do this because this pass sometimes messes up the node names;
until we actually fix that, we can make the behavior a bit less buggy
by skipping no-op passes.
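
A minimal sketch of the early return described above; the function and helper names are hypothetical, not the pass's real signature:

```
import torch.fx

def runtime_assertion_pass(gm: torch.fx.GraphModule, inline_constraints):
    if not inline_constraints:
        return gm  # no-op: skip the pass so node names stay untouched
    # Otherwise insert the runtime assertions as before (hypothetical helper).
    return insert_inline_assertions(gm, inline_constraints)
```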

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109395
Approved by: https://github.com/angelayi
2023-09-18 22:37:28 +00:00
Angela Yi
58391aeaf1 [export] Lift constant tensors as buffers (reland) (#109040)
Summary:
When we retrace the graph containing constant tensors, they get lifted as buffer inputs.
AotInductor also wants to lift all the constants as inputs.
If we keep the constants as a separate category, it adds complexity: we now have to keep track of 3 kinds of inputs (params, buffers, constants).

Cons: people might care about specifically which buffers are or are not real buffers?

If people want to know specifically which buffers are constants, we can add an additional field in the graph signature to mark this.
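
A hedged illustration of the simplification, assuming the graph signature attribute names of this era:

```
# With constant tensors lifted as buffers, consumers track only two
# input categories instead of three (ep is an ExportedProgram):
params = set(ep.graph_signature.parameters)
buffers = set(ep.graph_signature.buffers)  # now includes lifted constant tensors
assert not (params & buffers)
```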

Test Plan: CI

Differential Revision: D49153367

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109040
Approved by: https://github.com/zhxchen17
2023-09-12 15:23:00 +00:00
Huy Do
703cdd711f Revert "[export] Lift constant tensors as buffers (#108592)" (#108893)
This reverts commit e3407238f6.

I gave up trying to revert the original PR in the usual way https://github.com/pytorch/pytorch/pull/108592#issuecomment-1712135536, so let's manually revert it then.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108893
Approved by: https://github.com/izaitsevfb, https://github.com/atalman
2023-09-08 22:25:10 +00:00
angelayi
e3407238f6 [export] Lift constant tensors as buffers (#108592)
When we retrace the graph containing constant tensors, they get lifted as buffer inputs.
AotInductor also wants to lift all the constants as inputs.
If we keep the constants as a separate category, it adds complexity: we now have to keep track of 3 kinds of inputs (params, buffers, constants).

Cons: people might care about specifically which buffers are or are not real buffers?

If people want to know specifically which buffers are constants, we can add an additional field in the graph signature to mark this.

Differential Revision: [D49017872](https://our.internmc.facebook.com/intern/diff/D49017872)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108592
Approved by: https://github.com/zhxchen17
2023-09-07 01:14:30 +00:00
Zhengxu Chen
138fafe72d [export] Fix torch.export() issues for server use cases. (#108275)
Test Plan: In D48788843

Differential Revision: D48811793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108275
Approved by: https://github.com/tugsbayasgalan
2023-08-31 07:19:18 +00:00
Angela Yi
5683ab74f4 [export] Fix autogenerated stacktrace (#108217)
Summary: The existing code incorrectly overwrites the stack trace to be None because, since there is no exception happening, `traceback.format_exc` is None. Also, we should only populate the stack trace if it is not there in the first place.
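
A hedged sketch of the guard described above; `traceback.format_exc()` is only meaningful inside an exception handler, so the autogenerated trace must come from elsewhere and must not clobber an existing one:

```
def maybe_set_stack_trace(node, stack_trace):
    # Only populate the stack trace if the node doesn't already have one.
    if not node.meta.get("stack_trace"):
        node.meta["stack_trace"] = stack_trace
```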

Test Plan: CI

Differential Revision: D48818478

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108217
Approved by: https://github.com/zhxchen17
2023-08-30 17:44:06 +00:00
Tugsbayasgalan Manlaibaatar
52eb773e9c Add runtime assertions for prim values (#107939)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107939
Approved by: https://github.com/gmagogsfm
2023-08-26 00:51:28 +00:00
Zhengxu Chen
547ccae0db [export] Support preserving calling convention to some modules. (#106798)
Summary: APS uses this feature to swap out some submodules after unflattening.

Test Plan: test_export_preserve_signature

Differential Revision: D48154341

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106798
Approved by: https://github.com/tugsbayasgalan
2023-08-11 21:17:45 +00:00
Zhengxu Chen
9891c6aa15 [export] cleanup pass base. [2/n] (#106905)
Test Plan: CI

Differential Revision: D48004717

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106905
Approved by: https://github.com/angelayi
2023-08-10 02:49:58 +00:00
Zhengxu Chen
a8e3bd97cf [export] cleanup pass base. [1/n] (#106480)
Test Plan: CI

Differential Revision: D48004635

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106480
Approved by: https://github.com/angelayi
2023-08-03 19:48:05 +00:00
Zhengxu Chen
10f55a2a94 [export] Handle the case of no placeholders in the runtime assertion pass. (#106134)
Summary: as title

Differential Revision: D47835210

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106134
Approved by: https://github.com/angelayi
2023-07-27 18:36:51 +00:00
Zhengxu Chen
2dbadd1eae [export] Remove experimental runtime assertion configs from export API. (#105043)
Test Plan: CI

Differential Revision: D47390794

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105043
Approved by: https://github.com/larryliu0820
2023-07-26 16:21:29 +00:00
Nikita Shulga
999abd56a7 [BE] Make ONNX imports lazy (#104843)
This reduces the total number of imported modules by default from 1419 to 1322, according to
```
time python -c "import sys;before=len(sys.modules);import torch;after=len(sys.modules);print(f'torch-{torch.__version__} imported {after-before} modules')"
```

and slightly reduces import time, while having no effect on UX (i.e. the `torch.onnx` submodule is kept intact)
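
For reference, a common way to get this effect is PEP 562's module-level `__getattr__`; a hedged sketch of the general pattern, not necessarily this PR's exact mechanism:

```
# In a package's __init__.py: import the submodule on first attribute access.
import importlib

def __getattr__(name):
    if name == "onnx":
        return importlib.import_module(".onnx", __name__)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```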

Suppress lint errors that appear after mypy accidentally starts listing more files, for more details see: https://github.com/pytorch/pytorch/issues/104940

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104843
Approved by: https://github.com/jansel, https://github.com/albanD
2023-07-11 12:54:22 +00:00
Tugsbayasgalan Manlaibaatar
361ef824ea Handle custom higher order ops (#104285)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104285
Approved by: https://github.com/zhxchen17
2023-06-28 01:53:36 +00:00
xuanqi
344bab2669 [RFC]: Functionalize assertions (#103757)
The idea here is to do a graph mutation to:
* Create an initial dependency token at the beginning of the program.
* Replace the non-functional versions of assertion statements with functional versions.
* The functional version of an assertion statement will (see the sketch after the notes below):
  * Accept a dependency token from the output of the previous functional assertion statement (or the initial dependency token if there isn't any).
  * Generate a dependency token as the output of the assertion statement.
* Augment the output to include the dependency token generated by the last assertion statement.

The goal here is to:
* Form an explicit dependency chain and avoid potential reordering during other compilation passes.
* Make the assertions part of the overall execution graph that affects the final output (otherwise they could potentially be DCE'd).

**NOTE:**
* Currently this only covers `constrain_range`; support for other assertions is WIP. Sending out this PR to collect feedback first.
* This PR focuses only on the implementation itself; it will be integrated with the current export in a future PR.
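
A rough sketch of the dependency-token chain described above; the op names are hypothetical placeholders, not real torch ops:

```
def forward(self, x):
    token = make_dep_token()                           # initial dependency token
    token = functional_assert(token, x.shape[0] >= 3)  # consumes and produces a token
    token = functional_assert(token, x.shape[0] <= 5)  # chained: ordering is explicit
    out = x + 1
    return out, token  # token joins the outputs, so the asserts can't be DCE'd
```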

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103757
Approved by: https://github.com/avikchaudhuri
2023-06-24 00:23:35 +00:00
Yidi Wu
0330f67b22 Remove ExportGraphModuleMixin. (#103786)
Summary:
We remove the ExportGraphModuleMixin. There are several implications of this change:
1. The graph_module of ExportedProgram, EdgeDialectProgram and ExecutorchProgram won't have the same signature as the original user function. Instead, we should directly call the *Program, which has the same calling convention (e.g. call the program object itself rather than its `graph_module`).

2. All passes need to go through prog.transform(*passes), and we need to make all passes return a PassResult (see the sketch after this list).

3. We also need to make sure the graph_module.meta is preserved after transform.
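
A hedged sketch of the new pass flow, given some exported program `prog`; everything other than `transform` and `PassResult` is assumed for illustration:

```
from torch.fx.passes.infra.pass_base import PassResult

def my_pass(gm):
    # ... mutate gm.graph here ...
    return PassResult(gm, True)  # every pass must now return a PassResult

new_prog = prog.transform(my_pass)  # and graph_module.meta must survive transform
```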

Test Plan: Test with CI.

Differential Revision: D46729844

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103786
Approved by: https://github.com/avikchaudhuri
2023-06-23 01:22:28 +00:00
Angela Yi
8dc6001057 [export] Serialize symbolic values (#103273)
* Modified the SymInt schema to also store the hint of the SymInt if it is represented as a symbol so that when we reconstruct the SymInt, the hint will also exist on the node.
* GraphModuleDeserializer.deserialize now also optionally takes a map of symbol names to ranges.

ReplaceSymSizeOpPass should not be needed after https://github.com/pytorch/pytorch/pull/103107 lands
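
A hedged conceptual sketch of the broadened schema; the field names here are assumptions, not the actual serialization code:

```
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class SymExprSketch:
    expr_str: str               # e.g. "s0"
    hint: Optional[int] = None  # concrete value observed at trace time

# A serialized SymInt is either a concrete int or a symbolic
# expression that now carries its hint for reconstruction.
SymIntSketch = Union[int, SymExprSketch]
```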

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103273
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-13 20:29:47 +00:00
Angela Yi
6cb1455857 [export] Change equality constraints to list of tuples (#102998)
Changed equality constraints to a list of tuples as the dictionary wasn't providing much value -- also makes creating constraints + serialization easier.
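
A hedged illustration of the new structure; `InputDim` is assumed here to name the (input_name, dim) slots being equated:

```
from collections import namedtuple

InputDim = namedtuple("InputDim", ["input_name", "dim"])

# Each tuple pairs two dimensions that must be equal at runtime.
equality_constraints = [
    (InputDim("x", 0), InputDim("y", 0)),  # x.shape[0] == y.shape[0]
]
```
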
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102998
Approved by: https://github.com/avikchaudhuri
2023-06-05 21:03:02 +00:00
Tugsbayasgalan Manlaibaatar
4bb2b65ea4 Turn on add_runtime_assertion by default (#102671)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102671
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-05 16:27:44 +00:00
Richard Zou
74f10b9ea5 Switch most Python RAII guard usages to context manager (#102642)
There are some I can't easily switch due to reasons like:
- Dynamo modelling the guard
- BC concerns (for torch.autograd.set_multithreading_enabled)

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102642
Approved by: https://github.com/albanD
2023-06-01 16:28:37 +00:00
Angela Yi
7a569f86a0 [export] Cleanup constraints (#102666)
Redo of https://github.com/pytorch/pytorch/pull/102432 because idk how to push to that other branch...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102666
Approved by: https://github.com/zhxchen17
2023-06-01 04:22:31 +00:00
Tugsbayasgalan Manlaibaatar
d9f75dded1 [export] Add aot_export 1/N (#101490)
This PR adds aot_export_module as the lowering path from a torch-level graph to an ATen graph. Some known limitations that need to be addressed in follow-up PRs:
1. Store param/buffer data in ExportedProgram
2. Fully support torch.cond with params/buffers
3. Making sure no duplicated ExportMetaData entry
4. This API will break Executorch if used on PyE; we will figure out a plan internally.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101490
Approved by: https://github.com/avikchaudhuri
2023-05-31 20:56:21 +00:00
Angela Yi
c4028de462 [export] ExportedProgram (#102259)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102259
Approved by: https://github.com/ydwu4, https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan, https://github.com/zhxchen17
2023-05-26 23:36:38 +00:00
Avik Chaudhuri
8751002215 equality assertions (#102256)
Previously we had runtime asserts for range constraints. This diff adds runtime asserts for equality constraints.

This requires a bit of refactoring that is worth calling out.
1. [Minor] Some of the data structures produced by export and consumed by the runtime assertion pass need to be broadened. This is a WIP. There are some associated code improvements that are included in this diff, but by and large the structures are similar to what exists now. Meanwhile @angelayi and I are chatting about how to make it qualitatively better: briefly, we want to index everything by symbols, which are 1-1 with (name, dim) pairs.
2. [Major] The order in which runtime asserts are emitted is changed. Previously we did the work in `placeholder`; now this diff adds a hook for "post-processing" after all placeholders have been processed. This is needed because equality constraints can mention different placeholders. This change also opens the way to optimizing codegen: e.g., each (name, dim) pair should correspond to a single intermediate variable that is reused across runtime asserts. This is future work.
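
A hedged sketch of what an emitted equality assert could look like, mirroring the inline-constraint example further down this log (names and message are illustrative):

```
sym_size: Sym(s0) = torch.ops.aten.sym_size(arg0, 0)
sym_size_1: Sym(s1) = torch.ops.aten.sym_size(arg1, 0)
eq: Sym(Eq(s0, s1)) = sym_size == sym_size_1
scalar_tensor_default: f32[] = torch.ops.aten.scalar_tensor.default(eq);  eq = None
_assert_async_msg = torch.ops.aten._assert_async.msg(scalar_tensor_default, 'arg0.shape[0] must be equal to arg1.shape[0]');  scalar_tensor_default = None
```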

Differential Revision: [D46177642](https://our.internmc.facebook.com/intern/diff/D46177642/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102256
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2023-05-26 14:57:31 +00:00
Tugsbayasgalan Manlaibaatar
9ffad5b62b Remove input tracker from runtime assertion pass (#100955)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100955
Approved by: https://github.com/ydwu4
2023-05-15 21:26:47 +00:00
Tugsbayasgalan Manlaibaatar
f542b31c9d [export] More robust view->view_copy pass (#100908)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100908
Approved by: https://github.com/ydwu4
2023-05-10 14:25:17 +00:00
ydwu4
26cd958718 Support runtime assertion for inline constraints (#100763)
This PR does the following:
1. Previously, inline constraints were not properly set for data-dependent ops with tensor outputs, such as `a.nonzero()`, because their return value is not a SymInt. This PR just uses all the unbacked symbols, i.e. those starting with "i"/"f", in the `create_unbacked_sym*` functions. Note that these symbols are guaranteed to be a superset of the inline user constraints.

2. Adds support for checking inline assertions at runtime.

Currently, it only deals with data-dependent ops whose outputs are tensors, SymInts, SymFloats, or SymBools, and ignores the rest. That is good enough for now, as we only have a limited number of data-dependent ops (.item and .nonzero are explicitly tested).

An example of a graph with added assertions is shown below:

```
class ExportGraphModule(torch.nn.Module):
    def forward(self, x):
        arg0: i64[s0], = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
        nonzero_default: i64[i0, 1] = torch.ops.aten.nonzero.default(arg0);  arg0 = None
        return pytree.tree_unflatten([nonzero_default], self._out_spec)

class GraphModule(torch.nn.Module):
    def forward(self, x):
        arg0: i64[s0], = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
        sym_size: Sym(s0) = torch.ops.aten.sym_size(arg0, 0)
        nonzero_default: i64[i1, 1] = torch.ops.aten.nonzero.default(arg0);  arg0 = None
        sym_size_1: Sym(i1) = torch.ops.aten.sym_size(nonzero_default, 0)
        ge: Sym(i1 >= 3) = sym_size_1 >= 3
        scalar_tensor_default: f32[] = torch.ops.aten.scalar_tensor.default(ge);  ge = None
        _assert_async_msg = torch.ops.aten._assert_async.msg(scalar_tensor_default, 'nonzero_default.shape[0] is outside of inline constraint [3, 5].');  scalar_tensor_default = None
        le: Sym(i1 <= 5) = sym_size_1 <= 5;  sym_size_1 = None
        scalar_tensor_default_1: f32[] = torch.ops.aten.scalar_tensor.default(le);  le = None
        _assert_async_msg_1 = torch.ops.aten._assert_async.msg(scalar_tensor_default_1, 'nonzero_default.shape[0] is outside of inline constraint [3, 5].');  scalar_tensor_default_1 = None
        return pytree.tree_unflatten([nonzero_default], self._out_spec)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100763
Approved by: https://github.com/tugsbayasgalan
2023-05-09 04:19:57 +00:00
Tugsbayasgalan Manlaibaatar
9b3552eb2c Add runtime assertions for input shape constraints (#100247)
This PR adds runtime assertions as an extra pass in the exported graph. Several high-level notes:
1. We specialize all dimensions that were not added to the user input constraints
2. We haven't added relational constraints as runtime assertions (e.g. x[1] == x[0]); that will be done in a follow-up diff
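
A hedged usage sketch under the constraint API of roughly this era; `dynamic_dim` from `torch._export` is an assumption here, not part of this diff:

```
import torch
from torch._export import export, dynamic_dim

def f(x):
    return x + 1

x = torch.randn(4, 8)
# Dim 0 is marked dynamic via a user constraint; dim 1 is never
# mentioned, so per point 1 above it gets specialized to 8.
ep = export(f, (x,), constraints=[dynamic_dim(x, 0)])
```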

Differential Revision: [D45408971](https://our.internmc.facebook.com/intern/diff/D45408971)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100247
Approved by: https://github.com/guangy10, https://github.com/avikchaudhuri
2023-05-04 13:26:58 +00:00
Angela Yi
7bece142a9 [export] Port over const prop pass (#100102)
Stacked on top of https://github.com/pytorch/pytorch/pull/100000
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100102
Approved by: https://github.com/gmagogsfm
2023-04-27 17:06:47 +00:00
Angela Yi
9bbd3d6489 [export] ExportPassBase + view_copy pass (#100000)
* Added ExportPassBase, an interpreter-based helper class for writing passes
* It can also help maintain the dialect based on the operator namespace by having users override the `get_valid_dialects` function (returning an empty list implies the pass works for any dialect); see the subclass sketch below
* Added a `ReplaceBrokenOpsWithFunctionalOpsPass` to replace all ops that functionalization has not converted with their functional counterparts.
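
A hedged sketch of subclassing the helper described above; only `get_valid_dialects` is named by this PR, while the import path and class body are illustrative:

```
from torch._export.pass_base import ExportPassBase  # import path assumed

class MyDialectAgnosticPass(ExportPassBase):
    def get_valid_dialects(self):
        return []  # empty list: the pass is valid for any dialect
```
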
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100000
Approved by: https://github.com/gmagogsfm
2023-04-26 21:01:25 +00:00