Commit Graph

220 Commits

Tugsbayasgalan (Tugsuu) Manlaibaatar
a44c072c89 Make InternalModel and Resnet work with rexportable flow (#106676)
Summary: Internal model and ResNet use the "re-export" flow now. Also did some refactoring to make the code a little cleaner.

Some changes for OSS:
1. Correctly use the "cached" fake tensors so that static symbols are still resolved as static
2. Change logic in PassBase to allocate static shapes for parameters
3. Add an "is_torch_exported" tag to every node so it survives various graph transformations.
4. Added an experimental wrapper API for the quantization team to get a pre_dispatch=True graph. Note that it doesn't actually do that right now, but we plan to switch soon.

Test Plan: CI

Differential Revision: D47890878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106676
Approved by: https://github.com/jerryzh168
2023-08-09 20:10:48 +00:00
gmagogsfm
47014883a7 Remove unused _add_runtime_assertions (#106759)
`_add_runtime_assertions` is not used
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106759
Approved by: https://github.com/tugsbayasgalan
2023-08-09 18:58:32 +00:00
Angela Yi
d4bc27191a [exir] Update exir.pass_base to use export.pass_base (#106647)
Summary: Also fixed T159713621

Test Plan: CI

Differential Revision: D48068293

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106647
Approved by: https://github.com/tugsbayasgalan
2023-08-08 19:27:21 +00:00
angelayi
5b13c779d4 [AOTInductor] Remove call to aot_autograd when receiving ExportedProgram (#105977)
https://github.com/pytorch/pytorch/issues/105555

The existing flow first exports and then calls torch._inductor.aot_compile. However, export calls aot_autograd with the core ATen decomposition table, and then torch._inductor.aot_compile calls aot_autograd again with the inductor decomposition table. The second call to aot_autograd reportedly causes some problems and seems excessive, so instead we create a new function, torch._export.aot_compile, which exports using the inductor decomposition table and passes the result to inductor's compile_fx_aot. Because the program has already been exported, this avoids calling aot_autograd a second time.

```
def aot_compile(
    f: Callable,
    args: Tuple[Any],
    kwargs: Optional[Dict[str, Any]] = None,
    constraints: Optional[List[Constraint]] = None,
) -> Tuple[str, ExportedProgram]:
```
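A hedged usage sketch based on the signature above; the module path and the handling of the return value are assumptions pieced together from this message, not verified against the PR:

```python
import torch
import torch._export  # module path assumed from the message above

def f(x):
    return x.sin() + x.cos()

# Export once with the inductor decomposition table, then compile; per the
# signature above, this returns the path to the compiled artifact and the
# ExportedProgram.
so_path, exported_program = torch._export.aot_compile(f, (torch.randn(8),))
```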

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105977
Approved by: https://github.com/desertfire, https://github.com/zhxchen17, https://github.com/eellison
2023-08-04 15:35:23 +00:00
Tarun Karuturi
f8817d8ac8 Remove deepcopy override from ExportedProgram (#106578)
Summary: When we do a deep copy of the ExportedProgram, the custom deepcopy override causes the graph metadata (graph.meta) to not be copied over. This can be fixed, but overall I don't see a need for a custom deepcopy in ExportedProgram, so I'm trying to get rid of it.

Test Plan: CI

Differential Revision: D48043723

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106578
Approved by: https://github.com/JacobSzwejbka
2023-08-04 06:31:32 +00:00
Zhengxu Chen
a8e3bd97cf [export] cleanup pass base. [1/n] (#106480)
Test Plan: CI

Differential Revision: D48004635

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106480
Approved by: https://github.com/angelayi
2023-08-03 19:48:05 +00:00
Tugsbayasgalan Manlaibaatar
4c46ea583f [Export] Support re-exportability (#106531)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106531
Approved by: https://github.com/zhxchen17
2023-08-03 18:27:26 +00:00
gmagogsfm
410bc558e6 Assert that args is of tuple type. (#106352)
This avoids accidental unpacking of tensor-type inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106352
Approved by: https://github.com/tugsbayasgalan
2023-08-03 01:47:38 +00:00
gmagogsfm
b3c29cd1ec Remove unused workflow.py (#106340)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106340
Approved by: https://github.com/zhxchen17
2023-08-02 23:42:06 +00:00
Tarun Karuturi
3143d81f6c Add support for edge dialect ops in exir/serde (#106371)
Summary:
Adding support for edge dialect ops in `exir/serde`. This diff does the following:
- Moves the global `serialize_operator/deserialize_operator` implementations in `export/serde/serialize.py` into `GraphModuleSerializer` and `GraphModuleDeserializer`
- Adds implementations of `serialize_operator/deserialize_operator` inside `GraphModuleSerializer` and `GraphModuleDeserializer` in `exir/serde/serialize.py`

Test Plan: CI + Enabled edge dialect ops in `executorch/exir/tests/test_serde.py`

Differential Revision: D47938280

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106371
Approved by: https://github.com/angelayi
2023-08-02 20:09:15 +00:00
Tugsbayasgalan Manlaibaatar
fadd0859ca Expose module method in ExportedProgram (#105575)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105575
Approved by: https://github.com/zhxchen17
2023-08-01 21:28:57 +00:00
ydwu4
aaaafa1bcf [Export] remove unused flags in export (#106336)
Remove unused flags from export_dynamo_config. Among them:
- capture_scalar_outputs: bool = True. **True by default** in dynamo.export.
- capture_dynamic_output_shape_ops: bool = True. **True by default** in dynamo.export.
- specialize_int: bool = True. **True by default** in dynamo.export.
- guard_nn_modules: bool = True. This flag is **not being used**, as we never look at nn module guards and assume modules are frozen. See the [doc](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/config.py#L77) of this flag.
- dynamic_shapes: bool = True. **Deprecated by dynamo**: [here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/config.py#L55)

Test Plan:
Added a new test for allow_rnn to check its effectiveness.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106336
Approved by: https://github.com/tugsbayasgalan
2023-08-01 16:10:09 +00:00
angelayi
66c537429e [export] Move attrs to properties and add BC decorator (#106170)
@SherlockNoMad mentioned that it's not BC-safe to directly access these attributes, so I moved them to @property fields and added a `@compatibility` decorator. For now I just set it to True for graph_module/graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106170
Approved by: https://github.com/SherlockNoMad
2023-07-31 18:13:07 +00:00
Aaron Gokaslan
52d4b1ae31 [BE]: Enable ruff rules PIE807 and PIE810 (#106218)
* Enables PIE807 + PIE810. PIE807 flags reimplementing the list builtin with a lambda, and PIE810 flags startswith/endswith calls that should be fused into one (I applied the autofixes for this before we had ruff enabled). Both are illustrated below.
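An illustrative sketch of the two rules (not taken from the PR diff):

```python
# PIE807: don't reimplement the list builtin with a lambda.
make_default = lambda: []  # flagged
make_default = list        # autofix

# PIE810: fuse multiple startswith/endswith calls into a single tuple call.
name = "torch_export"
hit = name.startswith("torch") or name.startswith("functorch")  # flagged
hit = name.startswith(("torch", "functorch"))                   # autofix
```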
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106218
Approved by: https://github.com/albanD
2023-07-28 22:35:56 +00:00
ydwu4
5237ed55e6 [export] allow register dataclass as pytree node (#106160)
In this PR, we allow users to register a customized flatten/unflatten/serialization/deserialization for a dataclass. We provide a default implementation for flatten/unflatten, and could implement a decorator based on it when needed.

## Motivation:
HuggingFace and many internal models return dataclass outputs, and torch.export wants to maintain the invariant that the export result (i.e. the exported_program) has the same calling convention and result as the original callable.

This is not supported in export yet: we cannot recover the original dataclass from the flattened output produced by the underlying graph module (produced by dynamo and processed further by aot_export). We need a place to store the metadata of the dataclass so that we can reconstruct it. To avoid adding hacky code in export and to allow principled extensibility, we think extending pytree is a good option.

## Implementation:
@zou3519 mentioned https://github.com/pytorch/pytorch/pull/93214/files and [jax-2371](https://github.com/google/jax/issues/2371#issuecomment-805361566), which suggest that it's not a good idea to make dataclasses a default pytree node, but that it could be good to provide a default implementation for them. Since this currently seems to be an export-only feature, we added the extension point in export.

We also add a "return_none_fields" flag to control whether None fields are returned after flattening; this is expected to be False in produce_matching of dynamo.export.

Also added some tests.
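A minimal sketch of what such a registration could look like, written against the generic pytree registration hook; the export-specific entry point added by this PR may differ:

```python
from dataclasses import dataclass
import torch
import torch.utils._pytree as pytree

@dataclass
class ModelOutput:
    logits: torch.Tensor
    loss: torch.Tensor

def _flatten(out):
    # Children plus the context needed to rebuild the dataclass.
    return [out.logits, out.loss], ModelOutput

def _unflatten(values, context):
    return context(*values)

pytree._register_pytree_node(ModelOutput, _flatten, _unflatten)

out = ModelOutput(torch.randn(2, 3), torch.tensor(0.5))
flat, spec = pytree.tree_flatten(out)
assert isinstance(pytree.tree_unflatten(flat, spec), ModelOutput)
```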

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106160
Approved by: https://github.com/zhxchen17
2023-07-28 17:33:13 +00:00
Edward Z. Yang
7b9d250f06 Change _dynamo.export to be export(f)(*args, **kwargs) (#106109)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
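A minimal sketch of the change in calling convention; the `(graph_module, guards)` return shape is an assumption based on the API of the time:

```python
import torch
import torch._dynamo

def f(x):
    return x * 2

# Before this PR: gm, guards = torch._dynamo.export(f, torch.randn(3))
# After this PR, export(f) returns a callable that takes the example inputs:
gm, guards = torch._dynamo.export(f)(torch.randn(3))
```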

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106109
Approved by: https://github.com/voznesenskym
2023-07-27 21:41:13 +00:00
Zhengxu Chen
10f55a2a94 [export] Handle the case for no placeholders during in runtime assertion pass. (#106134)
Summary: as title

Differential Revision: D47835210

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106134
Approved by: https://github.com/angelayi
2023-07-27 18:36:51 +00:00
Zhengxu Chen
2dbadd1eae [export] Remove experimental runtime assertion configs from export API. (#105043)
Test Plan: CI

Differential Revision: D47390794

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105043
Approved by: https://github.com/larryliu0820
2023-07-26 16:21:29 +00:00
Tugsbayasgalan Manlaibaatar
7b31732a6f Delete unused experimental export (#105873)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105873
Approved by: https://github.com/ezyang
2023-07-26 07:22:58 +00:00
PyTorch MergeBot
48cd8e29c1 Revert "Slightly improve AOTAutograd logging with ViewAndMutationMeta (#105702)"
This reverts commit cc137342d0.

Reverted https://github.com/pytorch/pytorch/pull/105702 on behalf of https://github.com/PaliC due to breaking internal export tests (relevant details shared with author) ([comment](https://github.com/pytorch/pytorch/pull/105702#issuecomment-1650492077))
2023-07-25 20:17:27 +00:00
Angela Yi
8bf253ecce [export] Remove eliminate_dead_code (#105875)
Summary: Sometimes the graph being serialized contains nodes with side effects and no users (e.g. out variants of operators), so we don't want to eliminate those when deserializing.

Test Plan: CI

Differential Revision: D47735009

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105875
Approved by: https://github.com/ydwu4
2023-07-25 05:37:44 +00:00
Edward Z. Yang
cc137342d0 Slightly improve AOTAutograd logging with ViewAndMutationMeta (#105702)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105702
Approved by: https://github.com/albanD
2023-07-25 00:47:38 +00:00
Jacob Szwejbka
9d62c5faf6 [exir] Add deepcopy to ExportedProgram (#105852)
Summary: ExirExportedProgram would like to have this feature. Today it implements it itself since it inherits from ExportedProgram, but since we are moving it to composition, I think it would be cleaner to upstream the behavior into the root object anyway.

Test Plan: CI, but TODO: where are the tests for this file?

Differential Revision: D47645843

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105852
Approved by: https://github.com/tugsbayasgalan
2023-07-24 21:15:55 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration (illustrated below). Automated fix from Ruff master.
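Illustrative of the automated fix (not from the PR diff):

```python
d = {"a": 1, "b": 2}

# Before: the value `v` is bound but never used.
for k, v in d.items():
    print(k)

# After the ruff fix: iterate over the keys directly.
for k in d:
    print(k)
```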

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Justin Chu
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow-up in the pyupgrade series, converting more strings to f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`
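Illustrative of the kind of conversion flynt performs (not from the PR diff):

```python
version, device = "2.1", "cuda"

old = "torch %s on %s" % (version, device)           # %-formatting
also_old = "torch {} on {}".format(version, device)  # str.format
new = f"torch {version} on {device}"                 # after flynt
```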

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
angelayi
fed8d3608d Update core aten decomp table (#105673)
Updated the decomposition table based on the existing [Core ATen IR](https://pytorch.org/docs/stable/ir.html) list, and moved the rest of the decompositions to inductor's decomposition table.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105673
Approved by: https://github.com/SherlockNoMad
2023-07-21 02:45:37 +00:00
ydwu4
6abb8c382c [export] add kwargs support for export. (#105337)
Solving #105242.

During export, the exported function's signature changes multiple times. Suppose we'd like to export f as in the following example:
```python
def f(arg1, arg2, kw1, kw2):
  pass

args = (arg1, arg2)
kwargs =  {"kw2":arg3, "kw1":arg4}

torch.export(f, args, kwargs)
```
The signature changes multiple times during the export process, in the following order:
1. **gm_torch_level = dynamo.export(f, *args, \*\*kwargs)**. In this step, we turn all kinds of parameters, such as **positional_only**, **var_positional**, **kw_only**, and **var_kwargs**, into **positional_or_kw**. It also preserves the positional and keyword argument names of the original function (i.e. f in this example) [here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/export.py#L546C13-L546C27). The order of kwargs will be the **key order** of the kwargs dict (after Python 3.6, this is the insertion order of the keys) instead of the order in the original function signature, and this order is baked into an _orig_args variable of gm_torch_level's pytree info. So we'll have:
```python
def gm_torch_level(arg1, arg2, kw2, kw1)
```
This difference is acceptable, as it's transparent to users of export.

2. **gm_aot_export = aot_export_module(gm_torch_level, pos_or_kw_args)**. In this step, we need to turn kwargs into positional args in the order gm_torch_level expects, which is stored in _orig_args. The returned gm_aot_export has the graph signature of flat_args, in_spec = pytree.tree_flatten(pos_or_kw_args):
``` python
flat_args, _ = pytree.tree_flatten(pos_or_kw_args)
def gm_aot_export(*flat_args)
```

3. **exported_program(*args, \*\*kwargs)**. The exported artifact is exported_program, which is a wrapper over gm_aot_export and has the same calling convention as the original function f. To achieve this, we need to (1) specialize the order of kwargs into pos_or_kw_args and (2) flatten pos_or_kw_args into what gm_aot_export expects. We can combine the two steps into one with:
```python
_, in_spec = pytree.tree_flatten((args, kwargs))

# Then during exported_program.__call__(*args, **kwargs)
flat_args  = fx_pytree.tree_flatten_spec((args, kwargs), in_spec)
```
Here, kwargs is treated as a normal pytree whose key order is preserved in in_spec.

Implementation-wise, we treat _orig_args in the dynamo-exported graph module as the single source of truth, and kwargs are ordered following it.
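A small runnable illustration of the key-order point above: kwargs are flattened as an ordinary pytree, so dict insertion order, not signature order, is what gets baked into the spec:

```python
import torch.utils._pytree as pytree

args = (1, 2)
kwargs = {"kw2": 3, "kw1": 4}  # insertion order: kw2 before kw1

flat, in_spec = pytree.tree_flatten((args, kwargs))
print(flat)  # [1, 2, 3, 4] -- values follow dict insertion order, not f's signature
```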

Test Plan:
See added tests in test_export.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105337
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2023-07-20 19:53:08 +00:00
Justin Chu
8a688277a2 [BE] Enable ruff's UP rules and autoformat dynamo / functorch and refs (#105432)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105432
Approved by: https://github.com/ezyang
2023-07-19 13:48:44 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
5666d20bb8 Add unlifting pass under private config (#104897)
Summary: We want to do this little by little. For now, I tried it only on DissectedPartsModel, which needs to use the aot_export version.

Test Plan: CI

Reviewed By: zhxchen17

Differential Revision: D46785735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104897
Approved by: https://github.com/JacobSzwejbka
2023-07-19 01:16:35 +00:00
Nikita Shulga
5837e95d30 [Reland] Update mypy to 1.4.1 (#105227)
This PR re-lands
- [Typing] Fix PEP 484 Violation (#105022)
- Update mypy to 1.4.1 (#91983)

These were reverted due to a conflict with the internal source repo.

Mostly fixes for PEP-484 violations (i.e. when a default arg is set to None but the type is not annotated as Optional)
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
  - Add assert in `torch/optim/optimizer.py` that an Optional list is not None
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`

Unrelated, to bypass CI failures due to the gcc9 dependency update in Ubuntu-18.04:
- Add hack to squash the older libstdc++ from the conda environment in favor of the one from the OS to `.ci/docker/install_conda.sh`
- Update bazel cuda builds to focal, as with libstdc++-6.0.32 bazel builds lose the ability to catch exceptions (probably because they link with cupti statically, but I could not find where that is done)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105227
Approved by: https://github.com/atalman, https://github.com/albanD, https://github.com/Skylion007
2023-07-15 20:30:20 +00:00
PyTorch MergeBot
0285366464 Revert "[dynamo] Maintainable code - Move export impl to a different file (#105071)"
This reverts commit 068f163dd3.

Reverted https://github.com/pytorch/pytorch/pull/105071 on behalf of https://github.com/clee2000 due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/105071#issuecomment-1636654945))
2023-07-15 04:18:07 +00:00
PyTorch MergeBot
15fd1ea118 Revert "[Reland] Update mypy to 1.4.1 (#105227)"
This reverts commit c9c4f8efc3.

Reverted https://github.com/pytorch/pytorch/pull/105227 on behalf of https://github.com/atalman due to trying to mitigate ci sev #105248 ([comment](https://github.com/pytorch/pytorch/pull/105227#issuecomment-1636510935))
2023-07-14 22:28:35 +00:00
Nikita Shulga
c9c4f8efc3 [Reland] Update mypy to 1.4.1 (#105227)
This PR re-lands
- [Typing] Fix PEP 484 Violation (#105022)
- Update mypy to 1.4.1 (#91983)

These were reverted due to a conflict with the internal source repo.

Mostly fixes for PEP-484 violations (i.e. when a default arg is set to None but the type is not annotated as Optional)
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
  - Add assert in `torch/optim/optimizer.py` that an Optional list is not None
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105227
Approved by: https://github.com/atalman, https://github.com/albanD, https://github.com/Skylion007
2023-07-14 20:45:12 +00:00
Angela Yi
bf46b6653f [export] Allow optional call-spec (#105179)
Summary: Submodules may have None call-spec values, which is OK. Updating types + serializer to handle this.

Test Plan: CI

Reviewed By: ydwu4, zhxchen17

Differential Revision: D47353101

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105179
Approved by: https://github.com/zhxchen17
2023-07-14 19:11:47 +00:00
PyTorch MergeBot
3c5a494d7a Revert "Update mypy to 1.4.1 (#91983)"
This reverts commit 634659e262.

Reverted https://github.com/pytorch/pytorch/pull/91983 on behalf of https://github.com/malfet due to It's dependent change was reverted, so reverting this one as well, to keep CI clean ([comment](https://github.com/pytorch/pytorch/pull/91983#issuecomment-1636059709))
2023-07-14 15:59:16 +00:00
Animesh Jain
068f163dd3 [dynamo] Maintainable code - Move export impl to a different file (#105071)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105071
Approved by: https://github.com/voznesenskym
2023-07-14 09:28:33 +00:00
PyTorch MergeBot
15478a50ef Revert "[export] Allow optional call-spec (#105041)"
This reverts commit 194fe1d12f.

Reverted https://github.com/pytorch/pytorch/pull/105041 on behalf of https://github.com/atalman due to broke lintrunner ([comment](https://github.com/pytorch/pytorch/pull/105041#issuecomment-1634911637))
2023-07-13 21:01:21 +00:00
Angela Yi
194fe1d12f [export] Allow optional call-spec (#105041)
Summary: Submodules may have None call-spec values, which is OK. Updating types + serializer to handle this.

Test Plan: CI

Differential Revision: D47353101

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105041
Approved by: https://github.com/ydwu4, https://github.com/zhxchen17
2023-07-13 18:39:54 +00:00
Nikita Shulga
634659e262 Update mypy to 1.4.1 (#91983)
Mostly fixes for PEP-484 violations (i.e. when a default arg is set to None but the type is not annotated as Optional)
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91983
Approved by: https://github.com/kit1980, https://github.com/ZainRizvi, https://github.com/huydhn, https://github.com/thiagocrepaldi, https://github.com/aaronenyeshi
2023-07-13 16:30:36 +00:00
Tugsbayasgalan Manlaibaatar
1d02106e03 Preserve source_fn or nn_module_stack in the lifted params (#105017)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105017
Approved by: https://github.com/angelayi
2023-07-13 06:03:28 +00:00
Aaron Gokaslan
2f95a3d0fc [BE]: Apply ruff PERF fixes to torch (#104917)
Applies automated ruff fixes for the PERF rules and enables all the automatic ones. I also updated ruff, which applied some additional fixes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104917
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-07-11 20:45:21 +00:00
Nikita Shulga
999abd56a7 [BE] Make ONNX imports lazy (#104843)
This reduces the total number of imported modules by default from 1419 to 1322, according to
```
time python -c "import sys;before=len(sys.modules);import torch;after=len(sys.modules);print(f'torch-{torch.__version__} imported {after-before} modules')"
```

and slightly reduces import time, while having no effect on UX (i.e. the `torch.onnx` submodule is kept intact).
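A common pattern for this kind of lazy submodule import (PEP 562 module-level `__getattr__`); a sketch under that assumption, not the exact code from the PR:

```python
# In a package's __init__.py: defer importing a heavy submodule until first use.
import importlib

def __getattr__(name):
    if name == "onnx":
        return importlib.import_module(".onnx", __name__)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```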

Suppress lint errors that appear after mypy accidentally starts listing more files, for more details see: https://github.com/pytorch/pytorch/issues/104940

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104843
Approved by: https://github.com/jansel, https://github.com/albanD
2023-07-11 12:54:22 +00:00
Angela Yi
87e6b19ee0 [export] Make serializer more composable (#104816)
Test Plan: CI

Differential Revision: D47311044

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104816
Approved by: https://github.com/zhxchen17
2023-07-09 19:02:35 +00:00
Angela Yi
29c30b1db8 [export] Fix serialize nn_module_stack (#104721)
Summary:
Some serialized nn_module_stacks contain nested commas, something like:
`(getitem(L['module'],0),torch.nn.modules.linear.Linear)`
Fixing the parsing so that we can deserialize the string into the format `(local identifier, module type)`.

Test Plan: CI

Differential Revision: D47252881

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104721
Approved by: https://github.com/zhxchen17
2023-07-07 17:13:17 +00:00
Angela Yi
d5a83a5f27 [export] Fix deserialization of symint (#104722)
Test Plan: CI

Differential Revision: D47269143

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104722
Approved by: https://github.com/zhxchen17
2023-07-07 17:03:46 +00:00
Angela Yi
199e93a0da [export] Serialize optional tensors (#104723)
Test Plan: Test in model inventory

Differential Revision: D47269141

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104723
Approved by: https://github.com/zhxchen17
2023-07-07 16:55:12 +00:00
Mengwei Liu
4fafe0b74c [export][serde] Hookup export upgrader with TorchScript upgrader entries (#104227)
Adding an API to get the upgraders entry map directly from:

https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/operator_upgraders/upgraders_entry.cpp#L17

Combining the information there with the operator version map from:

https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/operator_upgraders/version_map.cpp#L18

we can get an upgrader map containing the upgrader name, the old schema, and the upgrader string.

This dict will be sent to GraphModuleOpUpgrader to populate the upgrader passes.
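A hedged sketch of what one entry of that upgrader map could look like; the key and field names are illustrative assumptions, not the exact schema from the PR:

```python
upgrader_map = {
    "div_Tensor_0_3": {  # upgrader name (illustrative entry)
        "old_schema": "div.Tensor(Tensor self, Tensor other) -> Tensor",
        "upgrader_str": (
            "def div_Tensor_0_3(self: Tensor, other: Tensor) -> Tensor:\n"
            "    if self.is_floating_point() or other.is_floating_point():\n"
            "        return self.true_divide(other)\n"
            "    return self.divide(other, rounding_mode='trunc')"
        ),
    },
}
```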
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104227
Approved by: https://github.com/angelayi, https://github.com/zhxchen17
2023-07-06 16:57:36 +00:00
xuanqi
3707fbf63b [RFC]: Add test for graph partition after assertion ops functionalization. (#104287)
This PR:
* Addresses the comment at https://github.com/pytorch/pytorch/pull/103887/files#r1244128266.
* Adds a test for graph partition to make sure assertion ops functionalization won't break graph partition in unexpected ways.

**NOTE**:
In the context of export, it's entirely up to the user to do any type of graph partition based on their specific use case. It's hard to anticipate the concrete downstream use case or to provide any specific functionality to facilitate handling assertion ops (functional / non-functional). So this PR limits itself to [`CapabilityBasedPartitioner`](2da6cae43c/torch/fx/passes/infra/partitioner.py (L34)) and makes sure it doesn't break graph partition unexpectedly (by adding some tests).

For the test case used in this PR, a few things to highlight:
* Without the assertion, the fused graph is roughly like:
```
class fused(torch.nn.Module):
    def forward(self, a, b):
        fused_1 = self.fused_1(a, b);
        relu = fused_1.relu()
        fused_0 = self.fused_0(fused_1, relu)
        return (fused_0, fused_1)

    class fused_0(torch.nn.Module):
        def forward(self, add_2, relu):
            ... # Logic after relu
            return add_4

    class fused_1(torch.nn.Module):
        def forward(self, a, b):
            ... # Logic before relu, `add_1` is only exposed within this submodule.
            return add_2
```
* With the assertion, the fused graph is roughly like:
```
class fused(torch.nn.Module):
    def forward(self, arg0_1: i64[s0], arg1_1: i64[s0]):
        dep_token0 = ...
        ...
        fused_1 = self.fused_1(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
        ...
        getitem: i64[s0] = fused_1[0] # `getitem` is actually `add_1`
        ...
        relu_default: i64[s0] = torch.ops.aten.relu.default(getitem_1)
        ...
        # For inline assertion. Note that `getitem` which is an output of `fused_1`, is consumed by it.
        select_int: i64[] = torch.ops.aten.select.int(getitem, 0, 0)
        eq_scalar: b8[] = torch.ops.aten.eq.Scalar(select_int, 5)
        dep_token2: f32[] = torch.ops.aten._functional_assert_async.msg(
            eq_scalar, 'assertion error', dep_token = dep_token1
        )
        ...
        getitem_1: i64[s0] = fused_1[1] # `getitem_1` is actually `add_2`
        fused_0: i64[s0] = self.fused_0(getitem_1, relu_default)
        ...

        return (fused_0, getitem_1, dep_token2)

    class fused_0(torch.nn.Module):
        def forward(self, add_tensor_2: i64[s0], relu_default: i64[s0]):
            ... # Logic after relu
            return add_tensor_4

    class fused_1(torch.nn.Module):
        def forward(self, arg0_1: i64[s0], arg1_1: i64[s0]):
            ... # Logic before relu
            # `add_tensor_1` (basically `add_1`) is returned to allow downstream assertion op consumes it.
            return (add_tensor_1, add_tensor_2)
```

As shown above, the extra assertions added (regardless of whether they are functionalized or not) **won't** cause extra submodule breakage if the asserted node is an intermediate node within a submodule: the intermediate node is returned as an extra output of the submodule so the downstream assertion node can consume it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104287
Approved by: https://github.com/tugsbayasgalan
2023-06-28 22:13:27 +00:00
Tugsbayasgalan Manlaibaatar
361ef824ea Handle custom higher order ops (#104285)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104285
Approved by: https://github.com/zhxchen17
2023-06-28 01:53:36 +00:00
xuanqi
bf34ecd0c8 [RFC]: Integrate assertions functionalization to export (after AOT export) (#103887)
This PR integrates the assertion functionalization logic into the current export logic.

**NOTE:**
I finally decided to do the assertion functionalization after AOT export instead of before, for the following reasons:
* The benefit of AOT export is that the graph is already functionalized, so things like method calls have already been transformed into function calls. If we did it before AOT export, the graph would still be at the torch level, and extra logic like bab21d20eb/torch/_export/pass_base.py (L201-L204C17) would need to be implemented.
* The graph signature is already somewhat incorrect after adding runtime assertions (this doesn't seem to break logic, since we already depend on positions instead of FQNs of outputs). This PR also fixes that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103887
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-06-27 18:14:29 +00:00
Tugsbayasgalan Manlaibaatar
936cd4f2f5 Migrate exportdb to torch.export (#104260)
Reapply of (https://github.com/pytorch/pytorch/pull/103861). Things that needed to be fixed:

- Fix a bug with returning dict output type
- Make pass_base work with map implementation
- Fix subtle bug with dynamo not propagating "val" in node.meta
- Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104260
Approved by: https://github.com/angelayi
2023-06-27 17:49:18 +00:00
zhxchen17
100aff9d4f [export] Deserialize subgraphs. (#103991)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103991
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-26 18:17:44 +00:00
Angela Yi
dd4f4bb47d [exir] Initial serialization (#103763)
Summary:
ETRecord can't use this yet because the other programs need to be migrated to using ExportedProgram (D46729844)

Note: higher order ops like call_delegate/cond are also not supported yet

Test Plan: `buck2 run @//mode/dev-nosan //executorch/exir/tests:serde`

Differential Revision: D46802454

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103763
Approved by: https://github.com/tarun292
2023-06-26 18:05:27 +00:00
xuanqi
344bab2669 [RFC]: Functionalize assertions (#103757)
The idea here is to do a graph mutation to:
* Create an initial dependency token at the beginning of the program.
* Replace the non-functional version of assertion statements with a functional version.
* The functional version of an assertion statement will:
  * Accept a dependency token from the output of the previous functional assertion statement (or the initial dependency token if there isn't any).
  * Generate a dependency token as the output of the assertion statement.
  * Augment the program output to include the dependency token generated by the last assertion statement.

The goals here are to:
* Form an explicit dependency chain and avoid potential reordering during other compilation passes.
* Make the assertions part of the overall execution graph that affects the final output (otherwise they could potentially be DCE'd); see the schematic sketch below.
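A schematic before/after of the rewrite (hypothetical op names, not the pass implementation):

```python
# Before: the assertion has no users, so other passes may reorder or DCE it.
#
#   def forward(x):
#       a = x.item()
#       assert_op(a >= 0)
#       return a * 2
#
# After: each assertion consumes and produces a dependency token, and the last
# token is appended to the outputs, forming an explicit dependency chain.
#
#   def forward(x):
#       dep0 = make_dep_token()
#       a = x.item()
#       dep1 = functional_assert(a >= 0, dep_token=dep0)
#       return a * 2, dep1
```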

**NOTE:**
* Currently this only covers `constrain_range`; supporting other assertions is WIP. Sending out this PR to collect feedback first.
* This focuses only on the implementation itself; integration with the current export will come in a future PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103757
Approved by: https://github.com/avikchaudhuri
2023-06-24 00:23:35 +00:00
Michael Voznesensky
ec24f1e4cc Simulate treespec flattening/unflattening (#101896)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101896
Approved by: https://github.com/jansel, https://github.com/anijain2305
2023-06-23 10:53:15 +00:00
Yidi Wu
0330f67b22 Remove ExportGraphModuleMixin. (#103786)
Summary:
We remove ExportGraphModuleMixin. There are several implications of this change:
1. The graph_module of ExportedProgram, EdgeDialectProgram, and ExecutorchProgram won't have the same signature as the original user function. Instead, we should directly call the *Program, which has the same calling convention.

2. All passes need to go through prog.transform(*passes), so we need to make all passes return a PassResult.

3. We also need to make sure graph_module.meta is preserved after transform.

Test Plan: Test with CI.

Differential Revision: D46729844

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103786
Approved by: https://github.com/avikchaudhuri
2023-06-23 01:22:28 +00:00
Mengwei Liu
75716fb060 [export][serde] Add opset version check and upgrader API (#103238)
This PR adds an initial implementation of an upgrader. Added a test to show that this works for one of the upgraders in https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/operator_upgraders/upgraders_entry.cpp.

Differential Revision: [D46651778](https://our.internmc.facebook.com/intern/diff/D46651778)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103238
Approved by: https://github.com/avikchaudhuri
2023-06-23 01:06:02 +00:00
PyTorch MergeBot
518abe8b7e Revert "Migrate exportdb to torch.export from torchdynamo.export (#103861)"
This reverts commit fb6173a4ac.

Reverted https://github.com/pytorch/pytorch/pull/103861 on behalf of https://github.com/huydhn due to It looks like this change is failing in trunk due to a landrace fb6173a4ac ([comment](https://github.com/pytorch/pytorch/pull/103861#issuecomment-1601960600))
2023-06-22 03:24:01 +00:00
Tugsbayasgalan Manlaibaatar
fb6173a4ac Migrate exportdb to torch.export from torchdynamo.export (#103861)
Things that needed to be fixed:
1. Fix a bug with returning dict output type
2. Make pass_base work with map implementation
3. Fix subtle bug with dynamo not propagating "val" in node.meta
4. Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103861
Approved by: https://github.com/zhxchen17, https://github.com/ydwu4
2023-06-22 02:53:41 +00:00
Zhengxu Chen
2adfd1315a [export] Serialize subgraphs. (#103901)
Differential Revision: D46865179

The deserialization part will be added in a follow-up PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103901
Approved by: https://github.com/larryliu0820
2023-06-21 19:17:33 +00:00
Michael Voznesensky
e5e9d563c2 Lift user defined attributes into inputs for certain cases (user defined types and tensors) (#103386)
(1) Lazy (converts to a dynamo variable on access only)
(2) Uses existing side-effect/reconstruct tech
(3) Not tensor-opinionated

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103386
Approved by: https://github.com/jansel
2023-06-20 23:45:19 +00:00
Tugsbayasgalan Manlaibaatar
d4b85f3031 Support params/buffers inside cond and map (#102310)
With #102022, params and buffers are always treated as a special case of free variables. In this PR, I switch the cond and map implementations to this method and deprecate the old tracing mechanism.

Differential Revision: [D46746202](https://our.internmc.facebook.com/intern/diff/D46746202)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102310
Approved by: https://github.com/avikchaudhuri, https://github.com/zou3519
2023-06-20 05:33:10 +00:00
xuanqi
b27c3558a4 [RFC]: Create aten native op for constrain_range (#103346)
Currently, the implementation of the constraint functions (constrain_as_*) will raise an exception for the following code snippet:
```
def f(x):
    a = x.item()
    constrain_as_size(a, 4, 7)
    return torch.empty((a, 4))

inp = torch.tensor([5])
ep = torch._export.export(f, (inp,))
```

The reason is that the current constraint logic:
1) Is purely Python, so it won't survive AOT export (the full node is gone after AOT export, since AOT export only maintains aten-level ops).
2) Utilizes a side effect to add range constraints to the traced symbol's shape env ([code](9591e52880/torch/fx/experimental/symbolic_shapes.py (L370-L372))).
3) If runtime assertions are turned on (the default), [`_AddRuntimeAssertionsForConstraintsPass`](9591e52880/torch/_export/passes/add_runtime_assertions_for_constraints_pass.py (L98-L100)) will try to append assertion nodes based on the range constraints extracted from the symbols' shape env during another interpretation round.
4) However, because of 1), during AOT export the range-constraint logic won't run for symbols generated in that round, so no range-constraint information is available for the assertion round later, which causes the issue.
5) As a result of the above, it fails at `torch.empty((a, 4))` (there is no constraint saying `a` must be positive).

The fix here is just to implement the range-constraint logic as a native aten op (with the CPU implementation as a no-op) so it can survive AOT export.

**NOTE:**
The [logic](2d745b95d7/torch/fx/experimental/symbolic_shapes.py (L350-L365C15)) within [`constrain_range`](2d745b95d7/torch/fx/experimental/symbolic_shapes.py (LL313C74-L313C74)) is split out as `constrain_range_int` to handle the case when a non-SymInt is passed in, and is reused in the new `_constrain_range`. The reasoning when a non-SymInt is provided:
* If it directly calls `sym_constrain_range`, the C++ version will be called, which is a no-op.
* So in this case it calls `constrain_range_int` instead, to be able to catch issues like a user providing an input whose tensor shape could be out of range during export, like the following for the code example above:
```
...
inp = torch.tensor([10])
ep = torch._export.export(f, (inp,)) # immediately raise error
```

Differential Revision: [D46734204](https://our.internmc.facebook.com/intern/diff/D46734204)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103346
Approved by: https://github.com/tugsbayasgalan
2023-06-16 14:55:40 +00:00
Angela Yi
f889c886d4 [export] Make pass base composable (#103701)
Moving ExportTracer so that EXIR can subclass it to handle delegates, and so ExportPassBase can use the correct tracer. Upstreaming the OSS changes in D45884895 first.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103701
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan, https://github.com/ydwu4
2023-06-16 06:07:18 +00:00
Mengwei Liu
a52b6f086d [export][serde] Add validator to compare deserializer opset version with model opset version (#103691)
This PR adds a validator to compare the model opset version with the deserializer opset version. It currently raises an exception if any of the versions don't match.

Note: the validator will only print a warning if an op namespace in the model is missing from the deserializer.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103691
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-16 01:36:43 +00:00
Angela Yi
90ef8d58cf [export] Serialize metadata (#103274)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103274
Approved by: https://github.com/zhxchen17
2023-06-15 17:34:12 +00:00
Edward Z. Yang
bc6ec97e02 Switch dynamic_shapes to True by default (#103597)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103597
Approved by: https://github.com/voznesenskym
2023-06-15 15:16:20 +00:00
Angela Yi
8dc6001057 [export] Serialize symbolic values (#103273)
* Modified the SymInt schema to also store the hint of the SymInt if it is represented as a symbol, so that when we reconstruct the SymInt, the hint will also exist on the node.
* GraphModuleDeserializer.deserialize now also optionally takes a map of symbol names to ranges.

ReplaceSymSizeOpPass should not be needed after https://github.com/pytorch/pytorch/pull/103107 lands

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103273
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-13 20:29:47 +00:00
Tugsbayasgalan Manlaibaatar
cea899cd57 Add early validation logic to dynamic_dim (#102982)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102982
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-08 20:23:49 +00:00
Angela Yi
e930c0fc35 [export] Initial deserialization v2 (#102716)
v2 of https://github.com/pytorch/pytorch/pull/102126. mentally stacked on top of https://github.com/pytorch/pytorch/pull/102707

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102716
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-07 16:02:35 +00:00
zhxchen17
6596cfa4d7 [export] Remove example custom_object_type to type_reflection_method. (#103015)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103015
Approved by: https://github.com/tugsbayasgalan
2023-06-07 00:03:57 +00:00
Angela Yi
3a385656b5 [export] Initial serialization v2 (#102707)
v2 of https://github.com/pytorch/pytorch/pull/102125 because of git issues
corresponding deserialization diff: https://github.com/pytorch/pytorch/pull/102716

Implementing serialization of the exported program to a Python dataclass, and then from that dataclass to JSON. This is split into a couple of sections:
- `serialize(ep: ep.ExportedProgram, opset_version: Dict[str, int]) -> Tuple[bytes, bytes]` -- takes an exported program object and a dictionary mapping opset namespaces to versions, and returns the serialized exported program in bytes and, separately, the state dict serialized in bytes
- `GraphModuleSerializer` class that serializes torch.fx.GraphModule to the schema.GraphModule dataclass
- `ExportedProgramSerializer` class that serializes torch._export.exported_program.ExportedProgram to the schema.ExportedProgram dataclass. A usage sketch follows.
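A hedged usage sketch of the `serialize` entry point described above; the import paths are assumptions pieced together from other messages in this log:

```python
import torch
from torch._export import export                     # path assumed
from torch._export.serde.serialize import serialize  # path assumed

def f(x):
    return x + 1

ep = export(f, (torch.randn(2),))
# Per the signature above: bytes for the program and, separately, the state dict.
program_bytes, state_dict_bytes = serialize(ep, opset_version={"aten": 1})
```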

Serialization TODOs:
- [x] pytree spec: https://github.com/pytorch/pytorch/pull/102577
- [ ] higher order ops
- [ ] node metadata (specifically nn_module_stack/source_fn)
- [ ] constraints
- [ ] graph module metadata

The tests are not super comprehensive, but that's because I think it'll be better tested + easier to test once deserialization is implemented.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102707
Approved by: https://github.com/avikchaudhuri, https://github.com/zhxchen17
2023-06-06 05:12:49 +00:00
Angela Yi
6cb1455857 [export] Change equality constraints to list of tuples (#102998)
Changed equality constraints to a list of tuples as the dictionary wasn't providing much value -- also makes creating constraints + serialization easier.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102998
Approved by: https://github.com/avikchaudhuri
2023-06-05 21:03:02 +00:00
Tugsbayasgalan Manlaibaatar
4bb2b65ea4 Turn on add_runtime_assertion by default (#102671)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102671
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-05 16:27:44 +00:00
Zhengxu Chen
26bf8894b6 [export] Replicate exportdb examples and tests in oss. (#102769)
Summary: Initial work to copy source to OSS for exportdb and make sure tests can run properly.

Test Plan: test_export

Differential Revision: D46369152

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102769
Approved by: https://github.com/angelayi
2023-06-04 20:01:57 +00:00
Richard Zou
74f10b9ea5 Switch most Python RAII guard usages to context manager (#102642)
There are some I can't easily switch due to reasons like:
- Dynamo modelling the guard
- BC concerns (for torch.autograd.set_multithreading_enabled)

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102642
Approved by: https://github.com/albanD
2023-06-01 16:28:37 +00:00
Angela Yi
7a569f86a0 [export] Cleanup constraints (#102666)
Redo of https://github.com/pytorch/pytorch/pull/102432 because idk how to push to that other branch...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102666
Approved by: https://github.com/zhxchen17
2023-06-01 04:22:31 +00:00
Tugsbayasgalan Manlaibaatar
d9f75dded1 [export] Add aot_export 1/N (#101490)
This PR adds aot_export_module as the lowering path from the torch-level graph to the aten graph. Some known limitations to be addressed in follow-up PRs:
1. Store param/buffer data in ExportedProgram
2. Fully support torch.cond with params/buffers
3. Make sure there are no duplicated ExportMetaData entries
4. This API will break Executorch if used on PyE; we will figure out a plan internally.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101490
Approved by: https://github.com/avikchaudhuri
2023-05-31 20:56:21 +00:00
Angela Yi
1e4292a1e8 [export] Rename graph_module.py to exported_program.py (#102260)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102260
Approved by: https://github.com/ydwu4, https://github.com/tugsbayasgalan
2023-05-26 23:36:38 +00:00
Angela Yi
c4028de462 [export] ExportedProgram (#102259)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102259
Approved by: https://github.com/ydwu4, https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan, https://github.com/zhxchen17
2023-05-26 23:36:38 +00:00
Avik Chaudhuri
8751002215 equality assertions (#102256)
Previously we had runtime asserts for range constraints. This diff adds runtime asserts for equality constraints.

This requires a bit of refactoring that is worth calling out.
1. [Minor] Some of the data structures produced by export and consumed by the runtime assertion pass need to be broadened. This is a WIP. There are some associated code improvements that are included in this diff, but by and large the structures are similar to what exists now. Meanwhile @angelayi and I are chatting about how to make it qualitatively better: briefly, we want to index everything by symbols, which are 1-1 with (name, dim) pairs.
2. [Major] The order in which runtime asserts are emitted is changed. Previously we used to do the work in `placeholder`, now this diff adds a hook for "post-processing" after processing of all placeholders is done. This is needed because equality constraints can mention different placeholders. This change also opens the way to optimizing codegen: e.g., each (name, dim) pair should correspond to a single intermediate variable that is reused across runtime asserts. This is future work.

Differential Revision: [D46177642](https://our.internmc.facebook.com/intern/diff/D46177642/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102256
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2023-05-26 14:57:31 +00:00
Yidi Wu
3cae6d2493 Make exir passes work with map_impl HigherOrderOperator. (#102009)
Summary: Forward fix for t53725825. The new map implementation breaks multiple internal tests; this forward-fixes some of them. To unblock the others, mark the unfixed ones as expectedFailure first.

Test Plan: Test with CI.

Reviewed By: angelayi

Differential Revision: D46084287

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102009
Approved by: https://github.com/angelayi
2023-05-25 20:00:51 +00:00
Zhengxu Chen
351c2ea2fb [export] Prototype on serialization schema. (#101899)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101899
Approved by: https://github.com/angelayi
2023-05-21 06:31:53 +00:00
Tugsbayasgalan Manlaibaatar
47f43ed84a Actually functionalize torch.export (#101433)
I thought I enabled this, but apparently not. This PR makes export fully functional for real this time :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101433
Approved by: https://github.com/angelayi
2023-05-17 05:09:24 +00:00
PyTorch MergeBot
eac5f2a8e4 Revert "Actually functionalize torch.export (#101433)"
This reverts commit eec752ed05.

Reverted https://github.com/pytorch/pytorch/pull/101433 on behalf of https://github.com/PaliC due to causing failures on functorch macOS tests ([comment](https://github.com/pytorch/pytorch/pull/101433#issuecomment-1550111671))
2023-05-16 17:51:45 +00:00
Tugsbayasgalan Manlaibaatar
eec752ed05 Actually functionalize torch.export (#101433)
I thought I enabled this, but apparently not. This PR makes export fully functional for real this time :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101433
Approved by: https://github.com/angelayi
2023-05-16 16:22:13 +00:00
Tugsbayasgalan Manlaibaatar
194d360329 Add more canonical way of adding runtime pass (#100956)
* #100955
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100956
Approved by: https://github.com/ydwu4, https://github.com/guangy10
2023-05-16 03:23:04 +00:00
Tugsbayasgalan Manlaibaatar
9ffad5b62b Remove input tracker from runtime assertion pass (#100955)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100955
Approved by: https://github.com/ydwu4
2023-05-15 21:26:47 +00:00
Tugsbayasgalan Manlaibaatar
f542b31c9d [export] More robust view->view_copy pass (#100908)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100908
Approved by: https://github.com/ydwu4
2023-05-10 14:25:17 +00:00
Angela Yi
ba47a2b227 [export] Pickle of ExportGraphModule (#100924)
Try 2 of relanding https://github.com/pytorch/pytorch/pull/100620 because of a merge conflict 😭...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100924
Approved by: https://github.com/tugsbayasgalan
2023-05-09 16:58:24 +00:00
Guang Yang
0e08a9b057 Wrap more constraint violation cases to UserError (#100897)
Cases covered in this PR:
 - Example inputs conflict with input constraints
 - Example inputs conflict with inline constraints
 - Suggest users use `constrain_as_*()` when trying to export with data-dependent operations

Differential Revision: [D45666627](https://www.internalfb.com/diff/D45666627)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100897
Approved by: https://github.com/avikchaudhuri
2023-05-09 16:44:57 +00:00
ydwu4
26cd958718 Support runtime assertion for inline constraints (#100763)
This PR does the following:
1. Previously, inline constraints were not properly set for tensor-output data-dependent ops such as a.nonzero(), because their return value is not a SymInt. This PR just uses all the unbacked symbols, i.e. those starting with "i"/"f", in the create_unbacked_sym* functions. Note that these symbols are guaranteed to be a superset of the inline user constraints.

2. Adds inline assertion support via runtime checks.

Currently, it only deals with data-dependent ops that output tensors, SymInts, SymFloats, or SymBools, and ignores the rest. That's good enough for now, as we only have a limited number of data-dependent ops (.item and .nonzero are explicitly tested).

An example of a graph with added assertions is shown below:

```
class ExportGraphModule(torch.nn.Module):
    def forward(self, x):
        arg0: i64[s0], = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
        nonzero_default: i64[i0, 1] = torch.ops.aten.nonzero.default(arg0);  arg0 = None
        return pytree.tree_unflatten([nonzero_default], self._out_spec)

class GraphModule(torch.nn.Module):
    def forward(self, x):
        arg0: i64[s0], = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
        sym_size: Sym(s0) = torch.ops.aten.sym_size(arg0, 0)
        nonzero_default: i64[i1, 1] = torch.ops.aten.nonzero.default(arg0);  arg0 = None
        sym_size_1: Sym(i1) = torch.ops.aten.sym_size(nonzero_default, 0)
        ge: Sym(i1 >= 3) = sym_size_1 >= 3
        scalar_tensor_default: f32[] = torch.ops.aten.scalar_tensor.default(ge);  ge = None
        _assert_async_msg = torch.ops.aten._assert_async.msg(scalar_tensor_default, 'nonzero_default.shape[0] is outside of inline constraint [3, 5].');  scalar_tensor_default = None
        le: Sym(i1 <= 5) = sym_size_1 <= 5;  sym_size_1 = None
        scalar_tensor_default_1: f32[] = torch.ops.aten.scalar_tensor.default(le);  le = None
        _assert_async_msg_1 = torch.ops.aten._assert_async.msg(scalar_tensor_default_1, 'nonzero_default.shape[0] is outside of inline constraint [3, 5].');  scalar_tensor_default_1 = None
        return pytree.tree_unflatten([nonzero_default], self._out_spec)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100763
Approved by: https://github.com/tugsbayasgalan
2023-05-09 04:19:57 +00:00
Angela Yi
2d2f716ddc [export] Fix cond for pass_base (#100836)
I ported over the code for the inline interpreter incorrectly in the pass base 😅

Originally the function `make_inline_interpreter` is supposed to take in an fx.Interpreter type, but I accidentally passed in an fx.Interpreter object. I also realized while modifying this diff (and from Tugsuu's comments) that we don't really need this InlineInterpreter.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100836
Approved by: https://github.com/zhxchen17, https://github.com/tugsbayasgalan
2023-05-08 21:51:03 +00:00
PyTorch MergeBot
f42eae4755 Revert "[export] Pickle of ExportGraphModule (#100620)"
This reverts commit d4975a5fe0.

Reverted https://github.com/pytorch/pytorch/pull/100620 on behalf of https://github.com/clee2000 due to broke export/test_serialize.py::TestSerialize::test_pickle_dynamic across various jobs d4975a5fe0, i think you hit another landrace? ([comment](https://github.com/pytorch/pytorch/pull/100620#issuecomment-1536643519))
2023-05-05 18:52:48 +00:00
Angela Yi
d4975a5fe0 [export] Pickle of ExportGraphModule (#100620)
Reland of https://github.com/pytorch/pytorch/pull/100423 because of a merge conflict...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100620
Approved by: https://github.com/mergennachin
2023-05-05 18:21:39 +00:00
Yanan Cao (PyTorch)
35a6b04419 Set assume_static_by_default to True in Dynamo config (#100458)
We expect fine-grained dynamic shapes to be enabled at all times, which means that a dimension is assumed to be static unless the user explicitly says otherwise.

Differential Revision: D45473365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100458
Approved by: https://github.com/avikchaudhuri
2023-05-05 00:50:41 +00:00
Tugsbayasgalan Manlaibaatar
9b3552eb2c Add runtime assertions for input shape constraints (#100247)
This PR adds runtime assertions as an extra pass on the exported graph. Some high-level information:
1. We specialize all dimensions that were not added to the user input constraints.
2. We haven't added relational constraints as runtime assertions (e.g. x[1] == x[0]); we will do so in a follow-up diff.

Differential Revision: [D45408971](https://our.internmc.facebook.com/intern/diff/D45408971)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100247
Approved by: https://github.com/guangy10, https://github.com/avikchaudhuri
2023-05-04 13:26:58 +00:00
PyTorch MergeBot
c4fd76e7b4 Revert "[export] Pickle result of export (#100423)"
This reverts commit 7226dbcbce.

Reverted https://github.com/pytorch/pytorch/pull/100423 on behalf of https://github.com/angelayi due to merge conflict ([comment](https://github.com/pytorch/pytorch/pull/100423#issuecomment-1534163373))
2023-05-04 06:41:06 +00:00
Angela Yi
7226dbcbce [export] Pickle result of export (#100423)
Pickles the metadata["val"] into a TensorMetadata struct so that it'll be retained when we unpickle.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100423
Approved by: https://github.com/mergennachin
2023-05-04 06:37:16 +00:00
Angela Yi
af62d098fe [export] Migrate internal verifier to subclass export/verifier
Differential Revision: D45416983

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100388
2023-05-02 08:50:48 -07:00
gmagogsfm
751c54b546 Add experimental export() API (#100034)
PT2 Export API Prototype

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100034
Approved by: https://github.com/angelayi
2023-04-28 06:12:59 +00:00
Angela Yi
7bece142a9 [export] Port over const prop pass (#100102)
Stacked on top of https://github.com/pytorch/pytorch/pull/100000
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100102
Approved by: https://github.com/gmagogsfm
2023-04-27 17:06:47 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
02f059c2b7 Add private _export API (#99992)
Differential Revision: D45279206

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99992
Approved by: https://github.com/angelayi, https://github.com/gmagogsfm
2023-04-27 16:24:14 +00:00
Angela Yi
9bbd3d6489 [export] ExportPassBase + view_copy pass (#100000)
* Added ExportPassBase, an interpreter-based helper class for writing passes (a minimal sketch of the pattern follows below)
* It can also help maintain the dialect based on the operator namespace by having users override the `get_valid_dialects` function (returning an empty list implies the pass works for any dialect).
* Added a `ReplaceBrokenOpsWithFunctionalOpsPass` to replace all ops that have not been converted by functionalization with their functional counterparts.
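
The sketch mentioned in the first bullet, written against the stock `fx.Transformer` (an interpreter that re-emits the graph as it runs, which is the same pattern ExportPassBase follows); the specific view -> view_copy swap is illustrative:

```
import torch
import torch.fx as fx

class ReplaceViewWithViewCopy(fx.Transformer):
    # Interprets the graph node by node and records a transformed copy,
    # so we only need to override the cases we care about.
    def call_function(self, target, args, kwargs):
        if target is torch.ops.aten.view.default:
            target = torch.ops.aten.view_copy.default  # functional variant
        return super().call_function(target, args, kwargs)

# usage: new_gm = ReplaceViewWithViewCopy(gm).transform()
```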
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100000
Approved by: https://github.com/gmagogsfm
2023-04-26 21:01:25 +00:00
Angela Yi
004f3d71aa [export] Move verifier over to export from torch/fx (#100019)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100019
Approved by: https://github.com/tugsbayasgalan
2023-04-26 18:26:46 +00:00
Angela Yi
1d077f28ed [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported once torch._assert support is implemented; then we can convert the constrain_range calls to torch._asserts.
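
A hedged usage sketch of what such inline constraints enable; `torch._check` stands in here for the constrain_range-style call this PR adds, since the exact spelling of that API has changed across versions:

```
import torch

def f(x):
    n = x.max().item()       # data-dependent value the tracer can't see through
    torch._check(n >= 2)     # declare bounds so tracing/export does not have to
    torch._check(n <= 1024)  # specialize on the concrete value seen at trace time
    return torch.zeros(n)
```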

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-13 21:20:10 +00:00
Michael Voznesensky
ccc9a3d726 Automatic Dynamic Shapes (#98923)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98923
Approved by: https://github.com/ezyang
2023-04-13 02:39:23 +00:00
PyTorch MergeBot
ab761605ae Revert "[export] Constraints API (#98433)"
This reverts commit 1510eb4072.

Reverted https://github.com/pytorch/pytorch/pull/98433 on behalf of https://github.com/izaitsevfb due to Breaks internal tests, asked by author to revert
2023-04-12 23:37:19 +00:00
Angela Yi
1510eb4072 [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported once torch._assert support is implemented; then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-12 01:32:44 +00:00
Edward Z. Yang
8372c5dc68 Refactor dynamic dims api, stateless internals, higher level export API (#96699)
The purpose of this API is to execute a few large components of work:

1) Refactor all the internals of plumbing dynamic dimension information after dynamo to be stateless
2) Decouple allocation controls around dynamic dimensions from verification
3) For (2), for allocation, create an enum that dictates whether we are in DUCK (the default today), STATIC (aka assume_static_by_default in the past), or DYNAMIC (user-constrained; do not duck shape) mode (see the sketch after this list)
4) For (2), for verification, we separate out the list of dynamic ranges entirely from allocation. This means the shape_env does no tracking of what we verify on; instead, it is the caller's job to invoke produce_guards() with the various things they want verified, specifically the valid ranges. We do use constrained ranges to refine value ranges when doing analysis.
5) We have therefore decided, as an extension of (4), to double down on "late" checks versus "eager" checks, primarily because the mechanisms for gathering what actually matters run during guard creation, and should be the purview of the caller seeking guards, not the shape env. However, for dynamo, these structures are essentially one and the same.
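
The sketch referenced in (3); the real enum lives in torch.fx.experimental.symbolic_shapes, and the member values here are illustrative:

```
from enum import Enum

class DimDynamic(Enum):
    # Allocation policy for a dimension's symbol.
    DYNAMIC = 0  # user-constrained: fresh symbol, never duck-shaped
    DUCK = 1     # default: dims with equal sizes share one symbol
    STATIC = 2   # specialize the dim to the concrete traced size
```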

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96699
Approved by: https://github.com/avikchaudhuri, https://github.com/ezyang
2023-03-29 16:55:49 +00:00
Brian Hirsh
98ece75043 [aot autograd] merge all outputs of functionalization analysis into single metadata (#95991)
This makes the next PR in the stack cleaner: the top-level entry point to aot autograd performs the functionalization analysis pass once and plumbs the metadata everywhere else that we need it.

I put it in a separate PR because I recently learned that this function is used in fbcode, so I'll need to fix up internals when I land this PR.
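
The shape of the change, sketched with illustrative field names; the real aot autograd metadata object carries far more than this:

```
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionalizationMeta:
    # A single record returned by the analysis pass and plumbed everywhere,
    # replacing several parallel return values. Field names are hypothetical.
    mutated_input_indices: List[int] = field(default_factory=list)
    aliased_output_indices: List[int] = field(default_factory=list)
    num_intermediate_bases: int = 0
```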

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95991
Approved by: https://github.com/ezyang
2023-03-08 16:22:54 +00:00
Edward Z. Yang
6fff232280 Delete torch._functorch.config.use_dynamic_shapes (#96102)
As requested in
https://github.com/pytorch/pytorch/pull/95975#discussion_r1124837162

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96102
Approved by: https://github.com/Skylion007
2023-03-06 18:50:20 +00:00
Edward Z. Yang
d303665d33 Make int unspecialization actually work (#95621)
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.

The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.
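
A present-day illustration of the SymInt behavior described above (hedged: whether a given build actually avoids the recompile depends on version and config):

```
import torch

@torch.compile(dynamic=True)
def scale(x, n):
    return x * n

scale(torch.randn(3), 5)
scale(torch.randn(3), 7)  # n is traced as a SymInt rather than baked in as a
                          # constant, so this call can reuse the compiled graph
```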

* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.

Fixes https://github.com/pytorch/pytorch/issues/95469

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel, https://github.com/Chillee
2023-03-04 01:22:08 +00:00
Tugsbayasgalan Manlaibaatar
454c48b987 Add experimental torch.export prototype (#95070)
This is a WIP PR adding the torch.export API in OSS. A couple of points:
- I intentionally named it experimental_export so that people don't get confused into thinking this is our official API
- We don't plan to use the AOTAutograd backend just yet. The reason we have it here is that the functionalization AOTAutograd uses is what we need for export (handling of param/buffer mutation etc.). In the near future, I will extract the functionalization part and use it on top of make_fx. What we have right now is merely a placeholder.
- The reason we want to do it now is that we want some minimal tests running in OSS so that we can catch regressions earlier.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95070
Approved by: https://github.com/gmagogsfm, https://github.com/zhxchen17
2023-02-28 02:40:19 +00:00
zhxchen17
766d51b496 [export] Add a data type for representing export workflow information. (#95013)
Upstreaming some of our internal work to OSS so that we can get a better
preview of how the export pipeline works. There'll be more modularized work
sent in later.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95013
Approved by: https://github.com/tugsbayasgalan
2023-02-17 16:28:17 +00:00
Sherlock Huang
710fe40597 [Export] Introduce as_none in ex.Argument union type (#93210)
This design has two implications:
- We are **NOT** modeling nullable argument types, e.g. `Tensor?`, `int?`, `int[]?`, as a special argument type
- Python None is treated as a special argument type; downstream executors/runtimes need to know how to handle this.

aten.convolution's schema accepts an optional input, `Tensor? bias`:
```
convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, SymInt[] padding, int[] dilation, bool transposed, SymInt[] output_padding, int groups) -> Tensor
```

Example: notice the **None** argument in the following fx.node

```
convolution_default = torch.ops.aten.convolution.default(arg0, _param_constant0, None, [2, 2], [3, 3], [1, 1], False, [0, 0], 1)
```

would be exported as
```
            Node(
                op='call_function',
                target='aten.convolution.default',
                args=[
                    Argument(as_tensor=TensorArgument(name='arg0')),
                    Argument(
                        as_tensor=TensorArgument(name='_param_constant0')
                    ),
                    Argument(as_none=True),
                    Argument(as_ints=[2, 2]),
                    Argument(as_ints=[3, 3]),
                    Argument(as_ints=[1, 1]),
                    Argument(as_bool=False),
                    Argument(as_ints=[0, 0]),
                    Argument(as_int=1)
                ],
                kwargs={},
                outputs=[
                    ReturnArgument(
                        as_tensor=TensorArgument(name='convolution_default')
                    )
                ],
                metadata='Skipped'
            ),
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93210
Approved by: https://github.com/suo
2023-01-30 21:32:49 +00:00
Sherlock Huang
1d25070949 [Export] Refine design around TensorValue (renamed IValue) (#93217)
See discussion in my comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93217
Approved by: https://github.com/suo
2023-01-30 21:32:32 +00:00
Sherlock Huang
61fd1188ba [Export] Remove the concept of Scalar in export schema (#93211)
Scalar is a union type of [int, float, bool]; it's only needed to represent operator schemas.

During export, we always have the concrete argument. Since ex.Argument is already a union type, we don't need a Scalar type anymore.

Example
Here's the schema for aten.add.Scalar
```
add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> Tensor
```
An fx.node
```
add_tensor: f32[s0, s0] = torch.ops.aten.add.Scalar(arg0, 1.1)
```

would be exported as
```
            Node(
                op='call_function',
                target='aten.add.Tensor',
                args=[
                    Argument(as_tensor=TensorArgument(name='arg0')),
                    Argument(as_float=1.1)
                ],
                outputs=[
                    ReturnArgument(as_tensor=TensorArgument(name='add_tensor'))
                ]
            )
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93211
Approved by: https://github.com/suo
2023-01-29 04:50:32 +00:00
Sherlock Huang
68a1065bd7 [Export] Remove op field from ex.Node schema (#93208)
* A Node can only be a 'call_function' op.
* 'placeholder' and 'output' are serialized as inputs and outputs of the Graph.
* 'get_attr' is not needed anymore, as it's an implicit lookup from the GraphModule's parameters/buffers.
* 'call_method' and 'call_module' are not supported, as they are not used in the canonical FX Graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93208
Approved by: https://github.com/suo, https://github.com/Neilblaze
2023-01-29 04:35:46 +00:00
Sherlock Huang
4d107e3426 torch.export Logical Schema V1 (#93135)
This PR is for landing the initial version of logical schema.

See previous discussions in https://github.com/pytorch/pytorch/pull/91287

This is a starting point for iterations.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93135
Approved by: https://github.com/suo
2023-01-28 00:35:06 +00:00