Commit Graph

50 Commits

Author SHA1 Message Date
gmagogsfm
39854df1d3 Make validate private by renaming validate to _validate (#107927)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107927
Approved by: https://github.com/tugsbayasgalan
2023-08-25 08:14:56 +00:00
Chen Lai
4f2ff1d019 add get buffer from exported program (#107809)
Summary: We have a util function to get params; for parity, we also need a util function to get buffers.
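A hedged usage sketch follows, assuming the new helpers mirror the existing param utilities (living in `torch._export.utils` and taking `(program, node)`); the exact names and locations are assumptions here:

```python
import torch
import torch.nn as nn
from torch._export import export
from torch._export.utils import is_buffer, get_buffer  # assumed location

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("scale", torch.ones(3))

    def forward(self, x):
        return x * self.scale

ep = export(M(), (torch.randn(3),))
for node in ep.graph.nodes:
    # Buffers are lifted to placeholder inputs; recover the backing tensor.
    if node.op == "placeholder" and is_buffer(ep, node):
        print(node.name, get_buffer(ep, node).shape)
```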

Test Plan:
```
buck test //caffe2/test:test_export
```

Differential Revision: D48610877

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107809
Approved by: https://github.com/JacobSzwejbka
2023-08-25 05:46:04 +00:00
Tugsbayasgalan Manlaibaatar
485de73004 Improve unbacked symint error msg (#107806)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107806
Approved by: https://github.com/avikchaudhuri
2023-08-25 01:07:09 +00:00
Tugsbayasgalan Manlaibaatar
c81c217a2f Make ExportedProgram valid tracing callable (#107657)
In this PR, we make ExportedProgram a valid callable to export for re-exporting. Note that we don't allow any new constraints specified by the user, as we don't have any way of handling them right now. There are some caveats worth mentioning in this PR.
Today, graph_module.meta is not preserved (note that this is different from node-level meta, which we do preserve). Our export logic relies on this meta to process the constraints. But if we skip dynamo, we have to preserve the constraints stored in graph_module.meta. Once dynamo supports retraceability, we won't have to do this anymore. I currently manually save graph_module.meta at the following places:
1. After ExportedProgram.module()
2. After ExportedProgram.transform()
3. At construction site of ExportedProgram.

Jerry will add the update on the quantization side as well.
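A minimal re-export sketch under the rule described above (no new user constraints on the second call); the `torch._export.export` entry point matches the one shown elsewhere in this log:

```python
import torch
from torch._export import export

def f(x):
    return x.sin() + x.cos()

args = (torch.randn(3, 4),)
ep = export(f, args)    # first export
ep2 = export(ep, args)  # the ExportedProgram itself is now a valid callable to export
```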

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107657
Approved by: https://github.com/gmagogsfm
2023-08-23 08:01:57 +00:00
Jacob Szwejbka
c14f4d66c3 [pytorch][export] Move is_param and get_param out of exir and into export (#107264)
Summary: These don't feel edge-specific, so moving them out of exir.

Test Plan: ci

Differential Revision: D48361384

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107264
Approved by: https://github.com/angelayi
2023-08-22 21:41:51 +00:00
Tugsbayasgalan Manlaibaatar
134d415615 Unlift mutated buffers (#107643)
In this PR, we extend the ExportedProgram.module() functionality by also unlifting the mutated buffers. We only really care about top-level buffers, as we don't allow any buffer mutation inside HigherOrderOps.
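A hedged sketch of the behavior, assuming `torch._export.export` accepts a module callable here:

```python
import torch
import torch.nn as nn
from torch._export import export

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("counter", torch.zeros(1))

    def forward(self, x):
        self.counter.add_(1)  # top-level buffer mutation
        return x + self.counter

ep = export(M(), (torch.randn(3),))
m = ep.module()     # unlifted module: buffers are attributes again
m(torch.randn(3))   # the mutation now updates m's own counter buffer in place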

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107643
Approved by: https://github.com/avikchaudhuri
2023-08-22 05:16:27 +00:00
Tugsbayasgalan Manlaibaatar
ee72071fc7 Avoid executing side-effectful graph_module as validation step (#107271)
Dynamo currently runs the real graph module with real inputs as a way to match the return result of the graph module with the eager return type. This is unsafe when the graph module is side-effectful. In the long term, we will get rid of this step. But in the short term, we just fakify the graph module again and run it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107271
Approved by: https://github.com/ezyang
2023-08-22 04:22:31 +00:00
Avik Chaudhuri
db3a199b2c fix symint meta val (#107491)
`aot_export` adds metadata for int inputs as symints. This diff turns such metadata into ints since they will be specialized anyway. We don't turn these into runtime assertions yet (but should, as future work).

Differential Revision: D48487562

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107491
Approved by: https://github.com/gmagogsfm
2023-08-20 06:05:04 +00:00
Tugsbayasgalan Manlaibaatar
20c5add133 [export] Refactor constrain_as_value and constrain_as_size (#106591)
Some notable changes:
1. `constrain_as_size` now allows the min value to be less than 2, as the compiler will unconditionally assume min >= 2 for its own purposes. Instead, we add an additional check to make sure the max value is always greater than 2.
2. Previously, we would runtime-assert on the unbacked symint's value range, which was always [2, max]. I modified this logic to assert on [0, max] unless the user explicitly specifies the min range (see the sketch after this list).
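A minimal sketch of the new semantics; the `torch._export` import path for `constrain_as_size` is an assumption here:

```python
import torch
from torch._export import export, constrain_as_size  # assumed import path

def f(x):
    a = x.item()
    # min may now be below 2; the compiler still assumes >= 2 internally,
    # while the runtime assertion checks the user-visible range [0, max].
    constrain_as_size(a, min=0, max=10)
    return torch.empty((a, 4))

ep = export(f, (torch.tensor([5]),))
```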

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106591
Approved by: https://github.com/gmagogsfm, https://github.com/ezyang
2023-08-15 05:41:43 +00:00
Zhengxu Chen
547ccae0db [export] Support preserving calling convention to some modules. (#106798)
Summary: APS uses this feature to swap out some submodules after unflattening.

Test Plan: test_export_preserve_signature

Differential Revision: D48154341

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106798
Approved by: https://github.com/tugsbayasgalan
2023-08-11 21:17:45 +00:00
PyTorch MergeBot
745d29b0cc Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)"
This reverts commit 18989890bf.

Reverted https://github.com/pytorch/pytorch/pull/106591 on behalf of https://github.com/izaitsevfb due to Breaks inductor test on trunk ([comment](https://github.com/pytorch/pytorch/pull/106591#issuecomment-1675069091))
2023-08-11 16:37:47 +00:00
Tugsbayasgalan Manlaibaatar
18989890bf [export] Refactor constrain_as_value and constrain_as_size (#106591)
Some notable changes:
1. `constrain_as_size` now allows the min value to be less than 2, as the compiler will unconditionally assume min >= 2 for its own purposes. Instead, we add an additional check to make sure the max value is always greater than 2.
2. Previously, we would runtime-assert on the unbacked symint's value range, which was always [2, max]. I modified this logic to assert on [0, max] unless the user explicitly specifies the min range.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106591
Approved by: https://github.com/gmagogsfm, https://github.com/ezyang
2023-08-11 05:29:22 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
a44c072c89 Make InternalModel and Resnet work with re-exportable flow (#106676)
Summary: Internal model and Resnet use the "re-export" flow now. Also did some refactoring to make the code a little cleaner.

Some changes for OSS:
1. Correctly use the "cached" fake tensors so that static symbols still resolve to static
2. Change logic in PassBase to allocate static shapes for parameters
3. Add an "is_torch_exported" tag to every node so that it survives various graph transformations
4. Added an experimental wrapper API for the quantization team to get a pre_dispatch=True graph. Note that it doesn't actually do that right now, but we plan to switch soon.

Test Plan: CI

Differential Revision: D47890878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106676
Approved by: https://github.com/jerryzh168
2023-08-09 20:10:48 +00:00
gmagogsfm
410bc558e6 Assert that args is of tuple type. (#106352)
This avoids accidental unpacking of tensor-type inputs.
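A hedged illustration of the contract this assertion enforces:

```python
import torch
from torch._export import export

def f(x):
    return x + 1

x = torch.randn(3)
ep = export(f, (x,))  # correct: example inputs are wrapped in a tuple
# export(f, x)        # now rejected up front instead of silently unpacking x
```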

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106352
Approved by: https://github.com/tugsbayasgalan
2023-08-03 01:47:38 +00:00
Tugsbayasgalan Manlaibaatar
fadd0859ca Expose module method in ExportedProgram (#105575)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105575
Approved by: https://github.com/zhxchen17
2023-08-01 21:28:57 +00:00
ydwu4
aaaafa1bcf [Export] remove unused flags in export (#106336)
Remove unused flags from export_dynamo_config. Among them:
- capture_scalar_outputs: bool = True: **True by default** in dynamo.export
- capture_dynamic_output_shape_ops: bool = True: **True by default** in dynamo.export
- specialize_int: bool = True: **True by default** in dynamo.export
- guard_nn_modules: bool = True: this flag is **not being used**, as we never look at nn module guards and assume modules are frozen. See the [doc](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/config.py#L77) for this flag.
- dynamic_shapes: bool = True: **deprecated by dynamo**: [here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/config.py#L55)

Test plan:
Added a new test for allow_rnn to check its effectiveness.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106336
Approved by: https://github.com/tugsbayasgalan
2023-08-01 16:10:09 +00:00
ydwu4
5237ed55e6 [export] allow register dataclass as pytree node (#106160)
In this PR, we allow users to register a customized flatten/unflatten/serialization/deserialization for a dataclass. We provide a default implementation for flatten/unflatten and could implement a decorator based on it when needed.

## Motivation:
HuggingFace and many internal models return dataclass outputs, and torch.export wants to maintain the invariant that the export result (i.e. exported_program) has the same calling convention and result as the original callable.

This is not supported in export yet: we cannot recover the original dataclass from the flattened output produced by the underlying graph module (produced by dynamo and processed further by aot_export). We need a place to store the metadata of the dataclass so that we can reconstruct it. To avoid adding hacky code in export and to allow principled extensibility, we think extending pytree is a good option.

## Implementation:
@zou3519 mentioned https://github.com/pytorch/pytorch/pull/93214/files and [jax-2371](https://github.com/google/jax/issues/2371#issuecomment-805361566), which suggest that it's not a good idea to make dataclass a default pytree node, but that it could be good to provide a default implementation for dataclasses. Since this currently seems to be an export-only feature, we added the extension point in export.

We also add a "return_none_fields" flag to control whether None fields are returned after flattening; it is expected to be False in produce_matching of dynamo.export.
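A hedged sketch of the extension point; the helper name `register_dataclass_as_pytree_node` and its location are assumptions based on this description:

```python
from dataclasses import dataclass

import torch
from torch._export import export
from torch._export.utils import register_dataclass_as_pytree_node  # assumed

@dataclass
class Output:
    logits: torch.Tensor
    aux: torch.Tensor

# Register the default flatten/unflatten implementation for this dataclass.
register_dataclass_as_pytree_node(Output)

def f(x):
    return Output(logits=x.sin(), aux=x.cos())

ep = export(f, (torch.randn(3),))
out = ep(torch.randn(3))  # reconstructed as an Output instance, not a flat tuple
```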

Also added some tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106160
Approved by: https://github.com/zhxchen17
2023-07-28 17:33:13 +00:00
Tugsbayasgalan Manlaibaatar
7b31732a6f Delete unused experimental export (#105873)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105873
Approved by: https://github.com/ezyang
2023-07-26 07:22:58 +00:00
gmagogsfm
f5def50461 Suppress eager fallback suggestions when exporting (#105767)
Previously, when an exception was raised during torch.export() tracing, Dynamo displayed this error:

“You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True”

This is not viable in torch.export(), so this diff suppresses the suggestion during export.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105767
Approved by: https://github.com/anijain2305
2023-07-22 19:17:08 +00:00
ydwu4
6abb8c382c [export] add kwargs support for export. (#105337)
Solving #105242.

During export, the exported function's signature changes multiple times. Suppose we'd like to export f as in the following example:
```python
def f(arg1, arg2, kw1, kw2):
  pass

args = (arg1, arg2)
kwargs = {"kw2": arg3, "kw1": arg4}

torch.export(f, args, kwargs)
```
The signature changes multiple times during the export process, in the following order:
1. **gm_torch_level = dynamo.export(f, *args, \*\*kwargs)**. In this step, we turn all kinds of parameters, such as **positional_only**, **var_positional**, **kw_only**, and **var_kwargs**, into **positional_or_kw**. It also preserves the positional and keyword argument names of the original function (i.e. f in this example) [here](https://github.com/pytorch/pytorch/blob/main/torch/_dynamo/export.py#L546C13-L546C27). The order of kwargs will be the **key order** of kwargs (after Python 3.6, that is the insertion order of keys) rather than the original function signature, and the order is baked into an _orig_args variable of gm_torch_level's pytree info. So we'll have:
```python
def gm_torch_level(arg1, arg2, kw2, kw1): ...
```
Such a difference is acceptable, as it's transparent to users of export.

2. **gm_aot_export = aot_export_module(gm_torch_level, pos_or_kw_args)**. In this step, we need to turn kwargs into positional args in the order gm_torch_level expects, which is stored in _orig_args. The returned gm_aot_export has the graph signature of flat_args, in_spec = pytree.tree_flatten(pos_or_kw_args):
``` python
flat_args, _ = pytree.tree_flatten(pos_or_kw_args)
def gm_aot_export(*flat_args): ...
```

3. **exported_program(*args, \*\*kwargs)**. The exported artifact is exported_program, a wrapper over gm_aot_export that has the same calling convention as the original function "f". To do this, we need to (1) specialize the order of kwargs into pos_or_kw_args and (2) flatten pos_or_kw_args into what gm_aot_export expects. We can combine the two steps into one with:
```python
_, in_spec = pytree.tree_flatten((args, kwargs))

# Then during exported_program.__call__(*args, **kwargs)
flat_args  = fx_pytree.tree_flatten_spec((args, kwargs), in_spec)
```
Here kwargs is treated as a normal pytree whose key order is preserved in in_spec.

Implementation-wise, we treat _orig_args in the dynamo-exported graph module as the single source of truth, and kwargs are ordered following it.

Test plan:
See added tests in test_export.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105337
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2023-07-20 19:53:08 +00:00
Andrey Talman
c6653b65d8 Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
Summary:
D47537831 is breaking pyper tests: https://fb.workplace.com/groups/802176577445480/posts/1018902842439518/

with `TypeError: register_buffer() takes 3 positional arguments but 4 were given`

Original commit changeset: d4b4069fbd38

Original Phabricator Diff: D47537831

Test Plan:
```
buck2 run //caffe2/torch/fb/training_toolkit/integration_tests/training_lifecycle/cogwheel_tests/pyper_release_v2:cogwheel_smallworld_inline_cvr_infer_pyper_pyper__canary_offline_training-launcher -- --run-harness-in-tupperware --build-fbpkg ads_dper3 --build-fbpkg training_platform
```

Reviewed By: atalman

Differential Revision: D47600140

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105581
Approved by: https://github.com/mikaylagawarecki
2023-07-20 03:39:53 +00:00
ekamiti
32d422f335 Make adding buffers more like adding parameters (#104069)
Add semantics for creating a buffer object that mirror creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the `register_buffer` method has not been changed. The `persistent` parameter in the `Buffer` type indicates whether a buffer object should be persistent or not. Other non-test changes have to do with getting the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. This new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.
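A minimal sketch of the described semantics, assuming the new class is exposed as `nn.Buffer`:

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        # Parameter-style assignment; equivalent to calling
        # self.register_buffer("running_mean", torch.zeros(4), persistent=False)
        self.running_mean = nn.Buffer(torch.zeros(4), persistent=False)

    def forward(self, x):
        return x + self.running_mean
```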

Fixes #35735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
2023-07-17 17:59:05 +00:00
Tugsbayasgalan Manlaibaatar
1d02106e03 Preserve source_fn or nn_module_stack in the lifted params (#105017)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105017
Approved by: https://github.com/angelayi
2023-07-13 06:03:28 +00:00
Tugsbayasgalan Manlaibaatar
936cd4f2f5 Migrate exportdb to torch.export (#104260)
Reapply of https://github.com/pytorch/pytorch/pull/103861. Things that needed to be fixed:

- Fix a bug with returning dict output type
- Make pass_base work with map implementation
- Fix subtle bug with dynamo not propagating "val" in node.meta
- Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104260
Approved by: https://github.com/angelayi
2023-06-27 17:49:18 +00:00
PyTorch MergeBot
518abe8b7e Revert "Migrate exportdb to torch.export from torchdynamo.export (#103861)"
This reverts commit fb6173a4ac.

Reverted https://github.com/pytorch/pytorch/pull/103861 on behalf of https://github.com/huydhn due to It looks like this change is failing in trunk due to a landrace fb6173a4ac ([comment](https://github.com/pytorch/pytorch/pull/103861#issuecomment-1601960600))
2023-06-22 03:24:01 +00:00
Tugsbayasgalan Manlaibaatar
fb6173a4ac Migrate exportdb to torch.export from torchdynamo.export (#103861)
Things that needed to be fixed:
1. Fix a bug with returning dict output type
2. Make pass_base work with map implementation
3. Fix subtle bug with dynamo not propagating "val" in node.meta
4. Add export_constraints field in ExportCase in ExportDB

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103861
Approved by: https://github.com/zhxchen17, https://github.com/ydwu4
2023-06-22 02:53:41 +00:00
xuanqi
b27c3558a4 [RFC]: Create aten native op for constrain_range (#103346)
At a high level, the current implementation of the constraint functions (constrain_as_*) will raise an exception for the following code snippet:
```
def f(x):
    a = x.item()
    constrain_as_size(a, 4, 7)
    return torch.empty((a, 4))

inp = torch.tensor([5])
ep = torch._export.export(f, (inp,))
```

The reason is that the current constraint logic:
1) Is pure Python, so it won't survive AOT export (the node is gone after AOT export, since AOT export only maintains aten-level ops).
2) Uses a side effect to add range constraints to the traced symbol's shape env ([code](9591e52880/torch/fx/experimental/symbolic_shapes.py (L370-L372))).
3) If runtime assertions are turned on (the default), [`_AddRuntimeAssertionsForConstraintsPass`](9591e52880/torch/_export/passes/add_runtime_assertions_for_constraints_pass.py (L98-L100)) will try to append assertion nodes based on range constraints extracted from the symbol's shape env during another interpretation round.
4) However, because of 1), the range constraint logic won't run for symbols generated during AOT export, so no range constraint information is available for the assertion round later, which causes the issue.
5) As a result, it will fail at `torch.empty((a, 4))` (there is no constraint saying that `a` must be positive).

The fix here is to implement the range constraint logic as a native aten op (with the CPU implementation as a no-op) so it can survive AOT export.

**NOTE:**
[Logic](2d745b95d7/torch/fx/experimental/symbolic_shapes.py (L350-L365C15)) within [`constrain_range`](2d745b95d7/torch/fx/experimental/symbolic_shapes.py (LL313C74-L313C74)) is split out as `constrain_range_int` to handle the case when a non-`SymInt` is passed in, and it is reused in the new `_constrain_range`. The reason is that when a non-`SymInt` is provided:
* If it directly called `sym_constrain_range`, the C++ version would be called, which is a no-op.
* So in this case it calls `constrain_range_int` instead, to catch issues like a user providing an input whose tensor shape could be out of range during export, as in the following variation of the code example above:
```
...
inp = torch.tensor([10])
ep = torch._export.export(f, (inp,)) # immediately raises an error
```

Differential Revision: [D46734204](https://our.internmc.facebook.com/intern/diff/D46734204)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103346
Approved by: https://github.com/tugsbayasgalan
2023-06-16 14:55:40 +00:00
Avik Chaudhuri
59ee6cd864 fix soundness bug with unsupported constraints (#102897)
We do not raise constraint violations for complex binary conditions, such as conditions involving `%`. Moreover, while such constraints are discovered by our solver, the solver does not inject new constraint violations. This can result in cases where export passes, the appropriate assertions are not added, and we get runtime crashes.

Now, when the solver discovers constraints that are too complex, we force-specialize the involved dimensions and raise a constraint violation when such dimensions are marked dynamic. This forces the user to remove the dynamic marking, and it causes the appropriate specialization assertions to be added.
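A hedged illustration under these semantics; the `dynamic_dim` import path is an assumption:

```python
import torch
from torch._export import export, dynamic_dim  # assumed import path

def f(x):
    if x.shape[0] % 3 == 0:  # `%` guard: too complex for the solver
        return x.sin()
    return x.cos()

x = torch.randn(6, 4)
ep = export(f, (x,))  # fine: dim 0 is force-specialized to 6
# export(f, (x,), [dynamic_dim(x, 0)])  # now raises a constraint violation
```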

Differential Revision: [D46415786](https://our.internmc.facebook.com/intern/diff/D46415786/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102897
Approved by: https://github.com/tugsbayasgalan
2023-06-10 01:59:55 +00:00
Tugsbayasgalan Manlaibaatar
cea899cd57 Add early validation logic to dynamic_dim (#102982)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102982
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-08 20:23:49 +00:00
Tugsbayasgalan Manlaibaatar
4bb2b65ea4 Turn on add_runtime_assertion by default (#102671)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102671
Approved by: https://github.com/angelayi, https://github.com/avikchaudhuri
2023-06-05 16:27:44 +00:00
Tugsbayasgalan Manlaibaatar
d9f75dded1 [export] Add aot_export 1/N (#101490)
This PR adds aot_export_module as the lowering path from the torch-level graph to the aten graph. Some known limitations that need to be addressed in follow-up PRs:
1. Store param/buffer data in ExportedProgram
2. Fully support torch.cond with params/buffers
3. Making sure no duplicated ExportMetaData entry
4. This API will break Executorch if used on PyE; we will figure out a plan internally.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101490
Approved by: https://github.com/avikchaudhuri
2023-05-31 20:56:21 +00:00
Angela Yi
c4028de462 [export] ExportedProgram (#102259)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102259
Approved by: https://github.com/ydwu4, https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan, https://github.com/zhxchen17
2023-05-26 23:36:38 +00:00
Angela Yi
4f9aa7cb0f [export] Error when constraining on static values (#101655)
Fixes https://github.com/pytorch/pytorch/issues/100415

Results in the following error:
```
Traceback (most recent call last):
  File "/scratch/angelayi/work/pytorch/test/export/test_export.py", line 572, in test_export_constrain_static
    export(f, example_inputs, constraints)
  File "/scratch/angelayi/work/pytorch/torch/_export/__init__.py", line 348, in export
    method_name_to_graph_module[compile_spec.method_name] = _export(
  File "/scratch/angelayi/work/pytorch/torch/_export/__init__.py", line 119, in _export
    raise UserError(UserErrorType.CONSTRAIN_VIOLATION, str(e))
torch._dynamo.exc.UserError:   File "/scratch/angelayi/work/pytorch/test/export/test_export.py", line 561, in f
    constrain_as_value(c, min=1, max=3)

It appears that you're trying to set a constraint on a value which we evaluated to have a static value of 3. Scroll up to see where this constraint was set.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101655
Approved by: https://github.com/avikchaudhuri
2023-05-19 18:27:36 +00:00
Tugsbayasgalan Manlaibaatar
47f43ed84a Actually functionalize torch.export (#101433)
I thought I enabled this, but apparently not. This PR makes export fully functional for real this time :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101433
Approved by: https://github.com/angelayi
2023-05-17 05:09:24 +00:00
PyTorch MergeBot
eac5f2a8e4 Revert "Actually functionalize torch.export (#101433)"
This reverts commit eec752ed05.

Reverted https://github.com/pytorch/pytorch/pull/101433 on behalf of https://github.com/PaliC due to causing failures on functorch macOS tests ([comment](https://github.com/pytorch/pytorch/pull/101433#issuecomment-1550111671))
2023-05-16 17:51:45 +00:00
Tugsbayasgalan Manlaibaatar
eec752ed05 Actually functionalize torch.export (#101433)
I thought I enabled this, but apparently not. This PR makes export fully functional for real this time :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101433
Approved by: https://github.com/angelayi
2023-05-16 16:22:13 +00:00
Guang Yang
0e08a9b057 Wrap more constraint violation cases to UserError (#100897)
Cases covered in this PR (see the sketch after this list):
 - Example inputs conflict with input constraints
 - Example inputs conflict with inline constraints
 - Suggest users use `constrain_as_*()` when trying to export with data-dependent operations
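A hedged sketch of the second case (an example input conflicting with an inline constraint); the `constrain_as_value` import path is an assumption:

```python
import torch
from torch._export import export, constrain_as_value  # assumed import path

def f(x):
    a = x.item()
    constrain_as_value(a, min=1, max=3)  # inline constraint
    return torch.ones(a)

# The example input (10) conflicts with the inline constraint above;
# this now surfaces as torch._dynamo.exc.UserError.
export(f, (torch.tensor([10]),))
```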

Differential Revision: [D45666627](https://www.internalfb.com/diff/D45666627)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100897
Approved by: https://github.com/avikchaudhuri
2023-05-09 16:44:57 +00:00
PyTorch MergeBot
3f2336d3fe Revert "[EZ] move test decorator up in the class def (#100719)"
This reverts commit daf5100656.

Reverted https://github.com/pytorch/pytorch/pull/100719 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it breaks lint in trunk ([comment](https://github.com/pytorch/pytorch/pull/100719#issuecomment-1536514589))
2023-05-05 16:47:27 +00:00
Tugsbayasgalan Manlaibaatar
daf5100656 [EZ] move test decorator up in the class def (#100719)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100719
Approved by: https://github.com/angelayi
2023-05-05 15:35:56 +00:00
Yanan Cao (PyTorch)
35a6b04419 Set assume_static_by_default to True in Dynamo config (#100458)
We expect fine-grained dynamic shapes to be enabled at all times, which means that a dimension is assumed to be static unless the user explicitly says otherwise.
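A hedged before/after sketch; the `dynamic_dim` import path is an assumption:

```python
import torch
from torch._export import export, dynamic_dim  # assumed import path

def f(x):
    return x * 2

x = torch.randn(4, 8)
ep_static = export(f, (x,))                        # both dims specialize to (4, 8)
ep_dynamic = export(f, (x,), [dynamic_dim(x, 0)])  # dim 0 stays symbolic
```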

Differential Revision: D45473365

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100458
Approved by: https://github.com/avikchaudhuri
2023-05-05 00:50:41 +00:00
gmagogsfm
751c54b546 Add experimental export() API (#100034)
PT2 Export API Prototype

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100034
Approved by: https://github.com/angelayi
2023-04-28 06:12:59 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
02f059c2b7 Add private _export API (#99992)
Differential Revision: D45279206

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99992
Approved by: https://github.com/angelayi, https://github.com/gmagogsfm
2023-04-27 16:24:14 +00:00
Edward Z. Yang
0eb59ad093 Change export tracing_mode default to symbolic (#99877)
Differential Revision: [D45231039](https://our.internmc.facebook.com/intern/diff/D45231039/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99877
Approved by: https://github.com/albanD, https://github.com/voznesenskym
2023-04-25 00:12:12 +00:00
PyTorch MergeBot
c83e1f517d Revert "Delete tracing_mode argument to export (#99555)"
This reverts commit e9786149ab.

Reverted https://github.com/pytorch/pytorch/pull/99555 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-04-24 08:21:41 +00:00
Edward Z. Yang
e9786149ab Delete tracing_mode argument to export (#99555)
You can have any color you want, as long as it's tracing_mode="symbolic"

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99555
Approved by: https://github.com/voznesenskym
2023-04-21 16:20:51 +00:00
Angela Yi
1d077f28ed [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported once torch._assert support is implemented; then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-13 21:20:10 +00:00
PyTorch MergeBot
ab761605ae Revert "[export] Constraints API (#98433)"
This reverts commit 1510eb4072.

Reverted https://github.com/pytorch/pytorch/pull/98433 on behalf of https://github.com/izaitsevfb due to Breaks internal tests, asked by author to revert
2023-04-12 23:37:19 +00:00
Angela Yi
1510eb4072 [export] Constraints API (#98433)
Wrapper for users to insert constraints into model code.

The constraints will not be maintained in the graph after tracing through make_fx, so retracing with dynamo/make_fx will not work. This will be supported once torch._assert support is implemented; then we can convert the constrain_range calls to torch._asserts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98433
Approved by: https://github.com/avikchaudhuri, https://github.com/tugsbayasgalan
2023-04-12 01:32:44 +00:00
Yanbo Liang
f388bec985 [Dynamo] torch.Generator state should have a source and be reconstructed properly (#97403)
Fixes #97077 partially.

During FX graph propagation, we require that every tensor have a source:
a524123c91/torch/_dynamo/variables/builder.py (L929)
However, the output of ```torch.Generator.get_state()``` is a tensor without a source, since it's generated inside the FX graph. My change follows what we did for [Python random functions](https://github.com/pytorch/pytorch/blob/master/torch/_dynamo/variables/user_defined.py#L260): add a dedicated ```GeneratorStateSource```. We also have to update the reconstruction logic, since we reuse the ```TensorVariable``` reconstruction.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97403
Approved by: https://github.com/jansel, https://github.com/mlazos
2023-03-29 04:31:23 +00:00
Tugsbayasgalan Manlaibaatar
454c48b987 Add experimental torch.export prototype (#95070)
This is a WIP PR for adding the torch.export API in OSS. A couple of points:
- I intentionally named it experimental_export so that people don't get confused thinking this is our official API.
- We don't plan to use the AOTAutograd backend just yet. The reason we have it here is that the functionalization AOTAutograd uses is what we need for export (handling of param/buffer mutation, etc.). In the near future, I will extract the functionalization part and use it on top of make_fx. What we have right now is merely a placeholder.
- The reason we want to do this now is that we want some minimal tests running in OSS so that we can catch regressions earlier.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95070
Approved by: https://github.com/gmagogsfm, https://github.com/zhxchen17
2023-02-28 02:40:19 +00:00