Commit Graph

70 Commits

Author SHA1 Message Date
Pian Pawakapan
a8dcbf2749 [export] allow Dim(1,2) for export dynamic shapes (#121642)
The current dynamic shapes implementation fixes the lower bound of Dims to 2 for analysis, but allows 0/1 shapes at runtime. This leads to failures when initializing Dim(1,2). This PR sets the lower bound to 0 and avoids erroring out when it conflicts with the generated (2, maxsize) constraint during analysis.
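For illustration, a minimal hedged sketch (assumed usage, not taken from the PR's tests) of a spec whose range includes 1, which previously conflicted with the analysis lower bound of 2:
```
# Hedged sketch: a Dim whose range includes 1; with this change the spec is
# accepted instead of erroring against the (2, maxsize) analysis constraint.
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

dx = Dim("dx", min=1, max=2)
ep = export(M(), (torch.randn(2, 4),), dynamic_shapes=({0: dx},))
```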

Also resolves a derived dim constraints issue with the following code:
```
class Bar(torch.nn.Module):
    def forward(self, x, y):
        return x + y[1:]

dx = Dim("dx", min=1, max=3)
ep = export(
    Bar(),
    (torch.randn(2, 2), torch.randn(3, 2)),
    dynamic_shapes=({0: dx, 1: None}, {0: dx+1, 1: None})
)
print(ep.range_constraints)
```

In main:
```
{s0: ValueRanges(lower=2, upper=3, is_bool=False), s0 + 1: ValueRanges(lower=3, upper=4, is_bool=False)}
```

This PR:
```
{s0: ValueRanges(lower=1, upper=3, is_bool=False), s0 + 1: ValueRanges(lower=2, upper=4, is_bool=False)}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121642
Approved by: https://github.com/avikchaudhuri
2024-03-13 22:59:07 +00:00
Avik Chaudhuri
7fe0cc53e9 make _process_dynamic_shapes an implementation detail (#121713)
Summary: `_process_dynamic_shapes` converts new dynamic shapes to old constraints, but in the future may not need to do so. Preparing for that future.

Test Plan: CI

Differential Revision: D54780374

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121713
Approved by: https://github.com/tugsbayasgalan
2024-03-13 08:33:00 +00:00
Tugsbayasgalan Manlaibaatar
90e886aa6c Sanity check for non-strict (#121687)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121687
Approved by: https://github.com/avikchaudhuri
ghstack dependencies: #121652, #121678
2024-03-12 18:21:32 +00:00
angelayi
e8836759d0 [export] Add effect token to export (#121424)
Following the creation of effect tokens (https://github.com/pytorch/pytorch/pull/120296), we want to now add support for these tokens in export because the calling/returning convention has changed. The inputs are now `(tokens, params, buffers, constants, user_inputs)` and the outputs are `(tokens, buffer_mutations, user_mutations, user_outputs)`. The graph looks something like:
```
graph():
    %arg0_1 : [num_users=1] = placeholder[target=arg0_1]
    %attr : [num_users=2] = placeholder[target=attr]
    %arg1_1 : [num_users=2] = placeholder[target=arg1_1]
    %with_effects : [num_users=2] = call_function[target=torch._higher_order_ops.effects.with_effects](args = (%arg0_1, _TorchScriptTesting.takes_foo.default, %attr, %arg1_1), kwargs = {})
    %getitem : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects, 0), kwargs = {})
    %getitem_1 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects, 1), kwargs = {})
    %with_effects_1 : [num_users=2] = call_function[target=torch._higher_order_ops.effects.with_effects](args = (%getitem, _TorchScriptTesting.takes_foo.default, %attr, %getitem_1), kwargs = {})
    %getitem_2 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects_1, 0), kwargs = {})
    %getitem_3 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects_1, 1), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, %getitem_3), kwargs = {})
    return (getitem_2, add)
```

During unlifting, we will first remove the tokens and with_effect calls using the `remove_effect_tokens` pass. (cc @SherlockNoMad on the pass to remove tokens). This is so that this won't change the calling conventions when retracing. The graph after unlifting looks something like:
```
graph():
    %attr_1 : [num_users=2] = get_attr[target=attr]
    %arg1_1 : [num_users=2] = placeholder[target=arg1_1]
    %takes_foo_default_1 : [num_users=1] = call_function[target=torch.ops._TorchScriptTesting.takes_foo.default](args = (%attr_1, %arg1_1), kwargs = {})
    %takes_foo_default : [num_users=1] = call_function[target=torch.ops._TorchScriptTesting.takes_foo.default](args = (%attr_1, %takes_foo_default_1), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, %takes_foo_default), kwargs = {})
    return (add,)
```
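For context, a hedged sketch of the kind of module that could produce the graphs above (assuming a registered `_TorchScriptTesting.takes_foo` custom op and a TorchBind object attribute; the object's construction is elided and the class shape is illustrative, not the test's actual code):
```
# Hedged sketch: a ScriptObject attribute passed twice through a custom op;
# exporting such a module yields the with_effects nodes shown above.
import torch

class M(torch.nn.Module):
    def __init__(self, foo):
        super().__init__()
        self.attr = foo  # a TorchBind ScriptObject (construction elided)

    def forward(self, x):
        a = torch.ops._TorchScriptTesting.takes_foo(self.attr, x)
        b = torch.ops._TorchScriptTesting.takes_foo(self.attr, a)
        return x + b
```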

Serialization support will be added in a followup.
Note: tokens only affect custom ops that take in ScriptObjects, not ScriptObject methods yet.

Differential Revision: [D54639390](https://our.internmc.facebook.com/intern/diff/D54639390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121424
Approved by: https://github.com/tugsbayasgalan
2024-03-09 02:43:26 +00:00
angelayi
af62a70fab [export] Fix nn_module_stack in retracing (#121423)
Fixes https://fb.workplace.com/groups/1075192433118967/permalink/1391916691446538/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121423
Approved by: https://github.com/zhxchen17
2024-03-08 00:34:11 +00:00
Avik Chaudhuri
0b9bfcf9bb [non-strict export] support tensor attribute without other args (#121176)
Summary: Without args we have a hard time detecting fake modes. This causes a fake mode mismatch error in non-strict (specifically, `aot_export_module`) when the module contains tensor attributes, because we create a fresh fake mode when we cannot detect one. The fix is to pass the same fake mode throughout.
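A hedged sketch of the failing shape of module (an assumed repro, not the added test): a forward that takes no tensor arguments but reads a tensor attribute, exported in non-strict mode.
```
# Hedged sketch, assuming the public torch.export API with strict=False;
# the only tensor involved is a plain module attribute.
import torch
from torch.export import export

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.t = torch.ones(3)  # tensor attribute, not a parameter or buffer

    def forward(self):
        return self.t + 1

ep = export(M(), (), strict=False)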

Test Plan: added test

Differential Revision: D54516595

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121176
Approved by: https://github.com/angelayi, https://github.com/tugsbayasgalan
2024-03-06 08:10:00 +00:00
Angela Yi
4b49bc19e8 [export][reland] Disable exported_program.__call__ (#120019)
Summary: Reland of D53075378 / https://github.com/pytorch/pytorch/pull/119466
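In user terms, a hedged sketch of the intended call pattern after this change (assuming the standard replacement of running the module returned by `ExportedProgram.module()` instead of the `ExportedProgram` itself):
```
# Hedged sketch: ep(x) is disabled by this change; ep.module()(x) is the
# supported way to run the exported program eagerly.
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

x = torch.randn(3)
ep = export(M(), (x,))
out = ep.module()(x)  # instead of ep(x)
```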

Test Plan: CI

Differential Revision: D53827930

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120019
Approved by: https://github.com/ydwu4
2024-03-05 05:29:46 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
58047205ed Delete unnecessary code (#120365)
Summary: Title

Test Plan: CI

Differential Revision: D53828357

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120365
Approved by: https://github.com/Skylion007
2024-03-04 18:02:58 +00:00
suo
66b20b4297 [export][ez] minor variable rename (#121040)
since `_export()` now takes only an `nn.Module` (which is asserted at an upper layer), we should rename this variable from `f` to `mod` and remove some unnecessary isinstance checks

Differential Revision: [D54430381](https://our.internmc.facebook.com/intern/diff/D54430381/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121040
Approved by: https://github.com/angelayi
ghstack dependencies: #121037
2024-03-02 08:49:06 +00:00
suo
505637198a [export] cleanup to rewrite steps (#121037)
1. Add underscores for consistency of private function names.
2. Remove dead code in `_replace_param_buffer_names`.

Differential Revision: [D54429206](https://our.internmc.facebook.com/intern/diff/D54429206/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121037
Approved by: https://github.com/angelayi, https://github.com/zhxchen17
2024-03-02 08:45:50 +00:00
Zhengxu Chen
797d4fbdf4 [export] Log operator set. (#120951)
Summary: as title. We want to count the number of total operator calls, and the distinct set of operators in the exported graph.

Test Plan: CI

Differential Revision: D54390298

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120951
Approved by: https://github.com/tugsbayasgalan
2024-03-01 20:58:31 +00:00
angelayi
c844b377fa [dynamo] Reorder logs (#116106)
Currently, when there is a print/warning call in the code being traced, Dynamo graph breaks, causing export to fail. However, export would like to just skip over these print/warning calls: https://github.com/pytorch/pytorch/issues/113792.

Additionally there's a torch.compile feature request to "reorder prints" so that instead of graph breaking when hitting prints/logging, we can skip over these prints to create larger compiled graphs, and then print the results out after those compiled graphs: https://github.com/pytorch/pytorch/issues/93739. This PR also adds the `reorderable_logging_functions` config for users to register logging functions to be reordered (like `print` or a custom logging function). Printout of the bytecode after reordering the prints looks like the following: P914736600
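A hedged sketch of registering a reorderable logging function (assuming the config is the set `torch._dynamo.config.reorderable_logging_functions` described above):
```
# Hedged sketch: register print and a custom logging function so hitting them
# no longer graph-breaks; their output is emitted after the compiled graph runs.
import torch

def my_log(x):
    print("intermediate:", x)

torch._dynamo.config.reorderable_logging_functions.add(print)
torch._dynamo.config.reorderable_logging_functions.add(my_log)

@torch.compile(backend="eager")
def fn(x):
    y = x + 1
    my_log(y)
    print("y:", y)
    return y * 2

fn(torch.randn(3))
```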

There are some limitations to the printing right now:
* You can only register logging functions, not methods
* Inputs to the logging functions can only be tensors, constants, and format strings
* Inputs to the logging functions which will later be mutated in-place will not be printed correctly

TODO: Add the following tests
* print function with argument of nested data structure;
* print function with argument of nested data structure being updated inside of compile region (this would test if we handle side effect correctly);
* custom defined logging functions with nn.Module or nn.Module attribute arguments;
* custom defined logging functions with submodule input/output as arguments (we need to handle the mapping and fused-out value);
* custom defined logging functions with tensor argument and mutation inside of the function (TBD: this may increase memory usage);

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116106
Approved by: https://github.com/yanboliang
2024-03-01 17:04:24 +00:00
PyTorch MergeBot
63b259492a Revert "[dynamo] Reorder logs (#116106)"
This reverts commit c5472628ff.

Reverted https://github.com/pytorch/pytorch/pull/116106 on behalf of https://github.com/clee2000 due to landrace with 342e7929b8, which removed the import for warnings.  Should be an easy fix after rebase c5472628ff ([comment](https://github.com/pytorch/pytorch/pull/116106#issuecomment-1972586180))
2024-03-01 06:25:46 +00:00
Angela Yi
c5472628ff [dynamo] Reorder logs (#116106)
Currently, when there is a print/warning call in the code being traced, Dynamo graph breaks, causing export to fail. However, export would like to just skip over these print/warning calls: https://github.com/pytorch/pytorch/issues/113792.

Additionally there's a torch.compile feature request to "reorder prints" so that instead of graph breaking when hitting prints/logging, we can skip over these prints to create larger compiled graphs, and then print the results out after those compiled graphs: https://github.com/pytorch/pytorch/issues/93739. This PR also adds the `reorderable_logging_functions` config for users to register logging functions to be reordered (like `print` or a custom logging function). Printout of the bytecode after reordering the prints looks like the following: P914736600

There are some limitations to the printing right now:
* You can only register logging functions, not methods
* Inputs to the logging functions can only be tensors, constants, and format strings
* Inputs to the logging functions which will later be mutated in-place will not be printed correctly

TODO: Add the following tests
* print function with argument of nested data structure;
* print function with argument of nested data structure being updated inside of compile region (this would test if we handle side effect correctly);
* custom defined logging functions with nn.Module or nn.Module attribute arguments;
* custom defined logging functions with submodule input/output as arguments (we need to handle the mapping and fused-out value);
* custom defined logging functions with tensor argument and mutation inside of the function (TBD: this may increase memory usage);

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116106
Approved by: https://github.com/yanboliang
2024-03-01 04:48:44 +00:00
Avik Chaudhuri
f7a809c96a fix dupe deprecated warning in dynamo export (#120896)
Summary:
When we convert `dynamic_shapes` to `constraints` and pass them to `_dynamo.export`, we shouldn't give a deprecation warning. Such conversion happens, e.g., when calling `torch.export.export`, but it can also happen when calling `capture_pre_autograd_graph` (which itself gives this deprecation warning when `constraints` are passed directly as well).

Since `_log_export_usage` is an indicator of a top-level call (it is `True` by default but set to `False`, or at least passed through, by callers), we can (ab)use it to indicate when to give this deprecation warning.

Test Plan: none

Differential Revision: D54350172

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120896
Approved by: https://github.com/BoyuanFeng, https://github.com/zhxchen17
2024-02-29 18:57:42 +00:00
Avik Chaudhuri
342e7929b8 [export] kill deprecated constraints API (#120860)
Summary:
Previously `export` would take `constraints` built with `dynamic_dim(...)`s. This has been deprecated for a while; one can now pass in a `dynamic_shapes` spec built with `Dim(...)`s.
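A hedged sketch of the migration (an assumed example; the removed `constraints=[dynamic_dim(...)]` form is only referenced in the comment):
```
# Hedged sketch: instead of constraints built with dynamic_dim(x, 0), pass a
# dynamic_shapes spec built from Dim objects.
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

batch = Dim("batch", min=2, max=64)
ep = export(M(), (torch.randn(4, 8),), dynamic_shapes=({0: batch, 1: None},))
```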

Here we kill this deprecated API. Eventually this will lead to simplification of the underlying implementation, since the new `Dim`-based specs can map 1-1 with symbolic shapes concepts without going through indirect machinery of `dynamic_dim`-based constraints. It is expected that internal APIs like `_dynamo.export` and `_trace._export_to_torch_ir` will change when that happens.

Leaving `aot_compile` and `capture_pre_autograd_graph` entry points alone for now. This will eventually be updated anyway.

Test Plan: updated tests

Differential Revision: D54339703

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120860
Approved by: https://github.com/suo, https://github.com/tugsbayasgalan
2024-02-29 16:15:50 +00:00
Oleg Khabinov
4b18ab869f [torch.export] Support is_compiling() flag for non-strict mode (#119602)
Summary: In non-strict mode of torch.export(), we didn't set the `is_compiling()` flags to `True`, which some models need.
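A hedged sketch of why this matters for model code (the exact set of flags the PR covers is not listed here; `torch._dynamo.is_compiling()` is used purely as an illustrative example):
```
# Hedged sketch: model code that branches on the compilation flag; with this
# change, non-strict export also takes the "compiling" branch.
import torch
import torch._dynamo

class M(torch.nn.Module):
    def forward(self, x):
        if torch._dynamo.is_compiling():
            return x + 1
        return x - 1

ep = torch.export.export(M(), (torch.randn(2),), strict=False)
```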

Test Plan: Unit tests and manual testing.

Differential Revision: D53624452

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119602
Approved by: https://github.com/suo
2024-02-29 05:52:51 +00:00
Avik Chaudhuri
5472923998 derived dim (#118729)
With the current `Dim`-based dynamic shapes API for export, one can express that dimensions of different inputs must be equal by reusing the same `Dim`. However, non-trivial relationships between such dimensions cannot be expressed.

Recently we have been seeing more and more examples of code that require this additional expressiveness, e.g., where a pair of shapes might differ by one, or one shape might be double another (or simply even).

This PR introduces the concept of a "derived" `Dim`, i.e., a linear arithmetic expression over a `Dim`. By using a combination of `Dim`s and derived `Dim`s to specify input shapes, the desired relationships can be expressed naturally. E.g., a pair of shapes might be `dim` and `dim + 1`, or `dim` and `2*dim`, or even `2*dim` and `dim + 1`.
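A hedged sketch (an assumed example, in the spirit of the cases above) of a spec where one input's dimension is twice another's:
```
# Hedged sketch: y's first dimension is constrained to be exactly 2 * dim,
# where dim is x's first dimension.
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.cat([x, x]) + y

dim = Dim("dim", min=2, max=16)
ep = export(
    M(),
    (torch.randn(3, 4), torch.randn(6, 4)),
    dynamic_shapes=({0: dim, 1: None}, {0: 2 * dim, 1: None}),
)
```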

We extend the current infrastructure that translates `Dim`s to deprecated `dynamic_dim`-based constraints to work with derived `Dim`s. As usual, we raise constraint violation errors when shape guards cannot be verified given a dynamic shapes spec; suggest fixes; and raise runtime errors when future inputs violate the spec.

Importantly, some guards that used to cause forced specializations in the constraint solver because they were deemed "too complex" now do not do so, because they can now be specified as constraints. Since this was what motivated the introduction of a `disable_constraint_solver` flag to some internal APIs, we may not need that flag any more.

Note that shapes of placeholders in exported programs can now contain symbolic expressions and not just symbols.

Differential Revision: D53254587

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118729
Approved by: https://github.com/ezyang
2024-02-28 19:48:32 +00:00
Michael Suo
12f724c779 [export] preserve constant fqn (#120664)
Summary:
Previously we were renaming constants to `lifted_constant_tensor0` or equivalent. This PR changes things so that the constants retain the same FQN as in the original eager module.

Actually, `symbolic_trace` already is supposed to do this, but the code path is not triggered when used from `make_fx`, since we don't pass an actual `nn.Module` instance to `trace()`, but rather a multiply-wrapped-functionalized-lambda-thing.

So, I reproduced the essential logic outside of make_fx, at the export layer.

Test Plan: added a unit test

Differential Revision: D54221616

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120664
Approved by: https://github.com/SherlockNoMad
2024-02-27 06:35:51 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
8646872ff7 Make balance_gradient preserved in export (#120332)
Summary: We can only avoid decomposing custom ops that are CompositeImplicit and functional. From the looks of the implementation, this op is functional, so the fix is just correcting the schema.

Test Plan: CI

Differential Revision: D54019265

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120332
Approved by: https://github.com/zhxchen17
2024-02-23 19:14:08 +00:00
ydwu4
ac2ba7889d [export] turn on replace_set_grad_with_hop_pass in pre_dispatch (#119915)
This PR turns on replace_set_grad_with_hop_pass for pre-dispatch export. To do that, we need to propagate the metadata from the original submodule to the new higher-order op and fix the node names as required by the `_sig_to_specs` pass.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119915
Approved by: https://github.com/tugsbayasgalan
ghstack dependencies: #119732, #119736, #119810, #119913, #119914
2024-02-17 02:18:35 +00:00
Zhengxu Chen
840426e793 [export] Log export time. (#119960)
Summary: as title. we are logging the time to complete one export session.

Test Plan: CI

Differential Revision: D53737766

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119960
Approved by: https://github.com/angelayi
2024-02-15 18:04:15 +00:00
Zhengxu Chen
8f27fde2f5 [export] Log private api uses. (#119848)
Summary:
as title.
The following APIs are logged:
- capture_pre_autograd_graph
- torch._export.aot_compile
- external usage of _export_to_torch_ir (AOTInductor, Pippy)
- constraints API
- public use of torch._dynamo.export

Test Plan: CI

Differential Revision: D53735599

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119848
Approved by: https://github.com/suo
2024-02-14 22:58:23 +00:00
Angela Yi
ee3a7bdc2d [export] Don't error if nn_module_stack doesn't contain a class (#119753)
Summary: When we deserialize nn_module_stack, sometimes the module no longer exists in the python environment so we cannot deserialize it back into the python type and instead it's kept as a string. This causes downstream failures when retracing due to one of our checks in export. This diff just bypasses the check.

Test Plan: CI

Reviewed By: chakriu

Differential Revision: D53527706

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119753
Approved by: https://github.com/zhxchen17
2024-02-14 16:56:11 +00:00
Zhengxu Chen
8069b29603 [export] Implement logging for scuba. (#119585)
Summary: As we're growing the user surface of torch.export, we'd like to understand better how people are using our APIs. It's also possible to analyze the usages based on static analysis, but due to the fact that there could be many creative ways to call things in Python, I think just building some logging infra will benefit us in the short term and gain us some insights.

Test Plan:
buck test caffe2/test:test_export
{F1454519846}

Reviewed By: tugsbayasgalan

Differential Revision: D53618220

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119585
Approved by: https://github.com/avikchaudhuri
2024-02-12 17:28:14 +00:00
Avik Chaudhuri
70a364d402 non-strict improvements: constant args and kwargs (#119529)
This PR makes a couple of improvements to non-strict to bring it closer to strict. (This lets us remove some expected failures from test_export.)

1. Support constant arguments (easy).
2. Support keyword arguments. This forces us to add kwargs to `aot_export_module`. Indeed there is no way to make this work otherwise, because some arguments in a function signature can be keyword-only and thus cannot be simulated by positional arguments alone. Adding kwargs to `aot_export_module` turns out to be fairly routine, but there is a bit of an unsatisfactory fork between how it is called by strict and non-strict: because strict calls it on a graph module, kwargs must be converted to positional arguments. So kwargs in `aot_export_module` really only come into play in non-strict.
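A hedged sketch of the user-facing effect (an assumed example; the removed expected-failures are not reproduced here): non-strict export of a module with a constant positional argument and a keyword-only argument.
```
# Hedged sketch: constant positional arg and keyword-only arg under strict=False.
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x, n, *, scale=2.0):
        return x * n * scale

ep = export(M(), (torch.randn(3), 4), kwargs={"scale": 0.5}, strict=False)
```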

Differential Revision: D53600977

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119529
Approved by: https://github.com/zhxchen17, https://github.com/tugsbayasgalan
2024-02-10 02:55:40 +00:00
suo
0f12c0af44 [export] allow user input mutation in aot_export (#119356)
This PR enables input mutation in aot_export by removing the guard and ensuring that the GraphSignature is properly wired up.

This allows us to undo the gross hack in torch.export where we lifted user inputs to buffers in order to get around missing upstream support in aot_export. It also makes input mutation work properly for non-strict mode.

Mutations on inputs that require_grad are still banned (I added a test for a non-parameter input as well, just to make sure).
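A hedged sketch (an assumed example) of the now-permitted pattern: mutating a user input that does not require grad.
```
# Hedged sketch: in-place mutation of a non-requires_grad user input flows
# through export and is reflected in the graph signature.
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x):
        x.add_(1)  # in-place mutation of a user input
        return x * 2

ep = export(M(), (torch.zeros(3),))
```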

Differential Revision: [D53507440](https://our.internmc.facebook.com/intern/diff/D53507440/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119356
Approved by: https://github.com/bdhirsh, https://github.com/zhxchen17, https://github.com/titaiwangms
2024-02-08 22:02:24 +00:00
Michael Suo
f79ae7599a [export] fakify module state in nonstrict (#119297)
Summary:
Previously, we were not fakifying module state explicitly in the nonstrict path.

This led to errors when modules were constructed under a fake mode, since the user-provided fake mode was clashing with the one that we had constructed internally to fakify the inputs.

This fixes things to use a single fake mode for everything.

As a side effect, this raised the question of how we ought to serialize state_dicts/constants that might be fake tensors. Naively calling torch.save understandably explodes—so this diff piggybacks on our infra for doing this on meta["val"]. Open to revising this, I'm low confidence that it's the best way to do it.

Test Plan: unit tests

Differential Revision: D53484942

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119297
Approved by: https://github.com/tugsbayasgalan
2024-02-07 17:12:22 +00:00
Edward Z. Yang
3f0fd36835 Introduce size oblivious guards (#118579)
Fixes https://github.com/pytorch/pytorch/issues/117361

The implementation here slightly diverges from what was proposed in the issue, so I will recap what this PR is doing here. Today, when doing computations involving size-like unbacked SymInts, we assume for all operations that the compile time range of the integer is `[2, inf]`, even though at runtime we also accept zero and one.

This PR removes the carte blanche assumption, and instead does the analysis in a much more limited and controlled fashion: only for guards which we have designated as "size oblivious" are we willing to do the analysis under the assumption that the range of all size-like unbacked SymInts is `[2, inf]`; otherwise, we will faithfully only do analysis with `[0, inf]` (or whatever the user provided) bounds.

The infra pieces of this PR are:

* Remove runtime_var_to_range from torch/fx/experimental/symbolic_shapes.py; modify `_constrain_range_for_size` to refine the range without clamping min to 2, and instead add the symbol to a `size_like` set in the ShapeEnv
* When evaluating an expression, if the expression is requested to be evaluated in a `size_oblivious` way, we attempt to statically compute the value of the expression with the assumption that all symbols in `size_like` are updated to assume that they are `>= 2`.
* Add Python and C++ APIs for guarding on a SymBool in a size-oblivious way. In C++, I also need to add some helpers for performing symbolic comparisons, since the stock comparisons immediately specialize in the "normal" way.
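A hedged sketch of the Python API mentioned above (assuming it is exposed as `guard_size_oblivious` in `torch.fx.experimental.symbolic_shapes`; the helper function below is illustrative):
```
# Hedged sketch: evaluate a data-dependent condition size-obliviously, i.e.
# assuming size-like unbacked symbols are >= 2, instead of guarding on it.
import torch
from torch.fx.experimental.symbolic_shapes import guard_size_oblivious

def first_row_or_empty(t: torch.Tensor) -> torch.Tensor:
    # With an unbacked size, `t.shape[0] == 0` would raise a data-dependent
    # guard error; the size-oblivious check statically resolves to False.
    if guard_size_oblivious(t.shape[0] == 0):
        return t.new_zeros(t.shape[1:])
    return t[0]

print(first_row_or_empty(torch.randn(3, 4)).shape)  # plain bools pass through
```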

The rest of the changes of the PR are marking various spots in PyTorch framework code as size oblivious, based on what our current test suite exercises.

As you review the places where we have marked things as size oblivious, it may become clear why I ended up not opting for the "designate a branch as the default branch when it's not statically obvious which way to go": for some of the conditions, this answer is rather non-obvious. I think potentially there is another refinement on top of this PR, which is something like "I don't care if you can't figure it out with ValueRange analysis, go down this path anyway if there are unbacked sizes involved." But even if we add this API, I think we are obligated to attempt the ValueRange analysis first, since it can lead to better outcomes sometimes (e.g., we are able to figure out that something is contiguous no matter what the unbacked size is.)

When is it permissible to mark something as size oblivious? Heuristically, it is OK anywhere in framework code if it gets you past a guard on unbacked SymInt problem. It is somewhat difficult to provide a true semantic answer, however. In particular, these annotations don't have any observational equivalence guarantee; for example, if I have `torch.empty(u0, 1).squeeze()`, we will always produce a `[u0]` size tensor, even though if `u0 == 1` PyTorch will actually produce a `[]` size tensor. The argument that I gave to Lezcano is that we are in fact defining an alternate semantics for a "special" size = 0, 1, for which we have these alternate eager mode semantics. In particular, suppose that we have a constant `special1` which semantically denotes 1, but triggers alternate handling rules. We would define `torch.empty(special1, 1).squeeze()` to always produce a `[special1]` size tensor, making its semantics coincide with unbacked SymInt semantics. In this model, the decision to designate guards as size oblivious is simply a user API question: you put them where ever you need some handling for special1! As we conservatively error out whenever it is not obvious what `special1` semantics should be, it is always valid to expand these semantics to cover more cases (although you can always choose the wrong semantics!)

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118579
Approved by: https://github.com/eellison, https://github.com/lezcano
2024-02-06 19:45:32 +00:00
Wilson Hong
5ebed6f1c3 [torch] fix comment typo (#118656)
Summary: as title

Differential Revision: D49841787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118656
Approved by: https://github.com/Skylion007, https://github.com/zhxchen17
2024-02-04 22:20:09 +00:00
Angela Yi
454e6b380c [export] Prevent specialization on backends (#118683)
Summary: https://github.com/pytorch/pytorch/issues/118289 shows that sometimes we will decompose into backend-specific operators, causing some specializations. We should probably avoid this by disabling these by default?

Test Plan: CI

Differential Revision: D53241300

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118683
Approved by: https://github.com/zhxchen17
2024-02-02 23:33:59 +00:00
Michael Suo
bf4e171539 [export] support non-persistent buffers (#118969)
Summary:
X-link: https://github.com/pytorch/executorch/pull/1817

Basic support for non-persistent buffers, which are buffers that do not show up in the state dict.
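A hedged sketch (an assumed example, not the diff's tests) of the newly supported case:
```
# Hedged sketch: a buffer registered with persistent=False is absent from the
# state dict but can still be used and exported.
import torch
from torch.export import export

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("cache", torch.zeros(4), persistent=False)

    def forward(self, x):
        return x + self.cache

assert "cache" not in M().state_dict()  # non-persistent: not in the state dict
ep = export(M(), (torch.randn(4),))
```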

One weird twist is that most of our other systems (FX, aot_export, dynamo) have completely buggy handling of non-persistent buffers. I tried to go on a wild goose chase to fix them all, but it got to be too much. So I introduced some sad rewrite passes in `_export` to make the final state dict correctly align with the original module's state dict.

This exposed some bugs/ambiguous handling of parameters/buffers in existing test code. For example, `TestSaveLoad.test_save_buffer` traced over a module that was not in the root module hierarchy and caused some weird behavior. I think we should error explicitly on use cases like this: https://github.com/pytorch/pytorch/issues/118410. For now I just rewrote the tests or skipped them.

As a side effect, this diff tightened up quite a few sloppy  behaviors around state dict handling:
- Tensor attributes were getting promoted to be buffers—bad!
- Tracing through a module not in the children of the root module would add its parameters/buffers to the state dict—bad!

This behavior is unlikely to show up in user code since the model would be totally broken, but did show up in a bunch of tests.

#buildmore

Test Plan:
unit tests
sandcastle

Differential Revision: D53340041

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118969
Approved by: https://github.com/guangy10, https://github.com/huydhn, https://github.com/titaiwangms
2024-02-02 19:16:08 +00:00
angelayi
05ac295177 [export] Fix bug with user input mutations (#118942)
We hit an edge case where the exported graph contains placeholder nodes whose names conflict with names generated by aot_export; in that case we didn't update the `user_inputs_to_mutate` field in the graph signature correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118942
Approved by: https://github.com/tugsbayasgalan, https://github.com/zhxchen17
2024-02-02 09:02:04 +00:00
PyTorch MergeBot
221747507d Revert "[export] support non-persistent buffers (#118612) (#118722)"
This reverts commit a43c28368c.

Reverted https://github.com/pytorch/pytorch/pull/118722 on behalf of https://github.com/atalman due to broke linux-jammy-py3-clang12-executorch ([comment](https://github.com/pytorch/pytorch/pull/118722#issuecomment-1921484565))
2024-02-01 14:39:29 +00:00
Michael Suo
a43c28368c [export] support non-persistent buffers (#118612) (#118722)
Summary:
X-link: https://github.com/pytorch/executorch/pull/1769

Basic support for non-persistent buffers, which are buffers that do not show up in the state dict.

One weird twist is that most of our other systems (FX, aot_export, dynamo) have completely buggy handling of non-persistent buffers. I tried to go on a wild goose chase to fix them all, but it got to be too much. So I introduced some sad rewrite passes in `_export` to make the final state dict correctly align with the original module's state dict.

This exposed some bugs/ambiguous handling of parameters/buffers in existing test code. For example, `TestSaveLoad.test_save_buffer` traced over a module that was not in the root module hierarchy and caused some weird behavior. I think we should error explicitly on use cases like this: https://github.com/pytorch/pytorch/issues/118410. For now I just rewrote the tests or skipped them.

Test Plan: added a unit test

Differential Revision: D53253905

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118722
Approved by: https://github.com/SherlockNoMad, https://github.com/angelayi
2024-02-01 00:36:09 +00:00
Boyuan Feng
7aff92c838 [torch] Expose dynamic_shapes api at multiple levels (#118695)
Summary: Exposes `dynamic_shapes` api at multiple levels so it's easier to replace the old API `dynamic_dim()` with the new API `Dim()`.

Test Plan: CI

Differential Revision: D53246409

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118695
Approved by: https://github.com/ydwu4
2024-01-31 18:50:01 +00:00
Sherlock Huang
9afd539075 [sigmoid] update serialization to include custom objs (#118684)
Summary: Update the serialization code to handle custom objs.

Test Plan: buck2 run 'fbcode//mode/dev-nosan' fbcode//sigmoid/frontend/test_gpu:serializer_test

Differential Revision: D53139356

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118684
Approved by: https://github.com/angelayi, https://github.com/suo
2024-01-31 08:23:34 +00:00
suo
6511811ebb [export] preserve metadata during nonstrict tracing (#118607)
Previously, non-strict tracing would wipe the metadata of GraphModules, because the wrapper class we use was not detected as a GraphModule, so metadata preservation was not turned on.

Differential Revision: [D53139354](https://our.internmc.facebook.com/intern/diff/D53139354/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118607
Approved by: https://github.com/zhxchen17
2024-01-30 18:27:52 +00:00
Angela Yi
a93940b5db [export] Allow constant outputs + None input/outputs (#117894)
Added support for constant outputs. We will just embed the constant directly into the output, like `return (x, 1)`.
Also adds support for None inputs/outputs. For None inputs we handle them the same way as constants: a placeholder with no users is inserted into the graph, and the None is embedded into whatever operator uses it. For None outputs, we also handle them the same way as constants: we embed them into the output, like `return (x, None)`.
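A hedged sketch (an assumed example) of the kinds of signatures this enables:
```
# Hedged sketch: a None input and constant / None outputs are embedded rather
# than treated as graph values.
import torch
from torch.export import export

class M(torch.nn.Module):
    def forward(self, x, flag=None):
        return x + 1, 1, None  # tensor output, constant output, None output

ep = export(M(), (torch.randn(2), None))
```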

Differential Revision: D52881070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117894
Approved by: https://github.com/zhxchen17
2024-01-25 23:37:34 +00:00
Edward Z. Yang
903e1913ff Rename unbacked SymInt prefix to u (#117859)
Currently, it conflicts with Inductor's naming convention for index
variables

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117859
Approved by: https://github.com/lezcano, https://github.com/jansel, https://github.com/avikchaudhuri
2024-01-22 20:53:47 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
f316c35a34 [export] Support preserving submodule calling convention in non-strict export (#117796)
Summary: Title

Test Plan: CI

Reviewed By: zhxchen17

Differential Revision: D52889236

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117796
Approved by: https://github.com/angelayi
2024-01-19 17:16:45 +00:00
suo
4057d005ff Initial torchbind support in PT2 (#117697)
This PR adds the bare minimum functionality to get torchbind working in an e2e testable way on PT2.

It implements:
* ProxyTensor support
* Simple torch.export support (proxytensor-only path, e.g. non-strict).
* add some tests exercising the path.

Because all this is not fully baked, I hide the functionality behind a feature flag (`enable_torchbind_tracing()`) so it does not affect regular users for now.
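A hedged sketch of how the flag might be used (assuming it is exposed as a context manager, e.g. under `torch._higher_order_ops.torchbind`; the TorchBind object and custom op below are purely illustrative and must already be registered):
```
# Hedged sketch: gate tracing of a module holding a TorchBind object behind the
# enable_torchbind_tracing() flag named above.
import torch
from torch._higher_order_ops.torchbind import enable_torchbind_tracing
from torch.export import export

class M(torch.nn.Module):
    def __init__(self, obj):
        super().__init__()
        self.obj = obj  # a registered TorchBind ScriptObject

    def forward(self, x):
        return torch.ops.my_lib.takes_obj(self.obj, x)  # hypothetical custom op

with enable_torchbind_tracing():
    ep = export(M(my_obj), (torch.randn(3),))  # my_obj: construction elided
```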

Still on the agenda:
* Dynamo support
* Actual FakeMode support
* Mutability support

Hoping to get this first bit in as a standalone, as it will unblock some more extensive experimentation/testing going on internally.

Differential Revision: [D51825372](https://our.internmc.facebook.com/intern/diff/D51825372/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117697
Approved by: https://github.com/SherlockNoMad
2024-01-19 06:28:20 +00:00
Angela Yi
92d718aed1 [export] Add lifted constant obj to input (#116985)
Test Plan: wip

Differential Revision: D52556070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116985
Approved by: https://github.com/suo
2024-01-18 22:10:53 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
1a790f5a61 [RELAND] Error grad mode op in export API (#117420)
Summary: Title

Test Plan: CI

Differential Revision: D52706691

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117420
Approved by: https://github.com/angelayi
2024-01-13 21:36:29 +00:00
PyTorch MergeBot
bfc336308a Revert "Error grad mode op in export API (#117187)"
This reverts commit 89ef426ba0.

Reverted https://github.com/pytorch/pytorch/pull/117187 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/117187#issuecomment-1887363580))
2024-01-11 15:01:36 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
89ef426ba0 Error grad mode op in export API (#117187)
Summary:
This is reland of https://github.com/pytorch/pytorch/pull/116339
Needed some internal adjustments to make it work properly. Original credit goes to andrewlee302.

Test Plan: CI

Differential Revision: D52674706

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117187
Approved by: https://github.com/suo
2024-01-11 09:06:59 +00:00
angelayi
79de14546d [export] Add TORCH_LOGS=export (#116993)
Adds TORCH_LOGS=export, which currently includes dynamo/dynamic logs. In the future, if we add any logs under the torch/export directory, they will also show up under TORCH_LOGS=export.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116993
Approved by: https://github.com/avikchaudhuri
2024-01-11 03:02:23 +00:00
PyTorch MergeBot
e88d0648ed Revert "[export] Error grad mode op in export API (#116339)"
This reverts commit 9431798521.

Reverted https://github.com/pytorch/pytorch/pull/116339 on behalf of https://github.com/tugsbayasgalan due to PR below this in the stack broke torchrec/sigmoid tests ([comment](https://github.com/pytorch/pytorch/pull/116339#issuecomment-1884599027))
2024-01-10 10:42:33 +00:00
chundian
9431798521 [export] Error grad mode op in export API (#116339)
Summary:
As export currently doesn't support training, grad mode ops don't
make sense. To avoid confusion, we choose to error out early if there
exist grad mode ops.

Test Plan:
python test/export/test_safeguard.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116339
Approved by: https://github.com/tugsbayasgalan
2024-01-05 22:28:57 +00:00
Angela Yi
6413511713 [export][refactor][4/n] Make equality_constraints optional (#116233)
Summary: needed to remove equality_constraints eventually :P

Test Plan: CI

Differential Revision: D52351709

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116233
Approved by: https://github.com/tugsbayasgalan
2024-01-05 00:50:52 +00:00