Commit Graph

16 Commits

Author SHA1 Message Date
zhxchen17
2cb16df6e2 [dynamo] Guard serialization for DUPLICATE_INPUT. (#152687)
This guard does not seem to be exercised very often. Adding a test to at least cover the error handling.

Differential Revision: [D74074837](https://our.internmc.facebook.com/intern/diff/D74074837/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152687
Approved by: https://github.com/jansel
ghstack dependencies: #152615, #152616
2025-05-05 18:05:56 +00:00
zhxchen17
ffd58293f7 [dynamo] Guard serialization for FUNCTORCH_STACK_MATCH (#152616)
Make Functorch interpreters serializable most of the time, so that we can save the guards on functorch states.

## Test Cases:

0. torch.compile() with no functorch layers present. The guard should fail once any layer is pushed.
1. torch.compile() nested in vmap (see the sketch after this list).
2. torch.compile() nested in grad.
3. torch.compile() nested in jvp + vmap.
4. torch.compile() nested in functionalize.
5. torch.compile() nested in vmap + grad.
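
A minimal sketch of case 1, assuming the eager backend and the public torch.func.vmap API (the real tests exercise the serialized FUNCTORCH_STACK_MATCH guard directly; this only illustrates the compile-under-vmap setup):

```
import torch
from torch.func import vmap

# "torch.compile() nested in vmap": the compiled function runs while one
# vmap layer is pushed, so FUNCTORCH_STACK_MATCH should record a stack with
# a single vmap interpreter and fail if a later call sees a different stack.
@torch.compile(backend="eager")
def fn(x):
    return x.sin() + 1

xs = torch.randn(4, 3)
out = vmap(fn)(xs)  # compile with a vmap layer active
assert torch.allclose(out, torch.sin(xs) + 1)
```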

Differential Revision: [D74008787](https://our.internmc.facebook.com/intern/diff/D74008787/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152616
Approved by: https://github.com/zou3519
ghstack dependencies: #152615
2025-05-05 18:05:56 +00:00
zhxchen17
1d1cbcd8a3 [dynamo] Guard serialization for DUAL_LEVEL. (#152615)
Seems the dual level counter should be stored in OutputGraph so that the value can be preserved through round-tripping.
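
For context (not part of this PR's diff), the dual level is forward-mode AD's nesting counter, incremented each time a dual_level context is entered; a tiny illustration:

```
import torch
import torch.autograd.forward_ad as fwAD

# Entering dual_level() pushes a new forward-AD level; the DUAL_LEVEL guard
# records the level that was current when compilation happened.
with fwAD.dual_level():
    x = fwAD.make_dual(torch.randn(3), torch.ones(3))
    y = x.sin()
    primal, tangent = fwAD.unpack_dual(y)
```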

Differential Revision: [D74008786](https://our.internmc.facebook.com/intern/diff/D74008786/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152615
Approved by: https://github.com/jansel, https://github.com/zou3519
2025-05-05 18:05:56 +00:00
zhxchen17
1d8cdf373b [dynamo] Guard serialization for NAME_MATCH (#152332)
Differential Revision: [D73780430](https://our.internmc.facebook.com/intern/diff/D73780430/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152332
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326, #152327, #152328, #152329, #152330, #152331
2025-04-29 20:16:00 +00:00
zhxchen17
5c297b2846 [dynamo] Guard serialization for DISPATCH_KEY_SET_MATCH (#152331)
Differential Revision: [D73780433](https://our.internmc.facebook.com/intern/diff/D73780433/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152331
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326, #152327, #152328, #152329, #152330
2025-04-29 20:16:00 +00:00
zhxchen17
4cb75d7afc [dynamo] Guard serialization for ID_MATCH (#152330)
Differential Revision: [D73780431](https://our.internmc.facebook.com/intern/diff/D73780431/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152330
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326, #152327, #152328, #152329
2025-04-29 20:16:00 +00:00
zhxchen17
0b39124ea3 [dynamo] Guard serialization for NONE_MATCH. (#152329)
Differential Revision: [D73780435](https://our.internmc.facebook.com/intern/diff/D73780435/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152329
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326, #152327, #152328
2025-04-29 20:16:00 +00:00
zhxchen17
ab4091a9fa [dynamo] Guard serialization for BOOL_MATCH. (#152328)
Differential Revision: [D73780434](https://our.internmc.facebook.com/intern/diff/D73780434/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152328
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326, #152327
2025-04-29 20:16:00 +00:00
zhxchen17
c521c45a8a [dynamo] Guard serialization for DICT_CONTAINS (#152327)
Adding serialization for DICT_CONTAINS

Differential Revision: [D73780432](https://our.internmc.facebook.com/intern/diff/D73780432/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152327
Approved by: https://github.com/jansel
ghstack dependencies: #152325, #152326
2025-04-29 20:16:00 +00:00
zhxchen17
52202525b9 [dynamo] Guard serialization for DICT_VERSION (#152326)
I think we shouldn't support DICT_VERSION, for two reasons:
1. Dict versions are not well defined across processes.
2. These guards are pretty rare (only emitted with pytree calls).

Differential Revision: [D73780437](https://our.internmc.facebook.com/intern/diff/D73780437/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152326
Approved by: https://github.com/jansel
ghstack dependencies: #152325
2025-04-29 20:16:00 +00:00
zhxchen17
df663b9e72 [dynamo] Guard serialization for TYPE_MATCH (#152325)
Adding guard serialization for TYPE_MATCH

Differential Revision: [D73780438](https://our.internmc.facebook.com/intern/diff/D73780438/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152325
Approved by: https://github.com/jansel
2025-04-29 20:16:00 +00:00
zhxchen17
d4a8e4e30c [dynamo] Guard serialization for HASATTR (#151349)
Adding guard serialization for type HASATTR

Differential Revision: [D73059073](https://our.internmc.facebook.com/intern/diff/D73059073/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151349
Approved by: https://github.com/jansel, https://github.com/anijain2305
ghstack dependencies: #151318, #151343
2025-04-25 14:16:30 +00:00
zhxchen17
558f45190e [dynamo] Guard serialization for NOT_PRESENT_IN_GENERIC_DICT (#151343)
Adding guard serialization for type NOT_PRESENT_IN_GENERIC_DICT

Differential Revision: [D73057304](https://our.internmc.facebook.com/intern/diff/D73057304/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151343
Approved by: https://github.com/jansel, https://github.com/anijain2305
ghstack dependencies: #151318
2025-04-25 14:16:30 +00:00
zhxchen17
a34c28e0d2 [dynamo] Add guard serialization for tensor matches. (#151318)
This is a proof-of-concept of how we could serialize a guard and deserialize it back from the bytes.

The main behavioral change introduced in this diff is on CheckFunctionManager:

```
check_fn_manager = CheckFunctionManager(code, output_graph, guards_serialization_mode="save")

guards_state: bytes = check_fn_manager.guards_state
```

Once `guards_serialization_mode` is set to `save`, CheckFunctionManager will return an additional `bytes` object called `guards_state`, which should contain all the information needed to deserialize the guards later.

When we load the guards state back, we set `guards_serialization_mode` to `load`:

```
output_graph_state = pickle.loads(guards_state)
check_fn_manager = CheckFunctionManager(code, output_graph_state, guards_serialization_mode="load")
```

# TENSOR_MATCH

Since we have many types of guards to support, we will break the work into small diffs instead of a single diff that supports every guard.

We kick off the work with TENSOR_MATCH in this diff.

# Testing

For each type of guard we test it as follows:
1. Use guard_filter_fn to select one type of guard at a time.
2. Call InstructionTranslator directly on an example function to get an OutputGraph and a CheckFunctionManager (the reference guard manager).
3. Serialize and deserialize the output graph state and rebuild the guards with a new CheckFunctionManager (the loaded guard manager).
4. Feed a set of example inputs to both the reference and loaded guard managers and check that their behavior matches (see the sketch below).
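
A rough sketch of that round trip, assuming `code` and `output_graph` come from running InstructionTranslator on an example function and that the resulting guard managers can be exercised through a `guard_manager.check(f_locals)` call (both are assumptions here; the real harness lives in dynamo's test suite):

```
import pickle

from torch._dynamo.guards import CheckFunctionManager

def roundtrip_guards(code, output_graph, example_inputs):
    # Steps 2-3: build the reference guard manager and capture its serialized state.
    ref = CheckFunctionManager(code, output_graph, guards_serialization_mode="save")
    guards_state: bytes = ref.guards_state

    # Step 3: rebuild the guards from the pickled output graph state.
    output_graph_state = pickle.loads(guards_state)
    loaded = CheckFunctionManager(code, output_graph_state, guards_serialization_mode="load")

    # Step 4: the reference and loaded guard managers should agree on every input.
    for f_locals in example_inputs:
        assert ref.guard_manager.check(f_locals) == loaded.guard_manager.check(f_locals)
```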

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151318
Approved by: https://github.com/jansel, https://github.com/anijain2305
2025-04-25 14:16:23 +00:00
PyTorch MergeBot
b1d055fd6a Revert "[dynamo] Add guard serialization for tensor matches. (#151318)"
This reverts commit 81c4369d81.

Reverted https://github.com/pytorch/pytorch/pull/151318 on behalf of https://github.com/zhxchen17 due to macos test failing ([comment](https://github.com/pytorch/pytorch/pull/151318#issuecomment-2828638168))
2025-04-24 19:22:45 +00:00
zhxchen17
81c4369d81 [dynamo] Add guard serialization for tensor matches. (#151318)
This is a proof-of-concept of how we could serialize a guard and deserialize it back from the bytes.

The main behavioral change introduced in this diff is on CheckFunctionManager:

```
check_fn_manager = CheckFunctionManager(code, output_graph, guards_serialization_mode="save")

guards_state: bytes = check_fn_manager.guards_state
```

Once `guards_serialization_mode` is set to `save`, CheckFunctionManager will return an additional `bytes` object called `guards_state`, which should contain all the information needed to deserialize the guards later.

When we load the guards state back, we set `guards_serialization_mode` to `load`:

```
output_graph_state = pickle.loads(guards_state)
check_fn_manager = CheckFunctionManager(code, output_graph_state, guards_serialization_mode="load")
```

# TENSOR_MATCH

Since we have many types of guards to support, we will break the work into small diffs instead of a single diff that supports every guard.

We kick off the work with TENSOR_MATCH in this diff.

# Testing

For each type of guard we test it as follows:
1. Use guard_filter_fn to select one type of guard at a time.
2. Call InstructionTranslator directly on an example function to get an OutputGraph and a CheckFunctionManager (the reference guard manager).
3. Serialize and deserialize the output graph state and rebuild the guards with a new CheckFunctionManager (the loaded guard manager).
4. Feed a set of example inputs to both the reference and loaded guard managers and check that their behavior matches.

Differential Revision: [D72987485](https://our.internmc.facebook.com/intern/diff/D72987485/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151318
Approved by: https://github.com/jansel, https://github.com/anijain2305
2025-04-24 18:07:01 +00:00