pytorch/torch/_higher_order_ops
Shangdi Yu cf19efd3d9 Support basic TorchBind in aot_compile and aoti_compile_and_package (#148506)
Summary:
**Codegen**

- Skip some codegen steps for torchbind args (such as arg declaration): they are loaded in the proxy executor, so we do not need to declare them in the generated C++ code.
- Added a helper method to get the schema of the CallTorchBind HOP. The returned schema is only the schema of `obj.method()` (see the sketch below).
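
A minimal sketch of that schema relationship, assuming a TorchBind class with an `add(int) -> int` method (the class and method names here are illustrative, not part of this PR):

```python
import torch

# The CallTorchBind HOP takes (obj, method, *args, **kwargs); its effective schema
# is just the schema of obj.method(). For an assumed method `add(int y) -> int`:
schema = torch._C.parse_schema(
    "add(__torch__.torch.classes._TorchScriptTesting._Foo self, int y) -> int"
)
print(schema.name, [arg.name for arg in schema.arguments])  # add ['self', 'y']
```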

**Serialization**
Add support for torchbind objects in serialization.

- The CallTorchBind HOP needs special handling because of its schema: its serialized args are in the format `(obj, method, *args, **kwargs)`.
- `ir.TorchBindObject` inputs are serialized to `as_custom_obj` Arguments (see the sketch below).
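
As a rough illustration of that layout (the variant names follow the export serde `Argument` union, but the exact contents below are assumptions, not verbatim output of this PR):

```python
# Illustrative only: roughly how the args of a CallTorchBind extern node for
# `obj.add(3)` could be serialized. "custom_obj_0" refers to the pickled object
# file described in the Packaging section below.
serialized_args = [
    {"as_custom_obj": {"name": "custom_obj_0",
                       "class_fqn": "__torch__.torch.classes._TorchScriptTesting._Foo"}},
    {"as_string": "add"},  # the method name
    {"as_int": 3},         # *args forwarded to obj.add(...)
]
```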

**Packaging**

Add the torchbind objects file and the `custom_objs_config.json` file to the generated output files of `aot_compile`.

The JSON file is stored in the `data/aotinductor/<model_name>` folder of the PT2 archive.

The torchbind objects are stored in the `data/constants/` folder of the PT2 archive.
Their file names follow the format `f"{CUSTOM_OBJ_FILENAME_PREFIX}{custom_obj_idx}"`, e.g. `custom_obj_0`.
`CustomClassHolder` objects implement their own pickle methods.

Note that this `custom_objs_config.json` file is different from the `model_constants_config.json` file produced by `package_sigmoid()`. The keys in `custom_objs_config.json` correspond directly to the arg names in the extern nodes JSON, whereas the keys in `model_constants_config.json` produced by `package_sigmoid()` are the attribute names in the user model code.

This is required for both internal and OSS torchbind support.
For OSS torchbind support, we also need to package torchbind_constants into the .pt2 output.
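
For reference, a minimal sketch of inspecting those paths in a produced archive (assuming `model.pt2` is a placeholder name for a package produced by `aoti_compile_and_package`; PT2 archives are zip-compatible):

```python
import json
import zipfile

# "model.pt2" is a placeholder path for an archive produced by aoti_compile_and_package.
with zipfile.ZipFile("model.pt2") as archive:
    names = archive.namelist()
    # pickled CustomClassHolder objects: data/constants/custom_obj_0, custom_obj_1, ...
    print([n for n in names if "data/constants/" in n])
    # data/aotinductor/<model_name>/custom_objs_config.json maps extern-node arg
    # names to the custom_obj_<idx> files above
    config = next(n for n in names if n.endswith("custom_objs_config.json"))
    print(json.loads(archive.read(config)))
```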

**Work Left**
We still need to add torchbind support in ProxyExecutor for inductor.aoti_load_package to work. See other diffs in the stack.
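
For context, the OSS path exercised here looks roughly like the sketch below (assuming the `_TorchScriptTesting._Foo` class from PyTorch's custom-class test library is registered; the module and method names are placeholders, and loading the result back with `aoti_load_package` is exactly the part that still needs the ProxyExecutor support above):

```python
import torch
from torch._inductor import aoti_compile_and_package

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder TorchBind object; requires the custom-class test library to be loaded.
        self.obj = torch.classes._TorchScriptTesting._Foo(10, 20)

    def forward(self, x):
        return x + self.obj.add_tensor(x)

# Non-strict export is used here for the TorchBind attribute.
ep = torch.export.export(M(), (torch.ones(3),), strict=False)
pt2_path = aoti_compile_and_package(ep)  # writes the PT2 archive described above
```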

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r schema
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r aot_compile
```

Differential Revision: D69490718

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148506
Approved by: https://github.com/angelayi
2025-03-11 20:55:18 +00:00
__init__.py Rename PrimHOPBase to BaseHOP + minor changes (#146727) 2025-02-11 02:43:37 +00:00
_invoke_quant.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
aoti_call_delegate.py [FX] Refactor immutable collections implementation (#144640) 2025-02-24 09:14:08 +00:00
associative_scan.py [associative_scan] compile backend change to "eager" (#146973) 2025-02-21 20:21:41 +00:00
auto_functionalize.py Fix auto_functionalize x inference_mode (#147925) 2025-02-26 18:05:30 +00:00
base_hop.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
cond.py [cond] support output sizes mismatch in front end (#147130) 2025-02-25 20:28:41 +00:00
effects.py Support static method of torchbind attributes in torch.compile with inductor backend (#146927) 2025-02-20 03:33:19 +00:00
executorch_call_delegate.py [FX] Refactor immutable collections implementation (#144640) 2025-02-24 09:14:08 +00:00
flat_apply.py [dynamo] Make nonstrict_trace work with some pytree.register_constant-ed instances (#148007) 2025-03-05 21:28:26 +00:00
flex_attention.py Fix broken meta function for flex-attention backwards (#146563) 2025-02-08 04:13:52 +00:00
foreach_map.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
hints_wrap.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
invoke_subgraph.py Rename PrimHOPBase to BaseHOP + minor changes (#146727) 2025-02-11 02:43:37 +00:00
map.py
out_dtype.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
run_const_graph.py [export] Unify single and multiple return for hops (#143227) 2025-01-13 03:31:14 +00:00
scan.py [scan] Refactoring of input checking and dynamo invocation (#142125) 2025-03-06 01:06:54 +00:00
strict_mode.py
torchbind.py Support basic TorchBind in aot_compile and aoti_compile_and_package (#148506) 2025-03-11 20:55:18 +00:00
triton_kernel_wrap.py Revert "Use the device interface for detecting Triton availability (#139171)" 2025-03-11 18:49:21 +00:00
utils.py [scan] Refactoring of input checking and dynamo invocation (#142125) 2025-03-06 01:06:54 +00:00
while_loop.py [scan] Refactoring of input checking and dynamo invocation (#142125) 2025-03-06 01:06:54 +00:00
wrap.py Require that all HOPs be imported at import torch time (#145939) 2025-01-29 22:27:52 +00:00