pytorch/torch/_higher_order_ops
Latest commit: 7869196482 "Fix torchbind schema str generation (#149239)" by Shangdi Yu, 2025-03-18 04:29:56 +00:00

Summary: Fix Torchbind HOP schema generation when there is no input.

Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r schema
```

Differential Revision: D71231164

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149239
Approved by: https://github.com/zou3519
| File | Last commit message | Date |
| --- | --- | --- |
| `__init__.py` | Rename PrimHOPBase to BaseHOP + minor changes (#146727) | 2025-02-11 02:43:37 +00:00 |
| `_invoke_quant.py` | [BaseHOP] change `hop(subgraph, operands)` to `hop(subgraph, *operands)` (#146730) | 2025-02-20 02:30:36 +00:00 |
| `aoti_call_delegate.py` | [FX] Refactor immutable collections implementation (#144640) | 2025-02-24 09:14:08 +00:00 |
| `associative_scan.py` | [associative_scan] compile backend change to "eager" (#146973) | 2025-02-21 20:21:41 +00:00 |
| `auto_functionalize.py` | Fix auto_functionalize x inference_mode (#147925) | 2025-02-26 18:05:30 +00:00 |
| `base_hop.py` | [BaseHOP] change `hop(subgraph, operands)` to `hop(subgraph, *operands)` (#146730) | 2025-02-20 02:30:36 +00:00 |
| `cond.py` | [cond] support output sizes mismatch in front end (#147130) | 2025-02-25 20:28:41 +00:00 |
| `effects.py` | Support static method of torchbind attributes in torch.compile with inductor backend (#146927) | 2025-02-20 03:33:19 +00:00 |
| `executorch_call_delegate.py` | [FX] Refactor immutable collections implementation (#144640) | 2025-02-24 09:14:08 +00:00 |
| `flat_apply.py` | [dynamo] Make nonstrict_trace work with some pytree.register_constant-ed instances (#148007) | 2025-03-05 21:28:26 +00:00 |
| `flex_attention.py` | Fix broken meta function for flex-attention backwards (#146563) | 2025-02-08 04:13:52 +00:00 |
| `foreach_map.py` | [BaseHOP] change `hop(subgraph, operands)` to `hop(subgraph, *operands)` (#146730) | 2025-02-20 02:30:36 +00:00 |
| `hints_wrap.py` | [hop][be] add utils for more comprehensive input alias and mutation (#145298) | 2025-01-23 18:12:28 +00:00 |
| `invoke_subgraph.py` | Rename PrimHOPBase to BaseHOP + minor changes (#146727) | 2025-02-11 02:43:37 +00:00 |
| `map.py` | [BE]: Apply PERF401 autofixes from ruff (#140980) | 2024-11-20 17:52:07 +00:00 |
| `out_dtype.py` | [BE] typing for decorators - library (#138969) | 2025-01-15 17:08:55 +00:00 |
| `run_const_graph.py` | [export] Unify single and multiple return for hops (#143227) | 2025-01-13 03:31:14 +00:00 |
| `scan.py` | [scan] Refactoring of input checking and dynamo invocation (#142125) | 2025-03-06 01:06:54 +00:00 |
| `strict_mode.py` | [Dynamo] Ensure torch function modes are dispatched on builtin ops (#137117) | 2024-10-09 02:29:40 +00:00 |
| `torchbind.py` | Fix torchbind schema str generation (#149239) | 2025-03-18 04:29:56 +00:00 |
| `triton_kernel_wrap.py` | Revert "Use the device interface for detecting Triton availability (#139171)" | 2025-03-11 18:49:21 +00:00 |
| `utils.py` | [scan] Refactoring of input checking and dynamo invocation (#142125) | 2025-03-06 01:06:54 +00:00 |
| `while_loop.py` | [scan] Refactoring of input checking and dynamo invocation (#142125) | 2025-03-06 01:06:54 +00:00 |
| `wrap.py` | Require that all HOPs be imported at import torch time (#145939) | 2025-01-29 22:27:52 +00:00 |