pytorch/torch/_higher_order_ops
Last commit: 2024-10-17 16:09:06 +00:00
| File | Latest commit | Date |
| --- | --- | --- |
| __init__.py | [HOO] add hints_wrapper to support passing context hints (#132860) | 2024-08-26 |
| associative_scan.py | Implementation of scan (#134102) | 2024-09-10 |
| auto_functionalize.py | Avoid generating as_strided for aliasing views in auto_functionalize_v2 (#137149) | 2024-10-10 |
| cond.py | [cond] support lifted symint inputs in subgraph (#137519) | 2024-10-17 |
| effects.py | [effects] Turn off dtype promotion for with_effects lowering (#136039) | 2024-09-16 |
| executorch_call_delegate.py | [hop] require hops to override __call__. (#134352) | 2024-08-28 |
| flex_attention.py | [FlexAttention] Support training bias for eager (#136910) (#137526) | 2024-10-15 |
| hints_wrap.py | [HOO] add hints_wrapper to support passing context hints (#132860) | 2024-08-26 |
| map.py | [hop] preserve metadata in re-tracing hop subgraph by running with interpreter (#135159) | 2024-09-05 |
| out_dtype.py | Make the __module__ name of HOO to be always "torch.ops.higher_order" (#132775) | 2024-08-08 |
| run_const_graph.py | [hop] require hops to override __call__. (#134352) | 2024-08-28 |
| scan.py | [scan] support closure (#135602) | 2024-10-16 |
| strict_mode.py | [Dynamo] Ensure torch function modes are dispatched on builtin ops (#137117) | 2024-10-09 |
| torchbind.py | [hop] require hops to override __call__. (#134352) | 2024-08-28 |
| triton_kernel_wrap.py | Add host-side Triton TMA support to Dynamo (#137677) | 2024-10-16 |
| utils.py | [cond] support lifted symint inputs in subgraph (#137519) | 2024-10-17 |
| while_loop.py | [Dynamo] Use custom backend to reenter metadata tf mode when tracing while/cond (#134732) | 2024-09-14 |
| wrap.py | Allow fx graph caching higher order operators (opt-in) (#135877) | 2024-09-24 |