pytorch/torch/_higher_order_ops (directory listing as of the last commit, 2024-09-23 09:04:24 +00:00)
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | [HOO] add hints_wrapper to support passing context hints (#132860) | 2024-08-26 18:21:22 +00:00 |
| `associative_scan.py` | Implementation of scan (#134102) | 2024-09-10 04:51:16 +00:00 |
| `auto_functionalize.py` | Track base of FunctionalTensor in inference mode. (#135141) | 2024-09-06 00:10:25 +00:00 |
| `cond.py` | [Dynamo] Use custom backend to reenter metadata tf mode when tracing while/cond (#134732) | 2024-09-14 18:52:22 +00:00 |
| `effects.py` | [effects] Turn off dtype promotion for with_effects lowering (#136039) | 2024-09-16 16:14:05 +00:00 |
| `executorch_call_delegate.py` | [hop] require hops to override `__call__`. (#134352) | 2024-08-28 19:56:40 +00:00 |
| `flex_attention.py` | Revert "Allow fx graph caching higher order operators (opt-in) (#135877)" | 2024-09-23 09:04:24 +00:00 |
| `hints_wrap.py` | [HOO] add hints_wrapper to support passing context hints (#132860) | 2024-08-26 18:21:22 +00:00 |
| `map.py` | [hop] preserve metadata in re-tracing hop subgraph by running with interpreter (#135159) | 2024-09-05 21:36:56 +00:00 |
| `out_dtype.py` | Make the `__module__` name of HOO to be always "torch.ops.higher_order" (#132775) | 2024-08-08 16:55:09 +00:00 |
| `run_const_graph.py` | [hop] require hops to override `__call__`. (#134352) | 2024-08-28 19:56:40 +00:00 |
| `scan.py` | [Dynamo] Use custom backend to reenter metadata tf mode when tracing while/cond (#134732) | 2024-09-14 18:52:22 +00:00 |
| `strict_mode.py` | [hop] require hops to override `__call__`. (#134352) | 2024-08-28 19:56:40 +00:00 |
| `torchbind.py` | [hop] require hops to override `__call__`. (#134352) | 2024-08-28 19:56:40 +00:00 |
| `triton_kernel_wrap.py` | Revert "Allow fx graph caching higher order operators (opt-in) (#135877)" | 2024-09-23 09:04:24 +00:00 |
| `utils.py` | Implementation of scan (#134102) | 2024-09-10 04:51:16 +00:00 |
| `while_loop.py` | [Dynamo] Use custom backend to reenter metadata tf mode when tracing while/cond (#134732) | 2024-09-14 18:52:22 +00:00 |
| `wrap.py` | Revert "Allow fx graph caching higher order operators (opt-in) (#135877)" | 2024-09-23 09:04:24 +00:00 |
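
The modules above implement PyTorch's higher-order control-flow ops (cond, while_loop, scan, map, and related wrappers). As a minimal sketch of what this machinery is for, the snippet below uses the public `torch.cond` entry point, which is backed by `cond.py`; it is an illustrative example, not a summary of any of the commits listed here.

```python
import torch

def true_fn(x: torch.Tensor) -> torch.Tensor:
    return x.sin()

def false_fn(x: torch.Tensor) -> torch.Tensor:
    return x.cos()

x = torch.randn(4)
# torch.cond selects a branch based on a boolean (or 0-dim boolean tensor)
# predicate; both branches must return tensors with matching metadata so the
# op can be captured as a single node by torch.compile / torch.export.
out = torch.cond(x.sum() > 0, true_fn, false_fn, (x,))
```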