pytorch/torch/_higher_order_ops
Aaron Orenstein db4ce78d46 PEP585: More UP006 fixes (#146392)
This should be the final PR before we can enable RUFF UP006.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146392
Approved by: https://github.com/justinchuby, https://github.com/albanD, https://github.com/Skylion007
2025-02-20 06:18:13 +00:00
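For context, Ruff rule UP006 flags use of the deprecated `typing` generic aliases (`typing.List`, `typing.Dict`, etc.) where PEP 585 builtin generics suffice. A minimal before/after sketch of the kind of change such a PR makes (the function here is illustrative, not taken from this PR):

```python
# Before (flagged by Ruff UP006): generic aliases imported from typing
# from typing import Dict, List
# def count_tokens(lines: List[str]) -> Dict[str, int]: ...

# After (PEP 585, Python 3.9+): builtin types are subscriptable directly
def count_tokens(lines: list[str]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for line in lines:
        for token in line.split():
            counts[token] = counts.get(token, 0) + 1
    return counts

print(count_tokens(["a b a"]))  # {'a': 2, 'b': 1}
```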
__init__.py Rename PrimHOPBase to BaseHOP + minor changes (#146727) 2025-02-11 02:43:37 +00:00
_invoke_quant.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
aoti_call_delegate.py PEP585: More UP006 fixes (#146392) 2025-02-20 06:18:13 +00:00
associative_scan.py [associative_scan] Lifted arguments (#140043) 2025-02-11 23:25:55 +00:00
auto_functionalize.py [auto_functionalized] Support Tensor(a!)[]? (#145400) 2025-02-05 14:52:39 +00:00
base_hop.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
cond.py [cond] make cond re-dispatch in proxy mode (#146954) 2025-02-14 23:13:14 +00:00
effects.py Support static method of torchbind attributes in torch.compile with inductor backend (#146927) 2025-02-20 03:33:19 +00:00
executorch_call_delegate.py
flat_apply.py Barebones flat_apply HOP (#146060) 2025-02-01 16:17:48 +00:00
flex_attention.py Fix broken meta function for flex-attention backwards (#146563) 2025-02-08 04:13:52 +00:00
foreach_map.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
hints_wrap.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
invoke_subgraph.py Rename PrimHOPBase to BaseHOP + minor changes (#146727) 2025-02-11 02:43:37 +00:00
map.py [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
out_dtype.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
run_const_graph.py [export] Unify single and multiple return for hops (#143227) 2025-01-13 03:31:14 +00:00
scan.py [scan] scan dim handling in user-facing scan() (#145179) 2025-01-30 21:09:07 +00:00
strict_mode.py
torchbind.py [torchbind] Differentiate ScriptModule and ScriptObject with qualified name (#147399) 2025-02-20 04:57:57 +00:00
triton_kernel_wrap.py [inductor] Make triton kernel autotune config defaults backward-compatible (#145494) 2025-01-29 00:31:39 +00:00
utils.py PEP585: More UP006 fixes (#146392) 2025-02-20 06:18:13 +00:00
while_loop.py Revert "Implement cuda graphs implementation of torch.cond and torch.while_loop (#140979)" 2025-02-13 18:04:26 +00:00
wrap.py Require that all HOPs be imported at import torch time (#145939) 2025-01-29 22:27:52 +00:00
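The `[BaseHOP]` commit noted for several files above replaces a single packed `operands` argument with a variadic `*operands`. A hypothetical sketch of that calling-convention change, using plain functions rather than PyTorch's actual BaseHOP machinery:

```python
# Hypothetical illustration of the change described in the commit message:
# hop(subgraph, operands) -> hop(subgraph, *operands).

# Old style: operands passed as one sequence argument
def hop_old(subgraph, operands):
    return subgraph(*operands)

# New style: operands splatted into positional arguments
def hop_new(subgraph, *operands):
    return subgraph(*operands)

def add(x, y):
    return x + y

# Both conventions produce the same result; only the call site changes.
assert hop_old(add, (1, 2)) == hop_new(add, 1, 2) == 3
```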