pytorch/torch/_higher_order_ops
Sijia Chen 4995e058bf [user-triton] handle inline_asm_case (#148043)
Summary: Mutation analysis currently fails for all inline_asm ops. This diff handles the case where "is_pure" is set to True, since that attribute indicates the operation does not mutate its input values.
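
As a rough illustration of the idea (not the actual `triton_kernel_wrap.py` code; the op name, attribute lookup, and helper below are assumptions for the sketch), a TTIR mutation pass can simply skip `tt.elementwise_inline_asm` ops whose `is_pure` attribute is true, so their operands are never flagged as mutated:

```python
# Hypothetical sketch of the mutation-analysis rule described above.
# TTIROp and collect_mutated_operands are illustrative stand-ins, not
# PyTorch internals.
from dataclasses import dataclass, field


@dataclass
class TTIROp:
    """Minimal stand-in for a parsed TTIR operation."""
    name: str                                      # e.g. "tt.elementwise_inline_asm"
    attrs: dict = field(default_factory=dict)      # parsed op attributes
    operands: list = field(default_factory=list)   # names of input values


def collect_mutated_operands(ops):
    """Return operand names that may be mutated by the given ops.

    Pure inline-asm ops (is_pure=True) are skipped because the attribute
    guarantees they do not write to their inputs; every other op falls back
    to a conservative "assume it mutates its operands" policy here, purely
    for illustration.
    """
    mutated = set()
    for op in ops:
        if op.name == "tt.elementwise_inline_asm" and op.attrs.get("is_pure"):
            # Corresponds to the "Skipping pure tt.elementwise_inline_asm op"
            # warning in the test output below.
            continue
        mutated.update(op.operands)
    return mutated


if __name__ == "__main__":
    ops = [
        TTIROp("tt.elementwise_inline_asm", {"is_pure": True}, ["%x"]),
        TTIROp("tt.store", {}, ["%ptr"]),
    ]
    print(collect_mutated_operands(ops))  # {'%ptr'} -- %x is not flagged
```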

Test Plan:
../buck-out/v2/gen/fbcode/854b9ed00d28c5c5/caffe2/test/inductor/__triton_kernels__/triton_kernels.par --r test_mutations_inline_asm_kernel

```
test_mutations_inline_asm_kernel_is_pure_true (caffe2.test.inductor.test_triton_kernels.MutationTests) ... W0226 18:10:34.261000 1906801 /data/users/sijiac/fbsource/fbcode/caffe2/torch/_higher_order_ops/triton_kernel_wrap.py:656] TTIR mutation analysis: Skipping pure tt.elementwise_inline_asm op (is_pure=True)
ok

----------------------------------------------------------------------
Ran 2 tests in 0.706s

OK
```

Differential Revision: D69878591

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148043
Approved by: https://github.com/zou3519
2025-02-28 20:52:51 +00:00
__init__.py Rename PrimHOPBase to BaseHOP + minor changes (#146727) 2025-02-11 02:43:37 +00:00
_invoke_quant.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
aoti_call_delegate.py [FX] Refactor immutable collections implementation (#144640) 2025-02-24 09:14:08 +00:00
associative_scan.py [associative_scan] compile backend change to "eager" (#146973) 2025-02-21 20:21:41 +00:00
auto_functionalize.py Fix auto_functionalize x inference_mode (#147925) 2025-02-26 18:05:30 +00:00
base_hop.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
cond.py [cond] support output sizes mismatch in front end (#147130) 2025-02-25 20:28:41 +00:00
effects.py Support static method of torchbind attributes in torch.compile with inductor backend (#146927) 2025-02-20 03:33:19 +00:00
executorch_call_delegate.py [FX] Refactor immutable collections implementation (#144640) 2025-02-24 09:14:08 +00:00
flat_apply.py [dynamo] Initial support for nonstrict_trace (#146367) 2025-02-26 19:47:39 +00:00
flex_attention.py Fix broken meta function for flex-attention backwards (#146563) 2025-02-08 04:13:52 +00:00
foreach_map.py [BaseHOP] change hop(subgraph, operands) to hop(subgraph, *operands) (#146730) 2025-02-20 02:30:36 +00:00
hints_wrap.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
invoke_subgraph.py Rename PrimHOPBase to BaseHOP + minor changes (#146727) 2025-02-11 02:43:37 +00:00
map.py [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
out_dtype.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
run_const_graph.py [export] Unify single and multiple return for hops (#143227) 2025-01-13 03:31:14 +00:00
scan.py [scan] User-facing reverse flag handling (#147886) 2025-02-26 20:04:57 +00:00
strict_mode.py [Dynamo] Ensure torch function modes are dispatched on builtin ops (#137117) 2024-10-09 02:29:40 +00:00
torchbind.py [torchbind] Differentiate ScriptModule and ScriptObject with qualified name (#147399) 2025-02-20 04:57:57 +00:00
triton_kernel_wrap.py [user-triton] handle inline_asm_case (#148043) 2025-02-28 20:52:51 +00:00
utils.py PEP585: More UP006 fixes (#146392) 2025-02-20 06:18:13 +00:00
while_loop.py Revert "Implement cuda graphs implementation of torch.cond and torch.while_loop (#140979)" 2025-02-13 18:04:26 +00:00
wrap.py Require that all HOPs be imported at import torch time (#145939) 2025-01-29 22:27:52 +00:00