Mirror of https://github.com/zebrajr/pytorch.git (synced 2025-12-07 12:21:27 +01:00)
Summary: Previously, we bailed out of the Triton kernel analysis pass upon seeing a `tt.reduce` op. In this PR, we support the op and no longer bail out.

Test Plan: This is a bit tricky, as the extension is added to the MLIR walk-based analysis code path, which is active only when the MLIR bindings added in https://github.com/openai/triton/pull/3191 are available. So for now I've run `test_argmax` and `test_reduce_sum` manually with a newer Triton version than the current pin. When the pin updates, we'll make those tests official (left a TODO comment).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121706
Approved by: https://github.com/jansel
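To illustrate the idea behind the change, here is a minimal, self-contained sketch of a walk-based op analysis that handles a reduce op instead of bailing out on it. All names here (`Op`, `analyze`, the `"tt.*"` op strings as plain data) are hypothetical stand-ins for illustration; this is not PyTorch's actual analysis code, which walks real MLIR operations via the Triton bindings.

```python
# Hypothetical sketch: a walk over a toy op list that, like the pass
# described above, treats "tt.reduce" as a supported (read-only) op
# rather than a reason to abandon the analysis.
from dataclasses import dataclass, field


@dataclass
class Op:
    name: str                       # e.g. "tt.load", "tt.reduce", "tt.store"
    operands: list = field(default_factory=list)


def analyze(ops):
    """Walk the ops and collect buffers that may be mutated.

    Returns None (a conservative bail-out) on unknown ops; previously,
    "tt.reduce" would also have triggered that bail-out.
    """
    mutated = set()
    for op in ops:
        if op.name == "tt.store":
            mutated.update(op.operands)       # stores mutate their targets
        elif op.name == "tt.reduce":
            continue                          # supported: a reduce only reads
        elif op.name.startswith("tt."):
            continue                          # other known ops: keep walking
        else:
            return None                       # unknown op: bail out
    return mutated


kernel = [
    Op("tt.load", ["in_ptr"]),
    Op("tt.reduce", ["acc"]),
    Op("tt.store", ["out_ptr"]),
]
print(analyze(kernel))
```

With the reduce handled, the walk completes and reports only the stored-to buffer as mutated, which is the behavior the PR enables for kernels containing `tt.reduce`.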
Files:

- __init__.py
- auto_functionalize.py
- cond.py
- effects.py
- map.py
- out_dtype.py
- strict_mode.py
- torchbind.py
- triton_kernel_wrap.py
- utils.py
- while_loop.py
- wrap.py