Commit Graph

735 Commits

Author SHA1 Message Date
Boyuan Feng
eecc8e362c [Inductor] Inplacing with Donated Buffer (#140113)
Currently, inductor does not in-place update a buffer if it is an input buffer, because we don't know whether the input will be used by other functions.

A donated buffer provides the additional information that an input buffer will not be used by other functions, so we can in-place update donated buffers when possible.
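
To illustrate the idea, a minimal sketch in plain PyTorch (illustrative function names, not the Inductor pass itself):

```py
import torch

def scale_out_of_place(x: torch.Tensor) -> torch.Tensor:
    # Without donation information, the compiler must assume the caller
    # may still read x, so the result needs a fresh allocation.
    return x * 2.0

def scale_donated(x: torch.Tensor) -> torch.Tensor:
    # If x is known to be donated (the caller promises never to use it
    # again), the update can safely reuse x's storage in place.
    return x.mul_(2.0)
```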

[Dashboard](https://hud.pytorch.org/benchmark/torchbench/inductor_dynamic?dashboard=torchinductor&startTime=Mon,%2011%20Nov%202024%2018:14:36%20GMT&stopTime=Mon,%2018%20Nov%202024%2018:14:36%20GMT&granularity=hour&mode=training&dtype=amp&deviceName=cuda%20(a100)&lBranch=bf/donated-buffer-inplace&lCommit=5df0769c00e6f9000caeb10fd5cbf0b165f69c2a&rBranch=main&rCommit=2b39a8db7741b816b03677a9c6fec1af05640dee)

![image](https://github.com/user-attachments/assets/f19d961f-7973-418e-9de8-5c2a97950478)
![image](https://github.com/user-attachments/assets/df3bd6a9-58b8-4e8a-8397-9e3b1de9adfe)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140113
Approved by: https://github.com/eellison
2024-11-26 17:19:50 +00:00
Jason Ansel
5268754ebd [inductor] Default impl refactors to IRNode (#141321)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141321
Approved by: https://github.com/yanboliang
2024-11-23 06:25:59 +00:00
Jason Ansel
5062bbcd86 [inductor] Add missing get_reads() method (#141310)
Summary: This is a possible fix for https://fb.workplace.com/groups/1075192433118967/permalink/794017756161443/

Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//ai_infra/distributed_ai/pyper_test_framework/pt2_staging_tests/sw_v2:smallworld_cmf_test -- --exact 'ai_infra/distributed_ai/pyper_test_framework/pt2_staging_tests/sw_v2:smallworld_cmf_test - test_train (ai_infra.distributed_ai.pyper_test_framework.pt2_staging_tests.sw_v2.smallworld_cmf_test.CmfTest)'
```

Differential Revision: D66340927

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141310
Approved by: https://github.com/ezyang
2024-11-22 22:00:18 +00:00
Jason Ansel
3acc6eac49 [inductor] Add typing to ir.py 2 (#140915)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140915
Approved by: https://github.com/aorenste
2024-11-22 04:56:54 +00:00
Jason Ansel
da94ab0b66 [inductor] Add typing to ir.py 1 (#140912)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140912
Approved by: https://github.com/aorenste
ghstack dependencies: #140895, #140910
2024-11-21 20:01:57 +00:00
Jason Ansel
6eca0aee76 [inductor] Refactor ir.Layout into ir.OutputSpec (#140910)
This separates the concepts of a Layout (size/stride/etc.) and an OutputSpec (which includes multiple outputs), which should make typing easier.
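
A rough sketch of the distinction (illustrative classes only, not the actual ir.py definitions):

```py
from dataclasses import dataclass
from typing import Sequence, Union

@dataclass
class Layout:
    # physical description of a single tensor: placement plus memory layout
    device: str
    dtype: str
    size: Sequence[int]
    stride: Sequence[int]

@dataclass
class MultiOutputSpec:
    # a node producing several tensors has one Layout per output,
    # rather than a single Layout of its own
    layouts: Sequence[Layout]

# "what does this node produce?": a single laid-out tensor, or several
OutputSpec = Union[Layout, MultiOutputSpec]
```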

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140910
Approved by: https://github.com/ezyang
ghstack dependencies: #140895
2024-11-21 20:01:57 +00:00
Sam Ginzburg
a847790400 [inductor] reset to zero support for user defined Triton kernels (#140982)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140982
Approved by: https://github.com/aakhundov
2024-11-21 18:53:23 +00:00
Jason Ansel
808f0f656d [inductor] Refactor MutableBox to make IRNode typing easier (#140895)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140895
Approved by: https://github.com/ezyang, https://github.com/Skylion007
2024-11-20 19:50:46 +00:00
Benjamin Glass
4ffce45100 AOTInductor: properly generate cpp_wrapper runtime assertions (#141050)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141050
Approved by: https://github.com/desertfire
ghstack dependencies: #141058
2024-11-20 19:17:47 +00:00
Aaron Gokaslan
12e95aa4ee [BE]: Apply PERF401 autofixes from ruff (#140980)
* Automatically applies ruff rule PERF401, turning loops into equivalent list comprehensions, which are faster and do not leak the loop variable into the enclosing scope (see the sketch below).
* List comprehensions not only often type better, but also carry 50+% less overhead than for loops; they preserve length information and are easier for the interpreter to optimize.
* Manually went back and made mypy happy after the change.
* Also fixed style lints in files covered by flake8 but not by pyfmt.
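
For reference, this is the shape of the rewrite PERF401 performs (a generic illustration, not a diff from this PR):

```py
# before: building a list with an explicit append loop
squares = []
for i in range(10):
    squares.append(i * i)

# after: the equivalent list comprehension, with less per-iteration call
# overhead; the loop variable no longer leaks into the enclosing scope
squares = [i * i for i in range(10)]
```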

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140980
Approved by: https://github.com/justinchuby, https://github.com/malfet
2024-11-20 17:52:07 +00:00
Benjamin Glass
34b2165bdb Insert aten.add into fallback_ops, and fix Tensor -> Scalar conversion in ir.FallbackKernel (#140624)
The code in ir.FallbackKernel will eventually be obviated by the solution for #90923.

Closes #131334.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140624
Approved by: https://github.com/desertfire
2024-11-19 23:34:20 +00:00
PyTorch MergeBot
d472a5f680 Revert "[inductor] Refactor MutableBox to make IRNode typing easier (#140895)"
This reverts commit c79e78b503.

Reverted https://github.com/pytorch/pytorch/pull/140895 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I think test_torchbind_inductor is failing in trunk after this lands ([comment](https://github.com/pytorch/pytorch/pull/140895#issuecomment-2484679319))
2024-11-19 04:25:41 +00:00
Jason Ansel
c79e78b503 [inductor] Refactor MutableBox to make IRNode typing easier (#140895)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140895
Approved by: https://github.com/ezyang, https://github.com/Skylion007
2024-11-19 00:24:35 +00:00
chilli
c1f21bf2b6 Made FlexAttention error on subgraph lowering failure (#140331)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140331
Approved by: https://github.com/drisspg
2024-11-17 02:43:58 +00:00
PyTorch MergeBot
de34f581f1 Revert "Made FlexAttention error on subgraph lowering failure (#140331)"
This reverts commit e68bc76c28.

Reverted https://github.com/pytorch/pytorch/pull/140331 on behalf of https://github.com/malfet due to Looks like it regressed trunk, see 55f1959fc1/1 ([comment](https://github.com/pytorch/pytorch/pull/140331#issuecomment-2479435705))
2024-11-15 17:00:21 +00:00
chilli
e68bc76c28 Made FlexAttention error on subgraph lowering failure (#140331)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140331
Approved by: https://github.com/drisspg
2024-11-15 04:26:01 +00:00
Bin Bao
85deef9ede [AOTI][refactor] Rename generate_extern_kernel_alloc_and_find_schema_if_needed (#140447)
Summary: Rename generate_extern_kernel_alloc_and_find_schema_if_needed to better reflect its meaning.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140447
Approved by: https://github.com/chenyang78
2024-11-14 01:40:58 +00:00
Bin Bao
1f590feaf7 [AOTI][refactor] Update codegen_int_array_var API (#140299)
Summary: codegen_int_array_var and codegen_reinterpret_view need to call different writeline functions depending on which part of the code they are writing. Previously, their APIs took a writer and implicitly assigned a default writer if needed, which was not intuitive. This updates their APIs to explicitly take a writeline function.
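
Schematically, the API change looks like this (a simplified sketch with illustrative names, not the actual cpp-wrapper codegen methods):

```py
from typing import Callable, Optional, Sequence

class LineWriter:
    def __init__(self) -> None:
        self.lines: list[str] = []

    def writeline(self, line: str) -> None:
        self.lines.append(line)

DEFAULT_WRITER = LineWriter()

# before (sketch): takes a writer object and silently falls back to a
# default, so the destination of the line is not obvious at the call site
def codegen_int_array_var_before(
    values: Sequence[int], writer: Optional[LineWriter] = None
) -> None:
    writer = writer if writer is not None else DEFAULT_WRITER
    writer.writeline(f"const int64_t arr[] = {{{', '.join(map(str, values))}}};")

# after (sketch): the caller explicitly passes the writeline function to use
def codegen_int_array_var_after(
    values: Sequence[int], writeline: Callable[[str], None]
) -> None:
    writeline(f"const int64_t arr[] = {{{', '.join(map(str, values))}}};")

# usage: route output to a specific section's writer explicitly
w = LineWriter()
codegen_int_array_var_after([1, 2, 3], w.writeline)
```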

Differential Revision: [D65774584](https://our.internmc.facebook.com/intern/diff/D65774584)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140299
Approved by: https://github.com/frank-wei, https://github.com/chenyang78
2024-11-12 21:39:41 +00:00
Animesh Jain
469eae2ba2 [inductor][invoke_subgraph] Fix SDPA seed/offset issue (#140070)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140070
Approved by: https://github.com/eellison
2024-11-12 04:40:03 +00:00
Animesh Jain
5f7ea7ca6a [invoke_subgraph] Support symint/int as inputs (#140058)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140058
Approved by: https://github.com/ydwu4, https://github.com/eellison
ghstack dependencies: #139162
2024-11-11 22:26:43 +00:00
Adnan Akhundov
838958de94 [inductor] Support autotune restore_value for user-defined Triton kernels (#139851)
This PR adds support for the `restore_value` argument of the
`@triton.autotune` for the user-defined Triton kernels in PT2.

The `kernel.restore_idx` values are extracted in
`ir.UserDefinedTritonKernel`, and the corresponding arg names are
placed into `triton_meta["restore_value"]`. From there, they are
added to the existing `mutated_arg_names` in the caching-autotuner
infra, which leads to the listed args being cloned.
This achieves an effect equivalent to the native `restore_value`.
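
A usage sketch of the Triton-side feature this wires up (kernel and config values are illustrative):

```py
import triton
import triton.language as tl

@triton.autotune(
    configs=[
        triton.Config({"BLOCK": 128}, num_warps=4),
        triton.Config({"BLOCK": 256}, num_warps=8),
    ],
    key=["n_elements"],
    # x_ptr is mutated in place; restore_value tells the autotuner to
    # snapshot and restore it between benchmark runs of each config, so
    # every config is timed against the same input values.
    restore_value=["x_ptr"],
)
@triton.jit
def inplace_scale(x_ptr, n_elements, scale, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(x_ptr + offsets, x * scale, mask=mask)
```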

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139851
Approved by: https://github.com/oulgen
2024-11-08 14:59:00 +00:00
Wu, Chunyuan
a3052b3b7c Inductor cpp wrapper: clean-up hard-coded schema and related code (#139873)
Fixes https://github.com/pytorch/pytorch/issues/112552.

The non-ABI-compatible mode has been removed, so the following values are no longer needed:
- `extern_call_ops`
- `cpp_op_schema`
- `cpp_kernel_key`
- `cpp_kernel_overload_name`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139873
Approved by: https://github.com/jgong5, https://github.com/desertfire
2024-11-08 08:15:51 +00:00
Aaron Orenstein
06f619d999 typing ir.py - part 2 (#131846)
See #131852

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131846
Approved by: https://github.com/eellison
ghstack dependencies: #139238
2024-11-06 00:01:15 +00:00
Aaron Orenstein
c2109ec479 typing ir.py - Disallow untyped defs for ir.py (#139238)
- Remove "mypy: allow-untyped-defs" and mark functions individually with "no-untyped-def"
- Mark some trivial functions with the proper return types (`None` and `torch.dtype`)
- Fixed a type bug in the signature of supported_dtype_of_cpp_wrapper()
- `ruff check torch/_inductor/ir.py --select ANN --fix --unsafe-fixes` and then fixed up things that looked incorrectly applied.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139238
Approved by: https://github.com/Skylion007, https://github.com/ezyang
2024-11-06 00:01:15 +00:00
Aaron Orenstein
51a3d6dbc3 Fix existing lint issues in ir.py (#139237)
- Remove stale mypy "type: ignores"
- Made ir.py pass the rest of the lints

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139237
Approved by: https://github.com/Skylion007
2024-11-05 06:06:12 +00:00
CaoE
3672c688e3 Fix layout for SetSourceTensorKernel (#137973)
Fixes #136837.
`aten.set_.source_Tensor` makes the size and stride of the first input (and of the output) follow those of the second input: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/TensorShape.cpp#L440. If the layouts of the two inputs differ, the subsequent `assert_size_stride` will fail.
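
The aliasing behavior is easy to see in eager mode (a small illustration):

```py
import torch

a = torch.empty(0)
b = torch.randn(4, 4).t()  # non-contiguous view: size (4, 4), stride (1, 4)
a.set_(b)                  # a now shares b's storage, size, and stride
assert a.shape == b.shape
assert a.stride() == b.stride() == (1, 4)
```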

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137973
Approved by: https://github.com/jgong5, https://github.com/jansel
2024-11-05 00:55:17 +00:00
Jason Ansel
ed30fa74ab [inductor] sympy.Integer([01]) -> sympy.S.(Zero|One) (#139523)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139523
Approved by: https://github.com/ezyang
ghstack dependencies: #139364, #139365, #139370, #139452
2024-11-04 04:28:40 +00:00
PyTorch MergeBot
98e11b0021 Revert "[inductor] sympy.Integer([01]) -> sympy.S.(Zero|One) (#139523)"
This reverts commit c53beab377.

Reverted https://github.com/pytorch/pytorch/pull/139523 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing lots of internal tests in D65345157 ([comment](https://github.com/pytorch/pytorch/pull/139364#issuecomment-2452897337))
2024-11-02 06:49:10 +00:00
Jason Ansel
c53beab377 [inductor] sympy.Integer([01]) -> sympy.S.(Zero|One) (#139523)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139523
Approved by: https://github.com/ezyang
ghstack dependencies: #139364, #139365, #139370, #139452
2024-11-02 03:04:22 +00:00
rzou
ccaa2a206a [inductor] make requires_stride_order more unbacked-symint-aware (#137063)
Previously, we tried to sort SymInt strides to determine the stride
order. This PR makes the sorting more unbacked-symint aware: given a Tensor
with sizes (u0, u1, u2), a contiguous layout has strides (u1 * u2, u2, 1), which is
sortable under the guard_size_oblivious assumptions.

Test Plan:
- test case

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137063
Approved by: https://github.com/eellison
2024-10-31 13:11:02 +00:00
Yifu Wang
7765d1ef70 Preliminary registered-buffer collective support via Inductor (#138029)
```
NOTE [lowering-time collective optimization]

In collective communication libraries such as NCCL, every rank maintains
communication buffers that are remotely accessible by some peers. Depending
on the underlying transport, remote accessibility may be established via
mechanisms such as ib_reg_mr, CUDA P2P, or CUDA multicast. Typically, these
buffers are private to the communication library by default, and
communication ops copy user data in and out of these buffers.

To prevent these copies, an optimization commonly known as "user buffer
registration" can be employed. This allows direct establishment of remote
accessibility on user buffers, eliminating the need for copying. However,
this optimization introduces stringent usage requirements, which are
typically hard to satisfy without being intrusive to the user code:

- Establishing remote accessibility is expensive and often done ahead of
time. In such implementations, all ranks must agree on the set of allocations
used for every collective op. Failing to meet this requirement can
lead to runtime errors or even silent correctness issues.
- Even if the collective communication library supports gracefully falling
back to "unregistered" implementations, the fallback mechanism would nullify
the optimization.
- Some communication mechanisms impose stricter requirements than others. For
example, CUDA's multicast + multi-mem instructions require all ranks to agree
not only on the allocations used for every collective but also on the offsets
within these allocations.

To support all different mechanisms with optimal results, we aim to satisfy
the strictest requirement for this family of optimizations - we ensure that
every collective op invocation is guaranteed to operate on the same
allocation, at the same offset, in every iteration.

For eligible collective ops, we identify communication buffers at lowering
time and optionally choose to lower the op to a different kernel
(communication libraries like NCCL handle both registered and non-registered
buffers transparently within the same op, though some may require different
ops for different cases). Later, the codegen will perform "persistent
allocation" to satisfy the aforementioned constraints, and optionally,
perform buffer planning to optimize overall memory usage.
```

### Changes
- Created `comm_lowering.py` for the lowerings of `_c10d_functional` ops. This is to prevent cluttering `lowering.py` as we add more lowering-time collective optimizations. This PR moved the lowerings for `all_reduce` and `all_reduce_` to the file.
- Added `comm_buffer_type: Dict[str, str]` to `GraphLowering` to track whether a buffer is a comm buffer and the type of the comm buffer.
- Added codegen allocation support for comm buffers of type "symm_mem".
- Added support for auto-lowering `_c10d_functional.all_reduce_` to `symm_mem.one_shot_all_reduce`.
- Added an Inductor config for collective optimizations in general (`config._collective`).

### Limitation
Currently, each persistently allocated comm buffer is dedicated to a single callsite. This is not viable in terms of memory usage. However, it is a necessary intermediate state before we tackle memory planning for comm buffers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138029
Approved by: https://github.com/Chillee
ghstack dependencies: #138028
2024-10-30 18:11:09 +00:00
Sam Ginzburg
93d7f90c3a [inductor] getting AOT inductor to treat None args correctly (#139114)
Differential Revision: [D65102228](https://our.internmc.facebook.com/intern/diff/D65102228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139114
Approved by: https://github.com/aakhundov
2024-10-29 08:11:53 +00:00
Jason Ansel
2b937e4e6d [inductor] Cooperative reductions (#137756)
Example generated code for `(x+y).sum()`:
```py
@triton.jit
def triton_unk_fused_add_sum_0(in_ptr0, in_ptr1, out_ptr0, ws_ptr, semaphores_ptr, xnumel, rnumel, XBLOCK : tl.constexpr, RBLOCK : tl.constexpr, RSPLIT : tl.constexpr):
    xnumel = 1
    rnumel = 1048576
    rsplit_id = tl.program_id(0)
    num_rblocks = (rnumel + RBLOCK - 1) // RBLOCK
    rsplit_chunk = (num_rblocks + RSPLIT - 1) // RSPLIT * RBLOCK
    rsplit_start = rsplit_chunk * rsplit_id
    rsplit_end = rsplit_chunk * (rsplit_id + 1)
    xoffset = tl.program_id(1) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
    xmask = tl.full([XBLOCK, RBLOCK], True, tl.int1)
    rbase = tl.arange(0, RBLOCK)[None, :]
    _tmp4 = tl.full([XBLOCK, RBLOCK], 0, tl.float32)
    for roffset in range(rsplit_start, rsplit_end, RBLOCK):
        rindex = roffset + rbase
        rmask = rindex < rnumel
        r0 = rindex
        tmp0 = tl.load(in_ptr0 + (r0), rmask, eviction_policy='evict_first', other=0.0)
        tmp1 = tl.load(in_ptr1 + (r0), rmask, eviction_policy='evict_first', other=0.0)
        tmp2 = tmp0 + tmp1
        tmp3 = tl.broadcast_to(tmp2, [XBLOCK, RBLOCK])
        tmp5 = _tmp4 + tmp3
        _tmp4 = tl.where(rmask, tmp5, _tmp4)
    tmp4 = tl.sum(_tmp4, 1)[:, None]
    if RSPLIT > 1:
        tmp4_ws = (ws_ptr + 0).to(tl.pointer_type(tl.float32))
        tl.store(tmp4_ws + (xindex * RSPLIT + rsplit_id), tmp4, None)
    if RSPLIT > 1:
        triton_helpers.gpu_barrier(semaphores_ptr + (2 * tl.program_id(1) + 0), RSPLIT, True)
    if RSPLIT > 1:
        tmp4_peers = tl.load(tmp4_ws + (xindex * RSPLIT + tl.arange(0, RSPLIT)[None,:]), None, eviction_policy='evict_first')
        tmp4 = tl.sum(tmp4_peers, 1)[:, None]
    if rsplit_id == (0 % RSPLIT):
        tl.store(out_ptr0 + (tl.full([XBLOCK, 1], 0, tl.int32)), tmp4, None)
```
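
A minimal way to produce this kind of kernel (a sketch; whether the cooperative-reduction path is taken depends on Inductor's heuristics and configuration):

```py
import torch

@torch.compile(backend="inductor")
def fused_sum(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # a single large reduction over 1048576 elements, matching the
    # rnumel in the generated kernel above
    return (x + y).sum()

x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
print(fused_sum(x, y))
```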

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137756
Approved by: https://github.com/eellison
2024-10-29 00:45:53 +00:00
PyTorch MergeBot
60d1c7138d Revert "[inductor] Cooperative reductions (#137756)"
This reverts commit fed37dbfbc.

Reverted https://github.com/pytorch/pytorch/pull/137756 on behalf of https://github.com/jeanschmidt due to ROCM tests are timing out :( ([comment](https://github.com/pytorch/pytorch/pull/137756#issuecomment-2441579322))
2024-10-28 13:24:33 +00:00
Jason Ansel
fed37dbfbc [inductor] Cooperative reductions (#137756)
Example generated code for `(x+y).sum()`:
```py
@triton.jit
def triton_unk_fused_add_sum_0(in_ptr0, in_ptr1, out_ptr0, ws_ptr, semaphores_ptr, xnumel, rnumel, XBLOCK : tl.constexpr, RBLOCK : tl.constexpr, RSPLIT : tl.constexpr):
    xnumel = 1
    rnumel = 1048576
    rsplit_id = tl.program_id(0)
    num_rblocks = (rnumel + RBLOCK - 1) // RBLOCK
    rsplit_chunk = (num_rblocks + RSPLIT - 1) // RSPLIT * RBLOCK
    rsplit_start = rsplit_chunk * rsplit_id
    rsplit_end = rsplit_chunk * (rsplit_id + 1)
    xoffset = tl.program_id(1) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
    xmask = tl.full([XBLOCK, RBLOCK], True, tl.int1)
    rbase = tl.arange(0, RBLOCK)[None, :]
    _tmp4 = tl.full([XBLOCK, RBLOCK], 0, tl.float32)
    for roffset in range(rsplit_start, rsplit_end, RBLOCK):
        rindex = roffset + rbase
        rmask = rindex < rnumel
        r0 = rindex
        tmp0 = tl.load(in_ptr0 + (r0), rmask, eviction_policy='evict_first', other=0.0)
        tmp1 = tl.load(in_ptr1 + (r0), rmask, eviction_policy='evict_first', other=0.0)
        tmp2 = tmp0 + tmp1
        tmp3 = tl.broadcast_to(tmp2, [XBLOCK, RBLOCK])
        tmp5 = _tmp4 + tmp3
        _tmp4 = tl.where(rmask, tmp5, _tmp4)
    tmp4 = tl.sum(_tmp4, 1)[:, None]
    if RSPLIT > 1:
        tmp4_ws = (ws_ptr + 0).to(tl.pointer_type(tl.float32))
        tl.store(tmp4_ws + (xindex * RSPLIT + rsplit_id), tmp4, None)
    if RSPLIT > 1:
        triton_helpers.gpu_barrier(semaphores_ptr + (2 * tl.program_id(1) + 0), RSPLIT, True)
    if RSPLIT > 1:
        tmp4_peers = tl.load(tmp4_ws + (xindex * RSPLIT + tl.arange(0, RSPLIT)[None,:]), None, eviction_policy='evict_first')
        tmp4 = tl.sum(tmp4_peers, 1)[:, None]
    if rsplit_id == (0 % RSPLIT):
        tl.store(out_ptr0 + (tl.full([XBLOCK, 1], 0, tl.int32)), tmp4, None)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137756
Approved by: https://github.com/eellison
ghstack dependencies: #138970
2024-10-27 16:31:38 +00:00
chilli
07dbc42881 Stop force realizing to prevent recursion errors unless it's much bigger (#138881)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138881
Approved by: https://github.com/shunting314
ghstack dependencies: #138733, #138794
2024-10-25 18:59:01 +00:00
Adnan Akhundov
f737e3fe2f [inductor] Fix ReinterpretView call in TMADescriptor IR (#138759)
As a result of #137768, the `ReinterpretView` call in `TMADescriptor`
has become invalid, which breaks some TMA tests in
test_triton_kernels.py. This PR fixes that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138759
Approved by: https://github.com/Chillee, https://github.com/eellison
2024-10-25 00:45:44 +00:00
Animesh Jain
dd4dd85210 [hierarchical-compilation][inductor] Support invoke_subgraph HOP (#138031)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138031
Approved by: https://github.com/eellison
ghstack dependencies: #137538, #138036, #137965
2024-10-23 21:32:14 +00:00
eqy
c0e8458aab [Flex Attention] Don't compute fill order to compute stride order just to get fill order back (#138376)
Was a bit confusing to read when working on #138354

"computer-assisted proof"
```
import random
from typing import List

def argsort(seq):
    # preserve original order for equal strides
    getter = seq.__getitem__
    a_r = range(len(seq))
    return list(reversed(sorted(a_r, key=getter, reverse=True)))  # noqa: C413

def stride_order2fill_order(order):
    """
    Convert stride order to fill order
    For channel last format,

    stride order = [3, 0, 2, 1] and fill order = [1, 3, 2, 0]
    """
    lookup = {pos: idx for idx, pos in enumerate(order)}
    fill_order = [lookup[i] for i in range(len(order))]
    return fill_order

def get_stride_order(seq):
    """
    Convert strides to stride order
    """
    sorted_idx: List[int] = argsort(seq)
    out = [0 for _ in range(len(seq))]
    for i, elem in enumerate(sorted_idx):
        out[elem] = i
    fillorder = stride_order2fill_order(out)
    assert fillorder == sorted_idx
    return out

for _ in range(1000):
    a = [0, 1, 2, 3]
    random.shuffle(a)
    get_stride_order(a)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138376
Approved by: https://github.com/drisspg
2024-10-22 18:40:39 +00:00
Sam Ginzburg
c1ead6fba3 Bugfix for passing None args to user defined Triton kernel (#138472)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138472
Approved by: https://github.com/aakhundov
2024-10-21 20:00:04 +00:00
Jason Ansel
85a6a782e5 [inductor] Generalize WorkspaceArg for graph-level semaphores (#138170)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138170
Approved by: https://github.com/Chillee
2024-10-18 23:05:54 +00:00
Jason Ansel
620039c38c [inductor] Respect ir_dataclass(frozen=...) in Python 3.9 (#138247)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138247
Approved by: https://github.com/Skylion007, https://github.com/Chillee
2024-10-18 17:55:12 +00:00
Adnan Akhundov
d116d007ee Add host-side Triton TMA support to Inductor (#137950)
This adds Dynamo tracing support for the host-side Triton TMA API (see `create_2d_tma_descriptor` calls on the host in the [Triton tutorial](https://triton-lang.org/main/getting-started/tutorials/09-persistent-matmul.html#sphx-glr-getting-started-tutorials-09-persistent-matmul-py)). A few notes:

- Here we assume the availability of the host-side TMA API added to upstream Triton in https://github.com/triton-lang/triton/pull/4498. As of the time of writing, this is not part of the PT2 OSS Triton pin (although back-ported internally). The OSS Triton pin update should be done in December 2024.
- Due to the Dynamo support implemented in the previous PR, the `tma_descriptor_metadata` dict is delivered to the `triton_kernel_wrap_` lowering and passed to `ir.UserDefinedTritonKernel` as an additional argument.
- Looking into the `tma_descriptor_metadata`, `ir.UserDefinedTritonKernel` replaces the corresponding `TensorBox` arguments of the kernel (swapped upstream in Dynamo) with new `ir.TMADescriptor` nodes implementing TMA descriptors in Inductor IR.
- `ir.TMADescriptor.__init__` provides the wiring between the upstream underlying `ir.TensorBox` and the downstream `ir.UserDefinedTritonKernel` kernel. In particular, we use `ir.NonOwnedLayout` wrapping `ir.ReinterpretView` to avoid the upstream tensor's buffer being deleted prematurely (before the TMA descriptor is used in the Triton kernel).
- Via `ir.TMADescriptor.codegen`, the Triton's `create_{1d,2d}_tma_descriptor` function call is codegened in the wrapper (in the host code).
- New `TMADescriptorArg` dataclass is added to handle the Triton kernel metadata pertinent to host-side TMA.
- AOT Inductor support will be implemented in a follow-up PR.
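
For orientation, the host-side call being traced looks roughly like this (the `triton.tools.experimental_descriptor` import path and signature follow the Triton tutorial linked above and are assumptions here, not something this PR defines):

```py
import torch
# assumed module path, per the upstream Triton PR referenced above
import triton.tools.experimental_descriptor as ttd

def make_desc(x: torch.Tensor, block_m: int, block_n: int):
    m, n = x.shape
    # host-side 2D TMA descriptor over x; with this PR, Dynamo traces the
    # call and Inductor wraps the result in an ir.TMADescriptor IR node
    return ttd.create_2d_tma_descriptor(
        x.data_ptr(), m, n, block_m, block_n, x.element_size()
    )
```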

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137950
Approved by: https://github.com/eellison
ghstack dependencies: #137677
2024-10-18 06:27:24 +00:00
Xingyuan Li
cc6c248919 [Inductor UT] Generalize newly introduced inductor UTs for intel GPU (Part 2) (#136856)
[Inductor UT] Generalize newly introduced inductor UTs for Intel GPU by reusing:
- `test/inductor/test_inductor_freezing.py`
- `test/inductor/test_layout_optim.py`
- `test/inductor/test_loop_ordering.py`
- `test/inductor/test_memory_planning.py`
- `test/inductor/test_padding.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136856
Approved by: https://github.com/EikanWang, https://github.com/etaf, https://github.com/jansel
2024-10-18 03:58:00 +00:00
chilli
6752e7dc3e Moved some of Inductor IR nodes to be frozen (#137859)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137859
Approved by: https://github.com/ezyang
2024-10-17 18:04:45 +00:00
Yidi Wu
3087b5e431 [cond] support lifted symint inputs in subgraph (#137519)
As titled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137519
Approved by: https://github.com/eellison
2024-10-17 16:09:06 +00:00
Bin Bao
fe43f72be7 [AOTI] Remove the non-ABI-compatible mode (part 2) (#138047)
Summary: Continue to clean up non-ABI-compatible mode related code.

Differential Revision: [D64444327](https://our.internmc.facebook.com/intern/diff/D64444327)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138047
Approved by: https://github.com/chenyang78
ghstack dependencies: #137982, #138016, #138009
2024-10-17 02:54:24 +00:00
Brian Hirsh
a682194a11 inductor: use previous guards to know if a size is 1 for broadcasting (#136670)
Fixes https://github.com/pytorch/pytorch/issues/136640

Today, inductor has some logic to figure out when it needs to do broadcasting during lowering, which just checks if any of the input shapes have sizes equal to 1.

We should already have this information by the time we get to inductor, because our FakeTensor compute will have branched/guarded appropriately on whether any ops performed broadcasting.

In particular, if we have a tensor with a size value of `(64//((2048//(s3*((s2//s3))))))`, and it happens to be equal to one (and it is used in an op that requires this dim to be broadcast), FakeTensorProp will have generated a guard:
```
Eq((64//((2048//(s3*((s2//s3)))))), 1)
```

I chose the simplest possible way to beef up inductor's checks to know when a given size is equal to 1: loop over the existing shape env guards, and if our current size is a sympy expression on the LHS of one of our `Eq(LHS, 1)` guards, then return True.

I'm hoping for feedback on whether or not this approach is reasonable. One better option I could imagine is that our symbolic reasoning should have automatically simplified the size of our tensor down to a constant as part of evaluating that guard. I was originally going to try to do this directly in the shape env, but I ran into a few issues:

(1) I wanted to call some version of `set_replacement(expr, 1)`. But `set_replacement()` only accepts plain symbols on the LHS, not expressions

(2) in theory I could get this to work if I could rework the above expression to move everything that is not a free variable to the RHS, e.g. `Eq(s2, 32)`. It looks like our existing  `try_solve()` logic is... [not quite able](https://github.com/pytorch/pytorch/blob/main/torch/utils/_sympy/solve.py#L27) to do this generally though.

Checking the guards feels pretty simple and easy. Are we worried that it is too slow to iterate over all the guards? I could also cache the lookup so we only need to iterate over guards that are of the form `Eq(LHS, 1)` (see the sketch below).
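
A minimal sketch of the guard scan described above (illustrative names; the real check operates on the shape env's guard objects inside Inductor):

```py
import sympy

def statically_known_one(expr: sympy.Expr, guard_exprs) -> bool:
    # a size expression is known to be 1 iff some recorded guard
    # literally states Eq(expr, 1)
    for g in guard_exprs:
        if isinstance(g, sympy.Equality) and g.lhs == expr and g.rhs == 1:
            return True
    return False

s2, s3 = sympy.symbols("s2 s3", positive=True, integer=True)
size = 64 // (2048 // (s3 * (s2 // s3)))
guards = [sympy.Eq(size, 1)]
assert statically_known_one(size, guards)
```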

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136670
Approved by: https://github.com/ezyang
2024-10-16 22:41:39 +00:00
chilli
1cf78bbf62 Refactored debug_extra to be on ChoiceCaller (and called description) (#137857)
Before:
<img width="644" alt="image" src="https://github.com/user-attachments/assets/17b0fa8a-37c8-494b-8914-9d42c3db4bef">

After:
<img width="1292" alt="image" src="https://github.com/user-attachments/assets/5ee59747-a34f-4dd6-b943-cb5a53d52080">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137857
Approved by: https://github.com/ezyang, https://github.com/jansel, https://github.com/masnesral
ghstack dependencies: #137768
2024-10-15 00:48:14 +00:00
chilli
0e4d42634e Port Inductor dataclasses to be kw_only (#137768)
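Here `kw_only` refers to the standard-library dataclass option (Python 3.10+); a generic illustration of the behavior, not Inductor's actual classes:

```py
from dataclasses import dataclass

@dataclass(kw_only=True)
class Example:
    device: str
    dtype: str

# fields must now be passed by keyword; positional calls raise TypeError,
# which lets subclasses add or reorder fields without breaking call sites
ok = Example(device="cuda", dtype="float32")
```
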
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137768
Approved by: https://github.com/ezyang
2024-10-14 10:33:43 +00:00