pytorch/torch/nn/attention
Eddie Yan 0dcd482e54 [SDPA] Respect sdpa_kernel's priority_order setting in torch.compile (#147768)
https://github.com/pytorch/pytorch/pull/140467 added the option to specify a priority order for SDPA, but the `torch.compile` path silently ignored this setting, as I wasn't aware of the separate context-manager handling under `torch.compile`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147768
Approved by: https://github.com/drisspg
2025-03-13 18:52:34 +00:00
| Name | Last commit | Date |
|---|---|---|
| experimental | [PagedAttention] Support different input position for each batch index (#144693) | 2025-01-15 18:03:52 +00:00 |
| __init__.py | [SDPA] Respect sdpa_kernel's priority_order setting in torch.compile (#147768) | 2025-03-13 18:52:34 +00:00 |
| _utils.py | [5/N] Apply Ruff fixes and pyupgrade to Python 3.9 (#144205) | 2025-01-15 04:00:47 +00:00 |
| bias.py | [BE]: Update mypy to 1.11.2 (#133816) | 2024-09-16 19:44:11 +00:00 |
| flex_attention.py | Fix a number of flexattention issues (cse, cudagraph, etc.) (#145059) | 2025-01-29 20:27:39 +00:00 |