pytorch/torch/nn/attention
albanD bdbf2792a8 Fix docs build (#155129)
Not sure why the online doc build passes while the local build fails on these broken strings...

~Also pinning the numpy version, even though it is technically optional, to ensure users have the right version, since most users have numpy in their environment anyway.~
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155129
Approved by: https://github.com/janeyx99, https://github.com/svekars
2025-06-09 22:25:20 +00:00
experimental [PagedAttention] Support different input position for each batch index (#144693) 2025-01-15 18:03:52 +00:00
__init__.py [SDPA] Respect sdpa_kernel's priority_order setting in torch.compile (#147768) 2025-03-13 18:52:34 +00:00
_utils.py [FlexAttention] Remove dead code (#150575) 2025-04-03 01:46:19 +00:00
bias.py Fix docs build (#155129) 2025-06-09 22:25:20 +00:00
flex_attention.py [FlexAttention] Enforce Q,K,V memory layouts for fp8 flex attention to avoid perf degradation (#153357) 2025-05-16 04:56:50 +00:00
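
For context on the `__init__.py` entry above: `torch.nn.attention` exports the `sdpa_kernel` context manager whose `priority_order` setting PR #147768 teaches `torch.compile` to respect. Below is a minimal sketch of the basic eager-mode usage, assuming a recent PyTorch build; treating the list ordering as a backend priority is the behavior that PR concerns, and the exact semantics may differ across versions.

```python
# A minimal sketch, assuming a recent PyTorch where torch.nn.attention
# exports sdpa_kernel and SDPBackend (see __init__.py above).
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Toy query/key/value tensors: (batch, heads, seq_len, head_dim).
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# Restrict scaled_dot_product_attention to the listed backends for the
# duration of the context; MATH serves as a portable fallback when
# FLASH_ATTENTION is unavailable on the current device/dtype.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)

print(out.shape)  # torch.Size([2, 8, 128, 64])
```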