pytorch/torch/nn/attention (latest commit 2025-09-23 22:46:51 +00:00)

Name                Last commit date            Last commit message
experimental/       2025-08-30 04:50:23 +00:00  [FlexAttn] Fix Paged Attention Accuracy via Upper Mask Mod and Prevent Invalid Memory Access (#160861)
__init__.py         2025-08-14 08:55:31 +00:00  [Intel GPU] Support SDPA backend selection and priority setting on XPU (#159464)
_utils.py           2025-06-14 11:27:04 +00:00  [BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548)
bias.py             2025-07-02 22:55:29 +00:00  [BE][12/16] fix typos in torch/ (#156602)
flex_attention.py   2025-09-23 22:46:51 +00:00  Fix warn message (#163578)
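
Taken together, these files make up the public torch.nn.attention surface: __init__.py exposes SDPA backend selection, bias.py provides canned causal biases, and flex_attention.py hosts the FlexAttention API (with the paged-attention pieces under experimental/). Below is a minimal sketch of how they fit together, assuming PyTorch 2.5 or later running on CPU; the tensor shapes and the choice of the MATH backend are illustrative, not taken from the listing above.

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
from torch.nn.attention.bias import causal_lower_right
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

B, H, S, D = 1, 4, 128, 64  # illustrative shapes: batch, heads, seq_len, head_dim
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

# __init__.py: sdpa_kernel() restricts scaled_dot_product_attention to the
# listed backends (MATH here so the sketch runs on CPU).
with sdpa_kernel([SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)

# bias.py: a CausalBias can be passed as attn_mask instead of materializing
# a full boolean mask by hand.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=causal_lower_right(S, S))

# flex_attention.py: masking is expressed as a mask_mod callback; in practice
# flex_attention is usually wrapped in torch.compile to get a fused kernel.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cpu")
out = flex_attention(q, k, v, block_mask=block_mask)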