pytorch/torch/nn/attention
chilli b82000c1b3 Removed _compile workaround for create_block_mask (#137477)
I also included a change so that `create_block_mask` properly handles sequence lengths that are not multiples of BLOCK_SIZE (see the sketch after the commit details).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137477
Approved by: https://github.com/drisspg, https://github.com/BoyuanFeng
2024-10-11 19:04:23 +00:00
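
As context for the commit above, here is a minimal sketch of the case it addresses: building a block mask whose sequence length is not a multiple of the default BLOCK_SIZE of 128. This is illustrative code, not code from the PR, and it assumes the PyTorch ≥ 2.5 `torch.nn.attention.flex_attention` API.

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

def causal(b, h, q_idx, kv_idx):
    # Standard causal mask_mod: a query position attends only to itself
    # and earlier key/value positions.
    return q_idx >= kv_idx

S = 1000  # deliberately NOT a multiple of the default BLOCK_SIZE (128)

# B=None / H=None broadcast the mask across batch and heads.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cpu")

q = torch.randn(2, 4, S, 64)
k = torch.randn(2, 4, S, 64)
v = torch.randn(2, 4, S, 64)

# Eager flex_attention falls back to a reference implementation; in practice
# you would wrap it in torch.compile and run on GPU for performance.
out = flex_attention(q, k, v, block_mask=block_mask)
print(out.shape)  # torch.Size([2, 4, 1000, 64])
```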
__init__.py Param fixes in docstring (#136097) 2024-09-21 18:56:34 +00:00
_utils.py [FlexAttention] Enable different qk and v head-dims (#134043) 2024-08-23 01:06:57 +00:00
bias.py [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
flex_attention.py Removed _compile workaround for create_block_mask (#137477) 2024-10-11 19:04:23 +00:00