pytorch/torch/nn
chilli b82000c1b3 Removed _compile workaround for create_block_mask (#137477)
I also put in a change so that `create_block_mask` properly handles sequence lengths that are not multiples of BLOCK_SIZE (see the sketch after the commit details below).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137477
Approved by: https://github.com/drisspg, https://github.com/BoyuanFeng
2024-10-11 19:04:23 +00:00
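
The commit above touches `create_block_mask` from `torch.nn.attention.flex_attention`. A minimal sketch of the behavior it describes follows, assuming a PyTorch build that ships FlexAttention (2.5 or later) and a CUDA device; the causal `mask_mod`, the sequence length of 1000, and the tensor shapes are illustrative choices rather than anything taken from the PR.

```python
# Minimal sketch (not from the commit itself): building a BlockMask for a
# sequence length that is not a multiple of the default BLOCK_SIZE of 128.
# Assumes torch.nn.attention.flex_attention is available and CUDA is present.
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention


def causal_mask(b, h, q_idx, kv_idx):
    # Standard causal mask_mod: a query may attend to keys at or before it.
    return q_idx >= kv_idx


S = 1000  # deliberately not a multiple of the default BLOCK_SIZE (128)
block_mask = create_block_mask(
    causal_mask, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cuda"
)

q = torch.randn(2, 8, S, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, S, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, S, 64, device="cuda", dtype=torch.float16)
out = flex_attention(q, k, v, block_mask=block_mask)  # shape (2, 8, S, 64)
```

Per the commit message, the intent is that no `_compile` workaround is needed here and the ragged final blocks are handled correctly by `create_block_mask` itself.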
attention Removed _compile workaround for create_block_mask (#137477) 2024-10-11 19:04:23 +00:00
backends
intrinsic
modules Fixed issue with nn.Transformer().generate_square_subsequent_mask() (#137654) 2024-10-10 03:10:01 +00:00
parallel Revert "Add back DistributedDataParallel types that were lost when pyi was removed (#136835)" 2024-10-07 18:59:41 +00:00
qat
quantizable
quantized
utils Allow optional positional arguments for torch.func.functional_call (#134643) 2024-09-12 15:22:06 +00:00
__init__.py [BE][Easy][17/19] enforce style for empty lines in import segments in torch/[a-c]*/ and torch/[e-n]*/ (#129769) 2024-08-04 10:24:09 +00:00
_reduction.py
common_types.py
cpp.py
functional.py Support embedding_bag() with NJT input (#135888) 2024-09-23 17:35:19 +00:00
functional.pyi.in
grad.py
init.py [DTensor] Added naive support for nn.init.orthogonal_ (#132104) 2024-07-30 21:55:09 +00:00
parameter.py Make adding Buffers more like adding Parameters (#125971) 2024-07-31 10:32:40 +00:00
parameter.pyi Revert "[BE]: Update Typeguard to TypeIs for better type inference (#133814)" 2024-08-21 16:13:34 +00:00