| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| attention/ | Revert "[FlexAttention] Enable different qk and v head-dims (#134043)" | 2024-08-22 13:44:17 +00:00 |
| backends/ | [BE][Easy] enable UFMT for torch/nn/parallel (#128596) | 2024-06-17 16:29:22 +00:00 |
| intrinsic/ | [BE][Easy] enable UFMT for torch/nn/ (#128865) | 2024-07-25 02:48:42 +00:00 |
| modules/ | Fix docs for L1Loss and MSELoss (#133501) | 2024-08-15 18:56:55 +00:00 |
| parallel/ | [DeviceMesh] Remove parent mesh concept from _MeshEnv and replace by root mesh (#132339) | 2024-08-07 07:01:12 +00:00 |
| qat/ | [BE][Easy] enable UFMT for torch/nn/ (#128865) | 2024-07-25 02:48:42 +00:00 |
| quantizable/ | [BE][Easy] enable UFMT for torch/nn/ (#128865) | 2024-07-25 02:48:42 +00:00 |
| quantized/ | [BE][Easy] enable UFMT for torch/nn/ (#128865) | 2024-07-25 02:48:42 +00:00 |
| utils/ | Add proper casting to fuse_linear_bn_weights (#134105) | 2024-08-22 14:26:12 +00:00 |
| __init__.py | [BE][Easy][17/19] enforce style for empty lines in import segments in torch/[a-c]*/ and torch/[e-n]*/ (#129769) | 2024-08-04 10:24:09 +00:00 |
| _reduction.py | [BE] enable UFMT for torch/nn/*.py (#128593) | 2024-06-23 16:05:13 +00:00 |
| common_types.py | [BE] enable UFMT for torch/nn/*.py (#128593) | 2024-06-23 16:05:13 +00:00 |
| cpp.py | Flip default value for mypy disallow_untyped_defs [8/11] (#127845) | 2024-06-08 18:49:56 +00:00 |
| functional.py | Grouped Query Attention (#132689) | 2024-08-07 05:35:36 +00:00 |
| functional.pyi.in | [torchgen] reference generated comment to actual location of the generator and template (#130020) | 2024-07-05 21:47:14 +00:00 |
| grad.py | [BE] enable UFMT for torch/nn/*.py (#128593) | 2024-06-23 16:05:13 +00:00 |
| init.py | [DTensor] Added naive support for nn.init.orthogonal_ (#132104) | 2024-07-30 21:55:09 +00:00 |
| parameter.py | Make adding Buffers more like adding Parameters (#125971) | 2024-07-31 10:32:40 +00:00 |
| parameter.pyi | Revert "[BE]: Update Typeguard to TypeIs for better type inference (#133814)" | 2024-08-21 16:13:34 +00:00 |