pytorch/torch/nn
| Name | Last commit | Date |
| --- | --- | --- |
| attention/ | Change default device to current accelerator (#164399) | 2025-10-03 16:15:09 +00:00 |
| backends/ | | |
| intrinsic/ | | |
| modules/ | Revert "Enable all SIM rules except disabled ones (#164645)" | 2025-10-05 19:32:21 +00:00 |
| parallel/ | Revert "Enable all SIM rules except disabled ones (#164645)" | 2025-10-05 19:32:21 +00:00 |
| qat/ | [BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548) | 2025-06-14 11:27:04 +00:00 |
| quantizable/ | [BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548) | 2025-06-14 11:27:04 +00:00 |
| quantized/ | [BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548) | 2025-06-14 11:27:04 +00:00 |
| utils/ | [2/N] Apply ruff UP035 check in torch files (#164054) | 2025-09-29 03:35:32 +00:00 |
| __init__.py | | |
| _reduction.py | | |
| common_types.py | [2/N] Apply ruff UP035 check in torch files (#164054) | 2025-09-29 03:35:32 +00:00 |
| cpp.py | | |
| functional.py | [2/N] Fix ruff warnings (#164460) | 2025-10-04 03:40:32 +00:00 |
| functional.pyi.in | Revert "Add label_smoothing param in nn.BCELoss and nn.BCEWithLogitsLoss (#150282)" | 2025-08-13 09:01:52 +00:00 |
| grad.py | | |
| init.py | [1/N] Fix ruff warnings (#164333) | 2025-10-01 16:48:32 +00:00 |
| parameter.py | [BE] More torch.nn docs coverage test (except for torch.nn.parallel) (#158654) | 2025-07-25 22:03:55 +00:00 |
| parameter.pyi | | |
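Several of the files in this directory back the public `torch.nn` API: `parameter.py` defines `nn.Parameter`, `init.py` backs `torch.nn.init`, and `functional.py` backs `torch.nn.functional`. A minimal sketch of how they fit together, assuming PyTorch is installed (tensor values below are random, so only shapes are shown):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# parameter.py: nn.Parameter is a Tensor subclass that modules
# register as a trainable parameter.
weight = nn.Parameter(torch.empty(4, 3))

# init.py: in-place initializers such as xavier_uniform_.
nn.init.xavier_uniform_(weight)

# functional.py: stateless operations such as F.linear
# (y = x @ weight.T), mirroring the stateful modules in modules/.
x = torch.randn(2, 3)
out = F.linear(x, weight)
print(out.shape)  # torch.Size([2, 4])
```

The stateful module counterpart, `nn.Linear(3, 4)`, lives under `modules/` and wraps exactly this `Parameter` + `F.linear` combination.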