pytorch/torch/nn
Latest commit: d82610c2af by ishanjmukherjee, 2025-05-21 22:46:50 +00:00
docs: fix "should not to be" typo in register_buffer docstring (#153817)
Corrects a small grammatical error in the `register_buffer` docstring, changing "... should not to be ..." to "... should not be ...". This is a docs-only change, so no runtime behavior, tests, or APIs are affected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153817
Approved by: https://github.com/mikaylagawarecki
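For context on the API whose docstring this commit touches, below is a minimal sketch of how `register_buffer` is typically used. The module name, buffer name, and update logic are illustrative only and are not taken from the PR; the point is just that a buffer lives in the module's `state_dict` and moves with `.to()`, but is not a parameter and receives no gradients.

```python
import torch
import torch.nn as nn


class RunningMeanTracker(nn.Module):
    """Toy module illustrating register_buffer: the buffer is saved in the
    state_dict and moved by .to()/.cuda(), but it is not a parameter, so it
    is not returned by parameters() and gets no gradients."""

    def __init__(self, num_features: int) -> None:
        super().__init__()
        # Register a tensor that should not be considered a model parameter.
        self.register_buffer("running_mean", torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Update the buffer outside of autograd.
            with torch.no_grad():
                self.running_mean.lerp_(x.mean(dim=0), 0.1)
        return x - self.running_mean


m = RunningMeanTracker(4)
print(list(m.parameters()))   # [] -- buffers are not parameters
print(list(m.buffers()))      # [tensor([0., 0., 0., 0.])]
print(m.state_dict().keys())  # odict_keys(['running_mean'])
```

Running `m(torch.randn(8, 4))` in training mode updates `running_mean` in place without the optimizer ever seeing it, which is exactly the behavior the docstring describes.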
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `attention` | [FlexAttention] Enforce Q,K,V memory layouts for fp8 flex attention to avoid perf degradation (#153357) | 2025-05-16 04:56:50 +00:00 |
| `backends` | | |
| `intrinsic` | | |
| `modules` | docs: fix "should not to be" typo in register_buffer docstring (#153817) | 2025-05-21 22:46:50 +00:00 |
| `parallel` | [ddp] propagate use_python_reducer to C++ reducer (#152735) | 2025-05-16 01:38:03 +00:00 |
| `qat` | | |
| `quantizable` | | |
| `quantized` | | |
| `utils` | [BE]: Type previously untyped decorators (#153726) | 2025-05-21 15:56:19 +00:00 |
| `__init__.py` | | |
| `_reduction.py` | | |
| `common_types.py` | | |
| `cpp.py` | | |
| `functional.py` | Add pad limit of avg_poolnd and AvgPoolnd (#152680) | 2025-05-04 17:25:22 +00:00 |
| `functional.pyi.in` | [BE] Add `__all__` to torch/nn/functional.pyi and torch/return_types.pyi (#150729) | 2025-05-15 19:01:57 +00:00 |
| `grad.py` | | |
| `init.py` | | |
| `parameter.py` | | |
| `parameter.pyi` | | |