pytorch/docs
Bruce Chang 311ea0dec0 shrink_group implementation to expose ncclCommShrink API (#164518)
Closes #164529

This PR exposes the new [ncclCommShrink](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/comms.html#ncclcommshrink) API in PyTorch.

This is useful when certain GPUs or nodes need to be excluded from a collective operation, for example in fault-tolerance scenarios or when dynamically adjusting resource utilization.

For more information, see [Shrinking a communicator](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#shrinking-a-communicator).
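
For orientation, below is a minimal, hypothetical usage sketch of how a shrink-style helper could be called from `torch.distributed` to drop an unhealthy rank without tearing down and re-initializing the whole job. The `shrink_group` name, its location under `torch.distributed`, and its signature are assumptions made for illustration only; the actual Python API is defined in the PR.

```python
# Hypothetical sketch, NOT the exact PyTorch API: assumes a `shrink_group`
# helper that wraps ncclCommShrink and returns a new, smaller process group.
# The function name, module, and signature are illustrative assumptions;
# see PR #164518 for the real interface.
import torch
import torch.distributed as dist


def run() -> None:
    # Standard NCCL process-group setup (e.g. launched via torchrun).
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    # Suppose rank 3 is unhealthy and should be excluded from future collectives.
    ranks_to_exclude = [3]

    if rank not in ranks_to_exclude:
        # Assumed helper: surviving ranks obtain a shrunken group backed by
        # ncclCommShrink instead of re-initializing the whole job.
        shrunk_group = dist.shrink_group(ranks_to_exclude)

        # Collectives now run only over the surviving ranks.
        t = torch.ones(1, device="cuda")
        dist.all_reduce(t, group=shrunk_group)

    dist.destroy_process_group()


if __name__ == "__main__":
    run()
```

In NCCL's default shrink mode only the surviving ranks participate in the shrink call, which is why the sketch guards the call with the exclusion check.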

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164518
Approved by: https://github.com/kwen2501
2025-10-30 01:50:54 +00:00
| Name | Last commit message | Last commit date |
|---|---|---|
| cpp | [Fix] fix gramma error in PyTorch docs (#166158) | 2025-10-29 03:01:07 +00:00 |
| source | shrink_group implementation to expose ncclCommShrink API (#164518) | 2025-10-30 01:50:54 +00:00 |
| .gitignore | | |
| libtorch.rst | Add ROCm documentation to libtorch (C++) reST. (#136378) | 2024-09-25 02:30:56 +00:00 |
| make.bat | | |
| Makefile | [ONNX] Filter out torchscript sentences (#158850) | 2025-07-24 20:59:06 +00:00 |
| README.md | | |
| requirements.txt | Revert "Switch to standard pep517 sdist generation (#152098)" | 2025-07-01 14:14:52 +00:00 |

Please see the "Writing documentation" section of CONTRIBUTING.md for details on both writing and building the docs.