pytorch/torch/distributed
ProGamerGov 357b7d589c Fix docstring inconsistencies: string -> str, boolean -> bool (#82410)
### Description

Throughout the PyTorch docs and codebase, the `string` type is referred to in docstrings by two different names (`string` and `str`). This leads to inconsistent docs, as you can see here: https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d

This PR fixes the issue by ensuring that every mention of the string type in docstrings uses `str`, the form for which Sphinx generates hyperlinks.
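As a concrete illustration of the kind of change involved (the function and docstring below are hypothetical, not taken from the PR), the fix amounts to rewriting type names in Google-style docstring annotations so they match Python's built-in type names:

```python
# Hypothetical example. Before the change: Sphinx does not recognize
# "string" or "boolean" as type names, so no hyperlink is generated.
def save_before(obj, path, overwrite=False):
    """Save ``obj`` to disk.

    Args:
        path (string): Destination file path.
        overwrite (boolean): Whether to replace an existing file.
    """

# After the change: "str" and "bool" match the built-in type names,
# so Sphinx can render them as cross-reference hyperlinks.
def save_after(obj, path, overwrite=False):
    """Save ``obj`` to disk.

    Args:
        path (str): Destination file path.
        overwrite (bool): Whether to replace an existing file.
    """
```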

### Testing
No testing should be required for this change.
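Although no formal tests are needed, a quick sweep like the following (a sketch, not part of the PR; the helper name is hypothetical) can confirm that no stray `(string)` or `(boolean)` annotations remain in a module's docstrings:

```python
import inspect
import re

def find_inconsistent_types(module):
    """Return names of module members whose docstrings still use the
    non-canonical type names "(string)" or "(boolean)"."""
    pattern = re.compile(r"\((?:string|boolean)\)")
    offenders = []
    for name, obj in inspect.getmembers(module):
        doc = inspect.getdoc(obj)
        if doc and pattern.search(doc):
            offenders.append(name)
    return offenders
```

Running this over a module before and after the change would show the offender list shrinking to empty.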

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82410
Approved by: https://github.com/jbschlosser
2022-07-28 21:29:57 +00:00
_shard Fix docstring inconsistencies: string -> str, boolean -> bool (#82410) 2022-07-28 21:29:57 +00:00
_sharded_tensor [reland] Create torch.distributed._shard package. (#72141) 2022-02-02 06:58:20 +00:00
_sharding_spec [reland] Create torch.distributed._shard package. (#72141) 2022-02-02 06:58:20 +00:00
algorithms Adding fsdp fp16 and bf16 hooks (#81711) 2022-07-19 23:54:51 +00:00
autograd
benchmarks
elastic Make sure that exit code is propagated from Child to parent process (#81408) 2022-07-19 18:47:54 +00:00
fsdp [FSDP] Refactor casting of grad to full param dtype (#81574) 2022-07-25 18:05:52 +00:00
launcher Add __all__ to torch.distributed and tensorboard submodules (#80444) 2022-06-28 16:33:22 +00:00
nn Remove unused rank from _AllGatherBase backward (#81515) 2022-07-15 15:30:07 +00:00
optim Enable Zero1's ddp_with_overlap for hpu backend (#80438) 2022-07-18 15:05:27 +00:00
pipeline Revert "Add __all__ for torch.distributed and fx modules (#80460)" 2022-06-29 16:20:55 +00:00
rpc Prevent automatic cuda init in init_rpc (#80180) 2022-07-08 14:18:02 +00:00
__init__.py [Dynamic RPC] Allow for optional world_size argument in init_rpc (#73372) 2022-03-24 16:19:28 +00:00
argparse_util.py
constants.py
CONTRIBUTING.md Fix some links in torch/distributed/CONTRIBUTING.md (#79855) 2022-06-21 00:48:30 +00:00
distributed_c10d.py [PFC] Native UCC process group for Pytorch (#79918) 2022-07-12 14:45:44 +00:00
launch.py Introduce the torchrun entrypoint (#64049) 2021-08-26 20:17:48 -07:00
remote_device.py Rewrite ShardedTensor.gather to use dist.gather instead of gather_object (#77272) 2022-05-17 02:14:40 +00:00
rendezvous.py [rpc/distributed] eliminate code duplication in distributed/rendezvou… (#81577) 2022-07-22 16:21:00 +00:00
run.py (torch/elastic) add documentation clarifying that torchrun is a console script to torch.distributed.run (#73598) 2022-03-03 08:35:50 +00:00
utils.py FSDP parameter sync 2022-05-17 19:58:49 +00:00