pytorch/torch/distributed
Jane Xu (8817e5ac80): Render Example: and not Example:: in docs (#153978)
Everything here is a grep, except the changes in tools/autograd/load_derivatives.py, which I corrected manually.

The correct notation is:
```
Example::

    >>> ...
```
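
Inside a Python docstring, the corrected form looks like this (a minimal sketch; the function itself is hypothetical, only the blank line after `Example::` matters):
```python
# Minimal sketch; the function is hypothetical, only the docstring layout matters.
def scale(x, factor=2):
    """Multiply ``x`` by ``factor``.

    Example::

        >>> scale(3)
        6
    """
    return x * factor
```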

A common but incorrect form omits the blank line:
```
Example::
    >>> ...
```

In the wrong example, we get these pesky double colons:
![image](https://github.com/user-attachments/assets/20ffd349-68bb-4552-966c-e23923350476)
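
Since the sweep was essentially a grep, here is a rough Python sketch of how the offending pattern could be located; the exact command and regex used for the PR are not given here, so both are assumptions:
```python
# Rough sketch (assumed regex, not the exact grep from the PR): flag an
# "Example::" line that is immediately followed by an indented ">>>" line,
# i.e. the blank line is missing.
import pathlib
import re

BAD = re.compile(r"Example::\n[ \t]+>>>")

for path in pathlib.Path("torch/distributed").rglob("*.py"):
    text = path.read_text(encoding="utf-8")
    for match in BAD.finditer(text):
        line_no = text.count("\n", 0, match.start()) + 1
        print(f"{path}:{line_no}: missing blank line after 'Example::'")
```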

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153978
Approved by: https://github.com/soulitzer, https://github.com/malfet
2025-05-21 01:03:26 +00:00

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _composable | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _shard | Revert "[Ez][BE]: Remove accidental classvar (#153540)" | 2025-05-16 08:26:37 +00:00 |
| _sharded_tensor | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _sharding_spec | | |
| _symmetric_memory | [Async TP] Fix dim swapping before reduction in fused_scaled_matmul_reduce_scatter (#153595) | 2025-05-15 21:44:57 +00:00 |
| _tensor | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _tools | [BE]: Update ruff to 0.11.8 (#153249) | 2025-05-12 18:30:52 +00:00 |
| algorithms | Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" | 2025-05-16 08:29:26 +00:00 |
| autograd | | |
| benchmarks | | |
| checkpoint | Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" | 2025-05-16 08:29:26 +00:00 |
| elastic | Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" | 2025-05-16 08:29:26 +00:00 |
| examples | [BE]: Add PEP621 project section to pyproject.toml (#153055) | 2025-05-12 02:16:07 +00:00 |
| fsdp | [FSDP1] print fqns when debug FlatParamHandle (#151336) | 2025-04-24 04:49:24 +00:00 |
| launcher | | |
| nn | [BE]: Update ruff to 0.11.8 (#153249) | 2025-05-12 18:30:52 +00:00 |
| optim | Fix typo (#153561) | 2025-05-14 21:38:51 +00:00 |
| pipelining | Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" | 2025-05-16 08:29:26 +00:00 |
| rpc | Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" | 2025-05-16 08:29:26 +00:00 |
| tensor | Fix negative dim issue in for parallel loss context manager (#152785) | 2025-05-14 10:43:27 +00:00 |
| __init__.py | c10d/Store: add nonblocking mode to queue_pop (#151485) | 2025-04-18 02:14:50 +00:00 |
| _checkpointable.py | | |
| _composable_state.py | | |
| _functional_collectives_impl.py | | |
| _functional_collectives.py | [device_mesh] replace dim_group_info with group_name (#150898) | 2025-05-13 17:16:45 +00:00 |
| _serialization.py | PEP585: More UP006 fixes (#146392) | 2025-02-20 06:18:13 +00:00 |
| _state_dict_utils.py | Create and send full_tensor on ProcessGroup-supported device in _broadcast_tensors (#148865) | 2025-03-12 20:56:31 +00:00 |
| argparse_util.py | | |
| c10d_logger.py | | |
| collective_utils.py | | |
| constants.py | | |
| CONTRIBUTING.md | | |
| device_mesh.py | Render Example: and not Example:: in docs (#153978) | 2025-05-21 01:03:26 +00:00 |
| distributed_c10d.py | [c10d] Simplify new_subgroups() by using new_subgroups_by_enumeration() (#153843) | 2025-05-20 19:15:20 +00:00 |
| launch.py | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| logging_handlers.py | | |
| remote_device.py | | |
| rendezvous.py | Fix dist.init_process_group on windows (#148266) | 2025-03-05 00:07:56 +00:00 |
| run.py | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| utils.py | Refactor to use torch.accelerator.device_index instead of torch.cuda.device for generic device context manager (#148880) | 2025-04-25 09:45:25 +00:00 |