pytorch/torch/distributed
zpcore 50d8168c8b [DTensor] Support in gradient placement for local_map() (#155181)
Support the `in_grad_placements` argument in `torch.distributed.tensor.experimental.local_map()`. The argument lets callers enforce the placement of the gradient of the input DTensor (see the usage sketch below).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155181
Approved by: https://github.com/wanchaol
2025-06-12 17:07:04 +00:00
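
For context, here is a minimal, hedged sketch of how the new argument could be used, modeled on the existing `local_map()` docstring examples. The tensor shapes, the CPU/`gloo` mesh, the particular placement choices, and the exact keyword form of `in_grad_placements` (mirroring `in_placements`) are illustrative assumptions based on the PR title, not taken from the patch itself.

```python
# Hedged sketch only: run under torchrun, e.g.
#   torchrun --nproc-per-node=2 local_map_grad_placements.py
# Assumes `local_map()` accepts `in_grad_placements` as a keyword that
# mirrors `in_placements`, per PR #155181.
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Partial, Replicate, Shard, distribute_tensor
from torch.distributed.tensor.experimental import local_map


def local_matmul(w, x):
    # local_map unwraps the input DTensors, so this body sees plain
    # torch.Tensor shards.
    return torch.mm(w, x)


def main():
    dist.init_process_group("gloo")
    mesh = init_device_mesh("cpu", (dist.get_world_size(),))

    # W is sharded row-wise (dim 0 must be divisible by the world size),
    # X is replicated; the local matmul then produces the matching rows
    # of the output.
    w = distribute_tensor(torch.randn(8, 4, requires_grad=True), mesh, [Shard(0)])
    x = distribute_tensor(torch.randn(4, 6, requires_grad=True), mesh, [Replicate()])

    sharded_matmul = local_map(
        local_matmul,
        out_placements=[Shard(0)],
        in_placements=([Shard(0)], [Replicate()]),
        # New in #155181 (assumed semantics): declare the placements of the
        # gradients flowing back to the input DTensors. The local gradient
        # w.r.t. the replicated X is only a per-rank partial sum, so it is
        # marked Partial() here rather than inheriting Replicate().
        in_grad_placements=([Shard(0)], [Partial()]),
        device_mesh=mesh,
    )

    out = sharded_matmul(w, x)
    out.sum().backward()
    # w.grad and x.grad are DTensors whose placements follow in_grad_placements.

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```
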
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `_composable` | | |
| `_shard` | Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) | 2025-05-27 14:10:00 +00:00 |
| `_sharded_tensor` | | |
| `_sharding_spec` | | |
| `_symmetric_memory` | [SymmMem] Enable NVSHMEM for Triton (#155506) | 2025-06-12 00:22:49 +00:00 |
| `_tensor` | | |
| `_tools` | [BE]: Update ruff to 0.11.8 (#153249) | 2025-05-12 18:30:52 +00:00 |
| `algorithms` | Revert "[BE]: Enable RUFF TRY400 rule - log.exception (#153473)" | 2025-05-16 08:29:26 +00:00 |
| `autograd` | | |
| `benchmarks` | | |
| `checkpoint` | Updates to HFStorageReader to use TensorStorageMetadata instead of BytesStorageMetadata (#154518) | 2025-06-11 23:35:05 +00:00 |
| `elastic` | [1/n]adding torch.distributed.run option to provide destination for event logging (#154644) (#155268) | 2025-06-09 10:43:52 +00:00 |
| `examples` | [BE]: Add PEP621 project section to pyproject.toml (#153055) | 2025-05-12 02:16:07 +00:00 |
| `fsdp` | Convert rst files to md (#155369) | 2025-06-11 23:00:52 +00:00 |
| `launcher` | [1/n]adding torch.distributed.run option to provide destination for event logging (#154644) (#155268) | 2025-06-09 10:43:52 +00:00 |
| `nn` | [BE]: Update ruff to 0.11.8 (#153249) | 2025-05-12 18:30:52 +00:00 |
| `optim` | [BE][Ez]: Remove unneeded mypy suppressions (#154800) | 2025-06-01 06:10:41 +00:00 |
| `pipelining` | [BE][Ez]: Optimize unnecessary lambda with operator (#154722) | 2025-05-30 23:47:10 +00:00 |
| `rpc` | Make torch importable if compiled without TensorPipe (#154382) | 2025-05-27 18:13:38 +00:00 |
| `tensor` | [DTensor] Support in gradient placement for local_map() (#155181) | 2025-06-12 17:07:04 +00:00 |
| `__init__.py` | c10d/Store: add nonblocking mode to queue_pop (#151485) | 2025-04-18 02:14:50 +00:00 |
| `_checkpointable.py` | [BE]: Backport runtime_checkable perf improvements/behavior from 3.12 (#155130) | 2025-06-06 13:28:05 +00:00 |
| `_composable_state.py` | | |
| `_functional_collectives_impl.py` | | |
| `_functional_collectives.py` | Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) | 2025-05-27 14:10:00 +00:00 |
| `_serialization.py` | | |
| `_state_dict_utils.py` | fix numpy compatibility for 2d small list indices (#154806) | 2025-06-04 01:58:52 +00:00 |
| `argparse_util.py` | | |
| `c10d_logger.py` | | |
| `collective_utils.py` | | |
| `constants.py` | | |
| `CONTRIBUTING.md` | | |
| `device_mesh.py` | Revert "[inductor] Add typing to _inductor/ir.py (#149958)" | 2025-06-06 15:19:16 +00:00 |
| `distributed_c10d.py` | [c10d] Enhance get_process_group_ranks() to accept group=None (#154902) | 2025-06-11 23:41:03 +00:00 |
| `launch.py` | | |
| `logging_handlers.py` | | |
| `remote_device.py` | | |
| `rendezvous.py` | Fix tcp init when using port 0 (#154156) | 2025-05-23 21:41:58 +00:00 |
| `run.py` | [1/n]adding torch.distributed.run option to provide destination for event logging (#154644) (#155268) | 2025-06-09 10:43:52 +00:00 |
| `utils.py` | Refactor to use torch.accelerator.device_index instead of torch.cuda.device for generic device context manager (#148880) | 2025-04-25 09:45:25 +00:00 |