| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _composable | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _shard | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _sharded_tensor | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _sharding_spec | | |
| _symmetric_memory | [Async TP] More robust support for rowwise scales when fusing matmul reduce-scatter (#149247) | 2025-03-27 03:15:30 +00:00 |
| _tensor | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _tools | Add support for non functional collectives under FakeTensorMode and fake_pg for memory tracking (#147566) | 2025-03-08 18:00:49 +00:00 |
| algorithms | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| autograd | | |
| benchmarks | [BE][CI] bump ruff to 0.8.4 (#143753) | 2024-12-24 12:24:10 +00:00 |
| checkpoint | Support having no metadata file for HuggingFaceStorageReader (#150701) | 2025-04-07 22:10:39 +00:00 |
| elastic | Expose the rendezvous keepalive arguments (#145228) | 2025-03-03 19:11:56 +00:00 |
| examples | | |
| fsdp | [dynamo] add dynamo disable reasons to codebase (#150440) | 2025-04-02 04:26:48 +00:00 |
| launcher | PEP585 update - torch/distributed (#145164) | 2025-01-21 04:23:29 +00:00 |
| nn | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| optim | [BE][Ez]: Use itertools.chain.from_iterable when possible (#148190) | 2025-03-06 20:37:06 +00:00 |
| pipelining | [Codemod][AddExplicitStrictExportForTrainingInferenceArg] caffe2/ (#149595) | 2025-04-03 23:50:13 +00:00 |
| rpc | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| tensor | [torchrec] update local_shards_wrapper to latest version (#150469) | 2025-04-07 13:00:52 +00:00 |
| __init__.py | [c10d] Add a collective time estimator for NCCL comms (#149343) | 2025-03-19 07:54:02 +00:00 |
| _checkpointable.py | PEP585 update - torch/distributed (#145164) | 2025-01-21 04:23:29 +00:00 |
| _composable_state.py | | |
| _functional_collectives_impl.py | PEP585 update - torch/distributed (#145164) | 2025-01-21 04:23:29 +00:00 |
| _functional_collectives.py | [Async TP] More robust support for rowwise scales when fusing matmul reduce-scatter (#149247) | 2025-03-27 03:15:30 +00:00 |
| _serialization.py | PEP585: More UP006 fixes (#146392) | 2025-02-20 06:18:13 +00:00 |
| _state_dict_utils.py | Create and send full_tensor on ProcessGroup-supported device in _broadcast_tensors (#148865) | 2025-03-12 20:56:31 +00:00 |
| argparse_util.py | | |
| c10d_logger.py | PEP585 update - torch/distributed (#145164) | 2025-01-21 04:23:29 +00:00 |
| collective_utils.py | PEP585 update - torch/distributed (#145164) | 2025-01-21 04:23:29 +00:00 |
| constants.py | | |
| CONTRIBUTING.md | | |
| device_mesh.py | [DeviceMesh] Add some documentation for from_group API and add a 2D test (#146364) | 2025-03-01 00:57:37 +00:00 |
| distributed_c10d.py | [Reland] Launch kernel on current stream & remove record_stream entirely (#150398) | 2025-04-01 16:46:07 +00:00 |
| launch.py | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| logging_handlers.py | PEP585 update - torch/distributed (#145164) | 2025-01-21 04:23:29 +00:00 |
| remote_device.py | | |
| rendezvous.py | Fix dist.init_process_group on windows (#148266) | 2025-03-05 00:07:56 +00:00 |
| run.py | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| utils.py | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |