pytorch/torch/distributed/checkpoint
Gufan Yin 96dfe95ed0 Fix DDPLoadBalancingPlanner docstring (#134044)
Summary:
1. Indentation in the chunk function was wrong.
2. The previous logic missed a level of zip.

This diff uses the chunking idiom from the Python zip documentation: https://docs.python.org/3/library/functions.html#zip
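The idiom referenced above passes the same iterator to zip multiple times, so zip pulls n consecutive items per tuple. A minimal sketch (the function name `chunk` and its signature here are illustrative; the actual helper in planner.py may differ):

```python
def chunk(iterable, n):
    # zip(*[it] * n) hands the SAME iterator to zip n times, so each
    # output tuple consumes n consecutive items; any trailing remainder
    # shorter than n is silently dropped (the zip-doc chunking idiom).
    it = iter(iterable)
    return list(zip(*[it] * n))

# Chunking six items into pairs:
# chunk(range(6), 2) -> [(0, 1), (2, 3), (4, 5)]
```

Note the single shared iterator is what makes this chunking rather than repetition; `zip(iter(x), iter(x))` with two independent iterators would instead pair each item with itself.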

Test Plan: Ran the docstring example locally

Differential Revision: D61548758

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134044
Approved by: https://github.com/fegin
2024-08-21 21:28:22 +00:00
examples Revert "[dtensor] move DTensor to public namespace (#133113)" 2024-08-19 05:00:19 +00:00
__init__.py
_checkpointer.py
_dedup_save_plans.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
_dedup_tensors.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
_fsspec_filesystem.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
_nested_dict.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
_sharded_tensor_utils.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
_storage_utils.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
_traverse.py Revert "[dtensor] move DTensor to public namespace (#133113)" 2024-08-19 05:00:19 +00:00
api.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
default_planner.py Revert "[dtensor] move DTensor to public namespace (#133113)" 2024-08-19 05:00:19 +00:00
filesystem.py [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) 2024-08-15 15:50:19 +00:00
format_utils.py Pass torch.load(weights_only=) internally to avoid FutureWarning (#130663) 2024-07-16 01:24:38 +00:00
logger.py [BE][Easy][18/19] enforce style for empty lines in import segments in torch/d*/ (#129770) 2024-08-01 04:22:50 +00:00
logging_handlers.py
metadata.py Back out "Pass device to is_pinned call inside TensorProperties.create_from_tensor" (#129972) 2024-07-06 01:07:32 +00:00
optimizer.py Revert "[dtensor] move DTensor to public namespace (#133113)" 2024-08-19 05:00:19 +00:00
planner_helpers.py Revert "[dtensor] move DTensor to public namespace (#133113)" 2024-08-19 05:00:19 +00:00
planner.py Fix DDPLoadBalancingPlanner docstring (#134044) 2024-08-21 21:28:22 +00:00
resharding.py [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) 2024-06-18 13:51:53 +00:00
staging.py [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) 2024-08-15 15:50:19 +00:00
state_dict_loader.py [BE] mypy: disallow untyped decorators (#131428) 2024-07-23 21:50:55 +00:00
state_dict_saver.py [BE] mypy: disallow untyped decorators (#131428) 2024-07-23 21:50:55 +00:00
state_dict.py Revert "[dtensor] move DTensor to public namespace (#133113)" 2024-08-19 05:00:19 +00:00
stateful.py
storage.py [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) 2024-08-15 15:50:19 +00:00
utils.py [torchrec][pt-d][model store] introduce LocalShardsWrapper for DTensor (#129150) 2024-06-21 01:58:51 +00:00