pytorch/torch/distributed/fsdp (latest commit 2024-10-19 16:45:22 +00:00)
File                             Latest commit                                                                                  Last updated
__init__.py
_common_utils.py                 Remove unused Python variables in torch/[b-z]* (#136963)                                       2024-10-19 16:45:22 +00:00
_debug_utils.py
_dynamo_utils.py
_exec_order_utils.py
_flat_param.py                   Remove unused Python variables in torch/[b-z]* (#136963)                                       2024-10-19 16:45:22 +00:00
_fsdp_extensions.py              [reland][dtensor] move DTensor to public namespace (#134203)                                   2024-09-08 17:08:40 +00:00
_init_utils.py                   [BE] Raise when the target model has scalar parameters (#132934)                               2024-08-12 18:28:02 +00:00
_limiter_utils.py                Integrate device agnostic APIs in FSDP library [1/n] (#134337)                                 2024-08-27 17:31:11 +00:00
_optim_utils.py                  Remove unused Python variables in torch/[b-z]* (#136963)                                       2024-10-19 16:45:22 +00:00
_runtime_utils.py
_shard_utils.py                  [reland][dtensor] move DTensor to public namespace (#134203)                                   2024-09-08 17:08:40 +00:00
_state_dict_utils.py             Remove unused Python variables in torch/[b-z]* (#136963)                                       2024-10-19 16:45:22 +00:00
_trace_utils.py                  [BE] typing for decorators - fx/_compatibility (part 1) (#134202)                              2024-08-22 17:07:33 +00:00
_traversal_utils.py
_unshard_param_utils.py
_wrap_utils.py
api.py
fully_sharded_data_parallel.py   [reland][dtensor] move DTensor to public namespace (#134203)                                   2024-09-08 17:08:40 +00:00
sharded_grad_scaler.py           Use _amp_foreach_non_finite_check_and_unscale_ for CPU grads of ShardedGradScaler (#135232)    2024-09-14 09:53:17 +00:00
wrap.py