pytorch/torch/distributed
Jerry Zhang c12f829cce [nn] Add remove_duplicate flag to named_buffers (#674) (#85903)
Summary:
X-link: https://github.com/pytorch/torchrec/pull/674

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84984

This allows `named_buffers` to return the same buffer object multiple times under different names, which is needed by internal use cases.
ghstack-source-id: 168589597

Test Plan:
python test/test_nn.py -k test_buffers_and_named_buffers

Imported from OSS

Reviewed By: albanD

Differential Revision: D39493161

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85903
Approved by: https://github.com/albanD
2022-10-11 18:49:09 +00:00
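The behavior described above can be sketched as follows. This is a minimal illustration, assuming the `remove_duplicate` keyword on `nn.Module.named_buffers` as introduced by this PR (deduplication by object identity, on by default); the `Shared` module and the buffer name `stats` are hypothetical:

```python
import torch
import torch.nn as nn

class Shared(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Linear(2, 2)
        self.b = nn.Linear(2, 2)
        # Register the SAME tensor as a buffer on two submodules,
        # so it is reachable under two fully qualified names.
        buf = torch.zeros(2)
        self.a.register_buffer("stats", buf)
        self.b.register_buffer("stats", buf)

m = Shared()
# Default (remove_duplicate=True): the shared tensor is yielded once.
print([name for name, _ in m.named_buffers()])
# remove_duplicate=False: every (name, buffer) pair is yielded,
# including repeats of the same underlying tensor.
print([name for name, _ in m.named_buffers(remove_duplicate=False)])
```

With the default, callers see each buffer once; passing `remove_duplicate=False` exposes every name a shared buffer is registered under, which is what the internal use cases need.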
_shard [ShardedTensor] Add is_floating_point (#85483) 2022-09-23 04:48:03 +00:00
_sharded_tensor
_sharding_spec Add __all__ for a few distributed modules plus a little typing (reland) (#84872) 2022-09-13 21:57:49 +00:00
_spmd Remove eager mode support form CommTensor (#84978) 2022-09-14 17:23:23 +00:00
algorithms Update hierarchical_model_averager.py (#85648) 2022-10-03 06:15:20 +00:00
autograd Integrate xdoctest - Rebased (#82797) 2022-08-12 02:08:01 +00:00
benchmarks
elastic Log a new "timer expired" event to Scuba in file_based_local_timer (#85861) 2022-10-05 18:23:53 +00:00
fsdp [FSDP][Easy] Rename _prefixed_param_names -> _fqns for consistency (#86653) 2022-10-11 12:49:45 +00:00
launcher Add __all__ to torch.distributed and tensorboard submodules (#80444) 2022-06-28 16:33:22 +00:00
nn [nn] Add remove_duplicate flag to named_buffers (#674) (#85903) 2022-10-11 18:49:09 +00:00
optim resubmit: "resubmit: [mta] APEX style Fused Adam (#81705) (#85507)" (#85739) 2022-09-29 16:58:59 +00:00
pipeline [CUBLAS][CUDA GRAPHS] (re-re-re-re-open of #83461) Explicitly set the workspace for cuBLAS handles (#86645) 2022-10-11 16:03:49 +00:00
rpc Add correct __all__ for torch.distributed and torch.cuda submodules (#85702) 2022-10-10 19:15:24 +00:00
__init__.py Add correct __all__ for torch.distributed and torch.cuda submodules (#85702) 2022-10-10 19:15:24 +00:00
argparse_util.py
constants.py
CONTRIBUTING.md Fix some links in torch/distributed/CONTRIBUTING.md (#79855) 2022-06-21 00:48:30 +00:00
distributed_c10d.py Enable capturing of comm collective parameters (#98) (#85368) 2022-10-11 04:38:26 +00:00
launch.py Integrate xdoctest - Rebased (#82797) 2022-08-12 02:08:01 +00:00
remote_device.py Rewrite ShardedTensor.gather to use dist.gather instead of gather_object (#77272) 2022-05-17 02:14:40 +00:00
rendezvous.py [rpc/distributed] eliminate code duplication in distributed/rendezvou… (#81577) 2022-07-22 16:21:00 +00:00
run.py Integrate xdoctest - Rebased (#82797) 2022-08-12 02:08:01 +00:00
utils.py [DDP] Add PackedSequence support when device_ids is specified (#86614) 2022-10-10 21:50:59 +00:00