pytorch/torch/distributed/algorithms
Yi Wang c22fc448cd [Gradient Compression] Remove cuda.synchronize in batched powerSGD (#54482)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54482

`cuda.synchronize` is unnecessary for `batched_powerSGD_hook`.
ghstack-source-id: 124607761
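
For reference, a minimal sketch of how the batched PowerSGD hook is registered on a DDP model; the NCCL/torchrun-style setup and the `nn.Linear` module are placeholder assumptions, not part of this change. The general rationale for dropping a host-side `cuda.synchronize` in a comm hook like this is that the compression and allreduce work is stream-ordered on the GPU, so an explicit synchronization only stalls the CPU.

```python
# Minimal sketch (assumed setup: torchrun-provided env vars, NCCL backend,
# one GPU per process) showing how the batched PowerSGD comm hook is used.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Placeholder model; any DDP-wrapped module is registered the same way.
model = DDP(nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])

# PowerSGDState keeps the error-feedback and warm-start buffers across iterations.
state = powerSGD.PowerSGDState(process_group=None, matrix_approximation_rank=1)

# batched_powerSGD_hook flattens the whole gradient bucket into one matrix
# and compresses it with a single low-rank approximation.
model.register_comm_hook(state, powerSGD.batched_powerSGD_hook)
```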

Test Plan:
f259607860
f259563921

Reviewed By: rohan-varma

Differential Revision: D27254314

fbshipit-source-id: 4744c07a6f0c8939e766ffa935ddbf3c47e85d18
2021-03-23 00:55:53 -07:00
ddp_comm_hooks [Gradient Compression] Remove cuda.synchronize in batched powerSGD (#54482) 2021-03-23 00:55:53 -07:00
__init__.py [Gradient Compression] Add unit tests that test default Python comm hook implementations (#47158) 2020-11-06 00:28:09 -08:00