pytorch/torch/distributed/algorithms
Yi Wang 6c31f56bf4 [Gradient Compression] Add cuda.synchronize back to batched PowerSGD (#54838)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54838

I realized that an explicit sync is still needed for the batched PowerSGD hook; this change fixes a job failure.

The sync was previously removed in #54482.
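
For context, a minimal sketch of how the batched PowerSGD hook touched by this change is registered on a DDP model. `PowerSGDState`, `batched_powerSGD_hook`, and `register_comm_hook` are the real `torch.distributed` APIs; the model and process-group setup are illustrative assumptions:

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Assumes the default process group is already initialized, e.g. via
# dist.init_process_group("nccl", ...), and that we run on a CUDA device
# (both are setup assumptions in this sketch).
model = torch.nn.Linear(1024, 1024).cuda()
ddp_model = DDP(model, device_ids=[torch.cuda.current_device()])

# batched_powerSGD_hook compresses the whole flattened gradient bucket as a
# single matrix via a low-rank approximation; it is the hook this change
# adds the explicit torch.cuda.synchronize() back to.
state = powerSGD.PowerSGDState(
    process_group=None,           # None means the default process group
    matrix_approximation_rank=1,  # rank of the low-rank approximation
)
ddp_model.register_comm_hook(state, powerSGD.batched_powerSGD_hook)
```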

Test Plan:
f260900882
f260899693

Reviewed By: rohan-varma

Differential Revision: D27384738

fbshipit-source-id: 3efd738b9fd375e2ceb36ed3a6bf99cd8ce8ff95
2021-03-30 09:27:11 -07:00
ddp_comm_hooks [Gradient Compression] Add cuda.synchronize back to batched PowerSGD (#54838) 2021-03-30 09:27:11 -07:00
__init__.py [Gradient Compression] Add unit tests that test default Python comm hook implementations (#47158) 2020-11-06 00:28:09 -08:00