Summary: For similar reasons as documented in the `[Sync Streams]` note, `ProcessGroupNCCL::allgather` must also call `recordStream`, and it already does. The output tensor is created on the default stream (by the application), while NCCL/RCCL internally uses another stream (the ncclStream). If we do not record the output tensor on the ncclStream, the output tensor might be deallocated while NCCL/RCCL is still using it. The application is not aware of the ncclStream, since it is internal to ProcessGroupNCCL, so the application cannot record the output tensor on the ncclStream itself.

Patch originally developed by sarunyap.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46603
Reviewed By: srinivas212
Differential Revision: D24458530
fbshipit-source-id: b02e74d1c3a176ea1b9bbdd7dc671b221fcadaef
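For context, here is a minimal sketch of the pattern the summary describes, assuming PyTorch's c10/ATen headers. `allgatherLikeOp` is a hypothetical helper for illustration, not the actual `ProcessGroupNCCL::allgather` implementation; only the `recordStream` call reflects the mechanism the commit relies on.

```cpp
#include <ATen/ATen.h>
#include <c10/cuda/CUDACachingAllocator.h>
#include <c10/cuda/CUDAStream.h>

// Hypothetical helper illustrating the recordStream pattern; not the
// real ProcessGroupNCCL code.
void allgatherLikeOp(at::Tensor& output, c10::cuda::CUDAStream ncclStream) {
  // `output` was allocated by the application on the default stream.
  // The collective runs on `ncclStream`, which is internal to the
  // process group, so the caching allocator must be told that `output`
  // is in use on that stream. Otherwise, once the application releases
  // its reference, the allocator could recycle the memory while
  // NCCL/RCCL is still reading or writing it.
  c10::cuda::CUDACachingAllocator::recordStream(
      output.storage().data_ptr(), ncclStream);

  // ... enqueue ncclAllGather(...) on ncclStream here (omitted) ...
}
```

With this recording in place, the caching allocator defers reuse of the tensor's memory until the work queued on `ncclStream` at the time of the call has completed.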