pytorch/torch/lib
Jeff Daily ce5bca5502 ProcessGroupNCCL::alltoall_base needs to call recordStream (#46603)
Summary:
For reasons similar to those documented in the `[Sync Streams]` note. As a current example, `ProcessGroupNCCL::allgather` must also call `recordStream`, and already does so.

The output tensor is created on the default stream (by the application), while NCCL/RCCL internally runs the collective on a separate stream (the ncclStream). If the output tensor is not recorded on the ncclStream, its memory might be freed and reused while NCCL/RCCL is still using it.

The application is not aware of the ncclStream, since it is internal to ProcessGroupNCCL, so the application cannot record the output tensor on that stream itself; ProcessGroupNCCL must do it.
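The same stream-lifetime rule applies to any user code that hands a tensor to a side stream. A minimal sketch of the hazard and the fix, using PyTorch's public `Tensor.record_stream` API (the function name `use_on_side_stream` is illustrative; the side stream here stands in for the internal ncclStream):

```python
import torch

def use_on_side_stream(output: torch.Tensor) -> None:
    """Illustrates the hazard this patch fixes, on a plain CUDA stream."""
    side_stream = torch.cuda.Stream()
    # Make the side stream wait for the allocation/initialization work
    # queued on the caller's (default) stream.
    side_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(side_stream):
        # Work that reads/writes `output` on the side stream, standing in
        # for the NCCL/RCCL kernel launched on the ncclStream.
        output.add_(1)
    # Without this call, the caching allocator only knows about the
    # allocating stream: once the caller drops its last reference, the
    # memory could be handed out again while the side stream is still
    # writing to it. record_stream defers reuse until the side stream's
    # pending work at this point has completed.
    output.record_stream(side_stream)

if torch.cuda.is_available():
    t = torch.zeros(4, device="cuda")
    use_on_side_stream(t)
    torch.cuda.synchronize()
```

ProcessGroupNCCL performs the equivalent recording internally, which is why the application never needs to see the ncclStream.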

Patch originally developed by sarunyap.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46603

Reviewed By: srinivas212

Differential Revision: D24458530

fbshipit-source-id: b02e74d1c3a176ea1b9bbdd7dc671b221fcadaef
2020-10-22 15:53:19 -07:00
c10d            ProcessGroupNCCL::alltoall_base needs to call recordStream (#46603)  2020-10-22 15:53:19 -07:00
libshm          [cmake] add HAVE_SOVERSION option (default=OFF). (#37502)           2020-04-30 06:52:33 -07:00
libshm_windows  CMake script cleanup - mixed case for function names (#35589)       2020-03-30 11:37:02 -07:00