The current implementation reads as: we will only actually use the "python_reducer" config if the DDP forward is compiled. Otherwise, we silently fall back to the C++ reducer with no DDPOptimizer. I'm changing this behavior to always use the Python reducer if the config is specified.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147123
Approved by: https://github.com/fegin
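For context, here is a minimal sketch of opting into this behavior. The commit message does not name the exact setting; the assumption here is that it is the `optimize_ddp` option in `torch._dynamo.config`, and the single-rank `gloo` process-group setup is purely illustrative, not a recipe from this PR:

```python
# Sketch only: assumes the "python_reducer" config is the optimize_ddp
# setting in torch._dynamo.config (the commit message does not name it).
import torch
import torch._dynamo
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Opt into the Python reducer before constructing DDP.
torch._dynamo.config.optimize_ddp = "python_reducer"

# Illustrative single-process setup; real runs use torchrun / multiple ranks.
dist.init_process_group(
    "gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1
)
model = DDP(torch.nn.Linear(8, 8))

# Per this change, the Python reducer config applies here even though the
# forward is eager (not compiled); previously DDP would silently fall back
# to the C++ reducer in this case.
loss = model(torch.randn(4, 8)).sum()
loss.backward()

dist.destroy_process_group()
```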
| Name |
|---|
| __init__.py |
| _functions.py |
| comm.py |
| data_parallel.py |
| distributed.py |
| parallel_apply.py |
| replicate.py |
| scatter_gather.py |