pytorch/torch/distributed/rpc/constants.py
Luca Wehrstedt 7c9e78fdf5 [TensorPipe] Add options for agent, including backend killswitches (#40162)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40162

The only public option is `num_worker_threads`. The other ones are private (as indicated by the leading underscore, is that enough?) and allow specifying a different set and order of transports/channels. These can thus be used to disable a backend (by not specifying it) or to force one (by raising its priority). They can therefore be used to work around defective backends, in case any are found post-release.
ghstack-source-id: 106103238

Test Plan: Built //caffe2:ifbpy and, using TensorPipe's verbose logging, verified that the transports/channels I specified were indeed the ones that were being registered.

Differential Revision: D22090661

fbshipit-source-id: 789bbe3bde4444cfa20c40276246e4ab67c50cd0
2020-06-18 02:54:17 -07:00
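The disable-by-omission / force-by-ordering mechanism described in the summary can be sketched in plain Python. This is a hypothetical illustration, not the actual TensorPipe registration code: the registry names and the `select_channels` helper are invented for the example.

```python
# Hypothetical registry of available channel backends, in default
# priority order (names are made up for illustration).
AVAILABLE_CHANNELS = ["cma", "basic", "mpt_uv"]


def select_channels(requested=None):
    """Return the channels to register, in priority order.

    If `requested` is None, fall back to the default registry order.
    Otherwise register only the requested channels, in the order
    given: omitting a name disables that backend, and listing a name
    first raises its priority.
    """
    if requested is None:
        return list(AVAILABLE_CHANNELS)
    return [name for name in requested if name in AVAILABLE_CHANNELS]


# Default: everything is registered in registry order.
print(select_channels())  # ['cma', 'basic', 'mpt_uv']

# Disable "cma" by omitting it; force "basic" by listing it first.
print(select_channels(["basic", "mpt_uv"]))  # ['basic', 'mpt_uv']
```

In the real agent the private options play the role of `requested` here: passing an explicit list overrides the built-in priorities, which is what makes them usable as killswitches.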


from datetime import timedelta
from . import (
_DEFAULT_INIT_METHOD,
_DEFAULT_NUM_SEND_RECV_THREADS,
_DEFAULT_NUM_WORKER_THREADS,
_DEFAULT_RPC_TIMEOUT_SEC,
_UNSET_RPC_TIMEOUT,
)
# For any RpcAgent.
DEFAULT_RPC_TIMEOUT_SEC = _DEFAULT_RPC_TIMEOUT_SEC
DEFAULT_INIT_METHOD = _DEFAULT_INIT_METHOD
# For ProcessGroupAgent.
DEFAULT_NUM_SEND_RECV_THREADS = _DEFAULT_NUM_SEND_RECV_THREADS
# For TensorPipeAgent.
DEFAULT_NUM_WORKER_THREADS = _DEFAULT_NUM_WORKER_THREADS
# Ensure that we don't time out when there are long periods of time without
# any operations against the underlying ProcessGroup.
DEFAULT_PROCESS_GROUP_TIMEOUT = timedelta(milliseconds=2 ** 31 - 1)
# Value indicating that timeout is not set for RPC call, and the default should be used.
UNSET_RPC_TIMEOUT = _UNSET_RPC_TIMEOUT
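The `UNSET_RPC_TIMEOUT` sentinel lets a per-call timeout argument distinguish "not provided" from a real value. The actual sentinel and default live in the C++ extension; the concrete values below (`-1.0` and `60.0`) are assumptions made so the sketch is self-contained.

```python
# Assumed values for illustration; the real ones come from
# torch._C._distributed_rpc.
_DEFAULT_RPC_TIMEOUT_SEC = 60.0
_UNSET_RPC_TIMEOUT = -1.0


def resolve_timeout(rpc_timeout=_UNSET_RPC_TIMEOUT):
    """Return the effective timeout for an RPC call.

    If the caller left the timeout unset, fall back to the
    agent-wide default; otherwise honor the explicit value.
    """
    if rpc_timeout == _UNSET_RPC_TIMEOUT:
        return _DEFAULT_RPC_TIMEOUT_SEC
    return rpc_timeout


print(resolve_timeout())      # 60.0 (default applies)
print(resolve_timeout(5.0))   # 5.0 (explicit value wins)
```

Using a sentinel rather than `None` keeps the argument a plain `float` end to end, which matters when the value crosses the Python/C++ boundary.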