pytorch/c10/core/AutogradState.cpp
Elias Ellison d04889323e Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd (#86245)
We were running into a few issues with multithreaded backwards in aot_autograd, such as https://github.com/pytorch/pytorch/issues/86136 and `FakeTensorMode` getting into a weird state because functions were not executed completely sequentially. The multithreading is lost in translation when we trace out the backwards anyway, and it adds a lot of additional complexity.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86245
Approved by: https://github.com/albanD, https://github.com/yf225
2022-10-06 03:27:42 +00:00
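
The C++ side of the change boils down to a per-thread flag on `AutogradState` (see the file below). A minimal sketch of how a caller such as aot_autograd's backward trace could disable multithreading for a scope is an RAII guard over that flag. The guard class here is hypothetical, and the `get_multithreading_enabled()`/`set_multithreading_enabled(bool)` accessors are assumed to exist on `AutogradState` (they are declared in the header, not in this .cpp file):

```cpp
#include <c10/core/AutogradState.h>

namespace {

// Hypothetical RAII guard: disables multithreaded backwards on the current
// thread and restores the previous value on destruction. Assumes AutogradState
// exposes get_multithreading_enabled() / set_multithreading_enabled(bool).
struct DisableMultithreadingGuard {
  DisableMultithreadingGuard()
      : prev_(c10::AutogradState::get_tls_state().get_multithreading_enabled()) {
    c10::AutogradState::get_tls_state().set_multithreading_enabled(false);
  }
  ~DisableMultithreadingGuard() {
    c10::AutogradState::get_tls_state().set_multithreading_enabled(prev_);
  }
  bool prev_;
};

} // namespace

void run_backward_single_threaded() {
  DisableMultithreadingGuard guard;
  // ... run the backward pass here; on this thread the engine sees the flag
  // as false and executes autograd nodes sequentially ...
}
```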

24 lines
548 B
C++

#include <c10/core/AutogradState.h>

namespace c10 {

namespace {
// By default, grad mode and multithreading are enabled; inference mode is
// disabled.
thread_local AutogradState autograd_state_tls = AutogradState(
    /* grad_mode */ true,
    /* inference_mode */ false,
    /* fw_grad_mode */ true,
    /* multithreading_enabled */ true);
} // namespace

AutogradState& AutogradState::get_tls_state() {
  return autograd_state_tls;
}

void AutogradState::set_tls_state(AutogradState state) {
  autograd_state_tls = state;
}

} // namespace c10
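
Because `autograd_state_tls` is `thread_local`, toggling the flag on one thread does not leak into others; every new thread starts from the defaults above. A small sketch of that behaviour, again assuming a `get_multithreading_enabled()` accessor on `AutogradState` (not shown in this file):

```cpp
#include <c10/core/AutogradState.h>
#include <cassert>
#include <thread>

int main() {
  using c10::AutogradState;

  // Replace this thread's state with multithreading disabled
  // (constructor argument order matches the defaults shown above).
  AutogradState::set_tls_state(AutogradState(
      /* grad_mode */ true,
      /* inference_mode */ false,
      /* fw_grad_mode */ true,
      /* multithreading_enabled */ false));
  assert(!AutogradState::get_tls_state().get_multithreading_enabled());

  // A freshly spawned thread still sees the default thread-local value,
  // with multithreading enabled.
  std::thread t([] {
    assert(AutogradState::get_tls_state().get_multithreading_enabled());
  });
  t.join();
  return 0;
}
```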