mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-07 12:21:27 +01:00
Summary: Next stage of breaking up https://github.com/pytorch/pytorch/pull/74710. Move the shape cache implementation to the backend interface. Also, clean up some of the hashing logic in the base node class.

CC: wconstab JackCaoG henrytwo
Partially Fixes https://github.com/pytorch/pytorch/issues/74628
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75324
Reviewed By: anjali411
Differential Revision: D35730823
Pulled By: wconstab
fbshipit-source-id: cf6fa326319b9324e5f422a78817b6fb5bf7e9b8
(cherry picked from commit faec5043df56639e2fd23de2d91ae796e4f3df70)
15 lines
456 B
C++
#include <torch/csrc/lazy/core/config.h>

// TODO(whc) unclear if this is useful, has only been tested as true
C10_DEFINE_bool(
    torch_lazy_ts_tensor_update_sync,
    true,
    "Use synchronous copy inside _copy_from op");

// TODO(whc) we need to hook up these flags in a more useful way
// possibly also keep LTC_TS_CUDA env working?
C10_DEFINE_bool(
    torch_lazy_ts_cuda,
    false,
    "Use cuda device for torchscript backend (instead of CPU)");