pytorch/torch/_functorch/config.py
Han Qi 3864207c2a Replace _dynamo.config with an object instead of module (#96455)
Summary:
    Replace _dynamo.config with an object instead of module

    Current usage patterns of setting and reading fields on config will work
    unchanged.

    Only changes needed going forward:
    1. import torch._dynamo.config will no longer work. However, just doing
       import torch._dynamo is sufficient to access the dynamo config
       as torch._dynamo.config.

    2. Files inside the _dynamo folder need to access config via
       from torch._dynamo.config_util import config instead of
       from torch._dynamo import config, because _dynamo/__init__.py
       imports some of those files, which would create a circular import.


Pull Request resolved: https://github.com/pytorch/pytorch/pull/96455
Approved by: https://github.com/jansel
2023-06-03 23:18:41 +00:00

# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
"""
Global flags for aot autograd
"""
import os
import sys
from typing import Union

# Converts torch rng ops to their functional philox rng equivalents. Note that
# we functionalize only CUDA rng ops today.
functionalize_rng_ops: bool = False
# can be useful for debugging if we are incorrectly creating meta fake tensors
fake_tensor_allow_meta: Union[str, bool] = os.environ.get("FAKE_ALLOW_META", True)
# Enables optional asserts in hotpath code to check for errors. If
# you are seeing weird accuracy problems, try turning this on.
# This is currently off by default as it will harm tracing time,
# but it is on by default for aot_eager.
debug_assert: bool = False
debug_partitioner: Union[str, bool] = os.environ.get("AOT_PARTITIONER_DEBUG", False)
static_weight_shapes: bool = True
# Applies CSE to the graph before partitioning
cse: bool = True
# Restricts the amount of computation AOTAutograd can do.
max_dist_from_bw: int = 3
from torch._config_utils import install_config_module

# Install the config-module machinery over this module; placed after the
# flag definitions above so install_config_module sees all of them.
install_config_module('FunctorchConfig', sys.modules[__name__])
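The core trick behind turning a module into a config object, as described in the commit message above, can be sketched in isolation. The names below (`_ConfigModule`, `install_config_module_sketch`) are hypothetical illustrations, not the real torch implementation:

```python
import types


class _ConfigModule(types.ModuleType):
    """Module subclass that behaves like a config object (sketch only)."""

    def to_dict(self):
        # Collect the plain config fields defined on the module; dunder
        # attributes like __name__ are filtered out by the underscore check.
        return {
            k: v
            for k, v in self.__dict__.items()
            if not k.startswith("_") and not callable(v)
        }


def install_config_module_sketch(module):
    # Swapping __class__ upgrades an already-imported module in place, so
    # existing `module.field` reads and writes keep working unchanged.
    module.__class__ = _ConfigModule


# Demo on a synthetic module with a couple of flags like the ones above.
m = types.ModuleType("demo_functorch_config")
m.debug_assert = False
m.cse = True
install_config_module_sketch(m)
m.debug_assert = True  # attribute access is unchanged after the swap
```

Reassigning `__class__` is legal here because `_ConfigModule` subclasses `types.ModuleType` with a compatible layout, which is why callers of the upgraded module notice no difference.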