mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-07 12:21:27 +01:00
Upstream functorch's tensor printing logic into PyTorch. There is currently no way to define a custom print function for a `TensorImpl` subclass (as opposed to a `__torch_dispatch__` or `__torch_function__` tensor subclass, which can simply override `__repr__()`), so we need to interpose directly inside regular Tensor printing in PyTorch. Monkey patching is bad; users do not expect `import blah` to change something about another library.

Fixes https://github.com/pytorch/functorch/issues/900

Test Plan:
- existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85430
Approved by: https://github.com/ezyang
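The distinction the commit draws can be sketched in plain Python: a Python-level tensor subclass (like a `__torch_dispatch__` subclass) can customize printing simply by overriding `__repr__`, whereas a C++ `TensorImpl` subclass has no such hook, which is why the printing logic itself must be interposed on. This is a minimal illustration with hypothetical stand-in classes, not real torch code:

```python
class PlainTensor:
    """Hypothetical stand-in for a regular tensor (illustration only)."""

    def __init__(self, data):
        self.data = data

    def __repr__(self):
        return f"tensor({self.data})"


class WrapperTensor(PlainTensor):
    """A Python-level subclass can override __repr__ directly --
    analogous to a __torch_dispatch__/__torch_function__ subclass."""

    def __repr__(self):
        return f"WrapperTensor({self.data})"


# The subclass controls its own printed form; no monkey patching of
# PlainTensor (or any other library code) is needed.
print(repr(WrapperTensor([1, 2])))  # -> WrapperTensor([1, 2])
```

A C++ `TensorImpl` subclass has no equivalent of this override point, so the fix upstreams the special-case logic into PyTorch's own printing code instead.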
13 lines
499 B
Python
from torch import Tensor

# Defined in torch/csrc/functorch/init.cpp

def _set_dynamic_layer_keys_included(included: bool) -> None: ...
def get_unwrapped(tensor: Tensor) -> Tensor: ...
def is_batchedtensor(tensor: Tensor) -> bool: ...
def is_functionaltensor(tensor: Tensor) -> bool: ...
def is_functorch_wrapped_tensor(tensor: Tensor) -> bool: ...
def is_gradtrackingtensor(tensor: Tensor) -> bool: ...
def maybe_get_bdim(tensor: Tensor) -> int: ...
def maybe_get_level(tensor: Tensor) -> int: ...
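A common pattern with this API (and the one tensor printing needs) is to peel functorch wrappers off a tensor until reaching the plain tensor underneath, using `is_functorch_wrapped_tensor` as the loop condition and `get_unwrapped` to step inward. The real functions are implemented in C++; this is a hedged pure-Python sketch with a hypothetical `Wrapped` stand-in class:

```python
class Wrapped:
    """Hypothetical stand-in for a functorch wrapper tensor
    (e.g. a batched or grad-tracking tensor)."""

    def __init__(self, inner):
        self.inner = inner


def is_functorch_wrapped_tensor(t):
    # Stand-in for the C++ predicate declared in the stub above.
    return isinstance(t, Wrapped)


def get_unwrapped(t):
    # Stand-in: return the value wrapped inside a functorch wrapper.
    return t.inner


def fully_unwrap(t):
    # Repeatedly unwrap until we reach a plain (non-wrapped) value.
    while is_functorch_wrapped_tensor(t):
        t = get_unwrapped(t)
    return t


print(fully_unwrap(Wrapped(Wrapped(42))))  # -> 42
```

With the real API, the unwrapped tensor is what gets handed to the ordinary printing path, with the wrapper levels (see `maybe_get_level` / `maybe_get_bdim`) reported around it.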