mirror of https://github.com/zebrajr/pytorch.git
synced 2025-12-06 12:20:52 +01:00

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137596
Approved by: https://github.com/albanD

This commit is contained in:
parent c73d2634b9
commit cfe970260a
@@ -215,7 +215,7 @@ torch.backends.opt_einsum

 .. attribute:: enabled

-    A :class:``bool`` that controls whether opt_einsum is enabled (``True`` by default). If so,
+    A :class:`bool` that controls whether opt_einsum is enabled (``True`` by default). If so,
     torch.einsum will use opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html)
     if available to calculate an optimal path of contraction for faster performance.
@@ -224,7 +224,7 @@ torch.backends.opt_einsum

 .. attribute:: strategy

-    A :class:``str`` that specifies which strategies to try when ``torch.backends.opt_einsum.enabled``
+    A :class:`str` that specifies which strategies to try when ``torch.backends.opt_einsum.enabled``
     is ``True``. By default, torch.einsum will try the "auto" strategy, but the "greedy" and "optimal"
     strategies are also supported. Note that the "optimal" strategy is factorial on the number of
     inputs as it tries all possible paths. See more details in opt_einsum's docs
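As a side note on why the "optimal" strategy blows up: the number of distinct sequences of pairwise contractions grows as a product of binomial coefficients, i.e. factorially in the number of operands. A minimal stdlib-only sketch (not torch or opt_einsum code) that counts those sequences:

```python
from math import comb


def count_pairwise_orders(n: int) -> int:
    # Number of distinct sequences of pairwise contractions that
    # reduce n operands to a single result: at each step, pick any
    # 2 of the k remaining tensors, so the count is the product of
    # C(k, 2) for k = n down to 2.
    total = 1
    for k in range(n, 1, -1):
        total *= comb(k, 2)
    return total


for n in range(2, 7):
    print(n, count_pairwise_orders(n))  # 1, 3, 18, 180, 2700
```

This product equals n!(n-1)!/2^(n-1), which is why an exhaustive "optimal" search is only practical for a handful of inputs.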
@@ -16,7 +16,14 @@ except ImportError:

 @_lru_cache
 def is_available() -> bool:
-    r"""Return a bool indicating if opt_einsum is currently available."""
+    r"""Return a bool indicating if opt_einsum is currently available.
+
+    You must install opt-einsum in order for torch to automatically optimize einsum. To
+    make opt-einsum available, you can install it along with torch: ``pip install torch[opt-einsum]``
+    or by itself: ``pip install opt-einsum``. If the package is installed, torch will import
+    it automatically and use it accordingly. Use this function to check whether opt-einsum
+    was installed and properly imported by torch.
+    """
     return _opt_einsum is not None
@@ -257,17 +257,22 @@ def einsum(*args: Any) -> Tensor:

     .. note::

-        This function uses opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/) to speed up computation or to
-        consume less memory by optimizing contraction order. This optimization occurs when there are at least three
-        inputs, since the order does not matter otherwise. Note that finding _the_ optimal path is an NP-hard problem,
-        thus, opt_einsum relies on different heuristics to achieve near-optimal results. If opt_einsum is not available,
-        the default order is to contract from left to right.
-        Please install opt-einsum (https://optimized-einsum.readthedocs.io/en/stable/) in order to enroll into a more
-        performant einsum. You can install when installing torch like so: `pip install torch[opt-einsum]` or by itself
-        with `pip install opt-einsum`.
-
-        To bypass this default behavior, add the following line to disable the usage of opt_einsum and skip path
-        calculation: `torch.backends.opt_einsum.enabled = False`
+        If opt-einsum is available, this function will automatically speed up computation and/or consume less memory
+        by optimizing contraction order through our opt_einsum backend :mod:`torch.backends.opt_einsum` (The _ vs - is
+        confusing, I know). This optimization occurs when there are at least three inputs, since the order does not
+        matter otherwise. Note that finding `the` optimal path is an NP-hard problem, thus, opt-einsum relies on
+        different heuristics to achieve near-optimal results. If opt-einsum is not available, the default order is to
+        contract from left to right.
+
+        To bypass this default behavior, add the following to disable opt_einsum and skip path calculation:
+        ``torch.backends.opt_einsum.enabled = False``
+
         To specify which strategy you'd like for opt_einsum to compute the contraction path, add the following line:
-        `torch.backends.opt_einsum.strategy = 'auto'`. The default strategy is 'auto', and we also support 'greedy' and
+        ``torch.backends.opt_einsum.strategy = 'auto'``. The default strategy is 'auto', and we also support 'greedy' and
         'optimal'. Disclaimer that the runtime of 'optimal' is factorial in the number of inputs! See more details in
         the opt_einsum documentation (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html).
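The docstring's claim that contraction order only matters with three or more inputs can be made concrete with a flop count. This is an illustration with hypothetical shapes, not torch or opt_einsum code: for three matrices, the two possible orders can differ by orders of magnitude in cost.

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    # Multiplying an (m x k) matrix by a (k x n) matrix costs
    # roughly 2*m*k*n floating-point operations.
    return 2 * m * k * n


# Hypothetical shapes: A is (10 x 1000), B is (1000 x 10), C is (10 x 1000).
a, b, c, d = 10, 1000, 10, 1000

left_to_right = matmul_flops(a, b, c) + matmul_flops(a, c, d)  # (A @ B) @ C
right_to_left = matmul_flops(b, c, d) + matmul_flops(a, b, d)  # A @ (B @ C)

print(left_to_right)   # 400_000
print(right_to_left)   # 40_000_000
```

Here contracting left to right is 100x cheaper because the intermediate `A @ B` is tiny (10 x 10), while `B @ C` is a large (1000 x 1000) intermediate; with many operands and asymmetric shapes, a path-finding heuristic like opt-einsum's can recover exactly this kind of saving automatically.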