Fix torch.compile links (#121824)
Fixes https://github.com/pytorch/pytorch.github.io/issues/1567

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121824
Approved by: https://github.com/svekars, https://github.com/peterbell10, https://github.com/malfet
ghstack dependencies: #121823
This commit is contained in:
parent 8a5a377190
commit d0d09f5977
.github/ISSUE_TEMPLATE/pt2-bug-report.yml
@@ -33,7 +33,7 @@ body:
       label: Minified repro
       description: |
         Please run the minifier on your example and paste the minified code below
-        Learn more here https://pytorch.org/docs/main/compile/troubleshooting.html
+        Learn more here https://pytorch.org/docs/main/torch.compiler_troubleshooting.html
       placeholder: |
         env TORCHDYNAMO_REPRO_AFTER="aot" python your_model.py
         or
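For context on the hunk above (the PT2 bug-report issue template): a minimal sketch of driving the minifier from inside Python rather than the shell. It assumes a model that actually fails under torch.compile; the Linear model here is a stand-in and will not trigger the minifier on its own.

# Set the env var before importing torch so dynamo's config picks it up.
import os
os.environ["TORCHDYNAMO_REPRO_AFTER"] = "aot"  # or "dynamo" for TorchDynamo-level bisection

import torch

model = torch.nn.Linear(8, 8)
opt_model = torch.compile(model)
opt_model(torch.randn(2, 8))  # on a compiler error, a minified repro script is dumped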
@@ -84,7 +84,7 @@ Registration serves two purposes:
 * You can pass a string containing your backend function's name to ``torch.compile`` instead of the function itself,
   for example, ``torch.compile(model, backend="my_compiler")``.
-* It is required for use with the `minifier <https://pytorch.org/docs/main/compile/troubleshooting.html>`__. Any generated
+* It is required for use with the `minifier <https://pytorch.org/docs/main/torch.compiler_troubleshooting.html>`__. Any generated
   code from the minifier must call your code that registers your backend function, typically through an ``import`` statement.
 
 Custom Backends after AOTAutograd
 ---------------------------------
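The bullets above are the point of registration: a name lookup for ``torch.compile(backend=...)`` plus discoverability for the minifier. A minimal sketch, assuming ``register_backend`` from ``torch._dynamo`` (per the custom-backends doc this hunk links to); the backend body is illustrative and just runs the graph eagerly.

from typing import List

import torch
from torch._dynamo import register_backend

@register_backend
def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    print(gm.graph)    # inspect the captured FX subgraph
    return gm.forward  # must return a callable

model = torch.nn.Linear(4, 4)
# Because the backend is registered, torch.compile can look it up by name:
opt = torch.compile(model, backend="my_compiler")
opt(torch.randn(2, 4))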
@@ -742,7 +742,7 @@ Optimizations
 
     compile
 
-`torch.compile documentation <https://pytorch.org/docs/main/compile/index.html>`__
+`torch.compile documentation <https://pytorch.org/docs/main/torch.compiler.html>`__
 
 Operator Tags
 ------------------------------------
@@ -1817,7 +1817,7 @@ def compile(model: Optional[Callable] = None, *,
 
            - Experimental or debug in-tree backends can be seen with `torch._dynamo.list_backends(None)`
 
-           - To register an out-of-tree custom backend: https://pytorch.org/docs/main/compile/custom-backends.html
+           - To register an out-of-tree custom backend: https://pytorch.org/docs/main/torch.compiler_custom_backends.html
        mode (str): Can be either "default", "reduce-overhead", "max-autotune" or "max-autotune-no-cudagraphs"
 
            - "default" is the default mode, which is a good balance between performance and overhead
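To illustrate the docstring lines above: ``list_backends()`` hides experimental backends by default, and passing ``None`` for the exclusion tags lists them all. The model and mode choice below are illustrative.

import torch
import torch._dynamo

print(torch._dynamo.list_backends())      # officially supported backends
print(torch._dynamo.list_backends(None))  # also experimental/debug in-tree backends

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
# "reduce-overhead" trades extra memory for lower per-call overhead
# (it typically relies on CUDA graphs, so a GPU is assumed to benefit):
opt = torch.compile(model, mode="reduce-overhead")
opt(torch.randn(8, 16))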
@@ -102,8 +102,10 @@ from torch.utils._triton import has_triton, has_triton_package
 
 counters: DefaultDict[str, Counter[str]] = collections.defaultdict(collections.Counter)
 optimus_scuba_log: Dict[str, Any] = {}
-troubleshooting_url = "https://pytorch.org/docs/main/compile/troubleshooting.html"
-nnmodule_doc_url = "https://pytorch.org/docs/main/compile/nn-module.html"
+troubleshooting_url = (
+    "https://pytorch.org/docs/main/torch.compiler_troubleshooting.html"
+)
+nnmodule_doc_url = "https://pytorch.org/docs/main/torch.compiler_nn_module.html"
 nnmodule_doc_url_msg = f"See {nnmodule_doc_url} for more information and limitations."
 log = logging.getLogger(__name__)
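For context, this hunk looks like module-level state in dynamo's utils. The ``counters`` line just above is a defaultdict of Counters, a zero-setup nested-tally pattern; a small standalone sketch (Python 3.9+ for the ``Counter[str]`` annotation, keys made up):

import collections
from collections import Counter
from typing import DefaultDict

counters: DefaultDict[str, Counter[str]] = collections.defaultdict(collections.Counter)
counters["graph_break"]["example reason"] += 1   # no key initialization needed
counters["frames"]["total"] += 2
print({k: dict(v) for k, v in counters.items()})
# {'graph_break': {'example reason': 1}, 'frames': {'total': 2}}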
@@ -103,7 +103,7 @@ During the call to `symbolic_trace`, the parameter `x` is transformed into a Proxy
 If you're doing graph transforms, you can wrap your own Proxy method around a raw Node so that you can use the overloaded operators to add additional things to a Graph.
 
-## [TorchDynamo](https://pytorch.org/docs/main/compile/technical-overview.html) ##
+## [TorchDynamo](https://pytorch.org/docs/main/torch.compiler_deepdive.html) ##
 
 Tracing has limitations in that it can't deal with dynamic control flow and is limited to outputting a single graph at a time, so a better alternative is the new `torch.compile()` infrastructure where you can output multiple subgraphs in either an aten or torch IR using `torch.fx`. [This tutorial](https://colab.research.google.com/drive/1Zh-Uo3TcTH8yYJF-LLo5rjlHVMtqvMdf) gives more context on how this works.
 
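To make the contrast in that last paragraph concrete: a toy function with data-dependent control flow, plus a debugging backend that just prints each subgraph it receives. The function and backend names are illustrative.

import torch

def f(x):
    if x.sum() > 0:           # data-dependent branch: torch.fx.symbolic_trace(f)
        return torch.relu(x)  # would raise a TraceError here
    return x - 1

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    print(gm.graph)           # invoked once per captured subgraph
    return gm.forward

opt_f = torch.compile(f, backend=inspect_backend)
opt_f(torch.randn(4))         # Dynamo breaks the graph around the branch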