lezcano 2024-03-15 15:58:16 +00:00 committed by PyTorch MergeBot
parent 8a5a377190
commit d0d09f5977
6 changed files with 9 additions and 7 deletions


@@ -33,7 +33,7 @@ body:
 label: Minified repro
 description: |
 Please run the minifier on your example and paste the minified code below
-Learn more here https://pytorch.org/docs/main/compile/troubleshooting.html
+Learn more here https://pytorch.org/docs/main/torch.compiler_troubleshooting.html
 placeholder: |
 env TORCHDYNAMO_REPRO_AFTER="aot" python your_model.py
 or


@@ -84,7 +84,7 @@ Registration serves two purposes:
 * You can pass a string containing your backend function's name to ``torch.compile`` instead of the function itself,
   for example, ``torch.compile(model, backend="my_compiler")``.
-* It is required for use with the `minifier <https://pytorch.org/docs/main/compile/troubleshooting.html>`__. Any generated
+* It is required for use with the `minifier <https://pytorch.org/docs/main/torch.compiler_troubleshooting.html>`__. Any generated
   code from the minifier must call your code that registers your backend function, typically through an ``import`` statement.
 Custom Backends after AOTAutograd
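The registration mechanism the docs in this hunk describe can be sketched in plain Python. This is illustrative only: the names below are hypothetical, and torch's real registry lives in ``torch._dynamo`` (``torch._dynamo.register_backend``); the sketch only shows why registering by name lets ``torch.compile`` accept a string.

```python
# Minimal sketch of a name-based backend registry -- NOT torch's actual
# implementation. Shows why a registered backend can be addressed by string.
from typing import Callable, Dict, Union

_BACKENDS: Dict[str, Callable] = {}

def register_backend(fn: Callable) -> Callable:
    """Decorator: make a backend function addressable by its name."""
    _BACKENDS[fn.__name__] = fn
    return fn

def resolve_backend(backend: Union[str, Callable]) -> Callable:
    # A callable passes through unchanged; a string is looked up.
    if callable(backend):
        return backend
    return _BACKENDS[backend]

@register_backend
def my_compiler(gm, example_inputs):
    # A real backend would return compiled code; here it passes through.
    return gm

# Equivalent of compile(model, backend="my_compiler"): lookup by name.
assert resolve_backend("my_compiler") is my_compiler
```

This is also why minified repros must import the module that performs registration: the string lookup fails until the decorator has run.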


@@ -742,7 +742,7 @@ Optimizations
 compile
-`torch.compile documentation <https://pytorch.org/docs/main/compile/index.html>`__
+`torch.compile documentation <https://pytorch.org/docs/main/torch.compiler.html>`__
 Operator Tags
 ------------------------------------


@@ -1817,7 +1817,7 @@ def compile(model: Optional[Callable] = None, *,
 - Experimental or debug in-tree backends can be seen with `torch._dynamo.list_backends(None)`
-- To register an out-of-tree custom backend: https://pytorch.org/docs/main/compile/custom-backends.html
+- To register an out-of-tree custom backend: https://pytorch.org/docs/main/torch.compiler_custom_backends.html
 mode (str): Can be either "default", "reduce-overhead", "max-autotune" or "max-autotune-no-cudagraphs"
 - "default" is the default mode, which is a good balance between performance and overhead


@@ -102,8 +102,10 @@ from torch.utils._triton import has_triton, has_triton_package
 counters: DefaultDict[str, Counter[str]] = collections.defaultdict(collections.Counter)
 optimus_scuba_log: Dict[str, Any] = {}
-troubleshooting_url = "https://pytorch.org/docs/main/compile/troubleshooting.html"
-nnmodule_doc_url = "https://pytorch.org/docs/main/compile/nn-module.html"
+troubleshooting_url = (
+    "https://pytorch.org/docs/main/torch.compiler_troubleshooting.html"
+)
+nnmodule_doc_url = "https://pytorch.org/docs/main/torch.compiler_nn_module.html"
 nnmodule_doc_url_msg = f"See {nnmodule_doc_url} for more information and limitations."
 log = logging.getLogger(__name__)
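The ``counters`` context line in this hunk uses a ``defaultdict`` of ``Counter``, a two-level tally where both the outer category and the inner key spring into existence on first use. A self-contained sketch of that stdlib pattern (the example keys are made up, not real Dynamo counter names):

```python
# Two-level tally, as in the `counters` global above: no key needs to be
# initialized before incrementing it.
import collections
from typing import Counter, DefaultDict

counters: DefaultDict[str, Counter[str]] = collections.defaultdict(
    collections.Counter
)

# Example keys are illustrative, not actual Dynamo counter names.
counters["graph_break"]["data-dependent branch"] += 1
counters["graph_break"]["data-dependent branch"] += 1
counters["frames"]["ok"] += 1
```

Reading a never-written key is also safe: `counters["missing"]["x"]` returns 0 rather than raising, which is what makes this shape convenient for scattered instrumentation.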


@@ -103,7 +103,7 @@ During the call to `symbolic_trace`, the parameter `x` is transformed into a Proxy object
 If you're doing graph transforms, you can wrap your own Proxy method around a raw Node so that you can use the overloaded operators to add additional things to a Graph.
-## [TorchDynamo](https://pytorch.org/docs/main/compile/technical-overview.html) ##
+## [TorchDynamo](https://pytorch.org/docs/main/torch.compiler_deepdive.html) ##
 Tracing has limitations in that it can't deal with dynamic control flow and is limited to outputting a single graph at a time, so a better alternative is the new `torch.compile()` infrastructure where you can output multiple subgraphs in either an aten or torch IR using `torch.fx`. [This tutorial](https://colab.research.google.com/drive/1Zh-Uo3TcTH8yYJF-LLo5rjlHVMtqvMdf) gives more context on how this works.
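The tracing limitation mentioned in this last hunk can be shown with a toy example. This is a deliberately crude stand-in (TorchDynamo's real mechanism is bytecode analysis, not this): a trace records the single path taken for one example input, so a data-dependent branch makes the trace wrong for inputs that take the other path.

```python
# Toy illustration of why tracing a single path fails on dynamic control
# flow. Not torch.fx or TorchDynamo -- a hand-rolled sketch.
def model(x):
    if x > 0:          # data-dependent branch
        return x * 2
    return x + 10

def naive_trace(fn, example):
    """Record the branch taken for `example` and bake it in."""
    if example > 0:
        return lambda x: x * 2   # trace only saw the positive path
    return lambda x: x + 10

traced = naive_trace(model, 3)   # traced on a positive example
assert traced(5) == model(5) == 10   # same branch: trace agrees
assert traced(-1) != model(-1)       # other branch: trace is wrong
```

Graph-break-style approaches avoid this by producing multiple subgraphs and re-checking the branch condition at runtime instead of freezing one path.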