[OpenReg] Fix the docs of Accelerator Integration (#162826)

----

- Fixed the redirect link for Step 1
- Formatted the autoload section and added the necessary links
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162826
Approved by: https://github.com/albanD
ghstack dependencies: #161917, #161918, #160101
This commit is contained in:
FFFrog 2025-09-13 02:06:26 +08:00 committed by PyTorch MergeBot
parent 29f84b0f61
commit a94ddd9b00
3 changed files with 6 additions and 12 deletions


@@ -22,7 +22,7 @@ This tutorial will take **OpenReg** as a new out-of-the-tree device and guide yo
### Entry Point Setup
-To enable **Autoload**, register the `_autoload` function as an entry point in `setup.py` file.
+To enable **Autoload**, register the `_autoload` function as an entry point in the [setup.py](https://github.com/pytorch/pytorch/blob/main/test/cpp_extensions/open_registration_extension/torch_openreg/setup.py) file.
::::{tab-set}
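The entry-point registration this hunk links to can be sketched as follows. This is a hedged illustration, not the contents of the real `setup.py`: the `torch.backends` group name follows PyTorch's autoload convention, and the `torch_openreg = torch_openreg:_autoload` target is an assumption for the sketch.

```python
# Hedged sketch of registering _autoload as an entry point in setup.py.
# "torch.backends" is the entry-point group PyTorch's autoload mechanism
# scans at startup; the target string is illustrative.
from setuptools import setup

setup(
    name="torch_openreg",
    entry_points={
        "torch.backends": [
            # module:function that PyTorch will import and call on startup
            "torch_openreg = torch_openreg:_autoload",
        ],
    },
)
```

Once a wheel built from such a `setup.py` is installed, the hook is discoverable through the installed package's metadata without importing the backend explicitly.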
@@ -43,19 +43,18 @@ To enable **Autoload**, register the `_autoload` function as an entry point in `
### Backend Setup
-Define the initialization hook `_autoload` for backend initialization. This hook will be automatically invoked by PyTorch during startup.
+Define the initialization hook `_autoload` for backend initialization in [torch_openreg](https://github.com/pytorch/pytorch/blob/main/test/cpp_extensions/open_registration_extension/torch_openreg/torch_openreg/__init__.py). This hook will be automatically invoked by PyTorch during startup.
::::{tab-set-code}
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/torch_openreg/__init__.py
   :language: python
   :start-after: LITERALINCLUDE START: AUTOLOAD
   :end-before: LITERALINCLUDE END: AUTOLOAD
   :linenos:
   :emphasize-lines: 10-12
```
::::
## Result
@@ -66,9 +65,6 @@ After setting up the entry point and backend, build and install your backend. No
.. grid:: 2

    .. grid-item-card:: :octicon:`terminal;1em;` Without Autoload
        :class-card: card-prerequisites

        ::

            >>> import torch
            >>> import torch_openreg
@@ -76,11 +72,9 @@ After setting up the entry point and backend, build and install your backend. No
            tensor(1, device='openreg:0')

    .. grid-item-card:: :octicon:`terminal;1em;` With Autoload
        :class-card: card-prerequisites

        ::

            >>> import torch  # Automatically import torch_openreg
            >>>
            >>> torch.tensor(1, device="openreg")
            tensor(1, device='openreg:0')
```


@@ -169,7 +169,7 @@ Of course, global fallbacks can also be combined with a blacklist of fallbacks,
### PyTorch STUB
-PyTorch also provides another approach for built-in operators: `STUB`. This method is essentially based on the `Step 1<step-one>` approach, but adds secondary scheduling capabilities (for example, scheduling based on CPU characteristics).
+PyTorch also provides another approach for built-in operators: `STUB`. This method is essentially based on the {ref}`Step 1<step-one>` approach, but adds secondary scheduling capabilities (for example, scheduling based on CPU characteristics).
```{note}
The `STUB` method currently supports only a limited set of operators. For new accelerator devices, its advantage is that it significantly reduces development cost at the price of a small performance overhead. PyTorch does not currently publish the set of operators that can be registered through `STUB`; because the number of related operators is large, only the method for querying the supported operator list is given here.
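One way such a query can look is sketched below: native operators that support `STUB` registration declare a dispatch stub through the `DECLARE_DISPATCH` macro in PyTorch's native headers, so scanning a local source checkout approximates the list. The checkout path, the regex, and the `find_stub_operators` helper are all assumptions for illustration, not an official PyTorch API.

```python
import re
from pathlib import Path

# Hedged sketch: operators that expose a STUB entry declare it via the
# DECLARE_DISPATCH(fn_type, name_stub) macro, so grepping the headers of a
# local PyTorch checkout (path is an assumption) approximates the list.
STUB_PATTERN = re.compile(r"DECLARE_DISPATCH\(\s*[\w:<>,\s*&]+?,\s*(\w+)\s*\)")

def find_stub_operators(native_dir):
    stubs = set()
    for header in Path(native_dir).rglob("*.h"):
        stubs.update(STUB_PATTERN.findall(header.read_text(errors="ignore")))
    return sorted(stubs)

# Example (requires a PyTorch source tree at this assumed path):
# find_stub_operators("pytorch/aten/src/ATen/native")
```

The returned names (e.g. `add_stub`-style identifiers) are the stub symbols a backend can target; the macro itself lives in `aten/src/ATen/native/DispatchStub.h`.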


@@ -9,7 +9,6 @@ if sys.platform == "win32":
    _load_dll_libraries()
    del _load_dll_libraries

-# LITERALINCLUDE START: AUTOLOAD
import torch_openreg._C  # type: ignore[misc]
import torch_openreg.openreg
@@ -19,6 +18,7 @@ torch._register_device_module("openreg", torch_openreg.openreg)
torch.utils.generate_methods_for_privateuse1_backend(for_storage=True)

+# LITERALINCLUDE START: AUTOLOAD
def _autoload():
    # It is a placeholder function here to be registered as an entry point.
    pass