DOC: Convert to markdown: mobile_optimizer.rst, model_zoo.rst, module_tracker.rst, monitor.rst, mps_environment_variables.rst (#155702)

Fixes #155026

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155702
Approved by: https://github.com/sekyondaMeta, https://github.com/svekars

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
loganthomas 2025-06-11 22:16:04 +00:00 committed by PyTorch MergeBot
parent e1db10e05a
commit 458cc7213b
7 changed files with 94 additions and 80 deletions


@@ -0,0 +1,24 @@
---
robots: noindex
---
# torch.utils.mobile_optimizer
PyTorch Mobile is no longer actively supported. Redirecting to [ExecuTorch documentation](https://docs.pytorch.org/executorch).
```{raw} html
<meta http-equiv="Refresh" content="0; url='https://docs.pytorch.org/executorch'" />
```
```{warning}
PyTorch Mobile is no longer actively supported. Please check out
[ExecuTorch](https://pytorch.org/executorch-overview), PyTorch's
all-new on-device inference library. You can also review
documentation on [XNNPACK](https://pytorch.org/executorch/stable/native-delegates-executorch-xnnpack-delegate.html)
and [Vulkan](https://pytorch.org/executorch/stable/native-delegates-executorch-vulkan-delegate.html) delegates.
```
```{eval-rst}
.. currentmodule:: torch.utils.mobile_optimizer
```
```{eval-rst}
.. autofunction:: optimize_for_mobile
```
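For projects still pinned to an older release that carries this module, a minimal sketch of the legacy flow (ExecuTorch is the supported path going forward; the model here is illustrative):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# A small example model; any scriptable nn.Module works here.
model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU()).eval()

# optimize_for_mobile expects a TorchScript module.
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)

# The result is still a ScriptModule and runs like the original.
out = optimized(torch.randn(1, 4))
```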


@@ -1,21 +0,0 @@
.. meta::
:robots: noindex
torch.utils.mobile_optimizer
===================================
PyTorch Mobile is no longer actively supported. Redirecting to `ExecuTorch documentation <https://docs.pytorch.org/executorch>`_.
.. raw:: html
<meta http-equiv="Refresh" content="0; url='https://docs.pytorch.org/executorch'" />
.. warning::
PyTorch Mobile is no longer actively supported. Please check out
`ExecuTorch <https://pytorch.org/executorch-overview>`__, PyTorch's
all-new on-device inference library. You can also review
documentation on `XNNPACK <https://pytorch.org/executorch/stable/native-delegates-executorch-xnnpack-delegate.html>`__
and `Vulkan <https://pytorch.org/executorch/stable/native-delegates-executorch-vulkan-delegate.html>`__ delegates.
.. currentmodule:: torch.utils.mobile_optimizer
.. autofunction:: optimize_for_mobile


@@ -1,7 +1,10 @@
torch.utils.model_zoo
===================================
# torch.utils.model_zoo
Moved to `torch.hub`.
```{eval-rst}
.. automodule:: torch.utils.model_zoo
```
```{eval-rst}
.. autofunction:: load_url
```
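Because the move is a re-export, existing `load_url` calls keep working; a quick sketch of the aliasing (my understanding of the move note above, with an illustrative, commented-out download):

```python
import torch.hub
import torch.utils.model_zoo as model_zoo

# load_url is kept as a backward-compatible alias of
# torch.hub.load_state_dict_from_url (assumption based on the move note above).
same_function = model_zoo.load_url is torch.hub.load_state_dict_from_url

# Typical usage (downloads a checkpoint, so left commented out here):
# state_dict = model_zoo.load_url(
#     "https://download.pytorch.org/models/resnet18-f37072fd.pth")
```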


@@ -1,8 +1,11 @@
torch.utils.module_tracker
===================================
# torch.utils.module_tracker
```{eval-rst}
.. automodule:: torch.utils.module_tracker
```
This utility can be used to track the current position inside an :class:`torch.nn.Module` hierarchy.
This utility can be used to track the current position inside an {class}`torch.nn.Module` hierarchy.
It can be used by other tracking tools to easily associate measured quantities with user-friendly names. In particular, it is used by FlopCounterMode today.
```{eval-rst}
.. autoclass:: torch.utils.module_tracker.ModuleTracker
```


@@ -1,10 +1,9 @@
torch.monitor
=============
# torch.monitor
.. warning::
This module is a prototype release, and its interfaces and functionality may
change without warning in future PyTorch releases.
```{warning}
This module is a prototype release, and its interfaces and functionality may
change without warning in future PyTorch releases.
```
``torch.monitor`` provides an interface for logging events and counters from
PyTorch.
@@ -20,34 +19,52 @@ event interface can be directly used.
Event handlers can be registered to handle the events and pass them to an
external event sink.
API Reference
-------------
## API Reference
```{eval-rst}
.. automodule:: torch.monitor
```
```{eval-rst}
.. autoclass:: torch.monitor.Aggregation
:members:
```
```{eval-rst}
.. autoclass:: torch.monitor.Stat
:members:
:special-members: __init__
```
```{eval-rst}
.. autoclass:: torch.monitor.data_value_t
:members:
```
```{eval-rst}
.. autoclass:: torch.monitor.Event
:members:
:special-members: __init__
```
```{eval-rst}
.. autoclass:: torch.monitor.EventHandlerHandle
:members:
```
```{eval-rst}
.. autofunction:: torch.monitor.log_event
```
```{eval-rst}
.. autofunction:: torch.monitor.register_event_handler
```
```{eval-rst}
.. autofunction:: torch.monitor.unregister_event_handler
```
```{eval-rst}
.. autoclass:: torch.monitor.TensorboardEventHandler
:members:
:special-members: __init__
```
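A hedged sketch of the event path described above, wiring a Python handler to `log_event` (field names per the classes listed; this is a prototype API and may change):

```python
from datetime import datetime

from torch.monitor import (
    Event,
    log_event,
    register_event_handler,
    unregister_event_handler,
)

received = []

# While registered, the handler sees every Event passed to log_event.
handle = register_event_handler(lambda event: received.append(event))

log_event(Event(
    name="demo.optimizer.step",        # illustrative event name
    timestamp=datetime.now(),
    data={"loss": 0.25, "step": 1},    # values must be data_value_t types
))

unregister_event_handler(handle)
```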


@@ -0,0 +1,33 @@
(mps_environment_variables)=
# MPS Environment Variables
**PyTorch Environment Variables**
| Variable | Description |
|----------------------------------|-------------|
| `PYTORCH_DEBUG_MPS_ALLOCATOR` | If set to `1`, set allocator logging level to verbose. |
| `PYTORCH_MPS_LOG_PROFILE_INFO` | Set log options bitmask to `MPSProfiler`. See `LogOptions` enum in `aten/src/ATen/mps/MPSProfiler.h`. |
| `PYTORCH_MPS_TRACE_SIGNPOSTS` | Set profile and signpost bitmasks to `MPSProfiler`. See `ProfileOptions` and `SignpostTypes`. |
| `PYTORCH_MPS_HIGH_WATERMARK_RATIO` | High watermark ratio for MPS allocator. Default is 1.7. |
| `PYTORCH_MPS_LOW_WATERMARK_RATIO` | Low watermark ratio for MPS allocator. Default is 1.4 (unified) or 1.0 (discrete). |
| `PYTORCH_MPS_FAST_MATH` | If `1`, enables fast math for MPS kernels. See section 1.6.3 in the [Metal Shading Language Spec](https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf). |
| `PYTORCH_MPS_PREFER_METAL` | If `1`, uses Metal kernels instead of MPS Graph APIs. Currently only used for matmul. |
| `PYTORCH_ENABLE_MPS_FALLBACK` | If `1`, falls back to CPU when MPS ops aren't supported. |
```{note}
**high watermark ratio** is a hard limit for the total allowed allocations
- `0.0` : disables high watermark limit (may cause system failure if system-wide OOM occurs)
- `1.0` : recommended maximum allocation size (i.e., device.recommendedMaxWorkingSetSize)
- `>1.0`: allows limits beyond the device.recommendedMaxWorkingSetSize
e.g., value 0.95 means we allocate up to 95% of recommended maximum
allocation size; beyond that, the allocations would fail with OOM error.
**low watermark ratio** is a soft limit to attempt limiting memory allocations up to the lower watermark
level by garbage collection or committing command buffers more frequently (a.k.a, adaptive commit).
Values range between 0 and `m_high_watermark_ratio` (setting 0.0 disables adaptive commit and garbage collection)
e.g., value 0.9 means we 'attempt' to limit allocations up to 90% of recommended maximum
allocation size.
```
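These variables are read when the MPS backend initializes, so they can be set in the shell or, in Python, before PyTorch first touches MPS; a minimal sketch (the values are illustrative):

```python
import os

# Set these before the MPS allocator is first initialized; changing them
# afterwards has no effect on the already-running allocator.
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.95"  # hard cap: 95% of recommended max
os.environ["PYTORCH_MPS_LOW_WATERMARK_RATIO"] = "0.9"    # soft cap for adaptive commit / GC
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"          # fall back to CPU for unsupported ops
```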


@@ -1,45 +0,0 @@
.. _mps_environment_variables:
MPS Environment Variables
==========================
**PyTorch Environment Variables**
.. list-table::
:header-rows: 1
* - Variable
- Description
* - ``PYTORCH_DEBUG_MPS_ALLOCATOR``
- If set to ``1``, set allocator logging level to verbose.
* - ``PYTORCH_MPS_LOG_PROFILE_INFO``
- Set log options bitmask to ``MPSProfiler``. See ``LogOptions`` enum in `aten/src/ATen/mps/MPSProfiler.h` for available options.
* - ``PYTORCH_MPS_TRACE_SIGNPOSTS``
- Set profile and signpost bitmasks to ``MPSProfiler``. See ``ProfileOptions`` and ``SignpostTypes`` enums in `aten/src/ATen/mps/MPSProfiler.h` for available options.
* - ``PYTORCH_MPS_HIGH_WATERMARK_RATIO``
- High watermark ratio for MPS allocator. By default, it is set to 1.7.
* - ``PYTORCH_MPS_LOW_WATERMARK_RATIO``
- Low watermark ratio for MPS allocator. By default, it is set to 1.4 if the memory is unified and set to 1.0 if the memory is discrete.
* - ``PYTORCH_MPS_FAST_MATH``
- If set to ``1``, enable fast math for MPS metal kernels. See section 1.6.3 in https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf for precision implications.
* - ``PYTORCH_MPS_PREFER_METAL``
- If set to ``1``, force using metal kernels instead of using MPS Graph APIs. For now this is only used for matmul op.
* - ``PYTORCH_ENABLE_MPS_FALLBACK``
- If set to ``1``, full back operations to CPU when MPS does not support them.
.. note::
**high watermark ratio** is a hard limit for the total allowed allocations
- `0.0` : disables high watermark limit (may cause system failure if system-wide OOM occurs)
- `1.0` : recommended maximum allocation size (i.e., device.recommendedMaxWorkingSetSize)
- `>1.0`: allows limits beyond the device.recommendedMaxWorkingSetSize
e.g., value 0.95 means we allocate up to 95% of recommended maximum
allocation size; beyond that, the allocations would fail with OOM error.
**low watermark ratio** is a soft limit to attempt limiting memory allocations up to the lower watermark
level by garbage collection or committing command buffers more frequently (a.k.a, adaptive commit).
Value between 0 to m_high_watermark_ratio (setting 0.0 disables adaptive commit and garbage collection)
e.g., value 0.9 means we 'attempt' to limit allocations up to 90% of recommended maximum
allocation size.