[Fix] fix grammar errors in PyTorch docs (#166158)

Fix several grammar errors in the PyTorch docs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166158
Approved by: https://github.com/yewentao256, https://github.com/cyyever, https://github.com/ezyang
parent c9eabadc5e
commit 1764f3a9c8
@@ -14,7 +14,7 @@ Combining, these building blocks form a research and
 production ready C++ library for tensor computation and dynamic neural
 networks with strong emphasis on GPU acceleration as well as fast CPU
 performance. It is currently in use at Facebook in research and
-production; we are looking forward to welcome more users of the PyTorch C++ API.
+production; we are looking forward to welcoming more users of the PyTorch C++ API.

 .. warning::
@@ -64,7 +64,7 @@ users should pay additional attention to:

 - Both guards affects tensor execution process to skip work not related to inference, but ``InferenceMode``
   also affects tensor creation while ``AutoNonVariableTypeMode`` doesn't. In other words, tensors created
-  inside ``InferenceMode`` are marked as inference tensors so that certain limitation can be applied after
+  inside ``InferenceMode`` are marked as inference tensors so that certain limitations can be applied after
   exiting ``InferenceMode``.
 - Enabled/disabled ``InferenceMode`` states can be nested while ``AutoNonVariableTypeMode`` only allows enabled state.
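For context on the behavior this hunk documents: the Python-side `torch.inference_mode` exposes the same semantics as the C++ guard discussed here. A minimal sketch (not part of the diff) of inference-tensor marking and nested enabled/disabled states:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)

with torch.inference_mode():
    y = x * 2  # created inside the mode -> marked as an inference tensor
    with torch.inference_mode(False):  # states can be nested (enabled -> disabled)
        z = x * 3  # normal tensor; autograd tracks it again

print(y.is_inference())  # True: the restriction persists after exiting the mode
print(z.requires_grad)   # True: created while the mode was disabled
```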
@@ -17,7 +17,7 @@ restoring the RNG state during each checkpoint.
 The stashing logic saves and restores the RNG state for CPU and another
 device type (infer the device type from Tensor arguments excluding CPU
 tensors by `_infer_device_type`) to the `run_fn`. If there are multiple
-device, device state will only be saved for devices of a single device type,
+devices, device state will only be saved for devices of a single device type,
 and the remaining devices will be ignored. Consequently, if any checkpointed
 functions involve randomness, this may result in incorrect gradients. (Note
 that if CUDA devices are among the devices detected, it will be prioritized;
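The hunk above documents the RNG-stashing behavior of `torch.utils.checkpoint`. A minimal sketch of a stochastic `run_fn` under checkpointing (the dropout op is an illustrative choice, not taken from the diff):

```python
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

def run_fn(x):
    # Stochastic op: the recomputation during backward must see the same
    # RNG state as the forward pass, or gradients will be wrong.
    return F.dropout(x, p=0.5, training=True)

x = torch.randn(4, 4, requires_grad=True)
# preserve_rng_state=True (the default) stashes the CPU RNG state plus the
# state of the single inferred device type, as the note above describes.
out = checkpoint(run_fn, x, use_reentrant=False, preserve_rng_state=True)
out.sum().backward()
```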
@@ -59,14 +59,14 @@ MPI supports CUDA only if the implementation used to build PyTorch supports it.

 ### Backends that come with PyTorch

-PyTorch distributed package supports Linux (stable), MacOS (stable), and Windows (prototype).
+PyTorch distributed package supports Linux (stable), macOS (stable), and Windows (prototype).
 By default for Linux, the Gloo and NCCL backends are built and included in PyTorch
 distributed (NCCL only when building with CUDA). MPI is an optional backend that can only be
 included if you build PyTorch from source. (e.g. building PyTorch on a host that has MPI
 installed.)

 :::{note}
-As of PyTorch v1.8, Windows supports all collective communications backend but NCCL,
+As of PyTorch v1.8, Windows supports all collective communications backends but NCCL,
 If the `init_method` argument of {func}`init_process_group` points to a file it must adhere
 to the following schema:
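A minimal single-process sketch of the file `init_method` mentioned above (the Gloo backend and the `/tmp` path are illustrative assumptions, not taken from the diff):

```python
import torch.distributed as dist

# file:// init_method: every rank must open the same file on a shared
# filesystem; the file should not exist before the first rank starts.
dist.init_process_group(
    backend="gloo",  # built into PyTorch distributed by default
    init_method="file:///tmp/pytorch_dist_init",
    rank=0,
    world_size=1,
)
dist.destroy_process_group()
```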
@@ -1,6 +1,6 @@
 # torch.mtia

-The MTIA backend is implemented out of the tree, only interfaces are be defined here.
+The MTIA backend is implemented out of the tree, only interfaces are defined here.

 ```{eval-rst}
 .. automodule:: torch.mtia
@@ -1,6 +1,6 @@
 # torch.mtia.memory

-The MTIA backend is implemented out of the tree, only interfaces are be defined here.
+The MTIA backend is implemented out of the tree, only interfaces are defined here.

 ```{eval-rst}
 .. automodule:: torch.mtia.memory
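Because only the interfaces live in-tree, checking availability at runtime is the safe pattern; a minimal sketch:

```python
import torch

# The torch.mtia interfaces are importable even without the out-of-tree
# backend; whether a device is actually usable is a runtime question.
if torch.mtia.is_available():
    print("MTIA devices:", torch.mtia.device_count())
else:
    print("No MTIA backend registered")
```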