By implementing some of the functionality used by CUDA, we make writing device-agnostic code a lot easier. With this set of changes it is now possible to have FSDP wrap a trivial module; forward/backward support is still TBD.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103172
Approved by: https://github.com/wz337, https://github.com/wanchaol
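A minimal sketch of the device-agnostic pattern this enables, assuming a PyTorch build that includes the `torch.cpu` APIs documented below (variable names here are illustrative, not from the PR):

```python
import torch

# Select a device module with a CUDA-like interface. torch.cpu mirrors
# a subset of torch.cuda, so the same calls work on either backend.
if torch.cuda.is_available():
    device_mod, device = torch.cuda, torch.device("cuda")
else:
    device_mod, device = torch.cpu, torch.device("cpu")

x = torch.randn(4, 4, device=device)

# Streams are real on CUDA and no-ops on CPU, but the code is identical.
s = device_mod.Stream()
with device_mod.stream(s):
    y = x @ x
device_mod.synchronize()  # waits for queued work on CUDA; no-op on CPU

print(device_mod.device_count(), device_mod.current_stream())
```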
24 lines
372 B
ReStructuredText
torch.cpu
===================================
.. automodule:: torch.cpu
.. currentmodule:: torch.cpu

.. autosummary::
    :toctree: generated
    :nosignatures:

    current_stream
    is_available
    synchronize
    stream
    device_count
    StreamContext

Streams and events
------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    Stream
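For reference, a brief non-authoritative tour of the entries listed above; the CPU variants exist mainly for API parity with `torch.cuda`, and the commented behavior is an assumption based on that parity:

```python
import torch

torch.cpu.is_available()    # True: the CPU backend is always present
torch.cpu.device_count()    # 1: the CPU is modeled as a single device

s = torch.cpu.Stream()      # a no-op stream mirroring torch.cuda.Stream
with torch.cpu.stream(s):   # enters a torch.cpu.StreamContext
    t = torch.ones(2, 2) * 3

torch.cpu.synchronize()     # no-op on CPU, kept for parity with CUDA
torch.cpu.current_stream()  # the stream active in the current context
```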