docs: add reset_peak_memory_stats in cuda.rst (#54668)

Summary:
fixes https://github.com/pytorch/pytorch/issues/41808
https://11812999-65600975-gh.circle-artifacts.com/0/docs/cuda.html

One question: does `reset_peak_stats` exist in `torch.cuda`? I can't find it anywhere.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54668

Reviewed By: ailzhang

Differential Revision: D27328444

Pulled By: zou3519

fbshipit-source-id: 098024d43da98e3249aa9aa71cb10126095504a4
Jeff Yang, 2021-03-29 10:00:09 -07:00, committed by Facebook GitHub Bot
parent 12a454788b
commit 84232b762b
2 changed files with 3 additions and 2 deletions


@@ -60,6 +60,7 @@ Memory management
 .. autofunction:: memory_cached
 .. autofunction:: max_memory_cached
 .. autofunction:: reset_max_memory_cached
+.. autofunction:: reset_peak_memory_stats
 
 NVIDIA Tools Extension (NVTX)
 -----------------------------


@@ -313,7 +313,7 @@ def max_memory_allocated(device: Union[Device, int] = None) -> int:
     device.
 
     By default, this returns the peak allocated memory since the beginning of
-    this program. :func:`~torch.cuda.reset_peak_stats` can be used to
+    this program. :func:`~torch.cuda.reset_peak_memory_stats` can be used to
     reset the starting point in tracking this metric. For example, these two
     functions can measure the peak allocated memory usage of each iteration in a
     training loop.
@@ -351,7 +351,7 @@ def max_memory_reserved(device: Union[Device, int] = None) -> int:
     for a given device.
 
     By default, this returns the peak cached memory since the beginning of this
-    program. :func:`~torch.cuda.reset_peak_stats` can be used to reset
+    program. :func:`~torch.cuda.reset_peak_memory_stats` can be used to reset
     the starting point in tracking this metric. For example, these two functions
     can measure the peak cached memory amount of each iteration in a training
     loop.
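
The per-iteration measurement pattern the docstrings describe might be sketched as below (assuming a CUDA-capable PyTorch build; `model`, `data_loader`, and the forward pass are hypothetical stand-ins for the caller's workload):

```python
import torch

def peak_allocated_per_iteration(model, data_loader, device="cuda"):
    """Return the peak allocated CUDA memory (bytes) for each iteration."""
    peaks = []
    for batch in data_loader:
        # Reset the peak-memory counter so the next reading reflects
        # only this iteration's allocations.
        torch.cuda.reset_peak_memory_stats(device)
        model(batch.to(device))  # hypothetical forward pass / training step
        torch.cuda.synchronize(device)
        peaks.append(torch.cuda.max_memory_allocated(device))
    return peaks
```

The same loop works for the cached-memory variant by swapping in `torch.cuda.max_memory_reserved`.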