Commit Graph

15 Commits

Author SHA1 Message Date
Alex Denisov
539286a67b Inductor annotations (#130429)
Add NVTX annotations around training phases and buffer computations

RFC/discussion: https://dev-discuss.pytorch.org/t/rfc-performance-profiling-at-scale-with-details-nvtx-annotations/2224

![Screenshot 2024-07-10 at 11 48 04](https://github.com/pytorch/pytorch/assets/1175576/9ade139c-d393-473f-9b68-6c25da367dc4)
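
For illustration, here is a minimal hand-written sketch of phase-level NVTX annotation using `torch.cuda.nvtx`; the phase names and training-loop structure are assumptions for demonstration, not the annotations Inductor actually emits.

```python
# Illustrative sketch only: hand-written NVTX phase annotations with
# torch.cuda.nvtx.range. The phase names and loop structure are assumptions,
# not what Inductor generates.
import torch.cuda.nvtx as nvtx


def train_step(model, optimizer, loss_fn, batch, target):
    with nvtx.range("forward"):
        output = model(batch)
        loss = loss_fn(output, target)
    with nvtx.range("backward"):
        loss.backward()
    with nvtx.range("optimizer_step"):
        optimizer.step()
        optimizer.zero_grad()
    return loss.item()
```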

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130429
Approved by: https://github.com/aorenste, https://github.com/eellison, https://github.com/albanD

Co-authored-by: Cedric GESTES <cedric.gestes@flex.ai>
2024-12-10 08:53:39 +00:00
Xuehai Pan
f3fce597e9 [BE][Easy][17/19] enforce style for empty lines in import segments in torch/[a-c]*/ and torch/[e-n]*/ (#129769)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
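
As a rough sketch of the import-segment spacing being normalized (this layout is an assumption; the authoritative rule is the linter configuration used by the PR):

```python
# Assumed layout: one blank line between the __future__, standard-library,
# and third-party import segments, and no stray blank lines inside a segment.
from __future__ import annotations

import contextlib
import os

import torch
import torch.cuda.nvtx as nvtx
```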

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129769
Approved by: https://github.com/ezyang
2024-08-04 10:24:09 +00:00
Aaron Orenstein
62bcdc0ac9 Flip default value for mypy disallow_untyped_defs [4/11] (#127841)
See #127836 for details.
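
For context, a small illustration (not code from the PR) of what `disallow_untyped_defs` rejects once the default is flipped:

```python
# Illustrative only -- not code from the PR. With disallow_untyped_defs
# enabled, mypy reports an error for the unannotated definition and accepts
# the fully annotated one.

def scale(x, factor):  # mypy: "Function is missing a type annotation"
    return x * factor


def scale_typed(x: float, factor: float) -> float:  # accepted
    return x * factor
```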

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127841
Approved by: https://github.com/oulgen
2024-06-08 18:36:48 +00:00
Kazuaki Ishizaki
3e2c9410e1 Fix docstring errors in memory.py, nvtx.py (#112751)
Fixes #112590

Fixed docstring errors in `torch/cuda/memory.py` and `torch/cuda/nvtx.py`.

memory.py
Before
```
torch/cuda/memory.py:1 at module level:
        D100: Missing docstring in public module
torch/cuda/memory.py:67 in public function `caching_allocator_alloc`:
        D401: First line should be in imperative mood (perhaps 'Perform', not 'Performs')
torch/cuda/memory.py:103 in public function `caching_allocator_delete`:
        D401: First line should be in imperative mood (perhaps 'Delete', not 'Deletes')
torch/cuda/memory.py:122 in public function `set_per_process_memory_fraction`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:148 in public function `empty_cache`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:148 in public function `empty_cache`:
        D400: First line should end with a period (not 'g')
torch/cuda/memory.py:163 in public function `memory_stats`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:163 in public function `memory_stats`:
        D400: First line should end with a period (not 'a')
torch/cuda/memory.py:163 in public function `memory_stats`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:264 in public function `memory_stats_as_nested_dict`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:272 in public function `reset_accumulated_memory_stats`:
        D401: First line should be in imperative mood (perhaps 'Reset', not 'Resets')
torch/cuda/memory.py:292 in public function `reset_peak_memory_stats`:
        D401: First line should be in imperative mood (perhaps 'Reset', not 'Resets')
torch/cuda/memory.py:311 in public function `reset_max_memory_allocated`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:311 in public function `reset_max_memory_allocated`:
        D400: First line should end with a period (not 'y')
torch/cuda/memory.py:311 in public function `reset_max_memory_allocated`:
        D401: First line should be in imperative mood (perhaps 'Reset', not 'Resets')
torch/cuda/memory.py:338 in public function `reset_max_memory_cached`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:338 in public function `reset_max_memory_cached`:
        D400: First line should end with a period (not 'e')
torch/cuda/memory.py:338 in public function `reset_max_memory_cached`:
        D401: First line should be in imperative mood (perhaps 'Reset', not 'Resets')
torch/cuda/memory.py:365 in public function `memory_allocated`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:365 in public function `memory_allocated`:
        D400: First line should end with a period (not 'n')
torch/cuda/memory.py:365 in public function `memory_allocated`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:383 in public function `max_memory_allocated`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:383 in public function `max_memory_allocated`:
        D400: First line should end with a period (not 'n')
torch/cuda/memory.py:383 in public function `max_memory_allocated`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:405 in public function `memory_reserved`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:405 in public function `memory_reserved`:
        D400: First line should end with a period (not 's')
torch/cuda/memory.py:405 in public function `memory_reserved`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:421 in public function `max_memory_reserved`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:421 in public function `max_memory_reserved`:
        D400: First line should end with a period (not 's')
torch/cuda/memory.py:421 in public function `max_memory_reserved`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:443 in public function `memory_cached`:
        D401: First line should be in imperative mood; try rephrasing (found 'Deprecated')
torch/cuda/memory.py:452 in public function `max_memory_cached`:
        D401: First line should be in imperative mood; try rephrasing (found 'Deprecated')
torch/cuda/memory.py:461 in public function `memory_snapshot`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:474 in public function `memory_summary`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:474 in public function `memory_summary`:
        D400: First line should end with a period (not 'r')
torch/cuda/memory.py:474 in public function `memory_summary`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:612 in public function `list_gpu_processes`:
        D202: No blank lines allowed after function docstring (found 1)
torch/cuda/memory.py:612 in public function `list_gpu_processes`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:612 in public function `list_gpu_processes`:
        D400: First line should end with a period (not 's')
torch/cuda/memory.py:612 in public function `list_gpu_processes`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:648 in public function `mem_get_info`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:648 in public function `mem_get_info`:
        D400: First line should end with a period (not 'n')
torch/cuda/memory.py:648 in public function `mem_get_info`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:684 in private function `_record_memory_history`:
        D202: No blank lines allowed after function docstring (found 1)
torch/cuda/memory.py:684 in private function `_record_memory_history`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:684 in private function `_record_memory_history`:
        D400: First line should end with a period (not 'y')
torch/cuda/memory.py:684 in private function `_record_memory_history`:
        D401: First line should be in imperative mood (perhaps 'Enable', not 'Enables')
torch/cuda/memory.py:742 in private function `_snapshot`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:742 in private function `_snapshot`:
        D401: First line should be in imperative mood (perhaps 'Save', not 'Saves')
torch/cuda/memory.py:818 in private function `_dump_snapshot`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:818 in private function `_dump_snapshot`:
        D401: First line should be in imperative mood (perhaps 'Save', not 'Saves')
torch/cuda/memory.py:849 in public function `get_allocator_backend`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:849 in public function `get_allocator_backend`:
        D400: First line should end with a period (not 'y')
torch/cuda/memory.py:849 in public function `get_allocator_backend`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/cuda/memory.py:894 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/memory.py:904 in public function `change_current_allocator`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:904 in public function `change_current_allocator`:
        D401: First line should be in imperative mood (perhaps 'Change', not 'Changes')
torch/cuda/memory.py:917 in private function `_get_current_allocator`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
58
```
After
```
torch/cuda/memory.py:151 in public function `empty_cache`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:151 in public function `empty_cache`:
        D400: First line should end with a period (not 'g')
torch/cuda/memory.py:439 in public function `memory_cached`:
        D401: First line should be in imperative mood; try rephrasing (found 'Deprecated')
torch/cuda/memory.py:448 in public function `max_memory_cached`:
        D401: First line should be in imperative mood; try rephrasing (found 'Deprecated')
torch/cuda/memory.py:676 in private function `_record_memory_history`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:676 in private function `_record_memory_history`:
        D400: First line should end with a period (not 'y')
torch/cuda/memory.py:841 in public function `get_allocator_backend`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/memory.py:841 in public function `get_allocator_backend`:
        D400: First line should end with a period (not 'y')
8
```

nvtx.py
Before
```
torch/cuda/nvtx.py:1 at module level:
        D100: Missing docstring in public module
torch/cuda/nvtx.py:24 in public function `range_push`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/nvtx.py:24 in public function `range_push`:
        D400: First line should end with a period (not 'd')
torch/cuda/nvtx.py:35 in public function `range_pop`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/nvtx.py:35 in public function `range_pop`:
        D400: First line should end with a period (not 'e')
torch/cuda/nvtx.py:43 in public function `range_start`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/nvtx.py:43 in public function `range_start`:
        D400: First line should end with a period (not 'e')
torch/cuda/nvtx.py:81 in public function `range`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/nvtx.py:81 in public function `range`:
        D400: First line should end with a period (not 'g')
9
```
After
```
torch/cuda/nvtx.py:41 in public function `range_start`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/nvtx.py:41 in public function `range_start`:
        D400: First line should end with a period (not 'e')
torch/cuda/nvtx.py:79 in public function `range`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/nvtx.py:79 in public function `range`:
        D400: First line should end with a period (not 'g')
4
```
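
To make the error codes above concrete, here is an illustrative before/after; the function below is hypothetical, not one of the functions changed in this PR.

```python
# Hypothetical example, not code from the PR: the first docstring triggers
# D401 (not imperative mood), D205 (no blank line after the summary), and
# D400 (summary does not end with a period); the second satisfies all three.

def memory_in_use_before(device=None):
    """Returns the memory currently in use
    including both allocated and cached blocks"""


def memory_in_use_after(device=None):
    """Return the memory currently in use.

    This includes both allocated and cached blocks.
    """
```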

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112751
Approved by: https://github.com/kit1980
2023-11-03 15:19:17 +00:00
Edward Z. Yang
3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
Horace He
5bbec680d7 Fix usages of contextmanager without finally (#96170)
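A sketch of the bug class being fixed, using the nvtx push/pop helpers from this module as the example (that `nvtx.range` was among the affected call sites is an assumption): without try/finally, the cleanup after `yield` is skipped when the with-body raises.

```python
# Sketch of the pattern this commit fixes: without try/finally, the pop after
# `yield` never runs if the body of the `with` block raises, leaving the NVTX
# range open.
from contextlib import contextmanager

import torch.cuda.nvtx as nvtx


@contextmanager
def leaky_range(msg):
    nvtx.range_push(msg)
    yield
    nvtx.range_pop()  # skipped if the with-body raises


@contextmanager
def safe_range(msg):
    nvtx.range_push(msg)
    try:
        yield
    finally:
        nvtx.range_pop()  # always runs, even on exceptions
```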
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96170
Approved by: https://github.com/ngimel, https://github.com/malfet
2023-03-08 20:59:27 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: it removes explicit inheritance from `object` and removes unused `__future__` imports.
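
An illustrative before/after for those two changes (hypothetical code, not from the PR):

```python
# Hypothetical illustration of the two pyupgrade changes described above.
from __future__ import print_function  # Before: a no-op on Python 3, removed


class OldStyle(object):  # Before: explicit `object` base, removed
    pass


class NewStyle:  # After: identical behavior on Python 3, no explicit base
    pass
```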

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
ProGamerGov
357b7d589c Fix docstring inconsistencies: string -> str, boolean -> bool (#82410)
### Description

Throughout the PyTorch docs and codebase, the `string` type in docstrings is referred to by two separate names. This leads to inconsistent docs, as you can see here: https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d

This PR fixes the issue by ensuring that all mentions of the string type in docstrings use the same format that Sphinx generates hyperlinks for.
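
A small illustration of the convention (hypothetical function, not from the PR):

```python
# Hypothetical function illustrating the convention: type names in docstrings
# are written as `str`/`bool` (which Sphinx cross-references), not `string`
# or `boolean`.
def pad(text, fill=" "):
    """Pad text on the right with a fill character.

    Args:
        text (str): Input text.
        fill (str, optional): Fill character. Default: ``" "``.

    Returns:
        str: The padded text.
    """
    return text + fill
```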

### Testing
No testing should be required for this change

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82410
Approved by: https://github.com/jbschlosser
2022-07-28 21:29:57 +00:00
Louis Feng
83b3b5fb00 [PyTorch] Support NVTX range_start and range_end (#70030)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70030

range_push and range_pop do not support multi-threaded use; they only work when the push and the pop happen on the same thread.

For process-level ranges, we should use range_start and range_end instead. This matters because the PyTorch forward pass runs on one thread while autograd runs on a different thread.

See NVidia implementation documentation:
cab2dec760/NSight/nvToolsExt.h (L397-L407)
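
A minimal sketch of the cross-thread usage described above (the surrounding code is illustrative; `range_start`/`range_end` are the functions added by this PR):

```python
# Minimal illustration: the handle returned by range_start can be closed by
# range_end on a different thread, which the per-thread range_push/range_pop
# stack cannot do.
import threading

import torch.cuda.nvtx as nvtx

handle = nvtx.range_start("forward+backward")  # started on the main thread

def end_range(h):
    nvtx.range_end(h)  # ended on another thread (e.g. the autograd thread)

t = threading.Thread(target=end_range, args=(handle,))
t.start()
t.join()
```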

Test Plan:
```
buck test caffe2/test:cuda

Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/8162774391483460
    ✓ ListingSuccess: caffe2/test:cuda - main (19.640)
Summary
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/8162774391483460
```

Reviewed By: malfet

Differential Revision: D33155244

fbshipit-source-id: c7d5143f6da9b6ef0e0811e2fcae03a3e76f24de
(cherry picked from commit 22134e91b7)
2022-02-07 17:31:57 +00:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces `Arguments:` with `Args:` throughout the codebase.
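
For reference, a Google-style docstring using the standardized section heading (hypothetical function, not from the PR):

```python
# Hypothetical function showing the standardized `Args:` section heading.
def resize(image, size):
    """Resize an image to the given size.

    Args:
        image: Input image tensor.
        size: Target (height, width) pair.

    Returns:
        The resized image tensor.
    """
    ...
```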

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
Christian Hudon
511f89eaa9 Add nvtx.range() context manager (#42925)
Summary:
Small quality-of-life improvement to the NVTX Python bindings that we're using internally and that would be useful to other folks using NVTX annotations via PyTorch. (And my first potential PyTorch contribution.)

Instead of having to be careful with try/finally to make sure every range_push is matched by a range_pop:

```
nvtx.range_push("Some event")
try:
    # Code here...
finally:
    nvtx.range_pop()
```

you can simply do:

```
with nvtx.range("Some event"):
    # Code here...
```

or even use it as a decorator:

```
class MyModel(nn.Module):

    # Other methods here...

    @nvtx.range("MyModel.forward()")
    def forward(self, *input):
        # Forward pass code here...
```

A couple small open questions:

1. I also added the ability to call `msg.format()` inside `range()`, with the intention that, if there is nothing listening to NVTX events, we should skip the string formatting, to lower the overhead in that case. If you like that idea, I could add the actual "skip string formatting if nobody is listening to events" parts. We can also just leave it as is. Or I can remove that if you folks don't like it. (In the first two cases, should we add that to `range_push()` and `mark()` too?) Just let me know which one it is, and I'll update the pull request.

2. I don't think there are many places for bugs to hide in that function, but I can certainly add a quick test, if you folks want.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42925

Reviewed By: gchanan

Differential Revision: D24476977

Pulled By: ezyang

fbshipit-source-id: 874882818d958e167e624052e42d52fae3c4abf1
2020-10-22 19:46:16 -07:00
Nikita Shulga
db78c07ced Enable torch.cuda.nvtx typechecking (#43443)
Summary:
Add pyi file covering torch._C.nvtx submodule

Fixes https://github.com/pytorch/pytorch/issues/43436
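
As a generic illustration of what such a stub provides (the names below mirror the public `torch.cuda.nvtx` wrappers; the actual contents of the `torch._C` stub are not reproduced here):

```python
# Illustrative .pyi stub content: a stub declares typed signatures with `...`
# bodies so mypy can typecheck callers of a C extension module. Names and
# signatures here are placeholders, not the actual torch._C.nvtx bindings.
def range_push(message: str) -> int: ...
def range_pop() -> int: ...
def mark(message: str) -> None: ...
```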

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43443

Reviewed By: ezyang

Differential Revision: D23280188

Pulled By: malfet

fbshipit-source-id: 882860cce9feb0b5307c8b7c887f4a2f2c1548a2
2020-08-24 08:20:12 -07:00
Peter Bell
5fc5cf6571 Stop using ctypes to interface with CUDA libraries. (#33678)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/33016; continuation of https://github.com/pytorch/pytorch/issues/31160.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33678

Differential Revision: D20249187

Pulled By: ezyang

fbshipit-source-id: 172ce4a0fee7fbe01436a421d1af22ef6173b6ed
2020-03-11 07:22:46 -07:00
peterjc123
aa911939a3 Improve Windows Compatibility (for csrc/scripts) (#2941) 2017-11-08 19:51:35 +01:00
Edward Z. Yang
ba690d5607 Add support for NVTX functions. (#1748) 2017-06-10 18:26:58 +02:00