Commit Graph

4 Commits

Author SHA1 Message Date
Lijiang Fang
8888565597 T90561249: Enforce kernel launch checks (#58178)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58178

https://www.internalfb.com/T90561249: change the test to enforce the kernel launch checks

Test Plan:
buck test //caffe2/test:kernel_launch_checks

before fixing LinearAlgebra.cu and file close: https://www.internalfb.com/intern/testinfra/testconsole/testrun/1970324893386017/

after: https://www.internalfb.com/intern/testinfra/testconsole/testrun/2814749824394650/

Reviewed By: r-barnes

Differential Revision: D28390166

fbshipit-source-id: 8a217ce8c0b204922005c731aa38bc03f70fabcc
2021-05-13 10:41:20 -07:00
Richard Barnes
39a10fb652 Fix check_kernel_launches.py for macros and provide extended context (#49365)
Summary:
`check_kernel_launches.py` currently gives a false positive in instances such as:
```
735:     <<<smallIndexGrid, smallIndexBlock, 0, stream>>>(                                   \
736:       outInfo, selfInfo, indicesInfo,                                                   \
737:       outSelectDim, selfSelectDim, static_cast<TYPE>(sliceSize),                        \
738:       selfSelectDimSize);                                                               \
739:     C10_CUDA_KERNEL_LAUNCH_CHECK();
```
because the newlines after the last `\` are not consumed by the regex. This fixes that.
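The fix can be sketched with a small Python example (a simplified illustration; the actual patterns in `check_kernel_launches.py` are more involved). Inside a macro body every line ends with a `\` continuation, so the pattern must consume backslash-newline pairs between the launch's `);` and the check:

```python
import re

# OLD: "\s*" does not consume the trailing "\" of a macro continuation,
# so the check macro is never reached and a false positive is reported.
# NEW: explicitly allow backslash-newline pairs alongside whitespace.
OLD = re.compile(r"\);\s*C10_CUDA_KERNEL_LAUNCH_CHECK\(\);")
NEW = re.compile(r"\);(?:\s|\\\n)*C10_CUDA_KERNEL_LAUNCH_CHECK\(\);")

macro_body = (
    "    <<<smallIndexGrid, smallIndexBlock, 0, stream>>>(   \\\n"
    "      outSelectDim, selfSelectDim, selfSelectDimSize);  \\\n"
    "    C10_CUDA_KERNEL_LAUNCH_CHECK();\n"
)

print(bool(OLD.search(macro_body)), bool(NEW.search(macro_body)))  # → False True
```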

In addition, the regex is modified to provide greater context for the start of the kernel launch. This changes the context from:
```
157:       (
158:           size, X_strides, Y_dims, X, Y);
```
to
```
157:       <<<M, CAFFE_CUDA_NUM_THREADS, 0, context->cuda_stream()>>>(
158:           size, X_strides, Y_dims, X, Y);
```
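The context change can be illustrated as widening the matched span to start at the `<<<...>>>` launch configuration rather than at the bare argument list (a simplified sketch, not the script's actual regex):

```python
import re

source = (
    "kernel<<<M, CAFFE_CUDA_NUM_THREADS, 0, context->cuda_stream()>>>(\n"
    "    size, X_strides, Y_dims, X, Y);"
)

# Old behavior: the match (and hence the reported context) begins at "(".
narrow = re.compile(r"\([^)]*\);", re.DOTALL)
# New behavior: the match also covers the "<<<...>>>" launch configuration.
wide = re.compile(r"<<<.*?>>>\s*\([^)]*\);", re.DOTALL)

print(narrow.search(source).group())
print(wide.search(source).group())
```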

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49365

Test Plan:
```
buck test //caffe2/test:kernel_launch_checks -- --print-passing-details
```

Reviewed By: aakshintala

Differential Revision: D25545402

Pulled By: r-barnes

fbshipit-source-id: 76feac6a002187239853752b892f4517722a77bf
2020-12-14 22:09:33 -08:00
Kurtis David
b824fc4de2 [pytorch] [PR] Rename cuda kernel checks to C10 (#48615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48615

Convert the macro from `TORCH_CUDA_KERNEL_LAUNCH_CHECK` to `C10_CUDA_KERNEL_LAUNCH_CHECK`, since it is now accessible through c10, not just torch.

Test Plan:
```
buck build //caffe2/caffe2:caffe2_cu
buck build //caffe2/aten:ATen-cu
buck test //caffe2/test:kernel_launch_checks -- --print-passing-details
```

Reviewed By: jianyuh

Differential Revision: D25228727

fbshipit-source-id: 9c65feb3d0ea3fbd31f1dcaecdb88ef0534f9121
2020-12-01 18:19:07 -08:00
Richard Barnes
353e7f940f Ensure kernel launches are checked (#46474)
Summary:
Caffe2 and Torch currently do not have a consistent mechanism for determining whether a kernel has launched successfully. The result is difficult-to-detect or silent errors. This diff provides functionality to fix that. Subsequent diffs on the stack fix the identified issues.

Kernel launch errors may arise if launch parameters (number of blocks, number of threads, shared memory, or stream id) are invalid for the hardware, or for other reasons. Interestingly, unless these launch errors are specifically checked for, CUDA will silently fail and return garbage answers that can affect downstream computation. Therefore, catching launch errors is important.

Launches are currently checked by placing
```
AT_CUDA_CHECK(cudaGetLastError());
```
somewhere below the kernel launch. This is bad for two reasons.
1. The check may be performed at a site distant to the kernel launch, making debugging difficult.
2. The separation of the launch from the check means that it is difficult for humans and static analyzers to determine whether the check has taken place.

This diff defines a macro:
```
#define TORCH_CUDA_KERNEL_LAUNCH_CHECK() AT_CUDA_CHECK(cudaGetLastError())
```
which clearly indicates the check.

This diff also introduces a new test which analyzes code to identify kernel launches and determines whether the line immediately following the launch contains `TORCH_CUDA_KERNEL_LAUNCH_CHECK();`.
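A minimal sketch of that analysis (a hypothetical helper for illustration; the real `check_kernel_launches.py` handles macro continuations and multi-line argument lists more carefully): find each `<<<...>>>(...)` launch and require that the check macro is the next statement.

```python
import re

# Match a "<<<config>>>(args);" kernel launch, possibly spanning lines.
LAUNCH = re.compile(r"<<<.*?>>>\s*\([^;]*\)\s*;", re.DOTALL)
CHECK = "TORCH_CUDA_KERNEL_LAUNCH_CHECK();"

def unchecked_launches(source: str) -> int:
    """Count kernel launches not immediately followed by the check."""
    missing = 0
    for m in LAUNCH.finditer(source):
        if not source[m.end():].lstrip().startswith(CHECK):
            missing += 1
    return missing

checked = (
    "my_kernel<<<blocks, threads, 0, stream>>>(x, y, n);\n"
    "TORCH_CUDA_KERNEL_LAUNCH_CHECK();\n"
)
unchecked = (
    "my_kernel<<<blocks, threads, 0, stream>>>(x, y, n);\n"
    "out = post_process(y);\n"
)
print(unchecked_launches(checked), unchecked_launches(unchecked))  # → 0 1
```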

A search of the Caffe2 codebase identifies 104 instances of `AT_CUDA_CHECK(cudaGetLastError());`, while the foregoing test identifies 1,467 launches that are not paired with a check. Visual inspection indicates that few of these are false positives, highlighting the need for some sort of static analysis system.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46474

Test Plan:
The new test is run with:
```
buck test //caffe2/test:kernel_launch_checks -- --print-passing-details
```
The test should also be launched automatically with the other land tests. (TODO: Is it?)

The test is currently set up only to provide warnings but can later be adjusted to require checks.

Otherwise, I rely on the existing test frameworks to ensure that changes resulting from reorganizing existing launch checks don't cause regressions.

Reviewed By: ngimel

Differential Revision: D24309971

Pulled By: r-barnes

fbshipit-source-id: 0dc97984a408138ad06ff2bca86ad17ef2fdf0b6
2020-10-28 09:27:48 -07:00