# Autograd graphtask trim unnecessary edges (#82544)
### Introduction

Removing unnecessary weight-gradient calculations is important for applications that need high-order derivatives during training. However, the current Autograd engine does not support this.

In more detail: the backward function of a `matmul` operator (e.g., `linear`, `addmm`, `mm`) performs two matmuls, one for the `input gradient` and one for the `weight gradient`. For a typical neural network with a few linear layers and activation functions, if the user calls `torch.autograd.grad()` to compute the derivative of the network output `y` w.r.t. the network input `x`, only the `input gradient` of each `matmul` operator is needed, and the `weight gradient` is discarded. However, the current PyTorch autograd engine always computes the `weight gradient` whenever `weight` requires gradient, which is the case during training, where these high-order derivatives are computed.
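As a minimal sketch of those two matmuls (the shapes below are purely illustrative and not taken from the benchmark):

```py
import torch

# For y = x @ weight.t() + bias, the backward of the underlying matmul computes:
x = torch.randn(8, 4)            # input
weight = torch.randn(3, 4)       # weight
grad_output = torch.randn(8, 3)  # dL/dy flowing back from downstream

grad_input = grad_output @ weight    # "input gradient":  (8, 3) @ (3, 4) -> (8, 4)
grad_weight = grad_output.t() @ x    # "weight gradient": (3, 8) @ (8, 4) -> (3, 4)
```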

The figure attached shows the autograd graph of the following code snippet:
```py
y = torch.nn.functional.linear(x, weight, bias)
y = y.pow(2)
# first order derivative
y__x, = torch.autograd.grad(y, x, grad_outputs=grad_outputs, create_graph=True)
# second order derivative
y__x__x, = torch.autograd.grad(y__x, x, grad_outputs=grad_outputs, create_graph=True)
```
The path marked in the figure is not needed when calculating these derivatives.

<img width="50%" alt="image" src="https://user-images.githubusercontent.com/9999318/182018117-719c5a23-bcc6-4a63-8e8d-1bca3ebda2e3.png">

### Issue
Related issue: https://github.com/pytorch/pytorch/issues/56500

### Method
When `torch.autograd.grad` is called, an `exec_info_` is created for each GraphTask, which allows filtering out paths of the graph that are not needed. However, when the GraphTask calls into a node, the node still does not know whether its output edges are needed or not. In the case of matmul, `weight.requires_grad` is `True`, so the weight gradient is always calculated.

Following https://github.com/pytorch/pytorch/issues/56500#issuecomment-825694656, this PR passes the graph task's thread-local `exec_info_` into the node, so that it can trim unnecessary edges during `torch.autograd.grad` calls.
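Conceptually (the actual change lives in the C++ autograd engine; `edge_needed` below is a hypothetical stand-in for the `exec_info_` lookup), a backward that can see which output edges the graph task needs can skip the unneeded matmul:

```py
import torch

def linear_backward(grad_output, x, weight, edge_needed):
    # edge_needed: hypothetical stand-in for the graph task's exec_info_,
    # telling the node which of its output edges are actually required.
    grad_input = grad_weight = None
    if edge_needed["input"]:   # on the path to the requested input x -> kept
        grad_input = grad_output @ weight
    if edge_needed["weight"]:  # not needed by torch.autograd.grad(y, x) -> trimmed
        grad_weight = grad_output.t() @ x
    return grad_input, grad_weight

# During torch.autograd.grad(y, x), only the input edge is requested:
gi, gw = linear_backward(torch.randn(8, 3), torch.randn(8, 4), torch.randn(3, 4),
                         edge_needed={"input": True, "weight": False})
assert gw is None  # the weight matmul was never executed
```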

### Benchmark
Benchmark script: https://gist.github.com/yueyericardo/24158433a2021c51eeef9c3e2722df99

Benchmark result (6 hidden layers, batch size 10000, on an A100 GPU):

FP32 results:
| Hessian benchmark             | FP32 (before) | FP32 (after)      | FP32 (functorch v0.1.1) |
| ----------------------------- | ------------- | ----------------- | ----------------------- |
| Linear + ReLU (no backward)   | 55.658 ms     | 29.392 ms (1.90X) | 29.547 ms (1.90X)       |
| Linear + ReLU (with backward) | 81.173 ms     | 54.917 ms (1.47X) | 68.988 ms (1.18X)       |

TF32 results:
| Hessian benchmark             | TF32 (before) | TF32 (after)      | TF32 (functorch v0.1.1) |
| ----------------------------- | ------------- | ----------------- | ----------------------- |
| Linear + ReLU (no backward)   | 19.801 ms     | 11.259 ms (1.76X) | 10.754 ms (1.84X)       |
| Linear + ReLU (with backward) | 29.167 ms     | 20.466 ms (1.42X) | 22.784 ms (1.28X)       |

With FP32, we get a 1.9X speedup for the Hessian calculation and a 1.47X speedup during training, which is even faster than the functorch `vmap(jacfwd(jacrev(...)))` implementation. (functorch has a performance regression in v0.2.0, https://github.com/pytorch/functorch/issues/989, so v0.1.1 is used for the benchmark.)
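For reference, the functorch baseline mentioned above is the composed-transform Hessian; a rough sketch using the functorch v0.1.x API (the toy function `f` below merely stands in for the benchmarked MLP) looks like:

```py
import torch
from functorch import vmap, jacrev, jacfwd

def f(x):                    # toy per-sample scalar function standing in for the MLP
    return 0.5 * (x * x).sum()

x = torch.randn(8, 4)
hess = vmap(jacfwd(jacrev(f)))(x)   # per-sample Hessians, shape (8, 4, 4)
```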

@zou3519 does functorch also include a similar optimization for Hessian calculation? If not, what would we need to do so that functorch could also benefit from this PR?

### Testing

- [x] We need to figure out a way to unit-test this change.

### Thanks
Thanks for the great blog post: [How Computational Graphs are Executed in PyTorch | PyTorch](https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/)

cc @zasdfgbnm @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82544
Approved by: https://github.com/soulitzer

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
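For example, when no GPU is detected, the supplied filter takes the form of a GoogleTest negative pattern such as `-*_CUDA:*_MultiCUDA`, which excludes both kinds of CUDA tests.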

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths are referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.