pytorch/benchmarks/static_runtime
Hao Lu 11cda929fb [StaticRuntime] Fix bug in MemoryPlanner (#51342)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51342

There is a subtle bug with the MemoryPlanner with regard to view ops with out variant.

```
  def forward(self, a: Tensor, shape: List[int]):
      b = a.reshape(shape)
      return b + b
```
In this case, if we replace reshape with its out variant, b is managed by the MemoryPlanner, and its storage is set to nullptr right after inference when opts.cleanup_activations is true. Because b is a view of a, the storage of a is also set to nullptr, which violates the API contract that a is const.
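The failure mode can be sketched in plain Python. The Storage/Tensor/cleanup names below are illustrative stand-ins, not the real Static Runtime types; the point is only that a view shares its base's storage, so resetting the view's storage destroys the input:

```python
class Storage:
    def __init__(self, data):
        self.data = data  # data = None models a nullptr storage


class Tensor:
    def __init__(self, storage):
        self.storage = storage

    def reshape_view(self):
        # A view shares the SAME storage object as its base tensor.
        return Tensor(self.storage)


def cleanup(managed):
    # Models opts.cleanup_activations: the planner resets the
    # storage of every tensor it manages after inference.
    for t in managed:
        t.storage.data = None


# Buggy plan: the view output b is managed by the planner.
a = Tensor(Storage([1, 2, 3]))
b = a.reshape_view()
cleanup(managed=[b])
assert a.storage.data is None  # input a was silently destroyed

# Fixed plan: view outputs go in the unmanaged section.
a = Tensor(Storage([1, 2, 3]))
b = a.reshape_view()
cleanup(managed=[])  # b is unmanaged, so nothing shared with a is reset
assert a.storage.data == [1, 2, 3]  # input a stays intact
```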

To fix this bug, I changed the MemoryPlanner so that it places b in the unmanaged section.

Test Plan:
Added a unit test to enforce the constness of inputs.

```
buck test //caffe2/benchmarks/static_runtime:static_runtime_cpptest
```

Reviewed By: ajyu

Differential Revision: D26144203

fbshipit-source-id: 2dbacccf7685d0fe0f0b1195166e0510b2069fe3
2021-01-29 21:16:02 -08:00
CMakeLists.txt [static runtime] Add _out variants and reuse memory (#44128) 2020-09-25 11:03:06 -07:00
deep_wide_pt_bench.cc [static runtime] Initial memonger (#47759) 2020-11-17 13:55:49 -08:00
deep_wide_pt.cc [static runtime] Initial memonger (#47759) 2020-11-17 13:55:49 -08:00
deep_wide_pt.h Class-based structured kernels, with migration of add to framework (#48718) 2020-12-09 15:39:12 -08:00
test_scripts.h [StaticRuntime] Fix bug in MemoryPlanner (#51342) 2021-01-29 21:16:02 -08:00
test_static_runtime.cc [StaticRuntime] Fix bug in MemoryPlanner (#51342) 2021-01-29 21:16:02 -08:00