pytorch/test/cpp/jit/test_dce.cpp
Meghan Lele 9ce833879f [JIT] Introduce a fake Tensor creation node for IR unit tests (#33914)
Summary:
**Summary**
There is often a need to create a Tensor when writing IR by hand for JIT
optimization pass unit tests. The only options for this today are real
Tensor creation functions like `aten::ones`. Any test that uses these functions
must also pass the same default arguments as the Python/C++ API, which means
that every such test has to be updated whenever that API changes. This commit
introduces a new primitive, `prim::MakeTestTensor`, with schema `() -> Tensor`, that
should be used in unit tests instead of real Tensor creation functions. This new
primitive has no public-facing API, so the maintenance burden is much lower.
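
As a rough sketch of the intended usage (the helper name and the trivial graph below are illustrative only, not code from this commit, and the snippet assumes the same test headers used by the existing C++ JIT tests), a hand-written test graph can obtain its Tensor values from the zero-argument primitive and never has to mention the creation-op defaults:

```
#include <test/cpp/jit/test_base.h>
#include <test/cpp/jit/test_utils.h>

// Sketch only: a hypothetical helper showing how hand-written IR can create
// a Tensor with prim::MakeTestTensor() instead of a real creation op such as
// aten::ones, whose full default-argument list would otherwise have to be
// spelled out and kept in sync with the public API.
void exampleMakeTestTensor() {
  auto graph = std::make_shared<torch::jit::Graph>();
  const std::string ir = R"IR(
graph():
  %a : Tensor = prim::MakeTestTensor()
  return (%a)
)IR";
  torch::jit::script::parseIR(ir, graph.get());
}
```

Because `prim::MakeTestTensor` takes no arguments, IR written this way does not need to change when the signatures of `aten::ones` and friends change.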

**Testing**
This commit updates the alias analysis and DCE tests to use `prim::MakeTestTensor` instead of
`aten::rand`, `aten::ones`, and `aten::zeros`.

```
$ ./bin/test_jit
CUDA not available. Disabling CUDA and MultiCUDA tests
Note: Google Test filter = *-*_CUDA:*_MultiCUDA
[==========] Running 75 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 75 tests from JitTest
[ RUN      ] JitTest.ADFormulas
[       OK ] JitTest.ADFormulas (82 ms)
[ RUN      ] JitTest.Attributes
[       OK ] JitTest.Attributes (0 ms)
...
...
...
[ RUN      ] JitTest.LiteInterpreterPrim
[       OK ] JitTest.LiteInterpreterPrim (0 ms)
[ RUN      ] JitTest.LiteInterpreterLoadOrigJit
[       OK ] JitTest.LiteInterpreterLoadOrigJit (2 ms)
[----------] 75 tests from JitTest (150 ms total)

[----------] Global test environment tear-down
[==========] 75 tests from 1 test case ran. (150 ms total)
[  PASSED  ] 75 tests.
```

**Fixes**
This pull request fixes https://github.com/pytorch/pytorch/issues/33500.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33914

Differential Revision: D20150304

Pulled By: SplitInfinity

fbshipit-source-id: c88f5289055a02dc20b7a5dcdf87469f9816d020
2020-03-05 12:42:42 -08:00

#include <test/cpp/jit/test_base.h>
#include <test/cpp/jit/test_utils.h>

#include <torch/csrc/jit/passes/dead_code_elimination.h>
#include <torch/csrc/jit/testing/file_check.h>

namespace torch {
namespace jit {

void testDCE() {
  auto graph = std::make_shared<Graph>();

  // Consider the following loop:
  //   for i in range(3):
  //     tot += a[0][0]
  //     b = a[0]
  //     b[0] += 1
  //   print(tot)
  // We want to check that b[0] and b are properly marked as live and thus not
  // DCE'd.
  const std::string input =
      R"IR(
graph():
  %48 : None = prim::Constant()
  %50 : bool = prim::Constant[value=1]()
  %0 : int = prim::Constant[value=2]()
  %12 : int = prim::Constant[value=1]()
  %24 : int = prim::Constant[value=3]()
  %31 : int = prim::Constant[value=0]()
  %2 : int[] = prim::ListConstruct(%0, %0)
  %a.1 : Tensor = prim::MakeTestTensor()
  %14 : int[] = prim::ListConstruct(%12)
  %tot.1 : Tensor = prim::MakeTestTensor()
  %tot : Tensor = prim::Loop(%24, %50, %tot.1)
    block0(%i : int, %tot.6 : Tensor):
      %33 : Tensor = aten::select(%a.1, %31, %31)
      %35 : Tensor = aten::select(%33, %31, %31)
      # CHECK: add_
      %tot.3 : Tensor = aten::add_(%tot.6, %35, %12)
      %b.1 : Tensor = aten::select(%a.1, %31, %31)
      %44 : Tensor = aten::select(%b.1, %31, %31)
      # CHECK: add_
      %46 : Tensor = aten::add_(%44, %12, %12)
      -> (%50, %tot.3)
  return (%tot)
)IR";
  script::parseIR(input, graph.get());
  EliminateDeadCode(graph);

  // Check that dead code elimination did not remove the live add_ nodes.
  testing::FileCheck().run(input, *graph);
}

} // namespace jit
} // namespace torch