Commit Graph

107 Commits

Mikhail Zolotukhin
6e00b31b15 [TensorExpr] Make CacheReplacer and IndexFlattener mutate stmts/exprs inplace. (#63527)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63527

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D30411411

Pulled By: ZolotukhinM

fbshipit-source-id: efb14ee57b36537fa4fefa89bdd6bafe7151c012
2021-08-18 22:59:31 -07:00
Mikhail Zolotukhin
7fdba4564a [TensorExpr] IRSimplifier: sort terms in polynomials, terms, minterms, maxterms. (#63197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63197

This removes the non-determinism that came from using hash values in the sort methods.
The changes in tests are mostly mechanical.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D30292776

Pulled By: ZolotukhinM

fbshipit-source-id: 74f57b53c3afc9d4be45715fd74781271373e055
2021-08-18 14:49:27 -07:00
Mikhail Zolotukhin
1dc2b52764 [TensorExpr] Add a wrapper for all expr and stmt pointers. (#63195)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63195

This helps us to later switch from using KernelArena with raw pointers
to shared pointers without having to change all our source files at
once.

The changes are mechanical and should not affect any functionality.

With this PR, we're changing the following:
 * `Add*` --> `AddPtr`
 * `new Add(...)` --> `alloc<Add>(...)`
 * `dynamic_cast<Add*>` --> `to<Add>`
 * `static_cast<Add*>` --> `static_to<Add>`

Due to some complications with args forwarding, some places became more
verbose, e.g.:
 * `new Block({})` --> `new Block(std::vector<ExprPtr>())`
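
A minimal before/after sketch of the mechanical rewrite described above, assuming the NNC headers are available; the include path, function name, and variables are invented for illustration:
```
#include <torch/csrc/jit/tensorexpr/ir.h>

using namespace torch::jit::tensorexpr;

ExprPtr rewriteExample(ExprPtr lhs, ExprPtr rhs, ExprPtr expr) {
  // Old style: Add* sum = new Add(lhs, rhs);
  AddPtr sum = alloc<Add>(lhs, rhs);

  // Old style: if (Add* a = dynamic_cast<Add*>(expr)) { ... }
  if (AddPtr a = to<Add>(expr)) {
    // Old style (when the type is already known): static_cast<Add*>(expr)
    AddPtr known = static_to<Add>(expr);
    return known;
  }
  return sum;
}
```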

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D30292779

Pulled By: ZolotukhinM

fbshipit-source-id: 150301c7d2df56b608b035827b6a9a87f5e2d9e9
2021-08-17 13:44:45 -07:00
Raghavan Raman
e50e8b07d8 [nnc] Updated IRMutator and IRSimplifier to perform in-place mutations. (#63246)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63246

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D30309636

Pulled By: navahgar

fbshipit-source-id: 409ea8d6982888cfee9127e6248044dd2ed9d8d4
2021-08-16 00:09:22 -07:00
Raghavan Raman
59dd12042e [nnc] Removed const from all fields in IR. (#62336)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62336

This PR was generated by removing `const` from all node types in the NNC IR and fixing the compilation errors that resulted from this change.

This is the first step in making all NNC mutations in-place.

Test Plan: Imported from OSS

Reviewed By: iramazanli

Differential Revision: D30049829

Pulled By: navahgar

fbshipit-source-id: ed14e2d2ca0559ffc0b92ac371f405579c85dd63
2021-08-03 11:44:36 -07:00
Richard Barnes
ee44d73e59 Modernize override (#61744)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61744

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D29717320

fbshipit-source-id: 6eea4295ee2e5572ab337620be412376fcc2f3cc
2021-07-23 23:04:46 -07:00
Nikita Shulga
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
The GoogleTest `TEST` macro is non-compliant with this check, as is `DEFINE_DISPATCH`.

All changes except the ones to `.clang-tidy` were generated using the following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h"|xargs grep cppcoreguidelines-avoid-non-const-global-variables|cut -f1 -d:|sort|uniq`;  do sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
Bert Maher
b963607d50 [nnc] Insert alloc/free at global scope (#61725)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61725

Emitting alloc/free inside a loop isn't really an optimization, and furthermore it breaks an attempted optimization in the LLVM backend: we use `alloca` for small allocations, which is efficient since `alloca` is on the stack, but there is no corresponding free, so we leak tons of stack. I hit this while building an rfactor buffer inside a very deeply nested loop.
ghstack-source-id: 133627310

Test Plan:
Unit test which simulates use of a temp buffer in a deeply nested
loop.

Reviewed By: navahgar

Differential Revision: D29533364

fbshipit-source-id: c321f4cb05304cfb9146afe32edc4567b623412e
2021-07-16 08:42:24 -07:00
Raghavan Raman
843c42ffd8 [nnc] Refactored test macros and updated compress buffer tests to use them (#61716)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61716

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D29715754

Pulled By: navahgar

fbshipit-source-id: c400a58b7f393c0f93e5a25f118403124f8834b0
2021-07-15 21:17:14 -07:00
Raghavan Raman
d01837081d [nnc] Cleaned up compress buffer tests to use BufHandle instead of Buf (#61715)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61715

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D29715755

Pulled By: navahgar

fbshipit-source-id: 453adac8f5b13263c39d96b6b4086425a01bae54
2021-07-15 21:15:23 -07:00
Raghavan Raman
bd360ebe6f [nnc] Added a new API to distribute loop and all its parents (#61293)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61293

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D29560008

Pulled By: navahgar

fbshipit-source-id: e4e459184f20b1872bc242ba8626d0a6df29e810
2021-07-15 10:28:20 -07:00
Raghavan Raman
76f097466e [nnc] Added a new API to compress all buffers in a given statement (#61087)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61087

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D29506677

Pulled By: navahgar

fbshipit-source-id: 63583fd5a0e42c0096ddf08d5b96bc680ea8a44e
2021-07-15 10:28:18 -07:00
Raghavan Raman
2908d3eb45 [nnc] Modified the semantics of reorder in using permutation (#61085)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61085

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D29506679

Pulled By: navahgar

fbshipit-source-id: f674aedff8175b9947404fd2164a0b4f57a71e93
2021-07-15 10:28:16 -07:00
Raghavan Raman
6d952dbaf0 [nnc] Fixed checking for loop carried dependence while fusing 2D reduction loops (#60609)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60609

Fixes #60310

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D29386144

Pulled By: navahgar

fbshipit-source-id: 230df4f59d6196a250ea57ff649b117d096fcdbc
2021-06-29 14:17:01 -07:00
Hui Guo
d867340c7b [nnc] Add LoopNest::getLoopAt to retrieve a specified inner For-stmt (#60569)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/60569

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D29337767

Pulled By: huiguoo

fbshipit-source-id: e3ae23c1b290739c03d1fa5d7da25de878eb1d4c
2021-06-23 15:53:29 -07:00
Hui Guo
c0d08dc10f [NNC] Add tile transformation in loopnest (fixed #52785) (#57758)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57758

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28260744

Pulled By: huiguoo

fbshipit-source-id: 6b5591850aaf46455bf3c2d776fa930654839a63
2021-06-23 15:52:19 -07:00
Bin Bao
3dc8112187 [NNC] Handle int64 indices and loop bounds (#59769)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59769

Allow loop bounds and tensor indices to be either int32 or int64, and avoid unnecessary cast ops.

Test Plan:
```
build/bin/test_tensorexpr
```

Reviewed By: H-Huang

Differential Revision: D29173970

Pulled By: desertfire

fbshipit-source-id: 859a876ddb1b41535b2266089aa1222884295c78
2021-06-17 09:35:59 -07:00
Raghavan Raman
b822928e33 [nnc] Removed setGPUBlockIndex and setGPUThreadIndex methods from LoopNest (#59495)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59495

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D28915960

Pulled By: navahgar

fbshipit-source-id: 20a4032b031aba6e43d85433ade5f0680c65fbc0
2021-06-15 10:37:46 -07:00
Raghavan Raman
aa163aeff5 [nnc] Made several LoopNest APIs static (#59494)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59494

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D28915959

Pulled By: navahgar

fbshipit-source-id: bf52e30d893f4d86812219b538a14307f347f10b
2021-06-15 10:36:31 -07:00
Raghavan Raman
b83ac0cc4e [nnc] Added a check to vectorize only those loops that are normalized. (#59423)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59423

Test Plan: Imported from OSS

Reviewed By: huiguoo

Differential Revision: D28886979

Pulled By: navahgar

fbshipit-source-id: edfc61feaf5efe22d4f367ac718b83b3d0f47cb3
2021-06-11 12:03:34 -07:00
Raghavan Raman
30e24b2d2b [nnc] Modified vectorize API to return bool (#59422)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59422

Test Plan: Imported from OSS

Reviewed By: huiguoo

Differential Revision: D28886980

Pulled By: navahgar

fbshipit-source-id: 58cc3ecd86564a312a132f8260d836b096505095
2021-06-11 12:02:19 -07:00
Raghavan Raman
dd7bbe1a63 [NNC] Make splitWithMask transform in-place (#58269)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58269

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D28427227

Pulled By: navahgar

fbshipit-source-id: 4e38a436abcf4752fd7ef6ab3666876eec6ea5ba
2021-05-25 11:32:51 -07:00
Raghavan Raman
e2467cc43e [NNC] Make splitWithTail transform in-place (#58268)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58268

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D28427228

Pulled By: navahgar

fbshipit-source-id: 270b62c4e83739ad21dd68f375120e56881b394f
2021-05-25 11:31:14 -07:00
Raghavan Raman
4b859cbca1 [NNC] Do not optimize conditionals when the corresponding loop is not normalized (#57675)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57675

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D28231375

Pulled By: navahgar

fbshipit-source-id: bcbcebca25577744c7190a0aa9fa376f76dea77d
2021-05-18 14:25:53 -07:00
Raghavan Raman
a71b99b50d [NNC] Add a method to check if a loop is normalized (#57674)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57674

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D28231377

Pulled By: navahgar

fbshipit-source-id: 3d92d532f1e1f78c9d94619980340622b73f99ec
2021-05-18 14:25:50 -07:00
Raghavan Raman
3fe72d30dc [NNC] Optimize conditionals that correspond to the form generated for aten::cat op. (#57673)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57673

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D28231374

Pulled By: navahgar

fbshipit-source-id: 1777a63df4e5ebed6d515683bd772a88be465b3a
2021-05-18 14:23:48 -07:00
CodemodService FBSourceClangFormatLinterBot
cbfce376a8 [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D28319469

fbshipit-source-id: 8295597a8ee16b2fef3f7aacdd6c892cb22db988
2021-05-10 03:39:31 -07:00
Nikita Shulga
3a66a1cb99 [clang-tidy] Exclude cppcoreguidelines-avoid-magic-numbers (#57841)
Summary:
Add a cppcoreguidelines-avoid-magic-numbers exclusion to clang-tidy.
Remove existing nolint suppressions using the following script:
```
for file in `git ls-files | grep -v \.py`; do gsed '/^ *\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)/d' -i  $file; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57841

Reviewed By: samestep

Differential Revision: D28295045

Pulled By: malfet

fbshipit-source-id: 7c6e8d1213c9593f169ed3df6a916498f1a97163
2021-05-07 20:02:33 -07:00
Raghavan Raman
e795f88d6b [NNC] Make flatten transform in-place (#56629)
Summary:
Partial fix for https://github.com/pytorch/pytorch/issues/56157

This PR updates the `flatten` API in `LoopNest` to perform the flattening transformation in-place. After this transformation, the first loop in the input becomes the flattened loop.
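
A hedged usage sketch of the in-place form; the exact signature and the `getLoopStmtsFor` helper used to obtain the nest are assumptions based on this description, not quoted from the PR:
```
#include <torch/csrc/jit/tensorexpr/loopnest.h>

using namespace torch::jit::tensorexpr;

void flattenExample(LoopNest& nest, Tensor* t) {
  // e.g. loops = {i, j} for a 2-D computation over t
  std::vector<For*> loops = nest.getLoopStmtsFor(t);
  For* flattened = nullptr;
  if (LoopNest::flatten(loops, &flattened)) {
    // In-place: `flattened` is the first loop of the input, now rewritten to
    // iterate over the combined range of all the flattened loops.
  }
}
```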

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56629

Reviewed By: H-Huang

Differential Revision: D28004787

Pulled By: navahgar

fbshipit-source-id: 7474ae237fae3fff0cd1c64a276a8831dc5b7db0
2021-04-30 09:51:45 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Raghavan Raman
5b7317b562 [NNC] API for Buffer Compression (#55853)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54338

This PR adds the following API to NNC to implement "buffer compression".

```
static void compressBuffer(Buf* buf, Stmt* stmt);
```
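
A hedged usage sketch under the signature above; the `root_stmt()` and `buf()` accessors used to obtain the arguments are assumptions for illustration:
```
#include <torch/csrc/jit/tensorexpr/loopnest.h>

using namespace torch::jit::tensorexpr;

void compressExample(LoopNest& nest, Tensor* intermediate) {
  // Shrink any dimensions of the intermediate buffer whose elements are never
  // reused across iterations of the enclosing loops in the statement.
  LoopNest::compressBuffer(intermediate->buf(), nest.root_stmt());
}
```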

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55853

Reviewed By: ezyang

Differential Revision: D27960986

Pulled By: navahgar

fbshipit-source-id: a69988e607196f3e2db0212313ea5deefb9859ac
2021-04-23 14:12:03 -07:00
Raghavan Raman
d43d6593cd [NNC] Handling conditionals in reorderAxis (#56063)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53093

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56063

Reviewed By: huiguoo

Differential Revision: D27894772

Pulled By: navahgar

fbshipit-source-id: 403b65f20567c27eab73faf670087cfab9885f84
2021-04-21 09:35:17 -07:00
Raghavan Raman
13ac0019ae [NNC] Update loop-carried dependence check to handle all known dependences (#56354)
Summary:
This PR includes:
 * Update to the loop-carried dependence check API to correctly ignore loop-independent dependences and handle all kinds of loop-carried dependences like RAW, WAR and WAW.
 * Fix for the overlap API to look only for conflicting buffer accesses where at least one of them is a Store.
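
Illustrative (and assumed, not taken from the PR) examples of the three dependence kinds named above, written as plain C++ loops rather than NNC IR:
```
void dependenceKinds(float* A, float* B, float* C, int N) {
  for (int i = 1; i < N; i++) {
    A[i] = A[i - 1] + 1;  // RAW: reads a value written by an earlier iteration
  }
  for (int i = 0; i < N - 1; i++) {
    B[i] = B[i + 1] * 2;  // WAR: reads a location a later iteration overwrites
  }
  for (int i = 0; i < N; i++) {
    C[i % 2] = i;         // WAW: different iterations write the same location
  }
}
```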

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56354

Reviewed By: bertmaher

Differential Revision: D27856202

Pulled By: navahgar

fbshipit-source-id: 206e4ec771fe0f7f2ccf4b11b29e35df7b9b18bc
2021-04-20 17:12:51 -07:00
Raghavan Raman
0d94c04247 [NNC] Change fuseLoops API to return bool flag and not throw any exceptions (#56353)
Summary:
Partial fix for https://github.com/pytorch/pytorch/issues/56357

Changes the `fuseLoops` API to the following form:
```
static bool fuseLoops(const std::vector<For*>& loops, For** fused);
```

It also adds a new API to check for loop-carried dependences:
```
static bool hasLoopCarriedDependence(For* loop);
```
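
A brief usage sketch of the two APIs above; the surrounding function and loop variables are assumptions for illustration:
```
#include <torch/csrc/jit/tensorexpr/loopnest.h>

using namespace torch::jit::tensorexpr;

void fuseExample(For* a, For* b) {
  For* fused = nullptr;
  if (LoopNest::fuseLoops({a, b}, &fused)) {
    // `fused` is the single merged loop.
  } else {
    // Fusion was rejected (e.g. mismatched bounds or a dependence would be
    // violated); the original loops are left untouched and no exception is thrown.
  }
  // The dependence check can also be queried directly:
  bool carried = LoopNest::hasLoopCarriedDependence(a);
  (void)carried;
}
```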

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56353

Reviewed By: bertmaher

Differential Revision: D27856214

Pulled By: navahgar

fbshipit-source-id: 443557088692585657faee296602c547a00117dd
2021-04-19 17:20:40 -07:00
Raghavan Raman
b387f7ca47 [NNC] Make normalization transformation in-place (#56158)
Summary:
Partially fixes https://github.com/pytorch/pytorch/issues/56157

This PR changes the `normalize` API in `LoopNest` to transform the given `For` statement in place instead of creating a new one.

New API:

```
static bool normalize(For* f);
```
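
A brief usage sketch under the signature above; the surrounding function and the meaning attached to the return value are assumptions:
```
#include <torch/csrc/jit/tensorexpr/loopnest.h>

using namespace torch::jit::tensorexpr;

void normalizeExample(For* f) {
  // Rewrites `f` in place so that its index starts at 0, e.g.
  //   for (int i = 10; i < 100; ...)  ==>  for (int i = 0; i < 90; ...)
  bool ok = LoopNest::normalize(f);
  (void)ok;
}
```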

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56158

Reviewed By: agolynski

Differential Revision: D27798361

Pulled By: navahgar

fbshipit-source-id: 57626a5a367bdf94a0efbd9dc8538f5e4e410d6b
2021-04-18 23:54:13 -07:00
Raghavan Raman
29c5cb797d [NNC] Fuse loops that have the same bounds as expressions (#55997)
Summary:
This PR allows fusing loops whose bounds are specified as expressions that are equal.

For example:
```
   for (int j = 0; j < M + N; j++) {
     A[j] = 10 * j;
   }
   for (int k = 0; k < M + N; k++) {
     B[k] = 20 * k;
   }
```
`fuseLoops(j, k)` is now possible because the stop bounds of the two loops are equal, even though they are different `Expr*` objects. The result is:
```
   for (int j = 0; j < M + N; j++) {
     A[j] = 10 * j;
     B[j] = 20 * j;
   }
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55997

Reviewed By: bertmaher

Differential Revision: D27841270

Pulled By: navahgar

fbshipit-source-id: a64e4503b7f8f28bc0c9823225bc923177bb4c2e
2021-04-18 11:14:26 -07:00
Mikhail Zolotukhin
556dfcb0db [TensorExpr] Re-enable "LoopNest.VectorizeUse" test. (#56094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56094

Now that FunctionCalls have been merged with Loads, vectorization of intermediate values works automatically.

Fixes #53553.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D27781519

Pulled By: ZolotukhinM

fbshipit-source-id: 1ed68ca2399e9bd4598639bd6dd8f369365f0ef0
2021-04-14 21:39:03 -07:00
Mikhail Zolotukhin
7ab654afd7 [TensorExpr] Rename Tensor::call to Tensor::load to be consistent with Buf and Placeholder. (#55826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55826

It's a mechanical change.
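
A small sketch of what the rename looks like at a use site; the include path, `Compute` calls, and dimensions are assumptions for illustration:
```
#include <torch/csrc/jit/tensorexpr/tensor.h>

using namespace torch::jit::tensorexpr;

void renameExample() {
  Tensor* A = Compute("A", {{64, "i"}}, [](const VarHandle& i) {
    return i * 2;
  });
  // Previously written as A->call(i); after this change it is A->load(i),
  // matching Buf and Placeholder.
  Tensor* B = Compute("B", {{64, "i"}}, [&](const VarHandle& i) {
    return A->load(i) + 1;
  });
  (void)B;
}
```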

Differential Revision: D27717777

Test Plan: Imported from OSS

Reviewed By: navahgar

Pulled By: ZolotukhinM

fbshipit-source-id: fbc1bb99602250c706cf2c8c2684119c323e4d51
2021-04-13 12:08:53 -07:00
Mikhail Zolotukhin
1263448cb2 [TensorExpr] Remove mask field from Load and Store classes. (#55825)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55825

The mask has never been used (during vectorization we generate an explicit `IfThenElse` construct when we need to mask out some elements). This PR removes it and cleans up all traces of it from the tests.

Differential Revision: D27717776

Test Plan: Imported from OSS

Reviewed By: navahgar

Pulled By: ZolotukhinM

fbshipit-source-id: 41d1feeea4322da75b3999d661801c2a7f82b9db
2021-04-13 12:08:51 -07:00
Raghavan Raman
d805908c34 [NNC] API to reorder multiple loops (#55568)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/52690

This PR adds the following APIs:

```
static bool areLoopsPerfectlyNested(const std::vector<For*>& loops);

static std::vector<For*> reorder(
      const std::vector<For*>& loops,
      const std::vector<size_t>& permutation);
```

The first API checks whether the given list of loops is perfectly nested. The second API reorders the given loops according to the specified permutation.
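
A brief usage sketch of the two APIs; the surrounding function and the choice of permutation are assumptions for illustration:
```
#include <torch/csrc/jit/tensorexpr/loopnest.h>

using namespace torch::jit::tensorexpr;

void reorderExample(For* i, For* j, For* k) {
  std::vector<For*> loops = {i, j, k};
  if (LoopNest::areLoopsPerfectlyNested(loops)) {
    // Reorder the nest according to the given permutation of positions; the
    // returned vector holds the same loops in their new nesting order.
    std::vector<For*> reordered = LoopNest::reorder(loops, {2, 0, 1});
    (void)reordered;
  }
}
```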

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55568

Reviewed By: albanD

Differential Revision: D27689734

Pulled By: navahgar

fbshipit-source-id: dc1bffdbee068c3f401188035772b41847cbc7c6
2021-04-12 18:12:24 -07:00
Jeffrey Wan
3f9492c8b3 [Hackathon] Modernize API used in NNC C++ tests (1/3) (#55512)
Summary:
Partially fixes https://github.com/pytorch/pytorch/issues/55203

Fixes issues (1) and (2) in the following tests:
the tests in test/cpp/tensorexpr/test_loopnest.cpp from the beginning through LoopNestReorderLongStringFull (inclusive).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55512

Reviewed By: mrshenli

Differential Revision: D27630679

Pulled By: soulitzer

fbshipit-source-id: b581aaea4f5f54b3285f0348aa76e99779418f80
2021-04-08 08:34:25 -07:00
Brian Hirsh
dd2bccafc5 nnc hackathon - use new APIs in tests (#55497)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55497

Migrating some of the NNC APIs used in testing, from this issue: https://github.com/pytorch/pytorch/issues/55203

I covered the second half of `test_loopnest.cpp` and migrated items (1) and (2) from the above issue: `LoopNest::getLoopStmtsFor`, `splitWithTail`, and `splitWithMask`.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D27628625

Pulled By: bdhirsh

fbshipit-source-id: ec15efba45fae0bbb442ac3577fb9ca2f8023c2d
2021-04-07 13:03:25 -07:00
Mikhail Zolotukhin
0b75f862c7 [TensorExpr] Nuke FunctionCall. (#54998)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54998

The only reason we couldn't use Load instead of FunctionCall was DepTracker. Now that it is gone, we can finally replace FunctionCall with Load.

Test Plan: Imported from OSS

Reviewed By: bertmaher, pbelevich

Differential Revision: D27446412

Pulled By: ZolotukhinM

fbshipit-source-id: 9183ae5541c2618abc9026b1dc4c4c9fab085d47
2021-04-01 19:47:59 -07:00
Mikhail Zolotukhin
688e350725 [TensorExpr] Nuke DepTracker and findAllNeededTensors. (#54997)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54997

DepTracker was used to automatically pull in dependent computations from the output ones. While this seems quite convenient, it led to several architectural issues, which are fixed in this stack.

DepTracker worked on Tensors, each of which is a pair of a Buf and a Stmt. However, the Stmt could become stale, and there was no way to reliably update the corresponding Tensor. We are now using Bufs and Stmts directly and moving away from Tensors to avoid these problems.

Removing DepTracker allowed us to unify Loads and FunctionCalls, which were essentially duplicates of each other.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D27446414

Pulled By: ZolotukhinM

fbshipit-source-id: a2a32749d5b28beed92a601da33d126c0a2cf399
2021-04-01 19:46:26 -07:00
Bert Maher
e4d19798f3 [nnc][tests] Convert a bunch of FileCheck to checkIR
Summary:
I added a helper that converts a Stmt to a string and runs FileCheck on it, so I started using it in a bunch of places. I replaced about half of the current uses, got tired, started to write a Perl script to automate it, realized that was hard, and decided to give up for a bit. But this cleans up some of the tests, so it seems easy to review and worth landing.

Test Plan: test_tensorexpr --gtest_filter=LoopNest.*

Reviewed By: navahgar

Differential Revision: D27375866

fbshipit-source-id: 15894b9089dec5cf25f340fe17e6e54546a64257
2021-03-26 20:27:50 -07:00
Bert Maher
24f589df44 [nnc] Disabled test case for failure in implementing conv1d (#54756)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54756

We have multiple bugs here, one relating to index flattening and the
other to computeAt.
ghstack-source-id: 125054729

Test Plan: yikes

Reviewed By: ZolotukhinM

Differential Revision: D27354082

fbshipit-source-id: 8b15bac28e3eba4629881ae0f3bd143636f65ad7
2021-03-26 20:27:48 -07:00
Bert Maher
e542e67253 [nnc] Test case for computeAt with reduction (#54755)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54755

As title.  A step on the way to using computeAt to optimize
convolution.
ghstack-source-id: 125054730

Test Plan: new test

Reviewed By: ZolotukhinM

Differential Revision: D27353663

fbshipit-source-id: 930e09d96d1f74169bf148cd30fc195c6759a3e9
2021-03-26 20:25:18 -07:00
Raghavan Raman
601e79200d [NNC] Implementing LoopFusion (#54461)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54337

This PR adds a new API to NNC to perform loop fusion.

```
static For* fuseLoops(const std::vector<For*>& loops);
```

Loop fusion is done only when all the conditions below are satisfied.
  * All the loops have the same parent.
  * There are no statements between these loops in their parent body.
  * The start bounds are the same for all loops.
  * The stop bounds are the same for all loops.
  * Fusing the loops does not violate or add any dependencies.

This PR also adds an API to check for partial overlaps in `buffer_inference.h` and fixes a bug in `mem_dependency_checker.cpp`.
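
An illustrative (assumed) pair of loops that meets the first four conditions above but fails the last one, written as plain C++ rather than NNC IR:
```
void fusionWouldBeRejected(float* A, const float* B, float* C, int N) {
  for (int i = 0; i < N - 1; i++) {
    A[i] = B[i];
  }
  for (int i = 0; i < N - 1; i++) {
    // Reads A[i + 1], which the first loop writes on a *later* iteration.
    // Fusing the two bodies into one loop would read the value before it is
    // written, so fuseLoops must refuse to fuse these loops.
    C[i] = A[i + 1];
  }
}
```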

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54461

Reviewed By: bertmaher

Differential Revision: D27254888

Pulled By: navahgar

fbshipit-source-id: c21b027d738e5022e9cb88f6f72cd9e255bdb15e
2021-03-23 21:20:00 -07:00
Raghavan Raman
4b2abc4b8e [NNC] Adding API to distribute loops (#53865)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53864

This PR adds the following APIs that perform loop distribution to `LoopNest`:
```
static std::vector<For*> distributeLoop(For* loop, const std::unordered_set<Stmt*>& pivots);
static std::vector<For*> distributeLoop(For* loop);
static std::vector<For*> distributeLoopOverInnerLoops(For* loop);
```

* The first method distributes the given loop over its body by splitting after every given pivot stmt.
* The second method distributes the given loop over every stmt in its body.
* The last method distributes the given loop over its body by splitting after every `For` stmt in its body.
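
An illustrative before/after for the second form, `distributeLoop(For* loop)`, using plain C++ loops as a stand-in for the NNC IR; this sketch is an assumption, not the PR's own example:
```
// Before: a single loop whose body has two statements.
void before(float* A, const float* B, float* C, int N) {
  for (int i = 0; i < N; i++) {
    A[i] = B[i] + 1;
    C[i] = A[i] * 2;
  }
}

// After distributeLoop(loop): one loop per statement of the original body.
void after(float* A, const float* B, float* C, int N) {
  for (int i = 0; i < N; i++) {
    A[i] = B[i] + 1;
  }
  for (int i = 0; i < N; i++) {
    C[i] = A[i] * 2;
  }
}
```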

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53865

Reviewed By: mruberry

Differential Revision: D27075006

Pulled By: navahgar

fbshipit-source-id: 031746aad619fe84c109e78b53387535e7f77cef
2021-03-18 07:27:39 -07:00
Bert Maher
a852fdb6b5 [nnc] Test for using int64 dimensions (#54094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54094

We should be able to use 64-bit integers for loop boundaries and
buffer/tensor indexing.
ghstack-source-id: 124116846

Test Plan: New tests, disabled

Reviewed By: ZolotukhinM

Differential Revision: D27094934

fbshipit-source-id: a53de21a0ef523ea3560d5dd4707df50624896ef
2021-03-17 10:59:26 -07:00