Commit Graph

137 Commits

Shashank Chaudhry
06d1be2447 [NOOP][clangformat][codemod] Enable CLANGFORMAT for caffe2/caffe2/* (#67624)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67624

Test Plan: Visual inspection. Sandcastle.

Reviewed By: malfet

Differential Revision: D31986628

fbshipit-source-id: c872bded7325997a2945dbf5d4d052628dcb3659
2021-11-02 22:14:04 -07:00
Gary Miguel
9deb602726 [ONNX] Use Reciprocal operator instead of Div(1, x). (#65382) (#67271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67271

* [ONNX] Use Reciprocal operator instead of Div(1, x).

This is a more readable and perhaps more performant way to export
torch.reciprocal.

* Use Reciprocal in the caffe2 operator when importing from ONNX
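A hedged sketch of the export change (the `FakeGraph` stub below stands in for the real `torch.onnx` graph builder; only the emitted node names are illustrative):

```python
# Stub of the torch.onnx graph-building interface, just to show the
# difference in emitted ONNX nodes before and after this change.
class FakeGraph:
    def op(self, name, *args):
        return (name, args)

g = FakeGraph()

def reciprocal_old(g, x):
    # old export: Div(Constant(1), x)
    one = g.op("Constant", 1.0)
    return g.op("Div", one, x)

def reciprocal_new(g, x):
    # new export: a single native Reciprocal node
    return g.op("Reciprocal", x)
```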

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962519

Pulled By: malfet

fbshipit-source-id: d926e75b1c8312b9a980c9a1207a1a93ba0c71e0

Co-authored-by: take-cheeze <takechi101010@gmail.com>
2021-10-28 08:01:21 -07:00
Nikita Shulga
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
The GoogleTest `TEST` macro is non-compliant with this check, as is `DEFINE_DISPATCH`.

All changes but the ones to `.clang-tidy` are generated using following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h"|xargs grep cppcoreguidelines-avoid-non-const-global-variables|cut -f1 -d:|sort|uniq`;  do sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
neginraoof
599f5058cf [ONNX] Update ONNX to rel-1.9 (#55889) (#57080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57080

The ONNX optimizer was removed in ONNX 1.9.
This PR removes the ONNX optimizer from the C++ code path and uses a `try-except` block in Python to stay compatible with both ONNX 1.8 and 1.9.
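A minimal sketch of the compatibility pattern described here (the helper name `maybe_optimize` is hypothetical, not the actual PR code): `onnx.optimizer` exists in ONNX <= 1.8 and is gone in 1.9, so probing for it with `try-except` keeps one code path working against both.

```python
try:
    import onnx.optimizer as onnx_optimizer  # present in ONNX <= 1.8
    HAS_ONNX_OPTIMIZER = True
except ImportError:
    onnx_optimizer = None                    # removed in ONNX 1.9
    HAS_ONNX_OPTIMIZER = False

def maybe_optimize(model, passes):
    # Run the legacy optimizer when available; otherwise pass through.
    if HAS_ONNX_OPTIMIZER:
        return onnx_optimizer.optimize(model, passes)
    return model
```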

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D28467330

Pulled By: malfet

fbshipit-source-id: 5e4669dd0537648898e593f9e253da18d6dc7568

Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-06-02 08:27:17 -07:00
Nikita Shulga
3a66a1cb99 [clang-tidy] Exclude cppcoreguidelines-avoid-magic-numbers (#57841)
Summary:
Add a cppcoreguidelines-avoid-magic-numbers exclusion to clang-tidy.
Remove existing NOLINT comments using the following script:
```
for file in `git ls-files | grep -v \.py`; do gsed '/^ *\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)/d' -i  $file; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57841

Reviewed By: samestep

Differential Revision: D28295045

Pulled By: malfet

fbshipit-source-id: 7c6e8d1213c9593f169ed3df6a916498f1a97163
2021-05-07 20:02:33 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Igor Sugak
51bca2ca4d [caffe2] fix -Wrange-loop-construct in onnx_exporter.cc (#56759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56759

```
 caffe2/caffe2/onnx/onnx_exporter.cc:415:21: error: loop variable 'it' creates a copy from type 'const std::pair<const std::basic_string<char>, int>' [-Werror,-Wrange-loop-construct]
    for (const auto it : blob_versions) {
                    ^
caffe2/caffe2/onnx/onnx_exporter.cc:415:10: note: use reference type 'const std::pair<const std::basic_string<char>, int> &' to prevent copying
    for (const auto it : blob_versions) {
         ^~~~~~~~~~~~~~~
                    &
```

Reviewed By: yfeldblum

Differential Revision: D27960126

fbshipit-source-id: fd46f37cf1aca9441209de8eb06add204046db95
2021-04-24 13:13:51 -07:00
Sam Estep
5bcbbf5373 Lint trailing newlines (#54737)
Summary:
*Context:* https://github.com/pytorch/pytorch/issues/53406 added a lint for trailing whitespace at the ends of lines. However, in order to pass FB-internal lints, that PR also had to normalize the trailing newlines in four of the files it touched. This PR adds an OSS lint to normalize trailing newlines.

The changes to the following files (made in 54847d0adb9be71be4979cead3d9d4c02160e4cd) are the only manually-written parts of this PR:

- `.github/workflows/lint.yml`
- `mypy-strict.ini`
- `tools/README.md`
- `tools/test/test_trailing_newlines.py`
- `tools/trailing_newlines.py`

I would have liked to make this just a shell one-liner like the other three similar lints, but nothing I could find quite fit the bill. Specifically, all the answers I tried from the following Stack Overflow questions were far too slow (at least a minute and a half to run on this entire repository):

- [How to detect file ends in newline?](https://stackoverflow.com/q/38746)
- [How do I find files that do not end with a newline/linefeed?](https://stackoverflow.com/q/4631068)
- [How to list all files in the Git index without newline at end of file](https://stackoverflow.com/q/27624800)
- [Linux - check if there is an empty line at the end of a file [duplicate]](https://stackoverflow.com/q/34943632)
- [git ensure newline at end of each file](https://stackoverflow.com/q/57770972)
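The check itself is simple; a minimal Python sketch (hypothetical, not the actual `tools/trailing_newlines.py`) of the property being linted: a non-empty file should end with exactly one newline.

```python
def bad_trailing_newlines(data: bytes) -> bool:
    """Return True if the file content has a missing or extra trailing newline."""
    if data == b"":
        return False                   # empty files are fine
    if not data.endswith(b"\n"):
        return True                    # missing final newline
    return data.endswith(b"\n\n")      # extra blank line(s) at EOF
```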

To avoid giving false positives during the few days after this PR is merged, we should probably only merge it after https://github.com/pytorch/pytorch/issues/54967.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54737

Test Plan:
Running the shell script from the "Ensure correct trailing newlines" step in the `quick-checks` job of `.github/workflows/lint.yml` should print no output and exit in a fraction of a second with a status of 0. That was not the case prior to this PR, as shown by this failing GHA workflow run on an earlier draft of this PR:

- https://github.com/pytorch/pytorch/runs/2197446987?check_suite_focus=true

In contrast, this run (after correcting the trailing newlines in this PR) succeeded:

- https://github.com/pytorch/pytorch/pull/54737/checks?check_run_id=2197553241

To unit-test `tools/trailing_newlines.py` itself (this is run as part of our "Test tools" GitHub Actions workflow):
```
python tools/test/test_trailing_newlines.py
```

Reviewed By: malfet

Differential Revision: D27409736

Pulled By: samestep

fbshipit-source-id: 46f565227046b39f68349bbd5633105b2d2e9b19
2021-03-30 13:09:52 -07:00
Jane Xu
71ca600af9 Renaming CAFFE2_API to TORCH_API (#49496)
Summary:
Since caffe2 and torch have been consolidated, CAFFE2_API should be merged with TORCH_API. Addresses a TODO.

Manually edited some references of the removed `CAFFE2_API`:
* `CONTRIBUTING.md`
* `caffe2/proto/CMakeLists.txt`
* `cmake/ProtoBuf.cmake`
* `c10/macros/Export.h`
* `torch/csrc/WindowsTorchApiMacro.h`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49496

Reviewed By: malfet, samestep

Differential Revision: D25600726

Pulled By: janeyx99

fbshipit-source-id: 7e068d959e397ac183c097d7e9a9afeca5ddd782
2020-12-18 10:54:50 -08:00
Hao Lu
53dff784e2 [caffe2] Fix inplace ops in onnx::SsaRewrite (#46134)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46134

Make sure in-place ops stay in-place after SsaRewrite. This seems to break the premise of SSA, but it's necessary to ensure correctness. Note that we only preserve the in-place ops that enforce in-place execution; ops like `Relu` merely allow it.
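A toy sketch of the tension described above (hypothetical structures, not real caffe2 protos): SSA rewriting gives every written blob a fresh versioned name, but an op that enforces in-place execution needs its output name to stay equal to its input name, so such ops must be exempted from renaming.

```python
def ssa_rewrite(ops, enforced_inplace=()):
    """ops: list of (op_type, input_blob, output_blob) triples."""
    version = {}
    renamed = []
    for op_type, inp, out in ops:
        inp_name = f"{inp}_{version.get(inp, 0)}"
        if op_type in enforced_inplace and inp == out:
            out_name = inp_name                      # keep in-place: same name
        else:
            version[out] = version.get(out, 0) + 1   # fresh SSA version
            out_name = f"{out}_{version[out]}"
        renamed.append((op_type, inp_name, out_name))
    return renamed
```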

(Note: this ignores all push blocking failures!)

Reviewed By: yinghai

Differential Revision: D24234957

fbshipit-source-id: 274bd3ad6227fce6a98e615aad7e57cd2696aec3
2020-10-22 13:26:31 -07:00
Yinghai Lu
8850fd1952 Add python inferface to create OfflineTensor (#42516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42516

As titled. We need it for some scripts.

Reviewed By: houseroad

Differential Revision: D22918112

fbshipit-source-id: 8a1696ceeeda67a34114bc57cb52c925711cfb4c
2020-08-04 01:31:34 -07:00
DeepakVelmurugan
fbb052c2cc BlackList to BlockList (#42279)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41701: rename the blackList convention to blockList.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42279

Reviewed By: VitalyFedyunin

Differential Revision: D22843178

Pulled By: malfet

fbshipit-source-id: c9be5a5f084dfd0e46545d4a3d1124ef59277604
2020-07-30 18:06:49 -07:00
Hao Lu
4f163df41a [caffe2] Special handling of If/AsyncIf op in RemoveOpsByType (#42286)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42286

One more bug to fix. Operators such as If and AsyncIf need special treatment not just in `onnx::SsaRewrite`, but also in `RemoveOpsByType`. The solution needs two steps:
1) add external inputs/outputs of the subnets of If/AsyncIf op to the inputs/outputs of the op
2) if the inputs/outputs of the If/AsyncIf op need to be renamed as a result, the same inputs/outputs of the subnets need to be renamed as well.

I also added unit tests to cover this corner case.
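A toy sketch of the two steps above (hypothetical dict-based structures, not real caffe2 protos) for an If/AsyncIf-style op that carries a subnet as an argument:

```python
def hoist_subnet_io(op, rename=None):
    """Step 1: add the subnet's external I/O to the op's own inputs/outputs.
    Step 2: if the op's I/O is renamed, apply the same renames inside the subnet."""
    rename = rename or {}
    sub = op["subnet"]
    op["inputs"] = sorted(set(op["inputs"]) | set(sub["external_inputs"]))
    op["outputs"] = sorted(set(op["outputs"]) | set(sub["external_outputs"]))
    # propagate any renaming of the op's I/O into the subnet as well
    for key in ("external_inputs", "external_outputs"):
        sub[key] = [rename.get(n, n) for n in sub[key]]
    op["inputs"] = [rename.get(n, n) for n in op["inputs"]]
    op["outputs"] = [rename.get(n, n) for n in op["outputs"]]
    return op
```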

Test Plan:
```
buck test //caffe2/caffe2/fb/predictor:black_box_predictor_test

mkdir /tmp/models
rm -rf /tmp/$USER/snntest
rm -rf /tmp/snntest
buck run mode/opt admarket/lib/ranking/prediction_replayer/snntest_replayer_test/tools:snntest_replay_test -- --serving_paradigm=USER_AD_PRECOMPUTATION_DSNN
```

Differential Revision: D22834028

fbshipit-source-id: c070707316cac694f452a96e5c80255abf4014bc
2020-07-30 02:02:20 -07:00
Hao Lu
5336ccc1b2 [BugFix] Fix bug in onnx::SsaRewrite (#42148)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42148

Differential Revision: D22687388

fbshipit-source-id: facf7a186dd48d6f919d0ff5d42f756977c3f9f4
2020-07-28 01:44:47 -07:00
Stephen Chen
f805184165 onnxifi: make it work with AsyncIf
Summary:
The onnxifi path didn't correctly handle the SSA input/output name rewrite for the AsyncIf op. Add support for it.

Also fixed a place where we lost the net type during the onnxifi transform.

Test Plan: Load 163357582_593, a multi-feed model that uses AsyncIf. This used to fail with caffe2 not finding some blobs in the workspace. Now it works.

Reviewed By: dhe95

Differential Revision: D21268230

fbshipit-source-id: ce7ec0e952513d0f251df1bfcfb2b0250f51fd94
2020-07-27 18:27:35 -07:00
Hao Lu
39b4701d31 [caffe2][redo] Reimplement RemoveOpsByType with SSA (#41606)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41606

The previous diff (D22220798 (59294fbbb9) and D22220797) was recently reverted (D22492356 (28291d3cf8), D22492355) because of a bug associated with the op AsyncIf. The AsyncIf op has net_defs as args and the SSA rewriting didn't take that into account. It has a special path for the op If, but not for AsyncIf. Several changes I made to fix the bug:
1) Add op AsyncIf to the special path for If op in SSA rewriting
2) clear inputs/outputs of the netdefs that are args in If/AsyncIf ops because they're no longer valid
3) revert renamed inputs/outputs in the arg netdefs that are in the external_outputs in the parent netdef

2) and 3) are existing bugs in the `SsaRewrite` function that were just never exposed before.

The algorithm for `RemoveOpsByType` is the same as in my previous diff D22220798 (59294fbbb9). The only new changes in this diff are in `onnx::SsaRewrite` and a few newly added unit tests.

(Note: this ignores all push blocking failures!)

Reviewed By: yinghai

Differential Revision: D22588652

fbshipit-source-id: ebb68ecd1662ea2bae14d4be8f61a75cd8b7e3e6
2020-07-17 16:06:43 -07:00
Shai Szulanski
0ddaaf6a92 [codemod][caffe2] Run clang-format - 5/7
Summary:
This directory is opted-in to clang-format but is not format-clean. This blocks continuous formatting from being enabled on fbcode, and causes hassle for other codemods that leave inconsistent formatting. This diff runs clang-format, which is widely used and considered safe.

If you are unhappy with the formatting of a particular block, please *accept this diff* and then in a stacked commit undo the change and wrap that code in `// clang-format off` and `// clang-format on`, or `/* clang-format off */` and `/* clang-format on */`.

drop-conflicts

Test Plan: sandcastleit

Reviewed By: jerryzh168

Differential Revision: D22311706

fbshipit-source-id: 1ca59a82e96156a4a5dfad70ba3e64d44c5e762a
2020-06-30 15:45:11 -07:00
Kurt Mohler
f9eb8824f1 Remove datatype from Storage and StorageImpl (#38870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38870

* Removed dtype data member from StorageImpl
* Removed any methods or method arguments in Storage/StorageImpl that deal with dtypes
* Update all callers of the changed API

Part of issue https://github.com/pytorch/pytorch/issues/33950
Original PR: https://github.com/pytorch/pytorch/pull/38038

Reviewed By: albanD

Differential Revision: D21549645

Pulled By: ezyang

fbshipit-source-id: 4289b356c55ff6b9530376a79343b99b540ee3de
2020-05-21 15:26:08 -07:00
peng
6dd1beaaa8 To fix caffe2 model with Copy OP cannot export to onnx model (#37144)
Summary:
Fix: a caffe2 model with a Copy op could not be exported to an ONNX model.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37144

Reviewed By: houseroad

Differential Revision: D21252421

Pulled By: yinghai

fbshipit-source-id: 4f1077188f36b0691d199e418880bbb27f11032d
2020-05-04 11:34:09 -07:00
Brian Wignall
f326045b37 Fix typos, via a Levenshtein-type corrector (#31523)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos, with https://github.com/bwignall/typochecker to help automate the checking.

Uses an updated version of the tool used in https://github.com/pytorch/pytorch/pull/30606 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31523

Differential Revision: D19216749

Pulled By: mrshenli

fbshipit-source-id: 7fd489cb9a77cd7e4950c1046f925d57524960ea
2020-01-17 16:03:19 -08:00
Sebastian Messmer
643ca5def2 Replace c10::guts::stuff with std::stuff (#30915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30915

Since we now have C++14, we don't need these c10::guts helpers anymore
ghstack-source-id: 95777609

Test Plan: waitforsandcastle

Differential Revision: D18869639

fbshipit-source-id: 97716f932297c64c6e814410ac47b444c33d4e2e
2019-12-16 13:57:19 -08:00
Supriya Rao
e42af97349 Add quantized concat conversion (#30887)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30887

Support to convert quantized concat from pytorch to caffe2

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_cat

Imported from OSS

Differential Revision: D18855676

fbshipit-source-id: 5d0cf3f03c61819e168b080afa368b1255d0419c
2019-12-10 15:46:16 -08:00
Supriya Rao
968c0d4a46 Add support for converting quantized AvgPool2d and Reshape operations (#30490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30490

Add symbolic mapping to Int8AvgPool2d and Int8Reshape op in C2

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps

Imported from OSS

Differential Revision: D18740520

fbshipit-source-id: 1606125500c4b549fbc984e7929b7fd5204396a0
2019-12-02 18:15:01 -08:00
Yinghai Lu
c60bf2704a Support Offline Tensors through ONNXIFI layer
Summary:
Previous import was b2ec1a8041879b7be98d81387a14cae895f952f4

Included changes:
- **[97fe555](https://github.com/houseroad/foxi/commit/97fe555)**: Add deferred weight reader pointer when initializing the graph (#15) <Yinghai Lu>
- **[ba2faf7](https://github.com/houseroad/foxi/commit/ba2faf7)**: Add status and timeout to events (#14) <Jack Montgomery>

Test Plan: kicksandcastle

Reviewed By: ipiszy

Differential Revision: D18231697

fbshipit-source-id: 7566e2438d2b57f0feaadcd51f55a03552adeab9
2019-10-31 10:33:42 -07:00
Yinghai Lu
790563b374 Add OfflineTensor (#28855)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28855

Resubmit:
OfflineTensor will be a shell to just carry the shape and dtype. No data will be stored. This should help us plumb through the onnxifi process.

Test Plan:
```
buck test caffe2/caffe2/fb/opt:onnxifi_with_offline_tensor_test
```

Reviewed By: ipiszy, ChunliF

Differential Revision: D18212824

fbshipit-source-id: 5c8aaed2ef11d719dfa2a2901875efd66806ea56
2019-10-29 21:59:57 -07:00
Michael Suo
4045d6c3fa Revert D18187208: Add OfflineTensor
Test Plan: revert-hammer

Differential Revision:
D18187208

Original commit changeset: 57c70f6f9897

fbshipit-source-id: d13b089ceb645b2a9852923cd21a752a2f45a15b
2019-10-29 14:20:46 -07:00
Yinghai Lu
22d70bc1ec Add OfflineTensor
Summary: OfflineTensor will be a shell to just carry the shape and dtype. No data will be stored. This should help us plumb through the onnxifi process.

Test Plan:
```
buck test caffe2/caffe2/fb/opt:onnxifi_with_offline_tensor_test
```

Reviewed By: ChunliF, zrphercule

Differential Revision: D18187208

fbshipit-source-id: 57c70f6f9897a5fc66580c81295db108acd03862
2019-10-29 13:04:00 -07:00
Lu Fang
34662f77c6 Revert D17159707: [pytorch][PR] [ONNX] Fixed Select symbolic to export slice when index = negative one
Test Plan: revert-hammer

Differential Revision:
D17159707

Original commit changeset: 2c3b27542108

fbshipit-source-id: accce910abdbe13270d0f592810a48b1dabe4b01
2019-10-08 01:59:10 -07:00
Negin Raoof
16454095e0 Fixed Select symbolic to export slice when index = negative one (#25273)
Summary:
Exporting torch.select when the index is negative one (x[:,-1]) was broken. This PR fixes the symbolic function for select.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25273
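A hedged illustration (with numpy standing in for the exported graph semantics) of the behavior being fixed: `x[:, -1]` is a select on dim 1 with index -1, and when exported via Slice the negative index must map to the slice `[-1:]` on that dim followed by a squeeze.

```python
import numpy as np

x = np.arange(20).reshape(4, 5)
selected = x[:, -1]                        # select(dim=1, index=-1)
via_slice = np.squeeze(x[:, -1:], axis=1)  # Slice + Squeeze equivalent
assert np.array_equal(selected, via_slice)
```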

Reviewed By: hl475

Differential Revision: D17159707

Pulled By: houseroad

fbshipit-source-id: 2c3b275421082758f1b63c1c9b6e578f03ca9f76
2019-10-07 14:24:34 -07:00
Zachary DeVito
4a754dc3e3 cleanup warnings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24133

Test Plan: Imported from OSS

Differential Revision: D16746249

Pulled By: zdevito

fbshipit-source-id: 051f048b03043d6947544cd02ae44288bd439ef9
2019-08-12 16:12:30 -07:00
Bowen Bao
638d0b3705 Support ONNX export Multinomial (#23581)
Summary:
cc bddppq spandantiwari
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23581

Differential Revision: D16584853

Pulled By: bddppq

fbshipit-source-id: 01c066e86a0ad071361cd67b8c3925bfb6b84a4a
2019-08-02 11:06:21 -07:00
BowenBao
a35136dd73 Add support for onnx tensor index export (#21716)
Summary:
Support exporting
* Standard tensor indexing like
```
x = torch.ones(4, 5)
ind = torch.tensor([0, 1])

return x[ind]
```
* [Advanced indexing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing) like
```
x = torch.ones(4,5,6,7,8)
ind1 = torch.tensor([0, 1])
ind2 = torch.tensor([[3], [2]])
ind3 = torch.tensor([[2, 2], [4, 5]])

return x[2:4, ind1, None, ind2, ind3, :]
```
It would be ideal if ONNX could natively support indexing in future opsets, but for opset <= 10 it will always need this kind of workaround.

There are still various limitations, such as no support for advanced indexing with negative indices or for mask indices of rank > 1. My feeling is that these are less common cases that require great effort to support with the current opset, and it's better not to make the index export more cumbersome than it already is.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21716

Reviewed By: zrphercule

Differential Revision: D15902199

Pulled By: houseroad

fbshipit-source-id: 5f1cc687fc9f97da18732f6a2c9dfe8f6fdb34a6
2019-07-23 17:11:28 -07:00
BowenBao
eb5137a5d1 Export torch.arange to ONNX (#22601)
Summary:
Some overlap with https://github.com/pytorch/pytorch/pull/21716 regarding caffe2 nonzero. Will rebase the other one accordingly, depending on which gets merged first.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22601

Reviewed By: zrphercule

Differential Revision: D16224660

Pulled By: houseroad

fbshipit-source-id: dbfd1b8776cb626601e0bf83b3fcca291806e653
2019-07-22 20:30:39 -07:00
hexiaoting
34536e207a Fix: convert Onnx DynamicSlice operator with 4 inputs to caffe2 fa… (#20846)
Summary:
I reported issue https://github.com/pytorch/pytorch/issues/20743
and made this pull request to fix it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20846

Reviewed By: zrphercule

Differential Revision: D15569135

Pulled By: houseroad

fbshipit-source-id: 96a2c818ef666a7d79b96decfa347d7154b34d5c
2019-06-19 00:09:15 -07:00
Yinghai Lu
7c40576c61 Save the weight shape info the first time we have chance to extract it (#21233)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21233

It is possible that OnnxifiOp is created in a thread where weights have already been cleaned from the workspace, which is a legitimate use case, as we can create the backend once and lower all the weights. So we need to extract the weight shape info the first time we create the backend and save it.

Reviewed By: bertmaher, rdzhabarov

Differential Revision: D15587237

fbshipit-source-id: 1f264dc32c0398c42b618e9c41c119eb13e1c9f1
2019-06-01 12:55:29 -07:00
Lu Fang
b3c35e5202 Export randn_like in ONNX exporter (#20093)
Summary:
As a workaround for the dynamic shape case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20093

Reviewed By: zrphercule

Differential Revision: D15220661

Pulled By: houseroad

fbshipit-source-id: de271fce542be380bd49a3c74032c61f9aed3b67
2019-05-06 14:54:46 -07:00
Rui Zhu
2f73b3d26e Add if ops support for onnxifi and ssa-rewrite (#19585)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19585

Originally we unrolled every If op into many different subnets.
Now we no longer unroll; instead we add all external inputs of its subnets to the If op and SSA-rewrite all external inputs/outputs. That is enough.

Reviewed By: yinghai

Differential Revision: D15038139

fbshipit-source-id: 8532216d8749068acd5558ad0d8cb1d98463a063
2019-04-24 11:01:13 -07:00
bddppq
1989716ae5 Resubmit PR-18512: Improved onnx export for 3 onnx ops (#18571)
Summary:
Fix ROCm CI failure
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18571

Differential Revision: D14669323

Pulled By: bddppq

fbshipit-source-id: 022afe5c20e680295c9cfdfe1ec14650305955a8
2019-03-28 18:12:49 -07:00
Junjie Bai
77280b11e3 Revert D14635130: Improved onnx export for 3 onnx ops.
Differential Revision:
D14635130

Original commit changeset: d54a2b6e2950

fbshipit-source-id: f624e2befdde245cb88435a95508b2a8e6b12e61
2019-03-28 10:26:34 -07:00
Benoit Steiner
eee760dbd3 Improved onnx export for 3 onnx ops. (#18512)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18512

Ceil and Floor have been supported since version 6 of ONNX: export them using the native ONNX ops instead of an ATen op.
Similarly, support for the Where op was added in version 9, so we don't need to wrap that op in an ATen op either.
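A hedged sketch of what these symbolics look like after the change (the `FakeGraph` stub stands in for the real `torch.onnx` graph builder; the point is that each op now emits its native ONNX node rather than an opaque ATen fallback):

```python
class FakeGraph:
    def op(self, name, *args):
        return name  # return just the emitted ONNX node name

g = FakeGraph()

def ceil(g, x):
    return g.op("Ceil", x)     # previously wrapped in an ATen fallback node

def floor(g, x):
    return g.op("Floor", x)

def where(g, cond, a, b):
    return g.op("Where", cond, a, b)
```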

Reviewed By: houseroad

Differential Revision: D14635130

fbshipit-source-id: d54a2b6e295074a6214b5939b21051a6735c9958
2019-03-28 08:55:21 -07:00
Yinghai Lu
a87d475c2f Do not rename net boundary inputs/outputs during ssaRewrite. (#17545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17545

This diff avoids renaming boundary inputs of the net during the onnxifi transform.
It also removes adding mappings for the initializers during onnxifi op creation,
thus getting rid of the mapped workspace creation during onnxifi op creation.

Reviewed By: zrphercule

Differential Revision: D14243161

fbshipit-source-id: 6eafa920c45f6a6bfacbbb443e8e84cf9778644c
2019-03-06 14:26:58 -08:00
Sebastian Messmer
6706e9af19 Make C10_MOBILE consistent with how feature macros are usually used (#17481)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17481

Usually, feature macros are either defined or undefined and checked accordingly.
C10_MOBILE was a weird special case that was always defined but either defined to 1 or to 0.

This caused a lot of confusion for me when trying to disable something in the mobile build: it also got disabled in the server build (because I was using `ifdef`). Also, I found a place in the existing code base that made
that wrong assumption and used the macro incorrectly, see https://fburl.com/y4icohts

Reviewed By: dzhulgakov

Differential Revision: D14214825

fbshipit-source-id: f3a155b6d43d334e8839e2b2e3c40ed2c773eab6
2019-02-27 17:57:51 -08:00
Tongliang Liao
65ecef1509 Export ElementwiseLinear to ONNX (Mul + Add). (#17411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17411

Reshape-based approach to support dynamic shapes.
The first Reshape flattens the inner dimensions and the second one recovers the actual shape.
No Shape/Reshape will be generated unless necessary.

![image](https://user-images.githubusercontent.com/5203025/52215001-114ace80-28ce-11e9-815f-28ad190d3189.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16716

Reviewed By: zrphercule

Differential Revision: D14094532

Pulled By: houseroad

fbshipit-source-id: bad6a1fbf5963ef3dd034ef4bf440f5a5d6980bc
2019-02-25 08:11:13 -08:00
Lu Fang
3d68a2d6de Add foxi submodule (ONNXIFI facebook extension)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17178

Reviewed By: yinghai

Differential Revision: D14197987

Pulled By: houseroad

fbshipit-source-id: c21d7235e40c2ca4925a10c467c2b4da2f1024ad
2019-02-25 08:00:03 -08:00
Yinghai Lu
1d05d0d848 Improve onnxifi backend init time (#17375)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17375

Previously we created the onnxGraph first and took it to the onnx manager for registration. That doesn't work well in practice. This diff takes a "bring your own constructor" approach to reduce the resources spent doing backend compilation.

Reviewed By: kimishpatel, rdzhabarov

Differential Revision: D14173793

fbshipit-source-id: cbc4fe99fc522f017466b2fce88ffc67ae6757cf
2019-02-22 16:58:30 -08:00
Yinghai Lu
db1d61a5c3 Add rule based filtering for ONNXIFI transformation (#17198)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17198

We have come to the point where we need to apply rules that bind certain ops together to avoid un-inferrable intermediate shapes: we either lower them to the backend together, or not at all. This diff adds a pass that lets us add rules like this. The first one binds `Gather` with `SparseLengthsWeighted*`.

Reviewed By: ipiszy

Differential Revision: D14118326

fbshipit-source-id: 14bc62e1feddae02a3dd8eae93b8f553d52ac951
2019-02-20 12:47:24 -08:00
Dwarak Rajagopal
65d6f1014a Add support of count_include_pad and test end to end test for AveragePool (#17034)
Summary:
Add support for count_include_pad and an end-to-end test for AveragePool.

We can export AveragePool from PyTorch with the count_include_pad attribute. However, we don't directly support it in Caffe2's ONNX backend.
We also want to check that we can pass the end-to-end test for the average pool operator with the count_include_pad attribute (pytorch => onnx => caffe2).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17034

Reviewed By: houseroad

Differential Revision: D14060186

Pulled By: dwarakrajagopal

fbshipit-source-id: 10dae532611c71f8c8cfc3fa701cc7c1c1c02695
2019-02-14 11:48:42 -08:00
Tongliang Liao
a670824fee Support FC (Caffe2) -> Gemm (ONNX) with variable input shape. (#16184)
Summary:
For >2D input, the code previously used the static shape captured during tracing and reshaped before/after `Gemm`.
Now we add `-1` to the first `Reshape`, and use `Shape(X) => Slice(outer) => Concat(with -1 for inner) => Reshape` for the second.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16184
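The reshape sandwich described above can be sketched with numpy (a hedged illustration of the graph semantics, not the exporter code): the `-1` in the first reshape avoids baking in a static batch size, and the outer dims are recovered dynamically afterwards.

```python
import numpy as np

x = np.ones((2, 3, 8))                  # >2D input to FC
w = np.ones((8, 4))
b = np.zeros(4)

flat = x.reshape(-1, x.shape[-1])       # first Reshape: -1 keeps outer dims dynamic
y = flat @ w + b                        # Gemm on the flattened 2D view
out = y.reshape(*x.shape[:-1], -1)      # second Reshape: recover outer dims
assert out.shape == (2, 3, 4)
```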

Differential Revision: D14070754

Pulled By: ezyang

fbshipit-source-id: 86c69e9b254945b3406c07e122e57a00dfeba3df
2019-02-13 17:12:34 -08:00
Tongliang Liao
491f2d4cb8 Support conversion from Caffe2 MergeDim to ONNX Reshape + Squeeze. (#16189)
Summary:
`MergeDim` can be done by `Reshape([1, -1, 0, 0, ...]) + Squeeze`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16189
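A numpy sketch of the equivalence (hedged: caffe2/ONNX `Reshape` treats `0` as "keep this dimension", which we emulate by reusing `x.shape[2:]`):

```python
import numpy as np

x = np.ones((2, 3, 4, 5))
# MergeDim collapses the first two dims: (2, 3, 4, 5) -> (6, 4, 5).
y = x.reshape(1, -1, *x.shape[2:])   # Reshape([1, -1, 0, 0])
z = np.squeeze(y, axis=0)            # Squeeze the leading 1
assert z.shape == (6, 4, 5)
```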

Differential Revision: D14070676

Pulled By: ezyang

fbshipit-source-id: 28d7e9b35cc2c1dcbd4afb3fbdf7383e219b1777
2019-02-13 15:53:38 -08:00
Tongliang Liao
0eee56fff7 Export ReduceMean/ReduceFrontMean/ReduceBackMean (Caffe2) to ReduceMean (ONNX). (#16727)
Summary:
The second input (`lengths`) is not supported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16727

Differential Revision: D14054105

Pulled By: houseroad

fbshipit-source-id: 36b8d00460f9623696439e1bd2a6bc60b7bb263c
2019-02-12 13:35:32 -08:00