Commit Graph

271 Commits

Author SHA1 Message Date
Chen Lai
b5a834a739 [Pytorch] Build lite interpreter as default for iOS
Summary:
Two changes:
1. Build lite interpreter as default for iOS
2. Switch the previous lite interpreter test to full jit build test

Test Plan: Imported from OSS

Differential Revision: D27698039

Reviewed By: xta0

Pulled By: cccclai

fbshipit-source-id: 022b554f4997ae577681f2b79a9ebe9236ca4f7d
2021-05-17 22:36:05 -07:00
Chen Lai
0c3db1cb33 [Pytorch] Build lite interpreter as default for Android
Summary:
Build lite interpreter as default for Android; this should wait until https://github.com/pytorch/pytorch/pull/56002 lands
Mainly two changes:
1. Use lite interpreter as default for Android
2. Switch the lite interpreter build test to full jit build test

Test Plan: Imported from OSS

Differential Revision: D27695530

Reviewed By: IvanKobzarev

Pulled By: cccclai

fbshipit-source-id: e1b2c70fee6590accc22c7404b9dd52c7d7c36e2
2021-05-17 14:12:48 -07:00
Sam Estep
2e26976ad3 Disallow versionless Python shebangs (#58275)
Summary:
Some machines don't have a versionless `python` on their PATH, which breaks these existing shebangs.

I'm assuming that all the existing versionless `python` shebangs are meant to be `python3` and not `python2`; please let me know if my assumption was incorrect for any of these.
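For illustration, the mechanical fix is to pin the interpreter version in each shebang (a minimal sketch, not a specific file from this PR):

```python
#!/usr/bin/env python3
# A versioned shebang: unlike the bare `#!/usr/bin/env python`, this resolves
# on machines that ship python3 but have no versionless `python` alias on PATH.
print("running under a versioned interpreter")
```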

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58275

Test Plan: CI.

Reviewed By: zhouzhuojie

Differential Revision: D28428143

Pulled By: samestep

fbshipit-source-id: 6562be3d12924db72a92a0207b060ef740f61ebf
2021-05-14 08:26:02 -07:00
BowenBao
dc0071dfa5 [ONNX] Special post process for onnx::Cast and onnx::ConstantOfShape shape type inference (#55962) (#57597)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57597

* Special post process for onnx::Cast and onnx::ConstantOfShape
* Update `test_pytorch_onnx_shape_inference.py` to be a unit test over shape inference patterns.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393529

Pulled By: SplitInfinity

fbshipit-source-id: fc26032ddb842d4e299447da39564b28049752ed

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-13 13:42:44 -07:00
Ilia Cherniavskii
65fad0ebd2 Expand Kineto platform support (ci-all) (#56323)
Summary:
Expanding support to all builds

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56323

Test Plan: CI

Reviewed By: malfet

Differential Revision: D28171478

Pulled By: ilia-cher

fbshipit-source-id: 16bc752d1be3cbaeda5316f5d8a687ae05a83d22
2021-05-05 15:00:01 -07:00
davidriazati@fb.com
4b96fc060b Remove distutils (#57040)
Summary:
[distutils](https://docs.python.org/3/library/distutils.html) is on its way out: it will be deprecated-on-import for Python 3.10+ and removed in Python 3.12 (see [PEP 632](https://www.python.org/dev/peps/pep-0632/)). There's no reason for us to keep it around, since all the functionality we want from it can be found in `setuptools` / `sysconfig`. `setuptools` includes a copy of most of `distutils` (which is fine to use according to the PEP) that it uses under the hood, so this PR also uses that in some places.
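As a rough sketch of the kind of substitution involved (hypothetical example; the exact call sites vary across this PR), the stdlib `sysconfig` module can answer the questions `distutils.sysconfig` used to:

```python
import sysconfig

# Previously: from distutils.sysconfig import get_python_inc, get_python_lib
paths = sysconfig.get_paths()
include_dir = paths["include"]    # roughly distutils.sysconfig.get_python_inc()
site_packages = paths["purelib"]  # roughly distutils.sysconfig.get_python_lib()
print(include_dir, site_packages)
```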

Fixes #56527
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57040

Pulled By: driazati

Reviewed By: nikithamalgifb

Differential Revision: D28051356

fbshipit-source-id: 1ca312219032540e755593e50da0c9e23c62d720
2021-04-29 12:10:11 -07:00
Jane Xu
a90a3acbee Use JIT Plug-in for coverage to cover JIT'd functions and methods (#56310)
Summary:
This PR is step 2 (after https://github.com/pytorch/pytorch/issues/56708) to having JIT coverage--it actually uses the plug-in in CI!

Disclaimer: note that this will mark the entire JIT'd function/method as covered without seeking proof that the
compiled code has been executed. This means that even if the code chunk is merely compiled and not run, it will get
marked as covered.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56310

Test Plan:
We should see coverage improvements in CI after this change. A file to look out for is `torch/jit/quantized.py`, which should have more coverage after this PR, and it does!
d3283ccd8c/torch/jit/quantized.py vs https://codecov.io/gh/pytorch/pytorch/src/master/torch/jit/quantized.py

More generally, the whole jit folder got a ~3% increase in coverage, I believe.

Reviewed By: walterddr

Differential Revision: D28000672

Pulled By: janeyx99

fbshipit-source-id: 6712979d63a5e1224a92ee9bd9679ec62cf1cbba
2021-04-26 09:19:32 -07:00
Rong Rong (AI Infra)
3fbc15410a Revert D27967517: [pytorch][PR] Use JIT Plug-in for coverage to cover JIT'd functions and methods
Test Plan: revert-hammer

Differential Revision:
D27967517 (88bd0510ef)

Original commit changeset: 53fd8431d772

fbshipit-source-id: 491841dcde629f1e9f8ee38be7366955c03b6e27
2021-04-24 07:53:49 -07:00
Jane Xu
88bd0510ef Use JIT Plug-in for coverage to cover JIT'd functions and methods (#56310)
Summary:
This PR is step 2 (after https://github.com/pytorch/pytorch/issues/56708) to having JIT coverage--it actually uses the plug-in in CI!

Disclaimer: note that this will mark the entire JIT'd function/method as covered without seeking proof that the
compiled code has been executed. This means that even if the code chunk is merely compiled and not run, it will get
marked as covered.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56310

Test Plan:
We should see coverage improvements in CI after this change. A file to look out for is `torch/jit/quantized.py`, which should have more coverage after this PR, and it does!
d3283ccd8c/torch/jit/quantized.py vs https://codecov.io/gh/pytorch/pytorch/src/master/torch/jit/quantized.py

More generally, the whole jit folder got a ~3% increase in coverage, I believe.

Reviewed By: ezyang

Differential Revision: D27967517

Pulled By: janeyx99

fbshipit-source-id: 53fd8431d772c2447191135c29d1b166ecd42f50
2021-04-23 09:12:21 -07:00
Nikita Shulga
add49e7e4e Enforce PEP263 for PyTorch python codebase (#55346)
Summary:
All python files containing non-ASCII characters should be correctly annotated with `# -*- coding: utf-8 -*-` comment

Delete a number of superfluous UTF-8 characters, most commonly the UTF-8 closing quotation mark U+2019 (’) used instead of the ASCII apostrophe ', for example `Module’s`->`Module's`
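For reference, the annotation this enforces looks like the following (it must appear on the first or second line of the file; the docstring content here is a hypothetical example):

```python
# -*- coding: utf-8 -*-
"""Example module whose docstring contains a non-ASCII character: café."""
# Per this change, any source file with non-ASCII characters carries the
# PEP 263 encoding declaration above.
```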

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55346

Reviewed By: samestep

Differential Revision: D27582044

Pulled By: malfet

fbshipit-source-id: c1cd89655915858ff3a41f675cdfffff795a8e44
2021-04-06 18:31:38 -07:00
Alban Desmaison
f83668b4e5 Update release notes scripts following runbook update (#54594)
Summary:
This adds:
- new categories
- global commit counter
- support for new "Reverted" label on PRs
- new export system to multiple files

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54594

Reviewed By: H-Huang

Differential Revision: D27396011

Pulled By: albanD

fbshipit-source-id: ca1ec3a1b90221ba26fd8b053dfb10f614f05909
2021-04-01 07:55:16 -07:00
Sam Estep
5bcbbf5373 Lint trailing newlines (#54737)
Summary:
*Context:* https://github.com/pytorch/pytorch/issues/53406 added a lint for trailing whitespace at the ends of lines. However, in order to pass FB-internal lints, that PR also had to normalize the trailing newlines in four of the files it touched. This PR adds an OSS lint to normalize trailing newlines.

The changes to the following files (made in 54847d0adb9be71be4979cead3d9d4c02160e4cd) are the only manually-written parts of this PR:

- `.github/workflows/lint.yml`
- `mypy-strict.ini`
- `tools/README.md`
- `tools/test/test_trailing_newlines.py`
- `tools/trailing_newlines.py`

I would have liked to make this just a shell one-liner like the other three similar lints, but nothing I could find quite fit the bill. Specifically, all the answers I tried from the following Stack Overflow questions were far too slow (at least a minute and a half to run on this entire repository):

- [How to detect file ends in newline?](https://stackoverflow.com/q/38746)
- [How do I find files that do not end with a newline/linefeed?](https://stackoverflow.com/q/4631068)
- [How to list all files in the Git index without newline at end of file](https://stackoverflow.com/q/27624800)
- [Linux - check if there is an empty line at the end of a file [duplicate]](https://stackoverflow.com/q/34943632)
- [git ensure newline at end of each file](https://stackoverflow.com/q/57770972)

To avoid giving false positives during the few days after this PR is merged, we should probably only merge it after https://github.com/pytorch/pytorch/issues/54967.
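A minimal sketch of the check itself (hypothetical code; the real logic lives in `tools/trailing_newlines.py`): a file passes if it is empty or ends in exactly one newline.

```python
import sys

def has_correct_trailing_newline(filename: str) -> bool:
    """True if the file is empty or ends in exactly one newline."""
    with open(filename, "rb") as f:
        data = f.read()
    return data == b"" or (data.endswith(b"\n") and not data.endswith(b"\n\n"))

if __name__ == "__main__":
    # Print the names of offending files and exit nonzero if any were found.
    bad = [name for name in sys.argv[1:] if not has_correct_trailing_newline(name)]
    print("\n".join(bad))
    sys.exit(1 if bad else 0)
```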

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54737

Test Plan:
Running the shell script from the "Ensure correct trailing newlines" step in the `quick-checks` job of `.github/workflows/lint.yml` should print no output and exit in a fraction of a second with a status of 0. That was not the case prior to this PR, as shown by this failing GHA workflow run on an earlier draft of this PR:

- https://github.com/pytorch/pytorch/runs/2197446987?check_suite_focus=true

In contrast, this run (after correcting the trailing newlines in this PR) succeeded:

- https://github.com/pytorch/pytorch/pull/54737/checks?check_run_id=2197553241

To unit-test `tools/trailing_newlines.py` itself (this is run as part of our "Test tools" GitHub Actions workflow):
```
python tools/test/test_trailing_newlines.py
```

Reviewed By: malfet

Differential Revision: D27409736

Pulled By: samestep

fbshipit-source-id: 46f565227046b39f68349bbd5633105b2d2e9b19
2021-03-30 13:09:52 -07:00
Eli Uriegas
67f765328b scripts: Change promote pypi to be more flexible (#53774)
Summary:
The promotion script should be flexible enough to allow any package to be
promoted to PyPI.

After we re-added a version suffix to the CUDA 10.2 binaries, this script
needs the flexibility to designate which platform and which version suffix
will actually be uploaded to PyPI.

Should coincide with https://github.com/pytorch/builder/pull/678

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53774

Reviewed By: jbschlosser

Differential Revision: D27052347

Pulled By: seemethere

fbshipit-source-id: 71129cc5afbd7de448c970ef721bc979c3420586
2021-03-15 13:30:21 -07:00
BowenBao
57d1df071f [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306

* [ONNX] Fix for sequence of mutations in blocks (#51577)

Fixes consecutive mutations in a tensor inside blocks.
Also, support append and pop in blocks.

* Support inplace operations + indexing

* Clean up old pass for remove mutations

* Add loop test

* Fixes for set attr in loops

* Removing the new jit API flag

* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX while running the symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-onnx passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR updates the design of the ONNX pass to enable a mechanism for capturing subgraphs of ATen operators of certain patterns and converting them later, when shape/type information of upstream operators is available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, written back when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

* Fix after merge

* clang

* Fix clang

* Fix clang

* Fix warning message.

* Fixes for non-model param attributes

* Fix for caffe2

* Additional test

* clang

* Skip test for lower opsets

* fix clang-tidy

* Update init.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Fix for clang formatting

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922416

Pulled By: SplitInfinity

fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-03-12 02:49:11 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Eli Uriegas
07ae4e9309 scripts: Add script to prep wheels for pypi (#53056)
Summary:
Adds a script so that we can take wheels directly from
download.pytorch.org and publish them to pypi

This is currently mainly used to prep windows binaries for publication to PyPI

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53056

Reviewed By: H-Huang

Differential Revision: D26738642

Pulled By: seemethere

fbshipit-source-id: 96777ed6c3f3454bddb4bc13121f727074312816
2021-03-01 16:46:44 -08:00
Chen Lai
14f7bf0629 [PyTorch] update CMake to build libtorch lite (#51419)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51419

## Summary

1. Add an option `BUILD_LITE_INTERPRETER` in `caffe2/CMakeLists.txt`, set to `OFF` by default.
2. Update `build_android.sh` with an argument to switch `BUILD_LITE_INTERPRETER`, `OFF` by default.
3. Add a mini demo app `lite_interpreter_demo` linked with `libtorch` library, which can be used for quick test.

## Test Plan
Built the lite interpreter version of libtorch and tested it with the Image Segmentation demo app ([android version](https://github.com/pytorch/android-demo-app/tree/master/ImageSegmentation)/[ios version](https://github.com/pytorch/ios-demo-app/tree/master/ImageSegmentation))

### Android
1. **Prepare model**: Prepare the lite interpreter version of the model by running the script below to generate the scripted model `deeplabv3_scripted.pt` and `deeplabv3_scripted.ptl`
```
import torch

model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
# Export full jit version model (not compatible with lite interpreter); leave it here for comparison
scripted_module.save("deeplabv3_scripted.pt")
# Export lite interpreter version model (compatible with lite interpreter)
scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")

```
2. **Build libtorch lite for android**: Build libtorch for android for all 4 android abis (armeabi-v7a, arm64-v8a, x86, x86_64) with `BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh`. This PR is tested on a Pixel 4 emulator with x86, so use the command `BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh x86` to specify the abi and save build time. After the build finishes, it will show the library path:
```
...
BUILD SUCCESSFUL in 55s
134 actionable tasks: 22 executed, 112 up-to-date
+ find /Users/chenlai/pytorch/android -type f -name '*aar'
+ xargs ls -lah
-rw-r--r--  1 chenlai  staff    13M Feb 11 11:48 /Users/chenlai/pytorch/android/pytorch_android/build/outputs/aar/pytorch_android-release.aar
-rw-r--r--  1 chenlai  staff    36K Feb  9 16:45 /Users/chenlai/pytorch/android/pytorch_android_torchvision/build/outputs/aar/pytorch_android_torchvision-release.aar
```
3. **Use the PyTorch Android libraries built from source in the ImageSegmentation app**: Create a folder `libs`; the path from the repository root will be `ImageSegmentation/app/libs`. Copy `pytorch_android-release.aar` to the path `ImageSegmentation/app/libs/pytorch_android-release.aar`, and copy `pytorch_android_torchvision.aar` (downloaded from [here](https://oss.sonatype.org/#nexus-search;quick~torchvision_android)) to the path `ImageSegmentation/app/libs/pytorch_android_torchvision.aar`. Update the `dependencies` part of `ImageSegmentation/app/build.gradle` to
```
dependencies {
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.0.2'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.2'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'

    implementation(name:'pytorch_android-release', ext:'aar')
    implementation(name:'pytorch_android_torchvision', ext:'aar')

    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.facebook.fbjni:fbjni-java-only:0.0.3'
}
```
Update `allprojects` part in `ImageSegmentation/build.gradle` to
```

allprojects {
    repositories {
        google()
        jcenter()
        flatDir {
            dirs 'libs'
        }
    }
}
```
4. **Update model loader api**: Update `ImageSegmentation/app/src/main/java/org/pytorch/imagesegmentation/MainActivity.java` by
4.1 Add new import: `import org.pytorch.LiteModuleLoader;`
4.2 Replace the way to load pytorch lite model
```
//            mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.pt"));
            mModule = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.ptl"));
```
5. **Test app**: Build and run the ImageSegmentation app in Android Studio,
![image](https://user-images.githubusercontent.com/16430979/107696279-9cea5900-6c66-11eb-8286-4d1d68abff61.png)

### iOS
1. **Prepare model**: Same as Android.
2. **Build libtorch lite for ios** `BUILD_PYTORCH_MOBILE=1 IOS_PLATFORM=SIMULATOR BUILD_LITE_INTERPRETER=1   ./scripts/build_ios.sh`
3. **Remove Cocoapods from the project**: run `pod deintegrate`
4. **Link ImageSegmentation demo app with the custom built library**:
Open your project in XCode, go to your project Target’s **Build Phases - Link Binaries With Libraries**, click the **+** sign and add all the library files located in `build_ios/install/lib`. Navigate to the project **Build Settings**, set the value **Header Search Paths** to `build_ios/install/include` and **Library Search Paths** to `build_ios/install/lib`.
In the build settings, search for **other linker flags**. Add a custom linker flag below
```
-all_load
```
Finally, disable bitcode for your target by selecting the Build Settings, searching for Enable Bitcode, and set the value to No.
5. **Update library and api**
5.1 Update `TorchModule.mm`
To use the custom built libraries in the project, replace `#import <LibTorch/LibTorch.h>` (in `TorchModule.mm`), which is needed when using LibTorch via Cocoapods, with the code below:

```
//#import <LibTorch/LibTorch.h>
#include "ATen/ATen.h"
#include "caffe2/core/timer.h"
#include "caffe2/utils/string_utils.h"
#include "torch/csrc/autograd/grad_mode.h"
#include "torch/script.h"
#include <torch/csrc/jit/mobile/function.h>
#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/interpreter.h>
#include <torch/csrc/jit/mobile/module.h>
#include <torch/csrc/jit/mobile/observer.h>
```
5.2 Update `ViewController.swift`
```
//        if let filePath = Bundle.main.path(forResource:
//            "deeplabv3_scripted", ofType: "pt"),
//            let module = TorchModule(fileAtPath: filePath) {
//            return module
//        } else {
//            fatalError("Can't find the model file!")
//        }
        if let filePath = Bundle.main.path(forResource:
            "deeplabv3_scripted", ofType: "ptl"),
            let module = TorchModule(fileAtPath: filePath) {
            return module
        } else {
            fatalError("Can't find the model file!")
        }
```

### Unit test
Add `test/cpp/lite_interpreter`, with one unit test `test_cores.cpp` and a light model `sequence.ptl` to test `_load_for_mobile()`, `bc.find_method()` and `bc.forward()` functions.

### Size:
**With the change:**
Android:
x86: `pytorch_android-release.aar` (**13.8 MB**)

IOS:
`pytorch/build_ios/install/lib` (lib: **66 MB**):
```
(base) chenlai@chenlai-mp lib % ls -lh
total 135016
-rw-r--r--  1 chenlai  staff   3.3M Feb 15 20:45 libXNNPACK.a
-rw-r--r--  1 chenlai  staff   965K Feb 15 20:45 libc10.a
-rw-r--r--  1 chenlai  staff   4.6K Feb 15 20:45 libclog.a
-rw-r--r--  1 chenlai  staff    42K Feb 15 20:45 libcpuinfo.a
-rw-r--r--  1 chenlai  staff    39K Feb 15 20:45 libcpuinfo_internals.a
-rw-r--r--  1 chenlai  staff   1.5M Feb 15 20:45 libeigen_blas.a
-rw-r--r--  1 chenlai  staff   148K Feb 15 20:45 libfmt.a
-rw-r--r--  1 chenlai  staff    44K Feb 15 20:45 libpthreadpool.a
-rw-r--r--  1 chenlai  staff   166K Feb 15 20:45 libpytorch_qnnpack.a
-rw-r--r--  1 chenlai  staff   384B Feb 15 21:19 libtorch.a
-rw-r--r--  1 chenlai  staff    **60M** Feb 15 20:47 libtorch_cpu.a
```
`pytorch/build_ios/install`:
```
(base) chenlai@chenlai-mp install % du -sh *
 14M	include
 66M	lib
2.8M	share
```

**Master (baseline):**
Android:
x86: `pytorch_android-release.aar` (**16.2 MB**)

IOS:
`pytorch/build_ios/install/lib` (lib: **84 MB**):
```
(base) chenlai@chenlai-mp lib % ls -lh
total 172032
-rw-r--r--  1 chenlai  staff   3.3M Feb 17 22:18 libXNNPACK.a
-rw-r--r--  1 chenlai  staff   969K Feb 17 22:18 libc10.a
-rw-r--r--  1 chenlai  staff   4.6K Feb 17 22:18 libclog.a
-rw-r--r--  1 chenlai  staff    42K Feb 17 22:18 libcpuinfo.a
-rw-r--r--  1 chenlai  staff   1.5M Feb 17 22:18 libeigen_blas.a
-rw-r--r--  1 chenlai  staff    44K Feb 17 22:18 libpthreadpool.a
-rw-r--r--  1 chenlai  staff   166K Feb 17 22:18 libpytorch_qnnpack.a
-rw-r--r--  1 chenlai  staff   384B Feb 17 22:19 libtorch.a
-rw-r--r--  1 chenlai  staff    78M Feb 17 22:19 libtorch_cpu.a
```
`pytorch/build_ios/install`:
```
(base) chenlai@chenlai-mp install % du -sh *
 14M	include
 84M	lib
2.8M	share
```

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D26518778

Pulled By: cccclai

fbshipit-source-id: 4503ffa1f150ecc309ed39fb0549e8bd046a3f9c
2021-02-21 01:43:54 -08:00
albanD
e8ee35a666 Add script to compare namespace content for release cleanup (#51685)
Summary:
Usage explanation will be in the release note runbook.

This allows to generate diffs like:
```
Processing torch.nn
Things that were added:
{'quantizable', 'ChannelShuffle', 'LazyConvTranspose2d', 'LazyConv2d', 'LazyConvTranspose3d', 'LazyConv1d', 'GaussianNLLLoss', 'LazyConv3d', 'PixelUnshuffle', 'UninitializedParameter', 'LazyLinear', 'LazyConvTranspose1d'}

Things that were removed:
set()
```

This can then be shared with module owners along with the commits to help them validate that the namespace changes for their submodule is as expected.
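A minimal sketch of the underlying idea (hypothetical code, not the script added here, which also handles snapshotting across releases): record the public names of a namespace and diff the sets.

```python
import importlib

def public_names(module_name: str) -> set:
    """Snapshot the public names a module currently exposes."""
    module = importlib.import_module(module_name)
    return {name for name in dir(module) if not name.startswith("_")}

def print_diff(module_name: str, old: set, new: set) -> None:
    print(f"Processing {module_name}")
    print("Things that were added:")
    print(new - old)
    print()
    print("Things that were removed:")
    print(old - new)

# Usage: save public_names("torch.nn") on the previous release, then call
# print_diff("torch.nn", old_snapshot, public_names("torch.nn")) on the RC.
```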

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51685

Reviewed By: zhangguanheng66

Differential Revision: D26260258

Pulled By: albanD

fbshipit-source-id: 40e40f86314e17246899d01ffa4b2631e93b52f7
2021-02-05 07:54:00 -08:00
BowenBao
586c2e8d62 [ONNX] Fix graph sequence output from loop node (#51305) (#51521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51521

* Add loop & if nodes to the list of nodes that could produce sequence-type output.
* Switch from `[]` to `at()` to avoid segfaults from out-of-range access.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203112

Pulled By: SplitInfinity

fbshipit-source-id: e990eeed933124b195be0be159271e33fb485063
2021-02-04 12:44:17 -08:00
BowenBao
3f185ac18e [ONNX] Export get/set attribute nodes (#50768) (#51517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517

Fix get/set attributes when getting/setting a model parameter.
This PR also fixes inplace ops in If blocks.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203116

Pulled By: SplitInfinity

fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
2021-02-04 12:43:59 -08:00
BowenBao
68034197e8 [ONNX] Support gelu for fp16 export (#50487) (#50911)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50911

Need to replace the dtype of export-created scalars from float to double. (In torch's implicit conversion logic, Python numbers are doubles.)

Test case skipped in CI because the current CI job env does not have CUDA support.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050889

Pulled By: SplitInfinity

fbshipit-source-id: 1fdde23a68d4793e6b9a82840acc213e5c3aa760
2021-01-27 17:49:02 -08:00
neginraoof
137f2a385a [ONNX] Handle sequence output for models (#50599)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/46542

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50599

Reviewed By: SplitInfinity

Differential Revision: D25928897

Pulled By: bzinodev

fbshipit-source-id: a898cef7b2d15a287aedd9798ce1423cebf378d4
2021-01-21 15:36:41 -08:00
Brian Vaughan
a9db2f8e7a Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference
Test Plan: revert-hammer

Differential Revision:
D24924236 (adc65e7c8d)

Original commit changeset: 506e70a38cfe

fbshipit-source-id: 78069a33fb3df825af1cb482da06a07f7b26ab48
2021-01-15 05:58:35 -08:00
Negin Raoof
adc65e7c8d [ONNX] Handle sequence output shape and type inference (#46542)
Summary:
Handle sequence output shape and type inference.

This PR fixes the value type of sequence outputs. Prior to this, all sequence-type model outputs were unfolded in exported ONNX models.
This PR also enables shape inference for sequence outputs, to represent the dynamic shape of these values.
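For illustration, a minimal sketch (hypothetical model, assuming script-mode export and the ONNX sequence ops available from opset 11) of an output that is a sequence rather than a fixed set of tensors:

```python
import torch

class SplitModel(torch.nn.Module):
    def forward(self, x):
        # In TorchScript, torch.split returns List[Tensor]: a sequence-typed
        # output whose length depends on the input shape.
        return torch.split(x, 2, dim=0)

scripted = torch.jit.script(SplitModel())
# Sequence outputs map onto the ONNX sequence type (opset >= 11), e.g.:
# torch.onnx.export(scripted, torch.randn(6, 4), "split.onnx", opset_version=11)
```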

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46542

Reviewed By: ezyang

Differential Revision: D24924236

Pulled By: bzinodev

fbshipit-source-id: 506e70a38cfe31069191d7f40fc6375239c6aafe
2021-01-14 21:12:35 -08:00
Spandan Tiwari
aeefe2ce31 [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (https://github.com/pytorch/pytorch/issues/49270)
- Symbolic function for torch.square (https://github.com/pytorch/pytorch/issues/49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (https://github.com/pytorch/pytorch/issues/49783) …
- [ONNX] Enable export of aten::__derive_index (https://github.com/pytorch/pytorch/issues/49514) …
- [ONNX] Update symbolic for unfold (https://github.com/pytorch/pytorch/issues/49378) …
- [ONNX] Update the sequence of initializers in exported graph so that it is as same as inputs. (https://github.com/pytorch/pytorch/issues/49798)
- [ONNX] Enable opset 13 ops (https://github.com/pytorch/pytorch/issues/49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (https://github.com/pytorch/pytorch/issues/50119)
- [ONNX] Add a post-pass for If folding (https://github.com/pytorch/pytorch/issues/49410)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
2021-01-13 13:51:21 -08:00
Thomas Zhang
d78b638a31 Convert string => raw strings so char classes can be represented in Python regex (#50239)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50239

Convert regex strings that have character classes (e.g. \d, \s, \w, \b, etc) into raw strings so they won't be interpreted as escape characters.
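For example, `\b` happens to be a valid string escape (backspace), so a non-raw pattern silently changes meaning:

```python
import re

# In a plain string, "\b" is the backspace character \x08, so this pattern
# never matches a word boundary:
print(re.search("\bword\b", "a word"))   # None
# In a raw string, \b reaches the regex engine intact as a word boundary:
print(re.search(r"\bword\b", "a word"))  # <re.Match object; span=(2, 6), ...>
```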

References:
Python RegEx - https://www.w3schools.com/python/python_regex.asp
Python Escape Chars - https://www.w3schools.com/python/gloss_python_escape_characters.asp
Python Raw String - https://www.journaldev.com/23598/python-raw-string
Python RegEx Docs - https://docs.python.org/3/library/re.html
Python String Tester - https://www.w3schools.com/python/trypython.asp?filename=demo_string_escape
Python Regex Tester - https://regex101.com/

Test Plan: To find occurrences of regex strings with the above issue in VS Code, search using the regex \bre\.[a-z]+\(['"], and under 'files to include', use /data/users/your_username/fbsource/fbcode/caffe2.

Reviewed By: r-barnes

Differential Revision: D25813302

fbshipit-source-id: df9e23c0a84c49175eaef399ca6d091bfbeed936
2021-01-08 11:17:17 -08:00
Richard Barnes
5acb1cc1df Drop unused imports from scripts (#49956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49956

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727347

fbshipit-source-id: 74d0a08aa0cfd0f492688a2b8278a0c65fd1deba
2021-01-04 16:08:28 -08:00
Jane Xu
52fe73a39e Enable Python code coverage for onnx runs (#47387)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44120

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47387

Reviewed By: heitorschueroff

Differential Revision: D24737378

Pulled By: janeyx99

fbshipit-source-id: 79e3d0b62f7da0617330f312fb1ed548c6be2a3b
2020-11-09 20:52:14 -08:00
Ksenija Stanojevic
7a599870b0 [ONNX] Update peephole pass for prim::ListUnpack (#46264)
Summary:
Update the pass that handles prim::ListUnpack in the peephole file so that it also covers the case where the input to the node is of ListType.

Fixes https://github.com/pytorch/pytorch/issues/45816
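A hedged repro sketch (hypothetical, not the test case from this PR) of TorchScript code whose IR contains a prim::ListUnpack node fed by a ListType value:

```python
import torch

@torch.jit.script
def unpack_halves(x: torch.Tensor):
    # In TorchScript, torch.chunk returns List[Tensor]; destructuring it emits
    # a prim::ListUnpack node whose input is of ListType.
    a, b = torch.chunk(x, 2, dim=0)
    return a + b

print(unpack_halves.graph)  # the printed IR includes the prim::ListUnpack node
```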

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46264

Reviewed By: mrshenli

Differential Revision: D24566070

Pulled By: bzinodev

fbshipit-source-id: 32555487054f6a7fe02cc17c66bcbe81ddf9623e
2020-11-05 09:42:24 -08:00
Alban Desmaison
68954fe897 Add release note scripts (#47360)
Summary:
First commit contains the initial code from Richard's branch.
Second commit are the changes that I made during the writing process
Third commit is the update to support category/topic pair for each commit

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47360

Reviewed By: ejguan

Differential Revision: D24741003

Pulled By: albanD

fbshipit-source-id: d0fcc6765968dc1732d8a515688d11372c7e653d
2020-11-05 06:43:24 -08:00
Jane Xu
4189c3ca76 Fix onnx test-reports path in CI (#47315)
Summary:
Currently, no test reports are uploaded to CI because the paths for the `onnx` runs are incorrect. This PR attempts to change that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47315

Reviewed By: malfet

Differential Revision: D24727607

Pulled By: janeyx99

fbshipit-source-id: f6d91698fdb15a39e01ef812032d4cd30621f864
2020-11-04 10:30:52 -08:00
Tao Xu
bf1ea14fbc [CI][IOS] Add a arm64 ios job for Metal (#46646)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46646

Test Plan: Imported from OSS

Reviewed By: seemethere, linbinyu

Differential Revision: D24459597

Pulled By: xta0

fbshipit-source-id: e93a3a26897614c66768804c71658928cd26ede7
2020-10-22 16:54:46 -07:00
Tao Xu
04e5fcc0ed [GPU] Introduce USE_PYTORCH_METAL (#46383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46383

The old `USE_METAL` macro is actually being used by Caffe2. Here we introduce a new macro to enable Metal in PyTorch.
ghstack-source-id: 114499392

Test Plan:
- Circle CI
- The Person Segmentation model works

Reviewed By: linbinyu

Differential Revision: D24322018

fbshipit-source-id: 4e5548afba426b49f314366d89b18ba0c7e745ca
2020-10-16 18:19:32 -07:00
Tao Xu
a277c097ac [iOS][GPU] Add Metal/MPSCNN support on iOS (#46112)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46112

### Summary

This PR adds support for running TorchScript models on iOS GPU via Metal (inference only). The feature is currently in a prototype state; API changes are expected. The tutorial and the documents will be added once it goes to beta.

allow-large-files

- Users API

```
  auto module = torch::jit::load(model);
  module.eval();
  at::Tensor input = at::ones({1,3,224,224}, at::ScalarType::Float).metal();
  auto output = module.forward({input}).toTensor().cpu();
```
- Supported Models
    - Person Segmentation v106 (FB Internal)
    - Mobilenetv2

- Supported Operators
    - aten::conv2d
    - aten::addmm
    - aten::add.Tensor
    - aten::sub.Tensor
    - aten::mul.Tensor
    - aten::relu
    - aten::hardtanh
    - aten::hardtanh_
    - aten::sigmoid
    - aten::max_pool2d
    - aten::adaptive_avg_pool2d
    - aten::reshape
    - aten::t
    - aten::view
    - aten::log_softmax.int
    - aten::upsample_nearest2d.vec

- Supported Devices
    - Apple A9 and above
    - iOS 10.2 and above

- CMake scripts
    - `IOS_ARCH=arm64 ./scripts/build_ios.sh -DUSE_METAL=ON`

### Test Plan

- Circle CI

ghstack-source-id: 114155638

Test Plan:
1. Sandcastle CI
2. Circle CI

Reviewed By: dreiss

Differential Revision: D23236555

fbshipit-source-id: 98ffc48b837e308bc678c37a9a5fd8ae72d11625
2020-10-13 01:46:56 -07:00
Tao Xu
0de5824f36 [iOS][CI] Upgrade xcode version to 12.0 (#45677)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45677

Test Plan: Imported from OSS

Reviewed By: husthyc

Differential Revision: D24065647

Pulled By: xta0

fbshipit-source-id: f2535b1d93e58cf79e7075bf56b0613a3ded16eb
2020-10-01 16:53:18 -07:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference will start with these axes set as dynamic (see the sketch after this list).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating shapes for all nodes in the graph. Currently this is not enabled in CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, such as for div, _len, the peephole.cpp passes for PackPadded, and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
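A minimal sketch of an export call where `dynamic_axes` marks the batch dimension as dynamic (hypothetical model; the named axis becomes a symbolic `dim_param` that shape inference can propagate):

```python
import torch

model = torch.nn.Linear(4, 2)
torch.onnx.export(
    model, torch.randn(3, 4), "linear.onnx",
    input_names=["input"], output_names=["output"],
    # dim 0 is exported as the symbolic dim_param "batch" instead of the fixed 3
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=12,
)
```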

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
Bugra Akyildiz
27c7158166 Remove __future__ imports for legacy Python2 supports (#45033)
Summary:
There is a tool called `2to3` whose `future` fixer specifically removes these; the `caffe2` directory has the most redundant imports:

```2to3 -f future -w caffe2```
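The fixer deletes imports like the following, which are no-ops on Python 3:

```python
# All removed by `2to3 -f future`; Python 3 already behaves this way.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
```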

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033

Reviewed By: seemethere

Differential Revision: D23808648

Pulled By: bugra

fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Eli Uriegas
d62994a94d ci: Add anaconda pruning to CI pipeline (#44651)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44651

Adds pruning for our anaconda channels (pytorch-nightly, pytorch-test)
into our CI pipeline so that it gets run on a more consistent basis.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: walterddr

Differential Revision: D23692851

Pulled By: seemethere

fbshipit-source-id: fa69b506b73805bf2ffbde75d221aef1ee3f753e
2020-09-15 10:51:05 -07:00
Rong Rong
105132b891 Move ONNX circle ci build to torch and remove all caffe2 CI job/workflows (#44595)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44595

Reviewed By: seemethere

Differential Revision: D23670280

Pulled By: walterddr

fbshipit-source-id: b32633912f6c8b4606be36b90f901e636567b355
2020-09-14 09:50:13 -07:00
Alex
208ad45b4b fix scripts (#44464)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44464

Reviewed By: agolynski

Differential Revision: D23624921

Pulled By: colesbury

fbshipit-source-id: 72bed69edcf467a99eda9a3b97e894015c992dce
2020-09-10 08:13:48 -07:00
Jiakai Liu
3a0e35c9f2 [pytorch] deprecate static dispatch (#43564)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43564

Static dispatch was originally introduced for mobile selective build.

Since we have added selective build support for dynamic dispatch and
tested it in FB production for months, we can deprecate static dispatch
to reduce the complexity of the codebase.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D23324452

Pulled By: ljk53

fbshipit-source-id: d2970257616a8c6337f90249076fca1ae93090c7
2020-08-27 14:52:48 -07:00
Nikita Shulga
38580422bb Allow specifying PYTHON executable to build_android (#41927)
Summary:
build_android.sh should check the PYTHON environment variable before trying to use the default python executable.
Even in that case, it should try to pick python3 over python2 when available.

Closes https://github.com/pytorch/pytorch/issues/41795

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41927

Reviewed By: seemethere

Differential Revision: D22696850

Pulled By: malfet

fbshipit-source-id: be236c2baf54a1cd111e55ee7743cdc93cb6b9d7
2020-07-24 18:34:42 -07:00
Kimish Patel
d6feb6141f [Vec256][neon] Add neon backend for vec256 (#39341)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39341

This PR introduces a NEON backend for the vec256 class for the float datatype.
For now only aarch64 is enabled, due to a few issues with enabling it on
32-bit aarch32.

Test Plan:
vec256_test

Imported from OSS

Differential Revision: D21822399

fbshipit-source-id: 3851c4336d93d1c359c85b38cf19904f82bc7b8d
2020-07-09 16:25:09 -07:00
Kimish Patel
bddba1e336 Add benchmark for add op. (#40059)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40059

This benchmark is added specifically for mobile, to see whether the compiler
is auto-vectorizing and thus whether the NEON backend for vec256 offers any
advantage for the add op.

Test Plan:
CI

Imported from OSS

Differential Revision: D22055146

fbshipit-source-id: 43ba6c4ae57c6f05d84887c2750ce21ae1b0f0b5
2020-07-09 16:22:55 -07:00
David Reiss
b7e044f0e5 Re-apply PyTorch pthreadpool changes
Summary:
This re-applies D21232894 (b9d3869df3) and D22162524, plus updates jni_deps in a few places
to avoid breaking host JNI tests.

Test Plan: `buck test @//fbandroid/mode/server //fbandroid/instrumentation_tests/com/facebook/caffe2:host-test`

Reviewed By: xcheng16

Differential Revision: D22199952

fbshipit-source-id: df13eef39c01738637ae8cf7f581d6ccc88d37d5
2020-06-23 19:26:21 -07:00
Kate Mormysh
92d3182c11 Revert D21232894: Unify PyTorch mobile's threadpool usage.
Test Plan: revert-hammer

Differential Revision:
D21232894 (b9d3869df3)

Original commit changeset: 8b3de86247fb

fbshipit-source-id: e6517cfec08f7dd0f4f8877dab62acf1d65afacd
2020-06-23 17:09:14 -07:00
Ashkan Aliabadi
b9d3869df3 Unify PyTorch mobile's threadpool usage. (#37243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37243

*** Why ***

As it stands, we have two thread pool solutions concurrently in use in PyTorch mobile: (1) the open source pthreadpool library under third_party, and (2) Caffe2's implementation of pthreadpool under caffe2/utils/threadpool.  Since the primary use-case of the latter has been to act as a drop-in replacement for the third party version so as to enable integration and usage from within NNPACK and QNNPACK, Caffe2's implementation is intentionally written to the exact same interface as the third party version.

The original argument in favor of C2's implementation has been improved performance as a result of using spin locks instead of relinquishing the thread's time slot and putting it to sleep - a less expensive operation, up to a point. That seems to have given C2's implementation the upper hand in performance, justifying the added maintenance complexity, until the third party version improved in parallel, surpassing the efficiency of C2's implementation, as I have verified in benchmarks. With that advantage gone, there is no reason to continue using C2's implementation in PyTorch mobile, either from the perspective of performance or code hygiene. As a matter of fact, there is considerable performance benefit to be had as a result of using the third party version as it currently stands.

This is a tricky change though, mainly because, in order to avoid potential performance regressions (of which I have witnessed none, but out of an abundance of caution), we have decided to continue using the internal C2 implementation whenever building for Caffe2. Again, this is mainly to avoid potential performance regressions in production C2 use cases, even if doing so results in reduced performance as far as I can tell.

So to summarize, today, and as it currently stands, we are using C2's implementation for (1) NNPACK, (2) PyTorch QNNPACK, and (3) ATen parallel_for on mobile builds, while using the third party version of pthreadpool for XNNPACK, as XNNPACK does not provide any build options to link against an external implementation, unlike NNPACK and QNNPACK.

The goal of this PR, then, is to unify all usage on mobile to the third party implementation, both for improved performance and better code hygiene. This applies to PyTorch's use of NNPACK, QNNPACK, XNNPACK, and mobile's implementation of ATen parallel_for, all getting routed to the
exact same third party implementation in this PR.

Considering that NNPACK, QNNPACK, and XNNPACK are not mobile specific, these benefits carry over to non-mobile builds of PyTorch (but not Caffe2) as well.  The implementation of ATen parallel_for on non-mobile builds remains unchanged.

*** How ***

This is where things get tricky.

A good deal of the build system complexity in this PR arises from our desire to maintain C2's implementation intact for C2's use.

pthreadpool is a C library with no concept of namespaces, which means two copies of the library cannot exist in the same binary or symbol collisions will occur, violating the ODR. This means that somehow, and based on some condition, we must decide on the choice of a pthreadpool implementation. In practice, this has become more complicated as a result of all the possible combinations that USE_NNPACK, USE_QNNPACK, USE_PYTORCH_QNNPACK, USE_XNNPACK, USE_SYSTEM_XNNPACK, USE_SYSTEM_PTHREADPOOL and other variables can result in. Having said that, I have done my best in this PR to surgically cut through this complexity in a way that minimizes the side effects, considering the significance of the performance we are leaving on the table; yet, as a result of the combinatorial explosion explained above, I cannot guarantee that every single combination will work as expected on the first try. I am heavily relying on CI to find any issues, as local testing can only go so far.

Having said that, this PR provides a simple, non mobile-specific C++ thread pool implementation on top of pthreadpool, namely caffe2::PThreadPool, which automatically routes to C2's implementation or the third party version depending on the build configuration. This simplifies the logic at the cost of pushing the complexity to the build scripts. From there on, this thread pool is used in aten parallel_for, and NNPACK and family, again routing all usage of threading to C2 or third party pthreadpool depending on the build configuration.

When it is all said and done, the layering will look like this:

a) aten::parallel_for, uses
b) caffe2::PThreadPool, which uses
c) pthreadpool C API, which delegates to
    c-1) third_party implementation of pthreadpool if that's what the build has requested, and the rabbit hole ends here.
    c-2) C2's implementation of pthreadpool if that's what the build has requested, which itself delegates to
    c-2-1) caffe2::ThreadPool, and the rabbit hole ends here.

NNPACK, and (PyTorch) QNNPACK directly hook into (c). They never go through (b).

Differential Revision: D21232894

Test Plan: Imported from OSS

Reviewed By: dreiss

Pulled By: AshkanAliabadi

fbshipit-source-id: 8b3de86247fbc3a327e811983e082f9d40081354
2020-06-23 16:34:51 -07:00
Ivan Kobzarev
c1dfc05cc9 [android][test_app][reland] test_app example linking to pytorch_android aar content (#40313)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40313

Test Plan: Imported from OSS

Differential Revision: D22147079

Pulled By: IvanKobzarev

fbshipit-source-id: c70a0a9dda8834376ed304a461318d4c6ef84582
2020-06-20 07:34:42 -07:00
Ilia Cherniavskii
cdbf78fba0 Revert D22118945: [android] test_app example linking to pytorch_android aar content
Test Plan: revert-hammer

Differential Revision:
D22118945 (52a2adb3f4)

Original commit changeset: 31c54b49b1f2

fbshipit-source-id: 0c4929d4441572debbbc49f8674b9fc49b726599
2020-06-19 12:16:18 -07:00