Commit Graph

247 Commits

Author SHA1 Message Date
Taylor Robie
24bc3be146 [Profiler] Clean up profiler includes. (#69421)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69421

I've hit a lot of build issues in D32671972, and I've come to realize that a lot of it boils down to header hygiene. `function.h` includes `profiler.h` *solely* to transitively include `record_function.h`, which winds up leaking the profiler symbols. Moreover, several files are relying on transitive includes to get access to `getTime`. As long as I have to touch all the places that use `getTime`, I may as well also move them to the new namespace.
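
For context, a minimal before/after sketch of the include-hygiene pattern described above; the header path for the relocated `getTime` and its namespace are assumptions for illustration, not taken from the diff.

```
// Before: relying on a transitive include chain.
//   #include <torch/csrc/autograd/profiler.h>  // pulled in just for RecordFunction/getTime

// After: include what is actually used and qualify getTime explicitly.
#include <ATen/record_function.h>
#include <torch/csrc/profiler/util.h>  // assumed new home of getTime()

#include <cstdint>
#include <vector>

int64_t traced_work() {
  RECORD_FUNCTION("traced_work", std::vector<c10::IValue>());
  // Assumed fully qualified spelling of the moved helper:
  const int64_t start = torch::profiler::impl::getTime();
  // ... actual work ...
  return torch::profiler::impl::getTime() - start;
}
```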

Test Plan: Unit tests and CI.

Reviewed By: aaronenyeshi, albanD

Differential Revision: D32865907

fbshipit-source-id: f87d6fd5afb784dca2146436e72c69e34623020e
2021-12-15 12:50:24 -08:00
Ivan Kobzarev
f74779e403 [android] Lite interpreter naming for android nightly publishing (#68651)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68651

Test Plan: Imported from OSS

Reviewed By: linbinyu

Differential Revision: D32564796

Pulled By: IvanKobzarev

fbshipit-source-id: 57847bfb2778433cfb02ad7a5a79ae30a6b438c1
2021-11-19 10:56:13 -08:00
Ivan Kobzarev
d71092f668 [android][fbjni] Update fbjni to 0.2.2 (#68400)
Summary:
ghstack-source-id: caeb8df3a1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68400

Fixes #{issue number}

CI-all check:
https://github.com/pytorch/pytorch/pull/68497

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68495

Reviewed By: linbinyu

Differential Revision: D32481451

Pulled By: IvanKobzarev

fbshipit-source-id: b19ce05ff9d63b3f701d718eefbf1e9d66e11639
2021-11-17 16:54:22 -08:00
Nikita Shulga
27eca2c6fd Revert D32467139: [pytorch][PR] [android][fbjni] Update fbjni to 0.2.2
Test Plan: revert-hammer

Differential Revision:
D32467139 (04056df475)

Original commit changeset: 49e155989d2d

fbshipit-source-id: ce03be3c6f209a6e9969660bd823d5343a7f0615
2021-11-16 13:50:50 -08:00
Ivan Kobzarev
04056df475 [android][fbjni] Update fbjni to 0.2.2 (#68400)
Summary:
ghstack-source-id: caeb8df3a1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68400

Fixes #{issue number}

Updates fbjni version to 0.2.2

ci-all PR: https://github.com/pytorch/pytorch/pull/68401

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68402

Reviewed By: linbinyu

Differential Revision: D32467139

Pulled By: IvanKobzarev

fbshipit-source-id: 49e155989d2dbafedd5b2df77e089e25e8b4f8f8
2021-11-16 11:34:46 -08:00
Shashank Chaudhry
89c4e8c22b [NOOP][clangformat][codemod] Enable CLANGFORMAT for some folders in caffe2/* (#67746)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67746

Test Plan: Visual inspection. Sandcastle.

Reviewed By: zertosh

Differential Revision: D31986646

fbshipit-source-id: 91885c20c3cead3853c49abb9fe0a94a67f33cc8
2021-11-03 12:23:14 -07:00
Scott Wolchok
82f7f8d471 [PyTorch] Adopt IValue::toTupleRef() where obvious (#65505)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65505

Generated with

`fastmod -m 'toTuple\(\)(\s*)->' 'toTupleRef()${1}.'`

, followed by

`fastmod '(std::move\(.*)toTupleRef\(\).' '${1}toTuple()->'`

to unbreak 2 callsites.
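
For illustration, a minimal sketch of the pattern this codemod applies (the wrapper function and the tuple contents here are made up):

```
#include <ATen/core/ivalue.h>

#include <cstdint>

// Assumes iv holds a tuple with at least two ints.
int64_t first_two(const c10::IValue& iv) {
  // Before: toTuple() returns an owning c10::intrusive_ptr<ivalue::Tuple>,
  // bumping the refcount just to read an element.
  const int64_t a = iv.toTuple()->elements()[0].toInt();

  // After (the fastmod rewrite): toTupleRef() borrows the tuple instead.
  const int64_t b = iv.toTupleRef().elements()[1].toInt();

  return a + b;
}
```

The second fastmod reverts the rewrite at call sites that move out of the IValue, which makes sense because a borrowed reference cannot take ownership from a moved-from IValue.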
ghstack-source-id: 142065835

Test Plan: CI

Reviewed By: gchanan

Differential Revision: D31131025

fbshipit-source-id: 54457ae5bbeb38db9c7f196d469b98521c3d3f34
2021-11-02 10:22:18 -07:00
Richard Barnes
0a07488ed2 use irange for loops 1 (#66741)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66741

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for(TYPE var=x0;var<x_max;x++)`

to the format

`for(const auto var: irange(xmax))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable warning suppressions added by hand.
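
For reference, a small self-contained example of the before/after loop shape (the surrounding function is illustrative):

```
#include <c10/util/irange.h>

#include <cstdint>
#include <vector>

int64_t count_elements(const std::vector<std::vector<int64_t>>& shapes) {
  int64_t total = 0;
  // Old form targeted by the codemod:
  //   for (size_t i = 0; i < shapes.size(); i++) { ... }
  // New form: the index type is deduced and the loop variable is const.
  for (const auto i : c10::irange(shapes.size())) {
    total += static_cast<int64_t>(shapes[i].size());
  }
  return total;
}
```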

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D31705360

fbshipit-source-id: 7115f76e381ad2d98584eb534961c3cbb957ebaa
2021-10-19 03:28:51 -07:00
Xue Li
2f099c7555 Revert D30652629: use irange for loops
Test Plan: revert-hammer

Differential Revision:
D30652629 (687c2267d4)

Original commit changeset: 0ae6c4bbbb55

fbshipit-source-id: 5c4f067b584a021c8c9656454d1ee60999600fb3
2021-10-15 15:23:10 -07:00
Richard Barnes
687c2267d4 use irange for loops (#66234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66234

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for(TYPE var=x0;var<x_max;x++)`

to the format

`for(const auto var: irange(xmax))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable warning suppressions added by hand.

bypass_size_limit
allow-large-files

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D30652629

fbshipit-source-id: 0ae6c4bbbb554bad42e372792a6430e1acf15e3e
2021-10-15 13:50:33 -07:00
Nikita Shulga
2c7df1360a Bump torch version to 1.11 (#65435)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65435

Reviewed By: zhouzhuojie

Differential Revision: D31099045

Pulled By: malfet

fbshipit-source-id: 6ae6ca8a4b652fc51ee3138c800d067e144acbaa
2021-09-22 07:07:16 -07:00
Linbin Yu
1b5b210f2c [Android] print type name for IValues (#64602)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64602

Print the type name in the error message for easier debugging.

Test Plan:
Example:
java.lang.IllegalStateException: Expected IValue type Tensor, actual type TensorList

Reviewed By: beback4u

Differential Revision: D30782318

fbshipit-source-id: 60d88a659e7b4bb2b574b12c7652a28f0d5ad0d2
2021-09-09 17:06:15 -07:00
Ivan Kobzarev
c982f13a80 [android][vulkan] Fix model loading for Vulkan backend (#63402)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63402

Test Plan: Imported from OSS

Reviewed By: SS-JIA

Differential Revision: D30370692

Pulled By: IvanKobzarev

fbshipit-source-id: 73311b9b767fe9ed3ae390db59d6aa2c4a98f06d
2021-08-17 10:20:32 -07:00
Kimish Patel
38c185189c [Pytorch Edge] Enable kineto profiler on mobile via EdgeKinetoProfiler (#62419)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62419

This diff adds support for a CPU-only Kineto profiler on mobile, thus enabling chrome trace generation on mobile. This brings the C++ API for mobile profiling on par with TorchScript.
This is done via:
1. Utilizing debug handle annotations in KinetoEvent.
2. Adding post-processing capability, via callbacks, to KinetoThreadLocalState.
3. Creating a new RAII-style profiler, KinetoEdgeCPUProfiler, which can be used in the surrounding scope of model execution. This will write a chrome trace to the location specified in the profiler constructor; see the usage sketch below.
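
A hedged usage sketch of the RAII flow just described: the class name and scope-based behavior come from the summary, while the header path and constructor arguments are assumptions.

```
#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/module.h>
#include <torch/csrc/jit/mobile/profiler_edge.h>  // assumed header for KinetoEdgeCPUProfiler

#include <string>
#include <vector>

void run_with_trace(const std::string& model_path,
                    const std::vector<c10::IValue>& inputs) {
  auto module = torch::jit::_load_for_mobile(model_path);
  {
    // RAII profiler: lives for the scope surrounding model execution and
    // writes a chrome trace to the given file when it goes out of scope.
    torch::jit::mobile::KinetoEdgeCPUProfiler profiler(
        module, "/tmp/module_trace.json");  // constructor arguments are an assumption
    module.forward(inputs);
  }  // trace flushed here
}
```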

Test Plan:
MobileProfiler.ModuleHierarchy

Imported from OSS

Reviewed By: raziel

Differential Revision: D29993660

fbshipit-source-id: 0b44f52f9e9c5f5aff81ebbd9273c254c3c03299
2021-08-13 21:40:19 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Ivan Kobzarev
06a3b23971 [android] Lite interpreter module to load from assets (#61609)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61609

Test Plan: Imported from OSS

Reviewed By: cccclai

Differential Revision: D29688641

Pulled By: IvanKobzarev

fbshipit-source-id: 7857bad51e91eae7c90a1218d463f3767f4fae15
2021-07-23 17:51:18 -07:00
zhouzhuojie
6107cf3750 Add --jobs 0 for git submodule update (#61311)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61311

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61152

Some related docs about `submodule.fetchJobs`
https://git-scm.com/docs/git-config#Documentation/git-config.txt-submodulefetchJobs

```
time git submodule update --init --recursive
________________________________________________________
Executed in  243.20 secs    fish           external
   usr time   49.64 secs  213.00 micros   49.64 secs
   sys time   29.27 secs  795.00 micros   29.27 secs
```

```
time git submodule update --init --recursive --jobs 4
________________________________________________________
Executed in  143.04 secs    fish           external
   usr time   51.06 secs  246.00 micros   51.06 secs
   sys time   30.96 secs  742.00 micros   30.96 secs
```

```
time git submodule update --init --recursive --jobs 8
________________________________________________________
Executed in  124.64 secs    fish           external
   usr time   51.76 secs  264.00 micros   51.76 secs
   sys time   30.49 secs  739.00 micros   30.49 secs

```

```
time git submodule update --init --recursive --jobs 0 # use all online cpus
 ________________________________________________________
Executed in  129.75 secs    fish           external
   usr time   51.64 secs  181.00 micros   51.64 secs
   sys time   31.49 secs  781.00 micros   31.49 secs

```

Test Plan: Imported from OSS

Reviewed By: 1ntEgr8

Differential Revision: D29560875

Pulled By: zhouzhuojie

fbshipit-source-id: 556027dffe744c66428075a8a1bf64683930aaaf
2021-07-07 16:28:18 -07:00
Nikita Shulga
f1ce7f4b7f Update PyTorch version to 0.10.0a (#59345)
Summary:
Also fix `TestProducerVersion` by removing the assumption that the major and minor versions are single digits

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59345

Reviewed By: robieta

Differential Revision: D28853720

Pulled By: malfet

fbshipit-source-id: 4b6d03c6b0c9d652a5aef792aaa84eaa522d10e8
2021-06-03 07:55:44 -07:00
Nikita Shulga
dfe85d6fd7 Revert D28840199: [pytorch][PR] Update version to 1.10
Test Plan: revert-hammer

Differential Revision:
D28840199 (3453aa44c1)

Original commit changeset: acc5a93e12a3

fbshipit-source-id: a41eb7c882fe0bf8f9a35ef180e99a7e72f6857d
2021-06-02 16:25:51 -07:00
Nikita Shulga
3453aa44c1 Update version to 1.10 (#59325)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59325

Reviewed By: jbschlosser, seemethere

Differential Revision: D28840199

Pulled By: malfet

fbshipit-source-id: acc5a93e12a3db47d6103ea064bec9e40320f708
2021-06-02 15:00:33 -07:00
H1Gdev
6edd49a8e8 [Android]Removed dependency with AppCompat. (#58527)
Summary:
I build using [Bazel](https://bazel.build/).

When I use `pytorch_android` in the latest Android app, I get the following error due to dependencies:

```
$ bazel build //app/src/main:app
WARNING: API level 30 specified by android_ndk_repository 'androidndk' is not available. Using latest known API level 29
INFO: Analyzed target //app/src/main:app (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/H1Gdev/android-bazel-app/app/src/main/BUILD.bazel:3:15: Merging manifest for //app/src/main:app failed: (Exit 1): ResourceProcessorBusyBox failed: error executing command bazel-out/k8-opt-exec-2B5CBBC6/bin/external/bazel_tools/src/tools/android/java/com/google/devtools/build/android/ResourceProcessorBusyBox --tool MERGE_MANIFEST -- --manifest ... (remaining 11 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox ResourceProcessorBusyBox failed: error executing command bazel-out/k8-opt-exec-2B5CBBC6/bin/external/bazel_tools/src/tools/android/java/com/google/devtools/build/android/ResourceProcessorBusyBox --tool MERGE_MANIFEST -- --manifest ... (remaining 11 argument(s) skipped)

Use --sandbox_debug to see verbose messages from the sandbox
Error: /home/H1Gdev/.cache/bazel/_bazel_H1Gdev/29e18157a4334967491de4cc9a879dc0/sandbox/linux-sandbox/914/execroot/__main__/app/src/main/AndroidManifest.xml:19:18-86 Error:
	Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from [maven//:androidx_core_core] AndroidManifest.xml:19:18-86
	is also present at [maven//:com_android_support_support_compat] AndroidManifest.xml:19:18-91 value=(android.support.v4.app.CoreComponentFactory).
	Suggestion: add 'tools:replace="android:appComponentFactory"' to <application> element at AndroidManifest.xml:5:5-19:19 to override.
May 19, 2021 10:45:03 AM com.google.devtools.build.android.ManifestMergerAction main
SEVERE: Error during merging manifests
com.google.devtools.build.android.AndroidManifestProcessor$ManifestProcessingException: Manifest merger failed : Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from [maven//:androidx_core_core] AndroidManifest.xml:19:18-86
	is also present at [maven//:com_android_support_support_compat] AndroidManifest.xml:19:18-91 value=(android.support.v4.app.CoreComponentFactory).
	Suggestion: add 'tools:replace="android:appComponentFactory"' to <application> element at AndroidManifest.xml:5:5-19:19 to override.
	at com.google.devtools.build.android.AndroidManifestProcessor.mergeManifest(AndroidManifestProcessor.java:186)
	at com.google.devtools.build.android.ManifestMergerAction.main(ManifestMergerAction.java:217)
	at com.google.devtools.build.android.ResourceProcessorBusyBox$Tool$5.call(ResourceProcessorBusyBox.java:93)
	at com.google.devtools.build.android.ResourceProcessorBusyBox.processRequest(ResourceProcessorBusyBox.java:233)
	at com.google.devtools.build.android.ResourceProcessorBusyBox.main(ResourceProcessorBusyBox.java:177)

Warning:
See http://g.co/androidstudio/manifest-merger for more information about the manifest merger.
Target //app/src/main:app failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 2.221s, Critical Path: 1.79s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
```

This is due to a conflict between `AndroidX` and the `Support Library`, on which `pytorch_android_torch` depends.
(In the case of `Gradle`, it is avoided by `android.useAndroidX`.)

I created [Android application](https://github.com/H1Gdev/android-bazel-app) for comparison.

At first, I updated `AppCompat` from `Support Library` to `AndroidX`, but `pytorch_android` and `pytorch_android_torchvision` didn't seem to need these dependencies, so I removed them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58527

Reviewed By: xta0

Differential Revision: D28585234

Pulled By: IvanKobzarev

fbshipit-source-id: 78aa6b1525543594ae951a6234dd88a3fdbfc062
2021-05-20 15:49:19 -07:00
Chen Lai
0c3db1cb33 [Pytorch] Build lite interpreter as default for Android
Summary:
Build lite interpreter as default for Android; this should wait until https://github.com/pytorch/pytorch/pull/56002 lands
Mainly two changes:
1. Use lite interpreter as default for Android
2. Switch the lite interpreter build test to full jit build test

Test Plan: Imported from OSS

Differential Revision: D27695530

Reviewed By: IvanKobzarev

Pulled By: cccclai

fbshipit-source-id: e1b2c70fee6590accc22c7404b9dd52c7d7c36e2
2021-05-17 14:12:48 -07:00
Ailing Zhang
16710e5d93 Add reasons in TODO for the unblocked AVNTM -> InferenceMode cases. (#56823)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56823

Test Plan: CI

Reviewed By: bhosmer

Differential Revision: D27975596

fbshipit-source-id: 1d5681852163cd24ae245a6d90e44a34a0909145
2021-04-26 15:58:34 -07:00
Ailing Zhang
0c544ebd24 Revert to ANVTM in jni_lite due to Oculus failure.
Test Plan: FanW123 verified on her Oculus device

Reviewed By: FanW123

Differential Revision: D27943428

fbshipit-source-id: ac1c1ca6b47937f8839ba23c9e3af0843ea086a3
2021-04-22 11:49:01 -07:00
Ailing Zhang
f096245610 AutoNonVariableTypeMode->InferenceMode in OSS. (#56421)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56421
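
A minimal sketch of the guard swap the title describes; the wrapper function here is illustrative, not code from this PR.

```
#include <c10/core/InferenceMode.h>
#include <torch/script.h>

at::Tensor run_inference(torch::jit::Module& module, const at::Tensor& input) {
  // Previously this kind of call site was wrapped in the deprecated guard:
  //   at::AutoNonVariableTypeMode non_var_guard(true);
  // InferenceMode is the replacement; it likewise skips autograd bookkeeping.
  c10::InferenceMode guard;
  return module.forward({input}).toTensor();
}
```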

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D27866609

Pulled By: ailzhang

fbshipit-source-id: 040991a031c5511501b03cfe21a4a636586e120e
2021-04-19 18:07:41 -07:00
Ivan Kobzarev
7eed077406 [android] Fix headers publishing in aar (#56068)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56068

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D27776655

Pulled By: IvanKobzarev

fbshipit-source-id: 75e07b56dab8f7ff2ab501d0ddc4566ef2378fcf
2021-04-15 09:54:08 -07:00
CodemodService FBSourceGoogleJavaFormatLinterBot
37ac271089 [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D27731676

fbshipit-source-id: 9402fa9f19b9186a2f38e56c110800254a8e8d91
2021-04-13 04:15:35 -07:00
Ivan Kobzarev
ee2de8ae3a [android] Module load extraFiles (#55644)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55644

Testing:
Prepare the model with extra:

```
    extra_files = {}
    extra_files["model/live.spec.json"] = '{ "spec": "spec_value"}'
    torch.jit.save(script_model, "extra.pt", _extra_files=extra_files)
    script_model._save_for_lite_interpreter("extra.ptl", _extra_files=extra_files)
```

Change android/test_app/app/src/main/java/org/pytorch/testapp/MainActivity.java
1. Full jit
```
Map<String, String> map = new HashMap<>();
map.put("model/live.spec.json", "");
mModule = Module.load(assetFilePath(this, BuildConfig.MODULE_ASSET_NAME), map, Device.CPU);
android.util.Log.i("XXX", "map:" + map);
```
`gradle -p android test_app:installMnetLocalBaseDebug -PABI_FILTERS=arm64-v8a`

2. Lite
```
Map<String, String> map = new HashMap<>();
map.put("model/live.spec.json", "");
mModule = LiteModuleLoader.load(assetFilePath(this, BuildConfig.MODULE_ASSET_NAME), map, Device.CPU);
android.util.Log.i("XXX", "map:" + map);
```
`BUILD_LITE_INTERPRETER=1 gradle -p android test_app:installMnetLocalBaseDebug -PABI_FILTERS=arm64-v8a`

Check logcat

Test Plan: Imported from OSS

Reviewed By: ljk53

Differential Revision: D27663624

Pulled By: IvanKobzarev

fbshipit-source-id: 0dcd93b2fddaacd221db0306d18afee2584fcb85
2021-04-09 14:52:21 -07:00
Ailing Zhang
eb52e36460 Revert D27469727: [pytorch][PR] [android] fbjni from prefab dependency 0.2.2
Test Plan: revert-hammer

Differential Revision:
D27469727 (507b46f23e)

Original commit changeset: 2ab22879e81c

fbshipit-source-id: d656463b81a02fbf870dded5d3868bb33e016fe0
2021-03-31 17:21:30 -07:00
Ivan Kobzarev
7f87358840 [android] bump nightlies version to 1.9.0 (#55076)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55076

Test Plan: Imported from OSS

Reviewed By: seemethere

Differential Revision: D27474604

Pulled By: IvanKobzarev

fbshipit-source-id: b0b694333464485fd20036e9d5ef982877b1aa19
2021-03-31 14:37:59 -07:00
Ivan Kobzarev
507b46f23e [android] fbjni from prefab dependency 0.2.2 (#55066)
Summary:
Switching pytorch android to use fbjni from prefab dependencies.
Bumping the fbjni version to 0.2.2 and the soloader version to 0.10.1.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55066

Reviewed By: dreiss

Differential Revision: D27469727

Pulled By: IvanKobzarev

fbshipit-source-id: 2ab22879e81c9f2acf56807c6a133b0ca20bb40a
2021-03-31 14:12:18 -07:00
Alexander Golynski
09756e7280 Revert D27370295: [android] fbjni android use prefab dependency, version 0.2.2
Test Plan: revert-hammer

Differential Revision:
D27370295 (2bee09a577)

Original commit changeset: bde881a8d4ed

fbshipit-source-id: 2fcc8f522fb08d4f8299f7e824341be32afb184a
2021-03-31 06:13:26 -07:00
Ivan Kobzarev
2bee09a577 [android] fbjni android use prefab dependency, version 0.2.2 (#54792)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54792

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D27370295

Pulled By: IvanKobzarev

fbshipit-source-id: bde881a8d4edd4636aa4ec7cecbe770b5b65bb1f
2021-03-30 20:26:36 -07:00
Sam Estep
5bcbbf5373 Lint trailing newlines (#54737)
Summary:
*Context:* https://github.com/pytorch/pytorch/issues/53406 added a lint for trailing whitespace at the ends of lines. However, in order to pass FB-internal lints, that PR also had to normalize the trailing newlines in four of the files it touched. This PR adds an OSS lint to normalize trailing newlines.

The changes to the following files (made in 54847d0adb9be71be4979cead3d9d4c02160e4cd) are the only manually-written parts of this PR:

- `.github/workflows/lint.yml`
- `mypy-strict.ini`
- `tools/README.md`
- `tools/test/test_trailing_newlines.py`
- `tools/trailing_newlines.py`

I would have liked to make this just a shell one-liner like the other three similar lints, but nothing I could find quite fit the bill. Specifically, all the answers I tried from the following Stack Overflow questions were far too slow (at least a minute and a half to run on this entire repository):

- [How to detect file ends in newline?](https://stackoverflow.com/q/38746)
- [How do I find files that do not end with a newline/linefeed?](https://stackoverflow.com/q/4631068)
- [How to list all files in the Git index without newline at end of file](https://stackoverflow.com/q/27624800)
- [Linux - check if there is an empty line at the end of a file [duplicate]](https://stackoverflow.com/q/34943632)
- [git ensure newline at end of each file](https://stackoverflow.com/q/57770972)

To avoid giving false positives during the few days after this PR is merged, we should probably only merge it after https://github.com/pytorch/pytorch/issues/54967.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54737

Test Plan:
Running the shell script from the "Ensure correct trailing newlines" step in the `quick-checks` job of `.github/workflows/lint.yml` should print no output and exit in a fraction of a second with a status of 0. That was not the case prior to this PR, as shown by this failing GHA workflow run on an earlier draft of this PR:

- https://github.com/pytorch/pytorch/runs/2197446987?check_suite_focus=true

In contrast, this run (after correcting the trailing newlines in this PR) succeeded:

- https://github.com/pytorch/pytorch/pull/54737/checks?check_run_id=2197553241

To unit-test `tools/trailing_newlines.py` itself (this is run as part of our "Test tools" GitHub Actions workflow):
```
python tools/test/test_trailing_newlines.py
```

Reviewed By: malfet

Differential Revision: D27409736

Pulled By: samestep

fbshipit-source-id: 46f565227046b39f68349bbd5633105b2d2e9b19
2021-03-30 13:09:52 -07:00
Brian Vaughan
9c60fc9cd9 Fix broken javadoc URL in README (#54434)
Summary:
The link in the README was broken

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54434

Reviewed By: ailzhang

Differential Revision: D27328733

Pulled By: nairbv

fbshipit-source-id: 12ebb6f66983f9348a90b9738fbd9f3f2660c2d1
2021-03-25 11:28:23 -07:00
generatedunixname89002005325674
05c8ddfe05 [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D27288729

fbshipit-source-id: 84c9f4cffdabd3c1967e3279ec123867d8eded00
2021-03-24 04:18:23 -07:00
Ivan Kobzarev
345b26ca08 [android][utils] Support ChannelsLast in TensorImageUtils (#48990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48990

Introducing TensorImageUtils methods to prepare tensors in the channelsLast MemoryFormat.
ChannelsLast is preferred for performance.

To avoid API-breaking changes, an additional MemoryFormat parameter is added, which is CONTIGUOUS by default.

Testing by checking test_app, which uses this call:
```
gradle -p android installMnetLocalBaseDebug -PABI_FILTERS=arm64-v8a
```

Test Plan: Imported from OSS

Reviewed By: jeffxtang

Differential Revision: D27173940

Pulled By: IvanKobzarev

fbshipit-source-id: 27788082d2c8b190323eadcf18de25d2c3b5e1f1
2021-03-23 14:54:36 -07:00
Ivan Kobzarev
5658ab5f77 [android] publishing to maven central (#53568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53568

Bintray, JCenter are going to be unavailable on May 1st

https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/

Migrating publishing to Maven Central the same way as other fb oss projects, reference PR https://github.com/pytorch/pytorch/pull/53568/files

to publish
```
./android/gradlew -p android publish
```

<img width="697" alt="Screen Shot 2021-03-09 at 3 14 08 PM" src="https://user-images.githubusercontent.com/6638825/110551387-3e3fc000-80ea-11eb-9604-4e69d6e6d085.png">

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D26928884

Pulled By: IvanKobzarev

fbshipit-source-id: 8754c93a2542405870e2621be5b3f14e3d0081b9
2021-03-10 10:42:23 -08:00
Ivan Kobzarev
05e0ea9661 [android] bump gradle version to 6.8.3 (#53567)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53567

Updating gradle to version 6.8.3.
The corresponding zip was uploaded to AWS.

Successful CI check: https://github.com/pytorch/pytorch/pull/53619

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D26928885

Pulled By: IvanKobzarev

fbshipit-source-id: b1081052967d9080cd6934fd48c4dbe933630e49
2021-03-10 10:40:28 -08:00
Chen Lai
30dd15e778 [PyTorch] Add doc string for lite interpreter related api in Android (#53136)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53136

As titled; the doc strings for iOS, C++ and Python are already in place.

As a reference, the doc string for other lite interpreter related apis
[_load_for_mobile](https://www.internalfb.com/intern/diffusion/FBS/browsefile/master/fbcode/caffe2/torch/csrc/jit/mobile/import.h?commit=c95d12f9d67ee198aa4b5aafec980e9048de1702&lines=16-43)
[_save_for_lite_interpreter](https://www.internalfb.com/intern/diffusion/FBS/browsefile/master/fbcode/caffe2/torch/jit/_script.py?commit=b1d7f0ba6001beed6ba3b0a69a225abab4ed3866&lines=496-509)
ghstack-source-id: 122936777

Test Plan: CI

Reviewed By: IvanKobzarev, iseeyuan

Differential Revision: D26742092

fbshipit-source-id: 76464b5e4ceafe71348b58ba2af98c3debdaae63
2021-03-02 23:17:54 -08:00
Chen Lai
14f7bf0629 [PyTorch] update CMake to build libtorch lite (#51419)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51419

## Summary

1. Add an option `BUILD_LITE_INTERPRETER` in `caffe2/CMakeLists.txt`, set to `OFF` by default.
2. Update `build_android.sh` with an argument to switch `BUILD_LITE_INTERPRETER`, `OFF` by default.
3. Add a mini demo app `lite_interpreter_demo` linked with the `libtorch` library, which can be used for quick tests.

## Test Plan
Built lite interpreter version of libtorch and test with Image Segmentation demo app ([android version](https://github.com/pytorch/android-demo-app/tree/master/ImageSegmentation)/[ios version](https://github.com/pytorch/ios-demo-app/tree/master/ImageSegmentation))

### Android
1. **Prepare model**: Prepare the lite interpreter version of the model by running the script below to generate the scripted models `deeplabv3_scripted.pt` and `deeplabv3_scripted.ptl`
```
import torch

model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
# Export full jit version model (not compatible lite interpreter), leave it here for comparison
scripted_module.save("deeplabv3_scripted.pt")
# Export lite interpreter version model (compatible with lite interpreter)
scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")

```
2. **Build libtorch lite for android**: Build libtorch for android for all 4 android abis (armeabi-v7a, arm64-v8a, x86, x86_64) with `BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh`. This PR is tested on a Pixel 4 emulator with x86, so use the command `BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh x86` to specify the ABI and save build time. After the build finishes, it will show the library path:
```
...
BUILD SUCCESSFUL in 55s
134 actionable tasks: 22 executed, 112 up-to-date
+ find /Users/chenlai/pytorch/android -type f -name '*aar'
+ xargs ls -lah
-rw-r--r--  1 chenlai  staff    13M Feb 11 11:48 /Users/chenlai/pytorch/android/pytorch_android/build/outputs/aar/pytorch_android-release.aar
-rw-r--r--  1 chenlai  staff    36K Feb  9 16:45 /Users/chenlai/pytorch/android/pytorch_android_torchvision/build/outputs/aar/pytorch_android_torchvision-release.aar
```
3. **Use the PyTorch Android libraries built from source in the ImageSegmentation app**: Create a folder `libs`; the path from the repository root will be `ImageSegmentation/app/libs`. Copy `pytorch_android-release` to the path `ImageSegmentation/app/libs/pytorch_android-release.aar`. Copy `pytorch_android_torchvision` (downloaded from [here](https://oss.sonatype.org/#nexus-search;quick~torchvision_android)) to the path `ImageSegmentation/app/libs/pytorch_android_torchvision.aar`. Update the `dependencies` part of `ImageSegmentation/app/build.gradle` to
```
dependencies {
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.0.2'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.2'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'

    implementation(name:'pytorch_android-release', ext:'aar')
    implementation(name:'pytorch_android_torchvision', ext:'aar')

    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.facebook.fbjni:fbjni-java-only:0.0.3'
}
```
Update `allprojects` part in `ImageSegmentation/build.gradle` to
```

allprojects {
    repositories {
        google()
        jcenter()
        flatDir {
            dirs 'libs'
        }
    }
}
```
4. **Update the model loader API**: Update `ImageSegmentation/app/src/main/java/org/pytorch/imagesegmentation/MainActivity.java` by:
4.1 Adding a new import: `import org.pytorch.LiteModuleLoader;`
4.2 Replacing the way the PyTorch lite model is loaded
```
//            mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.pt"));
            mModule = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "deeplabv3_scripted.ptl"));
```
5. **Test app**: Build and run the ImageSegmentation app in Android Studio,
![image](https://user-images.githubusercontent.com/16430979/107696279-9cea5900-6c66-11eb-8286-4d1d68abff61.png)

### iOS
1. **Prepare model**: Same as Android.
2. **Build libtorch lite for ios** `BUILD_PYTORCH_MOBILE=1 IOS_PLATFORM=SIMULATOR BUILD_LITE_INTERPRETER=1   ./scripts/build_ios.sh`
3. **Remove Cocoapods from the project**: run `pod deintegrate`
4. **Link ImageSegmentation demo app with the custom built library**:
Open your project in XCode, go to your project Target’s **Build Phases - Link Binaries With Libraries**, click the **+** sign and add all the library files located in `build_ios/install/lib`. Navigate to the project **Build Settings**, set the value **Header Search Paths** to `build_ios/install/include` and **Library Search Paths** to `build_ios/install/lib`.
In the build settings, search for **other linker flags**. Add a custom linker flag below
```
-all_load
```
Finally, disable bitcode for your target by selecting the Build Settings, searching for Enable Bitcode, and setting the value to No.

5. **Update library and API**
5.1 Update `TorchModule.mm`
To use the custom built libraries in the project, replace `#import <LibTorch/LibTorch.h>` (in `TorchModule.mm`), which is needed when using LibTorch via Cocoapods, with the code below:

```
//#import <LibTorch/LibTorch.h>
#include "ATen/ATen.h"
#include "caffe2/core/timer.h"
#include "caffe2/utils/string_utils.h"
#include "torch/csrc/autograd/grad_mode.h"
#include "torch/script.h"
#include <torch/csrc/jit/mobile/function.h>
#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/interpreter.h>
#include <torch/csrc/jit/mobile/module.h>
#include <torch/csrc/jit/mobile/observer.h>
```
5.2 Update `ViewController.swift`
```
//        if let filePath = Bundle.main.path(forResource:
//            "deeplabv3_scripted", ofType: "pt"),
//            let module = TorchModule(fileAtPath: filePath) {
//            return module
//        } else {
//            fatalError("Can't find the model file!")
//        }
        if let filePath = Bundle.main.path(forResource:
            "deeplabv3_scripted", ofType: "ptl"),
            let module = TorchModule(fileAtPath: filePath) {
            return module
        } else {
            fatalError("Can't find the model file!")
        }
```

### Unit test
Add `test/cpp/lite_interpreter`, with one unit test `test_cores.cpp` and a light model `sequence.ptl` to test `_load_for_mobile()`, `bc.find_method()` and `bc.forward()` functions.

### Size:
**With the change:**
Android:
x86: `pytorch_android-release.aar` (**13.8 MB**)

IOS:
`pytorch/build_ios/install/lib` (lib: **66 MB**):
```
(base) chenlai@chenlai-mp lib % ls -lh
total 135016
-rw-r--r--  1 chenlai  staff   3.3M Feb 15 20:45 libXNNPACK.a
-rw-r--r--  1 chenlai  staff   965K Feb 15 20:45 libc10.a
-rw-r--r--  1 chenlai  staff   4.6K Feb 15 20:45 libclog.a
-rw-r--r--  1 chenlai  staff    42K Feb 15 20:45 libcpuinfo.a
-rw-r--r--  1 chenlai  staff    39K Feb 15 20:45 libcpuinfo_internals.a
-rw-r--r--  1 chenlai  staff   1.5M Feb 15 20:45 libeigen_blas.a
-rw-r--r--  1 chenlai  staff   148K Feb 15 20:45 libfmt.a
-rw-r--r--  1 chenlai  staff    44K Feb 15 20:45 libpthreadpool.a
-rw-r--r--  1 chenlai  staff   166K Feb 15 20:45 libpytorch_qnnpack.a
-rw-r--r--  1 chenlai  staff   384B Feb 15 21:19 libtorch.a
-rw-r--r--  1 chenlai  staff    **60M** Feb 15 20:47 libtorch_cpu.a
```
`pytorch/build_ios/install`:
```
(base) chenlai@chenlai-mp install % du -sh *
 14M	include
 66M	lib
2.8M	share
```

**Master (baseline):**
Android:
x86: `pytorch_android-release.aar` (**16.2 MB**)

IOS:
`pytorch/build_ios/install/lib` (lib: **84 MB**):
```
(base) chenlai@chenlai-mp lib % ls -lh
total 172032
-rw-r--r--  1 chenlai  staff   3.3M Feb 17 22:18 libXNNPACK.a
-rw-r--r--  1 chenlai  staff   969K Feb 17 22:18 libc10.a
-rw-r--r--  1 chenlai  staff   4.6K Feb 17 22:18 libclog.a
-rw-r--r--  1 chenlai  staff    42K Feb 17 22:18 libcpuinfo.a
-rw-r--r--  1 chenlai  staff   1.5M Feb 17 22:18 libeigen_blas.a
-rw-r--r--  1 chenlai  staff    44K Feb 17 22:18 libpthreadpool.a
-rw-r--r--  1 chenlai  staff   166K Feb 17 22:18 libpytorch_qnnpack.a
-rw-r--r--  1 chenlai  staff   384B Feb 17 22:19 libtorch.a
-rw-r--r--  1 chenlai  staff    78M Feb 17 22:19 libtorch_cpu.a
```
`pytorch/build_ios/install`:
```
(base) chenlai@chenlai-mp install % du -sh *
 14M	include
 84M	lib
2.8M	share
```

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D26518778

Pulled By: cccclai

fbshipit-source-id: 4503ffa1f150ecc309ed39fb0549e8bd046a3f9c
2021-02-21 01:43:54 -08:00
Eli Uriegas
9653161fb4 bump nightlies to 1.9.0 (#51891)
Summary:
similar to https://github.com/pytorch/pytorch/pull/45696

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51891

Reviewed By: izdeby

Differential Revision: D26318646

Pulled By: seemethere

fbshipit-source-id: 757194845c758a24eed2d0550866ba890e7a0b58
2021-02-10 20:30:57 -08:00
H1Gdev
8ab22a080b Build pytorch_android using Gradle wrapper. (#51067)
Summary:
[Here](https://docs.gradle.org/current/userguide/gradle_wrapper.html), there is the following description.
`The recommended way to execute any Gradle build is with the help of the Gradle Wrapper`

I took a little time to prepare Gradle for the `pytorch_android` build (version, etc.).

I think using the Gradle wrapper will make the `pytorch_android` build more seamless.

Gradle wrapper version: 4.10.3

250c71121b/.circleci/scripts/build_android_gradle.sh (L13)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51067

Reviewed By: izdeby

Differential Revision: D26315718

Pulled By: IvanKobzarev

fbshipit-source-id: f8077d7b28dc0b03ee48bcdac2f5e47d9c1f04d9
2021-02-09 03:09:08 -08:00
Ivan Kobzarev
dbfaf966b0 [android] turn on USE_VULKAN for android builds by default (#51291)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51291

Turning on USE_VULKAN for android builds
Remove standalone android vulkan build

Testing all ci jobs (for master): https://github.com/pytorch/pytorch/pull/51292

Test Plan: Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D26141891

Pulled By: IvanKobzarev

fbshipit-source-id: e8e1a4ab612c0786ce09217ab9370fd75a71eb00
2021-01-29 11:58:21 -08:00
Ivan Kobzarev
c908ebd4a1 [android] fix yuv conversion - remove define (#50951)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50951

Test Plan: Imported from OSS

Reviewed By: fmassa

Differential Revision: D26021488

Pulled By: IvanKobzarev

fbshipit-source-id: 6d295762bb1160a3ed8bafac08e03e1eeb07d688
2021-01-22 11:30:57 -08:00
Ivan Kobzarev
98e2914614 [android] Fix YUV camera image to tensor (#50871)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50871

Issue: https://discuss.pytorch.org/t/trouble-with-yuv420-to-float-tensor-conversion/106721/3
Decoding was wrong and the resulting image had artifacts.

Testing:
Patch test_app with:
[input_tensor_to_bitmap.txt](https://github.com/pytorch/pytorch/files/5847553/input_tensor_to_bitmap.txt)

gradle -p android test_app:installMnetLocalCameraDebug -PABI_FILTERS=arm64-v8a

Before fix:
![before_yuv_fix](https://user-images.githubusercontent.com/6638825/105317604-63a35980-5b90-11eb-9609-2ed5818130bd.png)

After fix:
![after_yuv_fix](https://user-images.githubusercontent.com/6638825/105317643-70c04880-5b90-11eb-88b7-92dd90db8ed2.png)

Test Plan: Imported from OSS

Reviewed By: fmassa

Differential Revision: D25992519

Pulled By: IvanKobzarev

fbshipit-source-id: 4a46ed39c1cd70f8987fcc1023520e9659ae5d59
2021-01-21 13:53:57 -08:00
Richard Barnes
09eefec627 Clean up some type annotations in android (#49944)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49944

Upgrades type annotations from Python2 to Python3

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D25717539

fbshipit-source-id: c621e2712e87eaed08cda48eb0fb224f6b0570c9
2021-01-07 15:42:55 -08:00
Ashkan Aliabadi
fcb69d2eba Add android.permission.INTERNET permission to Android test_app. (#49996)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49996

According to section 5.2.1 of Snapdragon Profiler User Guide
(https://developer.qualcomm.com/qfile/30580/snapdragon_profiler_user_guide_reva.pdf)
OpenGL ES, Vulkan, and OpenCL apps must include
android.permission.INTERNET in the app's AndroidManifest.xml to enable
API tracing and GPU metrics.

Test Plan: Imported from OSS

Reviewed By: SS-JIA

Differential Revision: D25809555

Pulled By: AshkanAliabadi

fbshipit-source-id: c4d88a7ea98d9166efbc4157df7d822d99ba0df9
2021-01-06 12:58:28 -08:00
Riley Dulin
62f9b03b7c [lint] Apply whitespace linter to all gradle files
Summary: Run whitespace and license linters on gradle build files.

Reviewed By: zertosh

Differential Revision: D25687355

fbshipit-source-id: 44330daac7582fed6c05680bffc74e855a9b1dbc
2020-12-22 17:01:51 -08:00
Scott Wolchok
900aa4ee97 [PyTorch] remove convenience RecordFunctionCallback interface (#48620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48620

In preparation for storing bare function pointer (8 bytes)
instead of std::function (32 bytes).
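
To make the size argument concrete, a small standalone illustration; the callback type is a stand-in, and the 8/32-byte figures assume a typical 64-bit standard library.

```
#include <cstdio>
#include <functional>

// Stand-in context type for illustration; not the real RecordFunction API.
struct Ctx {};

using BareCallback = void (*)(const Ctx&);                // a plain function pointer
using WrappedCallback = std::function<void(const Ctx&)>;  // type-erased wrapper

int main() {
  // On a typical 64-bit libstdc++/libc++ build this prints 8 and 32 bytes.
  std::printf("function pointer: %zu bytes\n", sizeof(BareCallback));
  std::printf("std::function:    %zu bytes\n", sizeof(WrappedCallback));
  return 0;
}
```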
ghstack-source-id: 118568242

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D25132183

fbshipit-source-id: 3790cfb5d98479a46cf665b14eb0041a872c13da
2020-12-14 20:03:15 -08:00
Ivan Kobzarev
21ba48fe49 [vulkan] test_app for mobilenetV2 on vulkan api (#48924)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48924

Test Plan: Imported from OSS

Reviewed By: SS-JIA

Differential Revision: D25365000

Pulled By: IvanKobzarev

fbshipit-source-id: 79295b5781d2494681dbb4e4a741de49ff9c058c
2020-12-07 08:44:43 -08:00
Elijah Rippeth
aaf6582d02 fix issue by which pytorch_jni is not bundled in libtorch (#46466)
Summary:
Fixes issue with pytorch_jni.dll not being installed correctly in libtorch.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46466

Reviewed By: zhangguanheng66

Differential Revision: D25247564

Pulled By: ezyang

fbshipit-source-id: a509476ec4a0863fd67da3258e9300a9527d4f3b
2020-12-01 11:38:56 -08:00
Elijah Rippeth
b5479737d7 Add windows JNI support (#44257)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/32516

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44257

Reviewed By: malfet

Differential Revision: D24332820

Pulled By: ezyang

fbshipit-source-id: 1dd97e9c8140129a02a9078623b190b33f30d5b0
2020-10-15 10:48:45 -07:00
Eli Uriegas
a052597e6c Bump nightlies to 1.8.0 (#45696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45696

Similar to https://github.com/pytorch/pytorch/pull/40519

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: samestep

Differential Revision: D24064381

Pulled By: seemethere

fbshipit-source-id: 1484b9c4fc5fa8cfa7be591a0a5d4b6e05968589
2020-10-02 11:10:34 -07:00
generatedunixname89002005325674
592b398e82 [AutoAccept][Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D24044052

fbshipit-source-id: 50ac5b7480ed65af94617bf8b014252ea7b27c4f
2020-10-01 05:19:37 -07:00
Ivan Kobzarev
17be7c6e5c [vulkan][android][test_app] Add test_app variant that runs module on Vulkan (#44897)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44897

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D23763770

Pulled By: IvanKobzarev

fbshipit-source-id: 6ad16b7271c745313a71da64a629a764258bbc85
2020-09-29 10:00:46 -07:00
Ivan Kobzarev
2c300fd74c [android][vulkan] Module load argument to specify device cpu/vulkan (#44896)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44896

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D23763771

Pulled By: IvanKobzarev

fbshipit-source-id: 990a386ad13c704f03345dbe09e180281af913c9
2020-09-29 09:58:22 -07:00
Ivan Kobzarev
8ec6bc7292 [pytorch][vulkan][jni] LiteModuleLoader load argument to use vulkan device
Summary:
### Java, CPP
Introducing additional parameter `device` to LiteModuleLoader to specify device on which the `forward` will work.

On the java side this is enum that contains CPU and VULKAN, passing as jint to jni side and storing it as a member field on the same level as module.

On pytorch_jni_lite.cpp - for all input tensors converting them to vulkan.

On pytorch_jni_common.cpp (also goes to OSS) - if result Tensor is not cpu - call cpu. (Not Cpu at the moment is only Vulkan).

### BUCK
Introducing `pytorch_jni_lite_with_vulkan` target, that depends on `pytorch_jni_lite_with_vulkan` and adds `aten_vulkan`

In that case `pytorch_jni_lite_with_vulkan` can be used along with `pytorch_jni_lite_with_vulkan`.

Test Plan:
After the following diff with aidemo segmentation:
```
buck install -r aidemos-android
```
{F296224521}

Reviewed By: dreiss

Differential Revision: D23198335

fbshipit-source-id: 95328924e398901d76718c4d828f96e112dfa1b0
2020-09-16 18:35:22 -07:00
Ann Shan
a61318a535 [pytorch] Replace mobile run_method with get_method and operator() (#44202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44202

In preparation for changing mobile run_method() to be variadic, this diff:

* Implements get_method() for mobile Module, which is similar to find_method but expects the method to exist.
* Replaces calls to the current nonvariadic implementation of run_method() by calling get_method() and then invoking the operator() overload on Method objects.
ghstack-source-id: 111848222
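
A sketch of the replacement pattern described above, assuming the mobile Module/Method API (exact signatures may differ):

```
#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/module.h>

#include <string>
#include <utility>
#include <vector>

c10::IValue run_forward(const std::string& path, std::vector<c10::IValue> inputs) {
  auto module = torch::jit::_load_for_mobile(path);

  // Before: module.run_method("forward", inputs);
  // After: look the method up once (it must exist) and invoke it directly.
  auto method = module.get_method("forward");
  return method(std::move(inputs));
}
```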

Test Plan: CI, and all the unit tests which currently contain run_method that are being changed.

Reviewed By: iseeyuan

Differential Revision: D23436351

fbshipit-source-id: 4655ed7182d8b6f111645d69798465879b67a577
2020-09-11 10:23:06 -07:00
generatedunixname89002005287564
28b1360d24 [Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D23536088

fbshipit-source-id: d4c6c26ed5bad4e8c1b80ac1c05bd86b36cb6aaa
2020-09-04 07:30:50 -07:00
Ivan Kobzarev
c40e3f9f98 [android][jni] Support Tensor MemoryFormat in java wrappers (#40785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40785

The main goal of this change is to support creating Tensors from a blob in NHWC (ChannelsLast) format.

ChannelsLast is supported only for 4-dim tensors; this is enforced on the LibTorch side. I have not added asserts on the Java side, both to avoid duplicate asserts and in case this limitation is changed in the future.

Additional changes in `aten/src/ATen/templates/Functions.h`:

`from_blob` creates an `at::empty({0}, options)` tensor first and sets its Storage with sizes and strides afterwards.

But as ChannelsLast is only for 4-dim tensors, it fails on that creation, since dim == 1.

I've added `zero_sizes()` function that returns `{0, 0, 0, 0}` for ChannelsLast and ChannelsLast3d.
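
To make the `zero_sizes()` behavior concrete, a hypothetical sketch of the helper; the signature and types here are assumptions, and only the returned sizes come from the summary.

```
#include <c10/core/MemoryFormat.h>
#include <c10/util/SmallVector.h>

#include <cstdint>

// Hypothetical restatement of the zero_sizes() helper described above,
// written out here only for illustration.
c10::SmallVector<int64_t, 5> zero_sizes(c10::MemoryFormat memory_format) {
  // ChannelsLast layouts require at least 4 dimensions, so the placeholder
  // "empty" tensor needs four zero-length dims instead of the usual single one.
  if (memory_format == c10::MemoryFormat::ChannelsLast ||
      memory_format == c10::MemoryFormat::ChannelsLast3d) {
    return {0, 0, 0, 0};
  }
  return {0};
}
```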

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D22396244

Pulled By: IvanKobzarev

fbshipit-source-id: 02582d748a554e0f859aefe71cd2c1e321fb8979
2020-09-03 17:01:35 -07:00
David Reiss
844d469ae7 Remove proprietary notices
Summary:
These were added accidentally (probably by an IDE) during a refactor.
These files have always been Open Source.

Test Plan: CI

Reviewed By: xcheng16

Differential Revision: D23250761

fbshipit-source-id: 4974430c0e28dd3269424d38edb36f4f71508157
2020-08-20 20:14:59 -07:00
Jiakai Liu
ff17b83fd8 [pytorch][ci] add custom selective build flow for android build (#40199)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40199

Mobile custom selective build has already been covered by `test/mobile/custom_build/build.sh`.
It builds a CLI binary with the host toolchain and runs it on the host machine to check the correctness of the result.

But that custom build test doesn't cover the android/gradle build part, and we cannot use it to measure and track the in-APK size of the custom build library.

So this PR adds the selective build test coverage for android NDK build.
Also integrate with the CI to upload the custom build size to scuba.

TODO:
Ideally it should build android/test_app and measure the in-APK size.
But the test_app hasn't been covered by any CI yet and is currently
broken, so build & measure AAR instead (which can be inaccurate as we
plan to pack C++ header files into AAR soon).

Sample result: https://fburl.com/scuba/pytorch_binary_size/skxwb1gh
```

+---------------------+-------------+-------------------+-----------+----------+
|     build_mode      |    arch     |        lib        | Build Num |   Size   |
+---------------------+-------------+-------------------+-----------+----------+
| custom-build-single | armeabi-v7a | libpytorch_jni.so |   5901579 | 3.68 MiB |
| prebuild            | armeabi-v7a | libpytorch_jni.so |   5901014 | 6.23 MiB |
| prebuild            | x86_64      | libpytorch_jni.so |   5901014 | 7.67 MiB |
+---------------------+-------------+-------------------+-----------+----------+
```

Test Plan: Imported from OSS

Differential Revision: D22111115

Pulled By: ljk53

fbshipit-source-id: 11d24efbc49a85f851ecd0e481d14123f405b3a9
2020-07-02 21:11:01 -07:00
Ivan Kobzarev
a0569ad8f8 [android][readme] Aar native linking add fbjni (#40578)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40578

Test Plan: Imported from OSS

Differential Revision: D22239286

Pulled By: IvanKobzarev

fbshipit-source-id: 7a4160b621af8cfcc3b3d9e6da1a75c8afefba27
2020-07-01 08:09:17 -07:00
Martin Yuan
dfc7e71d13 [Selective Build] Apply query-based on instrumentation_tests
Summary:
1. Modularize some bzl files to break circular buck load
2. Use query-based on instrumentation_tests

(Note: this ignores all push blocking failures!)

Test Plan: CI

Reviewed By: kwanmacher

Differential Revision: D22188728

fbshipit-source-id: affbabd333c51c8b1549af6602c6bb79fabb7236
2020-06-26 08:05:53 -07:00
Eli Uriegas
fab412a8f3 Bump nightlies to 1.7.0 (#40519)
Summary:
edit: apparently we hardcode a lot more versions than I would've anticipated.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40519

Differential Revision: D22221280

Pulled By: seemethere

fbshipit-source-id: ba15a910a6755ec08c10f7783ed72b1e06e6b570
2020-06-25 22:36:33 -07:00
David Reiss
b7e044f0e5 Re-apply PyTorch pthreadpool changes
Summary:
This re-applies D21232894 (b9d3869df3) and D22162524, plus updates jni_deps in a few places
to avoid breaking host JNI tests.

Test Plan: `buck test @//fbandroid/mode/server //fbandroid/instrumentation_tests/com/facebook/caffe2:host-test`

Reviewed By: xcheng16

Differential Revision: D22199952

fbshipit-source-id: df13eef39c01738637ae8cf7f581d6ccc88d37d5
2020-06-23 19:26:21 -07:00
Kate Mormysh
92d3182c11 Revert D21232894: Unify PyTorch mobile's threadpool usage.
Test Plan: revert-hammer

Differential Revision:
D21232894 (b9d3869df3)

Original commit changeset: 8b3de86247fb

fbshipit-source-id: e6517cfec08f7dd0f4f8877dab62acf1d65afacd
2020-06-23 17:09:14 -07:00
Ivan Kobzarev
2e6da36298 [android][ci] Fix CI packaging headers to aar (#40442)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40442

Problem:
Nightly builds do not include libtorch headers, while local builds do.
The reason is that on docker images the path is different from the local path when building with `scripts/build_pytorch_android.sh`.

Solution:
Introducing a gradle property to be able to specify the path, and adding its specification to the gradle build job and the snapshots publishing job, which run on the same docker image.

Test:
ci-all jobs check https://github.com/pytorch/pytorch/pull/40443
checking that the gradle build results in headers inside the aar

Test Plan: Imported from OSS

Differential Revision: D22190955

Pulled By: IvanKobzarev

fbshipit-source-id: 9379458d8ab024ee991ca205a573c21d649e5f8a
2020-06-23 16:41:12 -07:00
Ashkan Aliabadi
b9d3869df3 Unify PyTorch mobile's threadpool usage. (#37243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37243

*** Why ***

As it stands, we have two thread pool solutions concurrently in use in PyTorch mobile: (1) the open source pthreadpool library under third_party, and (2) Caffe2's implementation of pthreadpool under caffe2/utils/threadpool.  Since the primary use-case of the latter has been to act as a drop-in replacement for the third party version so as to enable integration and usage from within NNPACK and QNNPACK, Caffe2's implementation is intentionally written to the exact same interface as the third party version.

The original argument in favor of C2's implementation has been improved performance as a result of using spin locks, as opposed to relinquishing the thread's time slot and putting it to sleep - a less expensive operation up to a point.  That seems to have given C2's implementation the upper hand in performance, hence justifying the added maintenance complexity, until the third party version improved in parallel surpassing the efficiency of C2's implementation as I have verified in benchmarks.  With that advantage gone, there is no reason to continue using C2's implementation in PyTorch mobile either from the perspective of performance or code hygiene.  As a matter of fact, there is considerable performance benefit to be had as a result of using the third party version as it currently stands.

This is a tricky change though, mainly because in order to avoid potential performance regressions, of which I have witnessed none but just in abundance of caution, we have decided to continue using the internal C2's implementation whenever building for Caffe2.  Again, this is mainly to avoid potential performance regressions in production C2 use cases even if doing so results in reduced performance as far as I can tell.

So to summarize, today, and as it currently stands, we are using C2's implementation for (1) NNPACK, (2) PyTorch QNNPACK, and (3) ATen parallel_for on mobile builds, while using the third party version of pthreadpool for XNNPACK as XNNPACK does not provide any build options to link against an external implementation unlike NNPACK and QNNPACK do.

The goal of this PR then, is to unify all usage on mobile to the third party implementation both for improved performance and better code hygiene.  This applies to PyTorch's use of NNPACK, QNNPACK, XNNPACK, and mobile's implementation of ATen parallel_for, all getting routed to the
exact same third party implementation in this PR.

Considering that NNPACK, QNNPACK, and XNNPACK are not mobile specific, these benefits carry over to non-mobile builds of PyTorch (but not Caffe2) as well.  The implementation of ATen parallel_for on non-mobile builds remains unchanged.

*** How ***

This is where things get tricky.

A good deal of the build system complexity in this PR arises from our desire to maintain C2's implementation intact for C2's use.

pthreadpool is a C library with no concept of namespaces, which means two copies of the library cannot exist in the same binary or symbol collision will occur violating ODR.  This means that somehow, and based on some condition, we must decide on the choice of a pthreadpool implementation.  In practice, this has become more complicated as a result of all the possible combinations that USE_NNPACK, USE_QNNPACK, USE_PYTORCH_QNNPACK, USE_XNNPACK, USE_SYSTEM_XNNPACK, USE_SYSTEM_PTHREADPOOL and other variables can result in.  Having said that, I have done my best in this PR to surgically cut through this complexity in a way that minimizes the side effects, considering the significance of the performance we are leaving on the table, yet, as a result of this combinatorial explosion explained above I cannot guarantee that every single combination will work as expected on the first try.  I am heavily relying on CI to find any issues as local testing can only go that far.

Having said that, this PR provides a simple non mobile-specific C++ thread pool implementation on top of pthreadpool, namely caffe2::PThreadPool that automatically routes to C2's implementation or the third party version depending on the build configuration.  This simplifies the logic at the cost of pushing the complexity to the build scripts.  From there on, this thread pool is used in aten parallel_for, and NNPACK and family, again, routing all usage of threading to C2 or third party pthreadpool depending on the build configuration.

When it is all said or done, the layering will look like this:

a) aten::parallel_for, uses
b) caffe2::PThreadPool, which uses
c) pthreadpool C API, which delegates to
    c-1) third_party implementation of pthreadpool if that's what the build has requested, and the rabbit hole ends here.
    c-2) C2's implementation of pthreadpool if that's what the build has requested, which itself delegates to
    c-2-1) caffe2::ThreadPool, and the rabbit hole ends here.

NNPACK, and (PyTorch) QNNPACK directly hook into (c). They never go through (b).
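
For orientation, a small example of what a caller of layer (a) looks like; the function itself is illustrative, and only `at::parallel_for` and the layering above come from the summary.

```
#include <ATen/Parallel.h>

#include <cstdint>
#include <vector>

// Caller-side view of layer (a): at::parallel_for splits [0, n) into chunks;
// on mobile builds those chunks run via caffe2::PThreadPool, which then
// delegates to whichever pthreadpool implementation the build selected.
void scale_inplace(std::vector<float>& data, float factor) {
  const auto n = static_cast<int64_t>(data.size());
  at::parallel_for(0, n, /*grain_size=*/2048, [&](int64_t begin, int64_t end) {
    for (int64_t i = begin; i < end; ++i) {
      data[i] *= factor;
    }
  });
}
```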

Differential Revision: D21232894

Test Plan: Imported from OSS

Reviewed By: dreiss

Pulled By: AshkanAliabadi

fbshipit-source-id: 8b3de86247fbc3a327e811983e082f9d40081354
2020-06-23 16:34:51 -07:00
generatedunixname89002005287564
08ae7d3a71 [Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D22183348

fbshipit-source-id: afd4f7e8c18587c6ce1e1d6e76c8eeb9c558de15
2020-06-23 05:26:55 -07:00
Ivan Kobzarev
9e5d62582c [android][gradle] packaging headers in aars for publishing (#40392)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40392

Test Plan: Imported from OSS

Differential Revision: D22167757

Pulled By: IvanKobzarev

fbshipit-source-id: 363319c64933382c0b0ddce65624fe5a4602da26
2020-06-22 16:56:39 -07:00
Ivan Kobzarev
c1dfc05cc9 [android][test_app][reland] test_app example linking to pytorch_android aar content (#40313)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40313

Test Plan: Imported from OSS

Differential Revision: D22147079

Pulled By: IvanKobzarev

fbshipit-source-id: c70a0a9dda8834376ed304a461318d4c6ef84582
2020-06-20 07:34:42 -07:00
Ilia Cherniavskii
cdbf78fba0 Revert D22118945: [android] test_app example linking to pytorch_android aar content
Test Plan: revert-hammer

Differential Revision:
D22118945 (52a2adb3f4)

Original commit changeset: 31c54b49b1f2

fbshipit-source-id: 0c4929d4441572debbbc49f8674b9fc49b726599
2020-06-19 12:16:18 -07:00
Nikita Shulga
a11870b45d Revert D22118971: [android] gradle version update
Test Plan: revert-hammer

Differential Revision:
D22118971 (262ad8e6ab)

Original commit changeset: 566e45e8f6f7

fbshipit-source-id: 74cfec0c978b724d84460a6d0c98f97b389811f7
2020-06-19 08:48:21 -07:00
Ivan Kobzarev
262ad8e6ab [android] gradle version update (#40176)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40176

Test Plan: Imported from OSS

Differential Revision: D22118971

Pulled By: IvanKobzarev

fbshipit-source-id: 566e45e8f6f7aa357c98976ad9981c76d4c66a7f
2020-06-18 16:28:34 -07:00
Ivan Kobzarev
52a2adb3f4 [android] test_app example linking to pytorch_android aar content (#39587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39587

Example of linking directly to the pytorch_jni library from the aar, plus an update to android/README.md with a tutorial on how to do it.

Adding a `nativeBuild` dimension to `test_app` that uses direct aar dependencies; since headers packaging has not landed yet, `nativeBuild` is excluded from the default CI build.

Additional change to `scripts/build_pytorch_android.sh`:

Skipping the clean task here, as the android gradle plugin 3.3.2 externalNativeBuild has problems with it when abiFilters are specified.

It will be restored in the following diffs, which upgrade the gradle and android gradle plugin versions.

Test Plan: Imported from OSS

Differential Revision: D22118945

Pulled By: IvanKobzarev

fbshipit-source-id: 31c54b49b1f262cbe5f540461d3406f74851db6c
2020-06-18 16:26:25 -07:00
generatedunixname89002005287564
c1958de49d [Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D22112813

fbshipit-source-id: 18ec732d7fc9752d5ed84e1cbb1e455e39e65d1e
2020-06-18 15:36:44 -07:00
Ivan Kobzarev
0891764e80 [android] ANDROID_STL=c++_shared (#39588)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39588

Before this diff we used c++_static linking.
Users will dynamically link to libpytorch_jni.so and have at least one more shared library of their own that probably uses the STL.

We must have no more than one STL per app. ( https://developer.android.com/ndk/guides/cpp-support#one_stl_per_app )

To have only one STL per app, this diff changes ANDROID_STL to c++_shared, which adds libc++_shared.so to the packaging.

Test Plan: Imported from OSS

Differential Revision: D22118031

Pulled By: IvanKobzarev

fbshipit-source-id: ea1e5085ae207a2f42d1fa9f6ab8ed0a21768e96
2020-06-18 13:50:05 -07:00
Ivan Kobzarev
d3b786afdb [android] Add libtorch headers to pytorch_android aar (#39507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39507

Adding a gradle task that runs after `assemble` to add a `headers` folder to the aar.

Headers are chosen from the first specified abi; they should be the same for all abis.

Adding headers works by temporarily unpacking the aar into gradle's `$buildDir`, copying the headers into it, and re-zipping the aar with the headers included.

Test Plan: Imported from OSS

Differential Revision: D22118009

Pulled By: IvanKobzarev

fbshipit-source-id: 52e5b1e779eb42d977c67dba79e278f1922b8483
2020-06-18 13:47:18 -07:00
Ivan Kobzarev
a71aefe857 [android][test_app] cleanup (#40136)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40136

Test Plan: Imported from OSS

Differential Revision: D22084170

Pulled By: IvanKobzarev

fbshipit-source-id: f8d2d0494b3ac4f7fe2118238d621155d697d2c4
2020-06-17 11:07:44 -07:00
Ivan Kobzarev
bf544c4a7b [android][fbjni] Test_app and Readme update with the recent fbjni dep state (#40058)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40058

Test Plan: Imported from OSS

Differential Revision: D22054574

Pulled By: IvanKobzarev

fbshipit-source-id: 751e5bd5103aa869702356fc181f458fe4fcfc83
2020-06-16 18:42:56 -07:00
Jiakai Liu
bcb44796ba [pytorch] consolidate android gradle build scripts (#39999)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39999

Cleaned up the android build scripts. Consolidated common functions into
common.sh. Also made a few minor fixes:

- We should trust build_android.sh to do the right thing about reusing an existing
  `build_android_$abi` directory;

- We should clean up `pytorch_android/src/main/jniLibs/` to remove
  broken symbolic links in case the custom abi list has changed since the last build;

Test Plan: Imported from OSS

Differential Revision: D22036926

Pulled By: ljk53

fbshipit-source-id: e93915ee4f195111b6171cdabc667fa0135d5195
2020-06-15 23:55:21 -07:00
Ivan Kobzarev
84d8a42fdb [android] Remove android fbjni subproject (#39691)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39691

After switching to the fbjni-java-only dependency, we no longer need the fbjni gradle subproject.

Test Plan: Imported from OSS

Differential Revision: D22054575

Pulled By: IvanKobzarev

fbshipit-source-id: 331478a57dd0d0aa06a5ce96278b6c897cb0ac78
2020-06-15 15:58:18 -07:00
Ivan Kobzarev
928e99b9bb [vulkan] jni build support USE_VULKAN (#39188)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39188

Extracting the Vulkan_LIBS and Vulkan_INCLUDES setup from `cmake/Dependencies.cmake` into `cmake/VulkanDependencies.cmake` and reusing it in android/pytorch_android/CMakeLists.txt.

Adding control to build with Vulkan via the env variable `USE_VULKAN` for `scripts/build_android.sh` and `scripts/build_pytorch_android.sh`.

We do not use the Vulkan backend in pytorch_android yet, but with this build option we can track the android aar size change with `USE_VULKAN` added.

Currently it is 88Kb.

Test Plan: Imported from OSS

Differential Revision: D21770892

Pulled By: IvanKobzarev

fbshipit-source-id: a39433505fdcf43d3b524e0fe08062d5ebe0d872
2020-05-28 15:39:02 -07:00
Ilia Cherniavskii
2d708cefcc Move RecordFunction into ATen (#37548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37548

Moving RecordFunction from the torch::autograd::profiler namespace into the at namespace.

Test Plan:
CI

Imported from OSS

Differential Revision: D21315852

fbshipit-source-id: 4a4dbabf116c162f9aef0da8606590ec3f3847aa
2020-05-07 14:52:39 -07:00
Ilia Cherniavskii
800d5617c0 Recording of TorchScript functions (#34710)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34710

Extending the RecordFunction API to support new recording scopes (such as TorchScript functions), as well as giving more flexibility in setting the sampling rate.

Test Plan: unit test (test_misc.cpp/testRecordFunction)

Reviewed By: gdankel, dzhulgakov

Differential Revision: D20158523

fbshipit-source-id: a9e0819d21cc06f4952d92d43246587c36137582
2020-03-31 00:33:23 -07:00
Nikita Shulga
b9adbb5002 Fix/relax CMake linter rules (#35574)
Summary:
Ignore the mixed upper-case/lower-case style for now.
Fix violations of the space-between-a-function-and-its-arguments rule.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35574

Test Plan: CI

Differential Revision: D20712969

Pulled By: malfet

fbshipit-source-id: 0012d430aed916b4518599a0b535e82d15721f78
2020-03-27 16:52:33 -07:00
Eli Uriegas
ff71a4192d Bump base version to 1.6.0a0 (#35495)
Summary:
Since we've done the branch cut for 1.5.0, we should bump the nightlies to 1.6.0.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35495

Differential Revision: D20697043

Pulled By: seemethere

fbshipit-source-id: 3646187a5e729994138bf2c68625f25f11430b3a
2020-03-27 12:14:49 -07:00
Ivan Kobzarev
f9cddff25a [android] Preload module actions do only once (#32313)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32313

`torch::autograd::profiler::pushCallback()` and `torch::jit::setPrintHandler` should be called only once, not before every module load.

`JITCallGuard guard;` is not needed before loading a module and has no effect.
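
A minimal sketch of the intended call-once pattern, assuming the one-time setup is wrapped in a plain C++ lambda; the setup body below is a placeholder, not the actual JNI code:

```cpp
#include <mutex>

void loadModuleOnDemand(const char* path) {
  static std::once_flag once;
  std::call_once(once, [] {
    // One-time, process-wide setup, e.g. registering the profiler callback
    // and the print handler. This runs exactly once, not before every load.
  });
  // ... proceed with loading the module from `path` ...
}
```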

Test Plan: Imported from OSS

Differential Revision: D20559676

Pulled By: IvanKobzarev

fbshipit-source-id: 70cce5d2dda20a00b378639725294cb3c440bad2
2020-03-20 20:06:25 -07:00
Ivan Kobzarev
ea41bf3100 [android] Maven publishing license fix (#32474)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32474

Test Plan: Imported from OSS

Differential Revision: D20559815

Pulled By: IvanKobzarev

fbshipit-source-id: 69a4fe951d331eb311bf821f94b372ccecdf1fd6
2020-03-20 12:27:08 -07:00
Jiakai Liu
6e47e7bf52 [pytorch][mobile] fixed AutoGradMode/AutoNonVariableTypeMode uses for mobile callsites
Summary:
There are three guards related to mobile build:
* AutoGradMode
* AutoNonVariableTypeMode
* GraphOptimizerEnabledGuard

Today we need to set some of these guards before calling libtorch APIs, because we customized the mobile build to only support inference (for both OSS and most FB use cases) to optimize binary size.

Several changes have been made since the 1.3 release, so there are already inconsistent uses of these guards in the codebase. I did a sweep of all mobile-related model loading & forward() call sites, trying to unify the use of these guards:

Full JIT: still set all three guards. More specifically:
* OSS: Fixed a bug where the guard was not set correctly at model load time in the Android JNI.
* FB: Not covered by this diff (as we are using mobile interpreter for most internal builds).

Lite JIT (mobile interpreter): only needs the AutoNonVariableTypeMode guard. AutoGradMode doesn't seem to be relevant (so it was removed from a few places) and GraphOptimizerEnabledGuard is definitely not relevant (only the full JIT has a graph optimizer). More specifically:
* OSS: At this point we are not committed to supporting Lite-JIT. For Android it shares the same code with the FB JNI callsites.
* FB:
** JNI callsites: Use the unified LiteJITCallGuard.
** For iOS/C++: manually set AutoNonVariableTypeMode for _load_for_mobile() & forward() callsites.

Ideally we should avoid having to set AutoNonVariableTypeMode for the mobile interpreter. It's currently needed for the dynamic dispatch + inference-only mobile build (where variable kernels are not registered) - without the guard it will try to run `variable_fallback_kernel` and crash (PR #34038). The proper fix will take some time, so we are using this workaround to unblock the selective BUCK build, which depends on dynamic dispatch.

PS. The current status (of having to set AutoNonVariableTypeMode) should not block running an FL model on the mobile interpreter - if all necessary variable kernels are registered, then it can call _load_for_mobile()/forward() against the FL model without setting the AutoNonVariableTypeMode guard. It's still inconvenient for Java callsites as the guard is set unconditionally inside the JNI methods.
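
For reference, a rough sketch of the full-JIT pattern described above, using the three guards listed at the top. This is an illustration under the assumption that the guards wrap both load and forward; the exact placement in the real JNI wrappers may differ.

```cpp
#include <torch/script.h>

torch::jit::IValue runFullJitInference(const std::string& path,
                                       const torch::Tensor& input) {
  // Inference-only mobile build: no autograd, no variable dispatch,
  // no graph optimizer.
  torch::autograd::AutoGradMode no_autograd(/*enabled=*/false);
  at::AutoNonVariableTypeMode non_var_type_mode(/*enabled=*/true);
  torch::jit::GraphOptimizerEnabledGuard no_optimizer(/*enabled=*/false);

  torch::jit::Module module = torch::jit::load(path);
  return module.forward({input});
}
```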

Test Plan: - CI

Reviewed By: xta0

Differential Revision: D20498017

fbshipit-source-id: ba6740f66839a61790873df46e8e66e4e141c728
2020-03-18 17:19:35 -07:00
generatedunixname89002005287564
14c1ab049d [Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D20415422

fbshipit-source-id: 860f8dd9dce0a2420792bafb7d3e58bd883ab7e4
2020-03-13 06:27:03 -07:00
Michael Suo
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.
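
A minimal illustration of what such a backcompat alias looks like; Module is used as an example here, and the real headers alias more names than shown:

```cpp
namespace torch {
namespace jit {
// New home of the class after killing the script namespace.
struct Module { /* ... */ };

namespace script {
// Backcompat: existing code spelling torch::jit::script::Module keeps compiling.
using Module = ::torch::jit::Module;
} // namespace script
} // namespace jit
} // namespace torch
```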

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
Jiakai Liu
7aca9afdfb [pytorch] remove boilerplate setQEngine() from PyTorch mobile predictors (#34556)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34556

According to
https://github.com/pytorch/pytorch/pull/34012#discussion_r388581548,
this `at::globalContext().setQEngine(at::QEngine::QNNPACK);` call isn't
really necessary for mobile.

In Context.cpp it selects the last available QEngine if the engine isn't
set explicitly. The OSS mobile prebuild should only include the QNNPACK
engine, so the default behavior should already be the desired behavior.

It makes a difference only when USE_FBGEMM is set - but that should be off
for both the OSS mobile build and the internal mobile build.
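
For context, this is the kind of boilerplate being removed; the call itself is the one quoted above, and the wrapper function name is just for illustration:

```cpp
#include <ATen/Context.h>

void setMobileQEngine() {
  // Redundant on mobile builds where QNNPACK is the only compiled-in engine:
  // Context already falls back to the last available QEngine.
  at::globalContext().setQEngine(at::QEngine::QNNPACK);
}
```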

Test Plan: Imported from OSS

Differential Revision: D20374522

Pulled By: ljk53

fbshipit-source-id: d4e437a03c6d4f939edccb5c84f02609633a0698
2020-03-11 00:55:14 -07:00
Jiakai Liu
9a5e9d8cec [pytorch][mobile] change mobile build scripts to build PyTorch by default (#34203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34203

Currently, cmake and the mobile build scripts still build libcaffe2 by
default. To build pytorch mobile, users have to set the environment variable
BUILD_PYTORCH_MOBILE=1 or set the cmake option BUILD_CAFFE2_MOBILE=OFF.

PyTorch mobile has been released for a while. It's about time to change
CMake and build scripts to build libtorch by default.

Changed the caffe2 CI job to build libcaffe2 by setting the BUILD_CAFFE2_MOBILE=1
environment variable. I only found android CI for libcaffe2 - did we ever
have iOS CI for libcaffe2?

Test Plan: Imported from OSS

Differential Revision: D20267274

Pulled By: ljk53

fbshipit-source-id: 9d997032a599c874d62fbcfc4f5d4fbf8323a12e
2020-03-05 23:40:47 -08:00
Michael Suo
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
Ashkan Aliabadi
6aecfd1e80 Mobile Backend: NHWC memory layout + XNNPACK integration. (#33722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33722

In order to improve CPU performance on floating-point models on mobile, this PR introduces a new CPU backend for mobile that implements the most common mobile operators with NHWC memory layout support through integration with XNNPACK.

XNNPACK itself, and this codepath, are currently only included in the build; the actual integration is gated with USE_XNNPACK preprocessor guards.  This preprocessor symbol is intentionally not passed on to the compiler, so as to enable this rollout in multiple stages in follow-up PRs.  This changeset will build XNNPACK as part of the build if the identically named USE_XNNPACK CMake variable, which defaults to ON, is enabled, but will not actually expose or enable this code path in any other way.
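
A purely hypothetical sketch of what this preprocessor gating looks like in practice; the function names are placeholders, not the actual operator integration:

```cpp
#include <cstdio>

static void conv_via_xnnpack_nhwc() { std::puts("XNNPACK NHWC path"); }
static void conv_via_existing_path() { std::puts("existing native path"); }

void dispatch_convolution() {
#ifdef USE_XNNPACK
  // Only reachable once the build actually passes USE_XNNPACK to the
  // compiler, which this changeset intentionally does not do yet.
  conv_via_xnnpack_nhwc();
#else
  conv_via_existing_path();
#endif
}
```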

Furthermore, it is worth pointing out that in order to efficiently map models onto these operators, some front-end method of exposing this backend to the user is needed.  The less efficient implementation would be to hook these operators into their corresponding native implementations, provided that a series of XNNPACK-specific conditions are met, much like how NNPACK is integrated with PyTorch today.

Having said that, while the above implementation is still expected to outperform NNPACK based on the benchmarks I ran, that integration would leave a considerable gap between the performance achieved and the maximum performance potential XNNPACK enables, as it does not provide a way to compute and factor one-time operations out of the innermost forward() loop.

The better solution, and one we will decide on soon, would involve either providing a JIT pass that maps nn operators onto these newly introduced operators, while allowing one-time calculations to be factored out, much like quantized mobile models, or introducing new eager-mode modules that would directly call into these implementations, either through c10 or some other mechanism, also allowing op creation to be decoupled from op execution.

This PR does not include any of the front end changes  mentioned above.  Neither does it include the mobile threadpool unification present in the original https://github.com/pytorch/pytorch/issues/30644.  Furthermore, this codepath seems to be faster than NNPACK in a good number of use cases, which can potentially allow us to remove NNPACK from aten to make the codebase a little simpler, granted that there is widespread support for such a move.

Regardless, these changes will be introduced gradually and in a more controlled way in subsequent PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/32509

Test Plan:
Build: CI
Functionality: Not exposed

Reviewed By: dreiss

Differential Revision: D20069796

Pulled By: AshkanAliabadi

fbshipit-source-id: d46c1c91d4bea91979ea5bd46971ced5417d309c
2020-02-24 21:58:56 -08:00
Ashkan Aliabadi
039dc90854 Revert D19521853: [pytorch][PR] Mobile Backend: NHWC memory layout + XNNPACK integration.
Test Plan: revert-hammer

Differential Revision:
D19521853

Original commit changeset: 99a1fab31d0e

fbshipit-source-id: 76dfc1f481797ba2386997533cf19957637687d6
2020-02-23 22:07:19 -08:00