Summary:
The central fbjni repository is now public, so point to it and
take the latest version, which includes support for host builds
and some condensed syntax.
Test Plan: CI
Differential Revision: D18217840
fbshipit-source-id: 454e3e081f7e3155704fed692506251c4018b2a1
Summary:
The Java and Python code were updated, but the test currently fails
because the model was not regenerated.
Test Plan: Ran test.
Reviewed By: xcheng16
Differential Revision: D18217841
fbshipit-source-id: 002eb2d3ed0eaa14b3d7b087b621a6970acf1378
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27664
When ANDROID_ABI is not set, find libtorch headers and libraries from
the LIBTORCH_HOME build variable (which must be set by hand), place
output under a "host" directory, and use dynamic linking instead of
static.
This doesn't actually work without some local changes to fbjni, but I
want to get the changes landed to avoid unnecessary merge conflicts.
Test Plan: Imported from OSS
Differential Revision: D18210315
Pulled By: dreiss
fbshipit-source-id: 685a62de3c2a0a52bec7fd6fb95113058456bac8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27663
CMake sets CMAKE_BINARY_DIR and creates it automatically. Relying on it
allows us to pass the -B command-line flag to CMake to specify an
alternate output directory.
Test Plan: Imported from OSS
Differential Revision: D18210316
Pulled By: dreiss
fbshipit-source-id: ba2f6bd4b881ddd00de73fe9c33d82645ad5495d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27662
This adds a new gradle subproject at pytorch_android/host and tweaks
the top-level build.gradle to only run some Android bits on the other
projects.
Referencing Java sources from inside the host directory feels a bit
hacky, but getting host and Android Gradle builds to coexist in the same
directory hit several roadblocks. We can try a bigger refactor to
separate the Android-specific and non-Android-specific parts of the
code, but that seems like overkill at this point for 4 Java files.
This doesn't actually run without some local changes to fbjni, but I
want to get the files landed to avoid unnecessary merge conflicts.
Test Plan: Imported from OSS
Differential Revision: D18210317
Pulled By: dreiss
fbshipit-source-id: dafb54dde06a5a9a48fc7b7065d9359c5c480795
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28399
This also addresses issue #26764.
It turns out it's incorrect to wrap the entire forward() call with a
NonVariableTypeMode guard, as some JIT passes have is_variable() checks and
can be triggered within the forward() call, e.g.:
jit/passes/constant_propagation.cpp
Since we now toggle NonVariableTypeMode per method/op call, we can
remove the guard around forward().
Test Plan: - With stacked PRs, verified it can load and run previously failed models.
Differential Revision: D18055850
Pulled By: ljk53
fbshipit-source-id: 3074d0ed3c6e05dbfceef6959874e5916aea316c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26767
Now that we have tagged ivalues, we can accurately recover the type with
`ivalue.type()`. This removes the other half-implemented pathways that
were created because we didn't have tags.
Test Plan: Imported from OSS
Differential Revision: D17561191
Pulled By: zdevito
fbshipit-source-id: 26aaa134099e75659a230d8a5a34a86dc39a3c5c
Summary:
All of the test cases move into a base class that is extended by the
instrumentation test and a new "HostTests" class that can be run in
normal Java. (Some changes to the build script and dependencies are
required before the host test can actually run.)
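For illustration, a minimal sketch of the pattern (JUnit 4 assumed; class and method names here are hypothetical, not the actual ones from this PR):
```
// Shared cases live in an abstract base; only asset lookup differs per environment.
public abstract class PytorchTestBase {
  // Instrumentation tests resolve assets from the APK; host tests from the filesystem.
  protected abstract String assetFilePath(String assetName) throws Exception;

  @org.junit.Test
  public void testLoadModule() throws Exception {
    org.junit.Assert.assertNotNull(Module.load(assetFilePath("test.pt")));
  }
}

// In a separate file: the host variant runs under plain JUnit on the dev machine.
public class HostTests extends PytorchTestBase {
  @Override
  protected String assetFilePath(String assetName) {
    return new java.io.File("src/androidTest/assets", assetName).getAbsolutePath();
  }
}
```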
ghstack-source-id: fe1165b513241b92c5f4a81447f5e184b3bfc75e
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27453
Test Plan: Imported from OSS
Reviewed By: IvanKobzarev
Differential Revision: D17800410
fbshipit-source-id: 1184f0caebdfa219f4ccd1464c67826ac0220181
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26995
Fix the current setup of excluding fbjni: with it, we cannot use pytorch_android:package independently, for example for testing with `gradle pytorch_android:cAT`.
For publishing it works, as pytorch_android has a dep on fbjni that will also be published.
For other cases we have two copies of fbjni.so: one from the native build (CMakeLists.txt does add_subdirectory(fbjni_dir)) and one from the dependency ':fbjni'.
We need both of them, as ':fbjni' also contains the java classes.
As a fix: keep the exclusion for the publishing tasks (bintrayUpload, uploadArchives), but otherwise use pickFirst (as we have two sources of fbjni.so).
# Testing
gradle cAT works, fbjni.so is included
gradle bintrayUpload (dryRun==true) - no fbjni.so
Test Plan: Imported from OSS
Differential Revision: D17637775
Pulled By: IvanKobzarev
fbshipit-source-id: edda56ba555678272249fe7018c1f3a8e179947c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26565
For the OSS mobile build we should keep QNNPACK off and PYTORCH_QNNPACK on,
as we don't include the caffe2 ops that use third_party/QNNPACK.
Update the android/iOS build scripts to include the new libraries accordingly.
Test Plan: - CI build
Differential Revision: D17508918
Pulled By: ljk53
fbshipit-source-id: 0483d45646d4d503b4e5c1d483e4df72cffc6c68
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26525
Create a util function to avoid boilerplate code as we are adding more
libraries.
Test Plan: - build CI;
Differential Revision: D17495394
Pulled By: ljk53
fbshipit-source-id: 9e19f96ede4867bdff5157424fa68b71e6cff8bf
Summary:
USE_STATIC_DISPATCH needs to be exposed, as we don't hide the header files
containing it for iOS (yet). Otherwise it's error-prone to require all
external projects to set the macro correctly on their own.
Also remove redundant USE_STATIC_DISPATCH definition from other places.
Test Plan:
- build android gradle to confirm linker can still strip out dead code;
- integrate with demo app to confirm inference can run without problem;
Differential Revision: D17484260
Pulled By: ljk53
fbshipit-source-id: 653f597acb2583761b723eff8026d77518007533
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26477
- At inference time we need to turn off autograd mode and turn on no-variable
mode, since we strip out these modules for inference-only mobile builds.
- Both flags are stored in thread-local variables, so we cannot simply
set them to false globally.
- Add the "autograd/grad_mode.h" header to the all-in-one header 'torch/script.h'
to reduce friction for iOS engineers who might need to do this manually in their
projects.
P.S. I tried to hide AutoNonVariableTypeMode in codegen but figured it's not
trivial (e.g. there are manually written parts not covered by codegen).
Might try it again later.
Test Plan: - Integrate with Android demo app to confirm inference runs correctly.
Differential Revision: D17484259
Pulled By: ljk53
fbshipit-source-id: 06887c8b527124aa0cc1530e8e14bb2361acef31
Summary:
At the moment this includes the https://github.com/pytorch/pytorch/pull/26219 changes; that PR is landing right now, after which this PR will contain only javadocs.
Applied all of dreiss's comments from the previous version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26149
Differential Revision: D17490720
Pulled By: IvanKobzarev
fbshipit-source-id: f340dee660d5ffe40c96b43af9312c09f85a000b
Summary:
fbjni is used when linking `libpytorch.so` and is specified in `pytorch_android/CMakeLists.txt`; as a result it is built as a separate `libfbjni.so` that is included in `pytorch_android.aar`.
We also have the java part of fbjni, connected to pytorch_android as a gradle dependency, which also contains `libfbjni.so`.
As a result, when we specify the gradle dep `'org.pytorch:pytorch_android'` (which has libfbjni.so) along with its transitive dep `'org.pytorch:pytorch_android_fbjni'` (which also has `libfbjni.so`), gradle reports an ambiguity error about the duplicate.
Fix: exclude libfbjni.so from the pytorch_android.aar packaging and use the `libfbjni.so` from the gradle dep `'org.pytorch:pytorch_android_fbjni'`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26382
Differential Revision: D17468723
Pulled By: IvanKobzarev
fbshipit-source-id: fcad648cce283b0ee7e8b2bab0041a2e079002c6
Summary:
After offline discussion with dzhulgakov:
- In the future we will introduce creation of signed-byte and unsigned-byte dtype tensors, but java has only a signed byte, so we will have to add some separation in the method names (java types and tensor types cannot be cleanly mapped) => return the type in the method names
- Fixes in error messages
- Non-static method Tensor.numel()
- Change Tensor.toString() to be more consistent with python
Update on Sep 16:
Types renamed on the java side to uint8, int8, int32, float32, int64, float64:
```
public abstract class Tensor {
  public static final int DTYPE_UINT8 = 1;
  public static final int DTYPE_INT8 = 2;
  public static final int DTYPE_INT32 = 3;
  public static final int DTYPE_FLOAT32 = 4;
  public static final int DTYPE_INT64 = 5;
  public static final int DTYPE_FLOAT64 = 6;
}
```
public static Tensor newUInt8Tensor(long[] shape, byte[] data)
public static Tensor newInt8Tensor(long[] shape, byte[] data)
public static Tensor newInt32Tensor(long[] shape, int[] data)
public static Tensor newFloat32Tensor(long[] shape, float[] data)
public static Tensor newInt64Tensor(long[] shape, long[] data)
public static Tensor newFloat64Tensor(long[] shape, double[] data)
```
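For illustration, a hedged usage sketch of the dtype-named factories above (shapes and data are arbitrary):
```
// The factory name now spells out the dtype, since java types alone cannot
// distinguish uint8 from int8.
Tensor a = Tensor.newUInt8Tensor(new long[] {2}, new byte[] {1, 2});
Tensor b = Tensor.newInt8Tensor(new long[] {2}, new byte[] {1, 2});
Tensor c = Tensor.newFloat32Tensor(new long[] {2, 2}, new float[] {1f, 2f, 3f, 4f});
```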
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26219
Differential Revision: D17406467
Pulled By: IvanKobzarev
fbshipit-source-id: a0d7d44dc8ce8a562da1a18bd873db762975b184
Summary:
Applying dzhulgakov's review comments.
org.pytorch.Tensor:
- dims renamed to shape
- typeCode renamed to dtype
- numElements renamed to numel
- newFloatTensor, newIntTensor, ... replaced with newTensor(...) (see the sketch below)
- Added support for dtype=long, double
- Reordered in code as byte, int, float, long, double
- For if conditions, ordered as float, int, byte, long, double, as I expect the float and int branches to be used most often
- Tensor.toString() no longer includes the data, only numel (the data buffer capacity)
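A minimal sketch of the renamed surface, assuming `newTensor` is overloaded per data-array type as described above (exact return types are assumptions):
```
// Sketch only: the newTensor overload is resolved by the array element type.
Tensor t = Tensor.newTensor(new long[] {2, 3}, new double[] {1, 2, 3, 4, 5, 6});
long[] shape = t.shape(); // was dims
long numel = t.numel();   // was numElements
int dtype = t.dtype();    // was typeCode
```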
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26183
Differential Revision: D17374332
Pulled By: IvanKobzarev
fbshipit-source-id: ee93977d9c43c400b6c054b6286080321ccb81bc
Summary:
The main part is switching at::Tensor creation from `torch::empty(torch::IntArrayRef(...))->ShareExternalPointer(...)` to `torch::from_blob(...)`.
Removed the explicit `device CPU` setting, as `at::TensorOptions` defaults to `device CPU`.
Also renamed local variables, removing the `input` prefix to make them shorter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25973
Differential Revision: D17356837
Pulled By: IvanKobzarev
fbshipit-source-id: 679e099b8aebd787dbf8ed422dae07a81243e18f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25984
Link static libtorch libraries into pytorch.so (API library for android)
with "-Wl,--gc-sections" flag to remove unused symbols in libtorch.
Test Plan:
- full gradle CI with stacked PR;
- will check final artifacts.tgz size change;
Differential Revision: D17312859
Pulled By: ljk53
fbshipit-source-id: 99584d15922867a7b3c3d661ba238a6f99f43db5
Summary:
Gradle tasks for publishing to bintray, jcenter, and mavencentral; snapshot builds go to oss.sonatype.org.
These gradle changes add the tasks:
bintrayUpload - publishing on bintray, in the 'facebook' org
uploadArchives - uploading to maven repos
The gradle tasks are copied from Facebook open-source libraries like https://github.com/facebook/litho and https://github.com/facebookincubator/spectrum.
To publish, we need to provide credentials somehow (e.g. in ~/.gradle/gradle.properties):
```
signing.keyId=
signing.password=
signing.secretKeyRingFile=
bintrayUsername=
bintrayApiKey=
bintrayGpgPassword=
SONATYPE_NEXUS_USERNAME=
SONATYPE_NEXUS_PASSWORD=
```
android/libs/fbjni is a submodule; to be able to add publishing tasks to it (it needs to be published as a separate maven dependency), I created `android/libs/fbjni_local`, which has only a `build.gradle` with the release tasks.
The pytorch_android dependency on ':fbjni' changed from implementation to api, as implementation is treated as a 'private' dependency (translated to scope=runtime in the maven pom file), while api works as 'compile'.
Testing:
It's already published on bintray with version 0.0.4 and can be used in gradle files as:
```
repositories {
maven {
url "https://dl.bintray.com/facebook/maven"
}
}
dependencies {
implementation 'com.facebook:pytorch_android:0.0.4'
implementation 'com.facebook:pytorch_android_torchvision:0.0.4'
}
```
It was published in the com.facebook group.
I requested a sync to jcenter from bintray; that usually takes 2-3 days.
Versioning added version suffixes to the aar output files, and the circleCI jobs for android started failing, as they expected just pytorch_android.aar and pytorch_android_torchvision.aar, without any version.
To avoid this, I changed the circleCI android jobs to zip the *.aar files and publish them as a single artifact named artifacts.zip. I will add kostmo to check this part; if the circleCI jobs finish ok, everything works :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25351
Reviewed By: kostmo
Differential Revision: D17135886
Pulled By: IvanKobzarev
fbshipit-source-id: 64eebac670bbccaaafa1b04eeab15760dd5ecdf9
Summary:
Introducing circleCI jobs for pytorch_android gradle builds; the ultimate goal at the moment is to run:
```
gradle assembleRelease -p ~/workspace/android/pytorch_android
```
To assemble the android gradle build (aar) we need the results of the libtorch-android shared-library builds, with headers, for 4 android ABIs, so pytorch_android_gradle_build requires 4 jobs:
```
- pytorch_android_gradle_build:
requires:
- pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build
- pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build
- pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build
- pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build
```
All jobs use the same base docker_image; we differentiate them by committing docker images with different android_abi suffixes (as is done now for xla and namedtensor) - see `&pytorch_linux_build_defaults`:
```
if [[ ${BUILD_ENVIRONMENT} == *"namedtensor"* ]]; then
export COMMIT_DOCKER_IMAGE=$output_image-namedtensor
elif [[ ${BUILD_ENVIRONMENT} == *"xla"* ]]; then
export COMMIT_DOCKER_IMAGE=$output_image-xla
elif [[ ${BUILD_ENVIRONMENT} == *"-x86"* ]]; then
export COMMIT_DOCKER_IMAGE=$output_image-android-x86
elif [[ ${BUILD_ENVIRONMENT} == *"-arm-v7a"* ]]; then
export COMMIT_DOCKER_IMAGE=$output_image-android-arm-v7a
elif [[ ${BUILD_ENVIRONMENT} == *"-arm-v8a"* ]]; then
export COMMIT_DOCKER_IMAGE=$output_image-android-arm-v8a
elif [[ ${BUILD_ENVIRONMENT} == *"-x86_64"* ]]; then
export COMMIT_DOCKER_IMAGE=$output_image-android-x86_64
else
export COMMIT_DOCKER_IMAGE=$output_image
fi
```
The pytorch_android_gradle_build job copies the headers and the libtorch.so and libc10.so results from the libtorch android docker images, first to the workspace and then to the android_abi=x86 docker image, where it runs the final gradle build by calling `.circleci/scripts/build_android_gradle.sh`.
For PR jobs we have only the `pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build` libtorch android build, so there is a separate gradle build, `pytorch_android_gradle_build-x86_32`, that does not do the docker copying.
It calls the same `.circleci/scripts/build_android_gradle.sh`, which has x86_32-only logic gated on BUILD_ENVIRONMENT:
`[[ "${BUILD_ENVIRONMENT}" == *-gradle-build-only-x86_32* ]]`
It also has a filter to run only on PRs, as for other runs we will do the full build. The filter checks `-z "${CIRCLE_PULL_REQUEST:-}"`:
```
- run:
name: filter_run_only_on_pr
no_output_timeout: "5m"
command: |
echo "CIRCLE_PULL_REQUEST: ${CIRCLE_PULL_REQUEST:-}"
if [ -z "${CIRCLE_PULL_REQUEST:-}" ]; then
circleci step halt
fi
```
Updated the docker images to a version with gradle, android_sdk, and openjdk; the jenkins job that builds them: https://ci.pytorch.org/jenkins/job/pytorch-docker-master/339/
pytorch_android_gradle_build successful run: https://circleci.com/gh/pytorch/pytorch/2604797#artifacts/containers/0
pytorch_android_gradle_build-x86_32 successful run: https://circleci.com/gh/pytorch/pytorch/2608945#artifacts/containers/0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25286
Reviewed By: kostmo
Differential Revision: D17115861
Pulled By: IvanKobzarev
fbshipit-source-id: bc88fd38b38ed0d0170d719fffa375772bdea142
Summary:
Tensor has getDataAsFloatArray(); since we also support Int and Byte tensors,
add the symmetric methods for Int and Byte, which throw
IllegalStateException if called on a tensor of an inappropriate type.
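For illustration, a hedged sketch of the intended behavior (assuming the symmetric names getDataAsIntArray and getDataAsByteArray):
```
Tensor t = Tensor.newIntTensor(new long[] {3}, new int[] {1, 2, 3});
int[] data = t.getDataAsIntArray(); // symmetric accessor for an int tensor
try {
  t.getDataAsFloatArray(); // mismatched type for this tensor
} catch (IllegalStateException e) {
  // expected: the tensor holds int data, not float
}
```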
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25183
Reviewed By: dreiss
Differential Revision: D17052674
Pulled By: IvanKobzarev
fbshipit-source-id: 1d44944461ad008e202e382152cd0690c61124f4
Summary:
TLDR; initial commit of the android java/jni wrapper of the pytorchscript c++ api.
The main idea is to provide a java interface for android developers to use pytorchscript modules.
The java API tries to mirror the semantics of the c++ and python pytorchscript APIs.
org.pytorch.Module (wrapper of torch::jit::script::Module):
- static Module load(String path)
- IValue forward(IValue... inputs)
- IValue runMethod(String methodName, IValue... inputs)
org.pytorch.Tensor (semantics of at::Tensor):
- newFloatTensor(long[] dims, float[] data)
- newFloatTensor(long[] dims, FloatBuffer data)
- newIntTensor(long[] dims, int[] data)
- newIntTensor(long[] dims, IntBuffer data)
- newByteTensor(long[] dims, byte[] data)
- newByteTensor(long[] dims, ByteBuffer data)
org.pytorch.IValue (semantics of at::IValue):
- static factory methods to create pytorchscript-supported types
Examples of API usage can be found in PytorchInstrumentedTests.java:
```
Module module = Module.load(path);
IValue input = IValue.tensor(Tensor.newByteTensor(new long[]{1}, Tensor.allocateByteBuffer(1)));
IValue output = module.forward(input);
Tensor outputTensor = output.getTensor();
```
Thread safety:
The API is not thread safe; all synchronization must be done on the caller side.
Mutability:
The org.pytorch.Tensor buffer is a DirectBuffer with native byte order; tensors can be created with the static factory methods that take a DirectBuffer.
At the moment org.pytorch.Tensor does not hold an at::Tensor on the jni side; it holds long[] dimensions, the type, and the DirectByteBuffer blobData.
Input tensors are mutable (they can be modified and used for the next inference);
values are read from the buffer at the moment of the Module#forward or Module#runMethod call.
The buffers of input tensors are used directly by the input at::Tensor.
Output is copied from the output at::Tensor and is immutable.
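For illustration, a hedged sketch of reusing one input buffer across calls (assuming a FloatBuffer allocator symmetric to the allocateByteBuffer used in the example above, and a loaded org.pytorch.Module `module`):
```
// The input at::Tensor reads from this direct buffer at each forward() call.
java.nio.FloatBuffer input = Tensor.allocateFloatBuffer(4); // assumed allocator
Tensor t = Tensor.newFloatTensor(new long[] {4}, input);
IValue out1 = module.forward(IValue.tensor(t));
input.put(0, 42.0f); // mutate in place; no reallocation needed
IValue out2 = module.forward(IValue.tensor(t)); // sees the new value
```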
Dependencies:
The jni level is implemented using the fbjni library, which was developed at Facebook
and has already been used and open-sourced in several open-source projects.
It is added to the repo as a submodule from a personal account, to make it possible
to switch the submodule when fbjni is open-sourced separately.
ghstack-source-id: b39c848359a70d717f2830a15265e4aa122279c0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25084
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25105
Reviewed By: dreiss
Differential Revision: D16988107
Pulled By: IvanKobzarev
fbshipit-source-id: 41ca7c9869f8370b8504c2ef8a96047cc16516d4