Commit Graph

175 Commits

Author SHA1 Message Date
Ivan Kobzarev
de5821d291 Torchscript print to logcat (#31456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31456

External request https://discuss.pytorch.org/t/jit-android-debugging-the-model/63950

By default, the TorchScript print function writes to stdout, which is not visible in Android logcat.
This change redirects it to logcat.

Test Plan: Imported from OSS

Differential Revision: D19171405

Pulled By: IvanKobzarev

fbshipit-source-id: f9c88fa11d90bb386df9ed722ec9345fc6b25a34
2020-01-15 16:44:56 -08:00
David Reiss
4daa3dedbe Fix IValue.isList
Summary: I think this was wrong before?

Test Plan: Not sure.

Reviewed By: IvanKobzarev

Differential Revision: D19221358

fbshipit-source-id: 27e675cac15dde29e026305f4b4e6cc774e15767
2020-01-07 16:33:36 -08:00
David Reiss
1b4d3d5748 Properly return data from non-contiguous tensors in Java
Summary:
These were returning incorrect data before.  Now we make a contiguous copy
before converting to Java.  Exposing raw data to the user might be faster in
some cases, but it's not clear that it's worth the complexity and code size.

Test Plan: New unit test.

Reviewed By: IvanKobzarev

Differential Revision: D19221361

fbshipit-source-id: 22ecdad252c8fd968f833a2be5897c5ae483700c
2020-01-07 16:33:31 -08:00
David Reiss
2d6a2c898c Support tensors with a storage offset in Java (#31584)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31584

These were returning incorrect data before.

Test Plan: New unit test.

Reviewed By: IvanKobzarev

Differential Revision: D19221360

fbshipit-source-id: b3f01de086857027f8e952a1c739f60814a57acd
2020-01-07 16:33:26 -08:00
David Reiss
6d1fa8296b Support tensors with empty shape in Java
Summary: These are valid tensors.

Test Plan: New unit test.

Reviewed By: IvanKobzarev

Differential Revision: D19221362

fbshipit-source-id: fa9af2fc539eb7381627b3d473241a89859ef2ba
2020-01-07 16:33:21 -08:00
Ivan Kobzarev
492ca46e71 Fix androidTest - exclude host tests from it
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31522

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D19200861

Pulled By: IvanKobzarev

fbshipit-source-id: a6024f3013398f9e0d237e06c984a20493d42f11
2020-01-06 11:29:46 -08:00
Ivan Kobzarev
3a19980b78 Tensor class created from java does not call native methods
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31520

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D19199477

Pulled By: IvanKobzarev

fbshipit-source-id: ba51454586a9385dba4ab73936f907346e0105d1
2019-12-20 14:40:54 -08:00
David Reiss
35b249769d Exclude lite interpreter Java files from OSS host build
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31204

Test Plan: Imported from OSS

Differential Revision: D19200610

Pulled By: dreiss

fbshipit-source-id: 0cf41c99b4c2604afc2dccfebbea213c0e1f9638
2019-12-20 13:32:27 -08:00
Ivan Kobzarev
930d0751e6 Java Tensor hybrid, owns at::Tensor, no memcopy for java outputs. (#30501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30501

**Motivation**:
Currently, the output of a libtorch Module forward/runMethod call is memcopied to a Java ByteBuffer, which, at least in some versions of Android, is allocated on the Java heap. That can lead to intensive garbage collection.

**Change**:
The output Java tensor becomes the owner of the output at::Tensor and keeps it alive (as the `pytorch_jni::TensorHybrid::tensor_` field) until the Java part is destroyed by GC. For that, org.pytorch.Tensor becomes a 'Hybrid' class (in fbjni naming) and holds the member field `HybridData mHybridData;`.

If construction starts from the Java side, the Java constructors of the subclasses call `this.mHybridData = super.initHybrid();` to initialize the cpp part (`at::Tensor tensor_`). (We need all the fields initialized, which is why `mHybridData` is not declared final, though it works as final.)

If construction starts from the cpp side, the cpp side is initialized from the provided at::Tensor via `makeCxxInstance(std::move(tensor))` and passed to the Java method `org.pytorch.Tensor#nativeNewTensor` as the parameter `HybridData hybridData`, which holds the native pointer to the cpp side.

In that case the `initHybrid()` method is not called; instead a parallel set of subclass ctors is used, which store `hybridData` in `mHybridData`.

Renaming:
`JTensor` -> `TensorHybrid`

Removed method:
`JTensor::newAtTensorFromJTensor(JTensor)` becomes trivial `TensorHybrid->cthis()->tensor()`
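
Below is a simplified, hedged sketch of the hybrid ownership pattern described above; the class and method names are illustrative and not the exact pytorch_android sources:

```
import com.facebook.jni.HybridData;

// Illustrative sketch only: a Java class whose native peer owns the at::Tensor.
public abstract class HybridTensorSketch {
  // Keeps the native peer (which owns the at::Tensor) alive as long as this Java object lives.
  private HybridData mHybridData;

  // Java-initiated construction: ask JNI to allocate the C++ peer.
  protected HybridTensorSketch() {
    mHybridData = initHybrid();
  }

  // C++-initiated construction: the native side passes in an already-built peer
  // (created via makeCxxInstance) as a constructor argument.
  protected HybridTensorSketch(HybridData hybridData) {
    mHybridData = hybridData;
  }

  private native HybridData initHybrid();
}
```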

Test Plan: Imported from OSS

Differential Revision: D18893320

Pulled By: IvanKobzarev

fbshipit-source-id: df94775d2a010a1ad945b339101c89e2b79e0f83
2019-12-15 21:36:20 -08:00
Ivan Kobzarev
701e05dcbb Buck test targets robolectric, instrumentation
Summary:
Buck targets for robolectric and instrumentation tests for pytorch android:
```
buck test fbsource//fbandroid/mode/server //xplat/caffe2/android:test_host
```
```
buck test //xplat/caffe2/android:test_instrumentation
```
For both:
```
buck test fbsource//fbandroid/mode/server //xplat/caffe2/android:pytorch
```

Models in assets:
`pt_android_test_asset` - creates a buck target that can be included in both robolectric and instrumentation tests; the target contains an asset file created from the provided torchscript sources, using the latest binaries of libtorch.

`pt_gen_test_asset_bin` does that tracing; usage format:
```
generate_test_asset input_file.jit output_file.py
```

Example of test-host setup for users of pytorch android:
robolectric tests:

```
load("fbsource//xplat/caffe2:pt_defs.bzl", "pt_android_test_asset", "pt_predictor_binary", "PT_ANDRIOID_TEST_HOST_JNI_DEPS")

pt_android_test_asset(
    name = "test_asset",
    src = "test_asset.jit",
    asset_name = "test_asset.pt",
)

robolectric3_test(
    name = "example_test_host",
    srcs = [...],
    jni_deps = PT_ANDRIOID_TEST_HOST_JNI_DEPS,
    deps = [
        ":pytorch_common",
        ":test_asset",
        "//fbandroid/java/com/facebook/soloader/annotation:annotation",
        "//fbandroid/java/com/facebook/testing/robolectric/v3:v3",
        "//fbandroid/libraries/soloader/java/com/facebook/soloader:soloader",
        "//fbandroid/third-party/java/robolectric3/robolectric:robolectric",
    ],
)
```

COMMON_LINKER_FLAGS = ["-Wl,--no-as-needed"] cannot be applied on macOS

Test Plan:
```
[twsvcscm@od0187.atn1 /data/sandcastle/boxes/fbsource (b416b20a)]$ buck test fbsource//fbandroid/mode/server //xplat/caffe2/android:pytorch
Parsing buck files: finished in 7.2 sec
Creating action graph: finished in 0.7 sec
Building: finished in 11.9 sec (100%) 791/791 jobs, 0 updated
  Total time: 19.9 sec
Testing: finished in 11.0 sec (30 PASS/0 FAIL)
RESULTS FOR //xplat/caffe2/android:test_host //xplat/caffe2/android:test_instrumentation
PASS     159ms 15 Passed   0 Skipped   0 Failed   org.pytorch.PytorchHostTests
PASS     152ms 15 Passed   0 Skipped   0 Failed   org.pytorch.PytorchInstrumentedTests (localhost:31930)
TESTS PASSED
```

OSS changes test:
```
gradle -p android pytorch_android:cAT passes
```

Reviewed By: dreiss

Differential Revision: D18799005

fbshipit-source-id: 881609826a837efebc8526aee40355c5a62947d0
2019-12-14 20:29:52 -08:00
Ivan Kobzarev
065685180d Loading module from android asset (#30378)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30378

Loading module directly from android assets. Iteration on https://github.com/pytorch/pytorch/pull/30109
Loading Module:
```
mModule = AndroidUtils.loadModuleFromAsset(assetName, getAssets());
```

`org.pytorch.AndroidUtils` is excluded from pytorch_jni host build
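
A hedged usage sketch built around the call above. The asset name is illustrative, `AndroidUtils` is the class name given in this PR, and the `IValue.from`/`toTensor` accessors follow the renamed Java API from #27454, so they are assumptions here:

```
import android.content.Context;
import org.pytorch.AndroidUtils;
import org.pytorch.IValue;
import org.pytorch.Module;
import org.pytorch.Tensor;

public class AssetInferenceSketch {
  public static Tensor run(Context context, Tensor inputTensor) {
    // Load the TorchScript module straight from the APK assets, no file copy needed.
    Module module = AndroidUtils.loadModuleFromAsset("model.pt", context.getAssets());
    return module.forward(IValue.from(inputTensor)).toTensor();
  }
}
```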

Testing:
test_app module load switched to this approach and works fine
```
gradle test_app:installMobNet2QuantDebug -PABI_FILTERS=x86 && adb shell am start -n org.pytorch.testapp.mobNet2Quant/org.pytorch.testapp.MainActivity
```

Test Plan: Imported from OSS

Differential Revision: D18893269

Pulled By: IvanKobzarev

fbshipit-source-id: a7c73776f40e9c67bef233da05db56cc6efbe76a
2019-12-14 20:29:37 -08:00
Ivan Kobzarev
f7c92f60ba Typo in filename align with classname
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31235

Test Plan: Imported from OSS

Differential Revision: D19001793

Pulled By: IvanKobzarev

fbshipit-source-id: ae7f410be6b3c291f1feb3027b5b4a6b7ce15ab3
2019-12-12 23:16:29 -08:00
Ivan Kobzarev
db90a5b992 Switch to open sourced fbjni (#30175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30175

fbjni was open-sourced and its Java part is published as 'com.facebook.fbjni:fbjni-java-only:0.0.3';
we are switching to it.
We still need the fbjni submodule inside the repo (already pointing to https://github.com/facebookincubator/fbjni) for .so linking.

**Packaging changes**:
Before this, `libfbjni.so` came from the pytorch_android_fbjni dependency; since we also linked fbjni in `pytorch_android/CMakeLists.txt`, it was built in pytorch_android but excluded from publishing. Because we had two copies of libfbjni.so, there was a hack to exclude it for publishing and resolve the duplication locally.
```
        if (rootProject.isPublishing()) {
            exclude '**/libfbjni.so'
        } else {
            pickFirst '**/libfbjni.so'
        }
```

After this change fbjni.so will be packaged inside the pytorch_android.aar artifact and we no longer need this gradle logic.

I will update the README in a separate PR, after landing the previous README PR (https://github.com/pytorch/pytorch/pull/30128), to avoid conflicts.

Test Plan: Imported from OSS

Differential Revision: D18982235

Pulled By: IvanKobzarev

fbshipit-source-id: 5097df2557858e623fa480625819a24a7e8ad840
2019-12-12 20:05:22 -08:00
Ivan Kobzarev
ca8cb3241a Expose setNumThreads to android api (#31205)
Summary:
PR https://github.com/pytorch/pytorch/pull/31033 was unlanded due to a macOS build failure:
https://app.circleci.com/jobs/github/pytorch/pytorch/3916388

This PR makes `setNumThreads` Android-only and moves it to the separate class `org.pytorch.PytorchAndroid` as a static function, which is better since it has a global effect.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31205
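
A hedged sketch of the resulting call site, assuming the static method on `org.pytorch.PytorchAndroid` named in this PR; the wrapper class and thread count are illustrative:

```
import org.pytorch.PytorchAndroid;

public class ThreadPoolConfig {
  public static void configureOnce() {
    // Sizes the global inference thread pool; being static makes the global effect explicit.
    PytorchAndroid.setNumThreads(2);
  }
}
```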

Reviewed By: dreiss

Differential Revision: D18977250

Pulled By: IvanKobzarev

fbshipit-source-id: 4995859808af498c82933c4db52bd7c7dfae90e5
2019-12-12 18:57:27 -08:00
Michael Suo
c0bcfd0445 Revert D18923167: Expose setNumThreads to android api
Test Plan: revert-hammer

Differential Revision:
D18923167

Original commit changeset: 8d98c2edbff4

fbshipit-source-id: 7db37cff298c511d0dd9eb373811c769e4a73be9
2019-12-12 09:23:58 -08:00
Ivan Kobzarev
6225443009 Expose setNumThreads to android api (#31033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31033

Intention:
There are requests from users to control the number of threads from the Android side:
https://discuss.pytorch.org/t/android-pytorch-forward-method-running-in-a-separate-thread-slow-down-ui-thread/63516/2
https://discuss.pytorch.org/t/threading-of-model-pytorch-android/62490/2

At the moment `setNumThreads` is placed in `org.pytorch.Module`, but this method changes the global threadPool size; in the future we will move it to a separate class to mirror the Python binding structure, which has torch.set_num_threads().

Test Plan: Imported from OSS

Differential Revision: D18923167

Pulled By: IvanKobzarev

fbshipit-source-id: 8d98c2edbff42e9b673509672dce3f2dd03a923e
2019-12-11 14:20:14 -08:00
Edward Yang
38986e1dea Split libtorch.so back into libtorch_{cpu,cuda,hip} (#30315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30315

The new structure is that libtorch_cpu contains the bulk of our
code, and libtorch depends on libtorch_cpu and libtorch_cuda.
This is a reland of https://github.com/pytorch/pytorch/pull/29731 but
I've extracted all of the prep work into separate PRs which can be
landed before this one.

Some things of note:

* torch/csrc/cuda/nccl.cpp was added to the wrong list of SRCS, now fixed (this didn't matter before because previously they were all in the same library)
* The dummy file for libtorch was brought back from the dead; it was previously deleted in #20774
* In an initial version of the patch, I forgot to make torch_cuda explicitly depend on torch_cpu. This led to some very odd errors, most notably "bin/blob_test: hidden symbol `_ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infom' in lib/libprotobuf.a(arena.cc.o) is referenced by DSO"
* A number of places in Android/iOS builds have to add torch_cuda explicitly as a library, as they do not have transitive dependency calculation working correctly
* I had to make torch_cpu/torch_cuda a caffe2_interface_library so that they get whole-archive linked into torch when you statically link. And I had to do this in an *exported* fashion because torch needs to depend on torch_cpu_library. In the end I exported everything and removed the redefinition in Caffe2Config.cmake. I am not too sure why the old code did it in this way in the first place; however, it doesn't seem to have broken anything to switch it this way.
* There are some uses of `__HIP_PLATFORM_HCC__` still in `torch_cpu` code, so I had to apply it to that library too (UGH). This manifests as a failure when trying to run the CUDA fuser. This doesn't really matter substantively right now because we still in-place HIPify, but it would be good to fix eventually. This was a bit difficult to debug because of an unrelated HIP bug, see https://github.com/ROCm-Developer-Tools/HIP/issues/1706

Fixes #27215 (as our libraries are smaller), and executes on
part of the plan in #29235.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18790941

Pulled By: ezyang

fbshipit-source-id: 01296f6089d3de5e8365251b490c51e694f2d6c7
2019-12-04 08:04:57 -08:00
Sebastian Messmer
bc2e6d10fa Back out "Revert D17908478: Switch PyTorch/Caffe2 to C++14"
Summary: Original commit changeset: 775d2e29be0b

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D18775520

fbshipit-source-id: a350b3f86b66d97241f208786ee67e9a51172eac
2019-12-03 14:33:43 -08:00
Sebastian Messmer
a2ed50c920 Revert D17908478: Switch PyTorch/Caffe2 to C++14
Test Plan: revert-hammer

Differential Revision:
D17908478

Original commit changeset: 6e340024591e

fbshipit-source-id: 775d2e29be0bc3a0db64f164c8960c44d4877d5d
2019-11-27 14:57:05 -08:00
Sebastian Messmer
d0acc9c085 Switch PyTorch/Caffe2 to C++14 (#30406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30406

ghstack-source-id: 94642238

Test Plan: waitforsandcastle

Differential Revision: D17908478

fbshipit-source-id: 6e340024591ec2c69521668022999df4a33b4ddb
2019-11-27 10:47:31 -08:00
Ivan Kobzarev
5ada5363fc GenericDict/List type use unshapedType() (#30428)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30428

Reported issue https://discuss.pytorch.org/t/incomprehensible-behaviour/61710

Steps to reproduce:

```
class WrapRPN(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, features):
        # type: (Dict[str, Tensor]) -> int
        return 0
```

```
#include <torch/script.h>

int main() {
  torch::jit::script::Module module = torch::jit::load("dict_str_tensor.pt");

  torch::Tensor tensor = torch::rand({2, 3});
  at::IValue ivalue{tensor};
  c10::impl::GenericDict dict{c10::StringType::get(),ivalue.type()};
  dict.insert("key", ivalue);
  module.forward({dict});
}
```

The ValueType of `c10::impl::GenericDict` is taken from the first specified element, here `ivalue.type()`.
It then fails the type check `!value.type()->isSubtypeOf(argument.type())` in `function_schema_inl.h`,
because `DictType::isSubtypeOf` requires equal KeyType and ValueType, while the `TensorType`s differ.

Fix:
Use c10::unshapedType for creating Generic List/Dict

Test Plan: Imported from OSS

Differential Revision: D18717189

Pulled By: IvanKobzarev

fbshipit-source-id: 1e352a9c776a7f7e69fd5b9ece558f1d1849ea57
2019-11-26 17:38:36 -08:00
Xingying Cheng
e9cc4a5942 Add @DoNotStrip to nativeNewTensor method. (#30472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30472

Add DoNotStrip to nativeNewTensor method.
ghstack-source-id: 94596624

Test Plan:
Triggered build on diff for automation_fbandroid_fallback_release.

buck install -r fb4a

Tested BI cloaking using pytext lite interpreter.

Observe that logs are sent to the scuba table:

{F223408345}

Reviewed By: linbinyu

Differential Revision: D18709087

fbshipit-source-id: 74fa7a0665640c294811a50913a60ef8d6b9b672
2019-11-26 12:16:33 -08:00
Xingying Cheng
20dfae4099 Fix the crashes for c++ not able to find java class through Jni (#30390)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30390

Fix the crashes where c++ is not able to find the java class through JNI.
ghstack-source-id: 94499644

Test Plan: buck install -r fb4a

Reviewed By: ljk53

Differential Revision: D18667992

fbshipit-source-id: aa1b19c6dae39d46440f4a3e691054f7f8b1d42e
2019-11-25 14:51:23 -08:00
David Reiss
90cb1e67ff Fix exception message in Java Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30205

Test Plan: Imported from OSS

Reviewed By: linbinyu

Differential Revision: D18653568

Pulled By: dreiss

fbshipit-source-id: a5fcb809eba641a7fbd0e99e835eceeb248e680c
2019-11-22 12:04:49 -08:00
Jiakai Liu
f5ef3a6fb6 disable JIT optimizer in Android wrapper for mobile custom build (#30285)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30285

PR #30144 introduced custom build script to tailor build to specific
models. It requires a list of all potentially used ops at build time.

Some JIT optimization passes can transform the IR by replacing
operators, e.g. decompose pass can replace aten::addmm with aten::mm if
coefficients are 1s.

Disabling the optimization passes ensures that the list of ops we dump from
the model is the list of ops that are actually needed.

Test Plan: - rerun the test on PR #30144 to verify the raw list without aten::mm works.

Differential Revision: D18652777

Pulled By: ljk53

fbshipit-source-id: 084751cb9a9ee16d8df7e743e9e5782ffd8bc4e3
2019-11-22 00:25:04 -08:00
David Reiss
e5fc86130a Remove unnecessary linker flags from JNI host build (#30206)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30206

- --whole-archive isn't needed because we link libtorch as a dynamic
  dependency, rather than static.
- --gc-sections isn't necessary because most (all?) of the code in our
  JNI library is used (and we're not statically linking libtorch).
  Removing this one is useful because it's not supported by lld.

Test Plan:
Built on Linux.  Library size was unchanged.
Upcoming diff enables Mac JNI build.

Differential Revision: D18653500

Pulled By: dreiss

fbshipit-source-id: 49ce46fb86a775186f803ada50445b4b2acb54a8
2019-11-21 20:10:06 -08:00
Junjie Bai
352731bd6e Revert D18632773: Split libtorch.so back into libtorch_{cpu,cuda,hip}
Test Plan: revert-hammer

Differential Revision:
D18632773

Original commit changeset: ea717c81e0d7

fbshipit-source-id: 18601439f9f81c9f389020e5a0e4e04adb21772d
2019-11-21 15:01:09 -08:00
Edward Yang
ec30d9028a Split libtorch.so back into libtorch_{cpu,cuda,hip} (#29731)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29731

The new structure is that libtorch_cpu contains the bulk of our
code, and libtorch depends on libtorch_cpu and libtorch_cuda.

Some subtleties about the patch:
- There were a few functions that crossed CPU-CUDA boundary without API macros. I just added them, easy enough. An inverse situation was aten/src/THC/THCTensorRandom.cu where we weren't supposed to put API macros directly in a cpp file.
- DispatchStub wasn't getting all of its symbols related to static members on DispatchStub exported properly. I tried a few fixes but in the end I just moved everyone off using DispatchStub to dispatch CUDA/HIP (so they just use normal dispatch for those cases.) Additionally, there were some mistakes where people incorrectly were failing to actually import the declaration of the dispatch stub, so added includes for those cases.
- torch/csrc/cuda/nccl.cpp was added to the wrong list of SRCS, now fixed (this didn't matter before because previously they were all in the same library)
- The dummy file for libtorch was brought back from the dead; it was previously deleted in #20774
- In an initial version of the patch, I forgot to make torch_cuda explicitly depend on torch_cpu. This led to some very odd errors, most notably "bin/blob_test: hidden symbol `_ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infom' in lib/libprotobuf.a(arena.cc.o) is referenced by DSO"
- A number of places in Android/iOS builds have to add torch_cuda explicitly as a library, as they do not have transitive dependency calculation working correctly. This situation also happens with custom C++ extensions.
- There's a ROCm compiler bug where extern "C" on functions is not respected. There's a little workaround to handle this.
- Because I was too lazy to check if HIPify was converting TORCH_CUDA_API into TORCH_HIP_API, I just made it so HIP build also triggers the TORCH_CUDA_API macro. Eventually, we should translate and keep the nature of TORCH_CUDA_API constant in all cases.

Fixes #27215 (as our libraries are smaller), and executes on
part of the plan in #29235.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18632773

Pulled By: ezyang

fbshipit-source-id: ea717c81e0d7554ede1dc404108603455a81da82
2019-11-21 11:27:33 -08:00
Ivan Kobzarev
fd74a19aa4 apply clang format -i (#30180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30180

Just applying `clang-format -i` to not mix it with other changes

Test Plan: Imported from OSS

Differential Revision: D18627473

Pulled By: IvanKobzarev

fbshipit-source-id: ed341e356fea31b8515de29d5ea2ede07e8b66a2
2019-11-20 16:46:43 -08:00
Ivan Kobzarev
8e3486de81 No debug symbols in release android builds (#30123)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30123

In Groovy, the string `'false'` is resolved as boolean `true`.

That is why, even with `gradle.properties` containing:
```
nativeLibsDoNotStrip=false
```
the `if (nativeLibsDoNotStrip)` branch was always taken.

Test Plan: Imported from OSS

Differential Revision: D18606907

Pulled By: IvanKobzarev

fbshipit-source-id: c10140e775624294c732e78ae3c41e05c7c9ad92
2019-11-19 16:44:56 -08:00
Xingying Cheng
26dabad5a4 Add LiteModule java class for lite interpreter. (#30061)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30061
Create INativePeer Interface and move NativePeer class from Module.java. Create LiteModuleLoader and LiteNativePeer.java for Lite Interpreter binding.
ghstack-source-id: 94169187
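
A hedged sketch of how a model would be loaded through the lite-interpreter binding introduced here, assuming a `LiteModuleLoader.load(path)` entry point analogous to `Module.load(path)`; the file path is illustrative:

```
import org.pytorch.LiteModuleLoader;
import org.pytorch.Module;

public class LiteLoadSketch {
  public static Module loadLiteModule() {
    // Loads a model saved for the lite interpreter instead of the full JIT runtime.
    return LiteModuleLoader.load("/data/local/tmp/model.ptl");
  }
}
```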

Reviewed By: dreiss

Differential Revision: D18511688

fbshipit-source-id: 1a69c94b28c8a02631f53079ca7ddcaa57eca38f
2019-11-18 19:53:20 -08:00
Xingying Cheng
4f94aed8a3 Reformatting module class. (#29957)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29957

Reformatting module class.
ghstack-source-id: 94058645

Test Plan: buck build xplat/caffe2/android:pytorch

Reviewed By: iseeyuan

Differential Revision: D18548185

fbshipit-source-id: 8c1f5cbf491d42915e091e6245b4f308eb162f93
2019-11-18 18:39:29 -08:00
David Reiss
d22f61432d Update fbjni and enable PyTorch JNI build
Summary:
- Add a "BUILD_JNI" option that enables building PyTorch JNI bindings and
  fbjni.  This is off by default because it adds a dependency on jni.h.
- Update to the latest fbjni so we can inhibit building its tests,
  because they depend on gtest.
- Set JAVA_HOME and BUILD_JNI in Linux binary build configurations if we
  can find jni.h in Docker.

Test Plan:
- Built on dev server.
- Verified that libpytorch_jni links after libtorch when both are built
  in a parallel build.

Differential Revision: D18536828

fbshipit-source-id: 19cb3be8298d3619352d02bb9446ab802c27ec66
2019-11-15 13:59:44 -08:00
Xingying Cheng
6dc8d72f94 Change from int64_t to jlong for mac build (#29861)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29861

Following https://github.com/pytorch/pytorch/issues/6570 to run ./run_host_tests.sh for the Mac build, we saw the error below:

```
error: cannot initialize a parameter of type 'const facebook::jni::JPrimitiveArray<_jlongArray *>::T *' (aka 'const long *') with an rvalue of type
      'std::__1::vector<long long, std::__1::allocator<long long> >::value_type *' (aka 'long long *')
    jTensorShape->setRegion(0, tensorShapeVec.size(), tensorShapeVec.data());
```
ghstack-source-id: 93961091

Test Plan: Run ./run_host_tests.sh and verify build succeed.

Reviewed By: dreiss

Differential Revision: D18519087

fbshipit-source-id: 869be12c82e6e0f64c878911dc12459defebf40b
2019-11-14 21:29:59 -08:00
Ivan Kobzarev
eef349a679 host build gradle publishing (#29749)
Summary:
To publish snapshots:
`gradle -p android pytorch_host:uploadArchives`
(for testing, the version was changed to 0.0.1-SNAPSHOT)
Result:
https://oss.sonatype.org/#nexus-search;quick~pytorch_java_only

https://oss.sonatype.org/service/local/repositories/snapshots/content/org/pytorch/pytorch_java_only/0.0.1-SNAPSHOT/
jar:
https://oss.sonatype.org/service/local/repositories/snapshots/content/org/pytorch/pytorch_java_only/0.0.1-SNAPSHOT/pytorch_java_only-0.0.1-20191113.211446-1.jar

sources:
https://oss.sonatype.org/service/local/repositories/snapshots/content/org/pytorch/pytorch_java_only/0.0.1-SNAPSHOT/pytorch_java_only-0.0.1-20191113.211446-1-sources.jar
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29749

Differential Revision: D18496644

Pulled By: IvanKobzarev

fbshipit-source-id: 136213c23b9ab1e3e22059ad9c8b53822c026b3b
2019-11-14 11:44:02 -08:00
Ivan Kobzarev
aa6e992ffb Subscribe for record function and if android do atrace (#28708)
Summary:
ghstack-source-id: 5edaf47155
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28708

Some cpp formatting changes, as I ran `clang-format -i`.

Testing on devserver:
make assets (models):
```
pushd android/test_app/; python make_assets.py; popd
```
Build test_app apk:
```
TRACE_ENABLED=1 sh android/build_test_app.sh

find . -type f -name *apk
./android/test_app/app/build/outputs/apk/mobNet2Quant/debug/test_app-mobNet2Quant-debug.apk
./android/test_app/app/build/outputs/apk/resnet18/debug/test_app-resnet18-debug.apk
```

Install apk:
`adb install -r test_app-mobNet2Quant-debug.apk`
Run app on the device.
Systrace:
```
$ANDROID_HOME/platform-tools/systrace/systrace.py -t 10 -a org.pytorch.testapp.mobNet2Quant sched freq idle am wm gfx view binder_driver hal dalvik camera input res -o trace.html
```
trace.html contains sections like `jni::Module::forward`

![Screenshot 2019-11-12 18 36 30](https://user-images.githubusercontent.com/6638825/68728156-5d245580-057b-11ea-9e71-e47681894fe4.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28712

Differential Revision: D18495898

Pulled By: IvanKobzarev

fbshipit-source-id: 0bced4a442f9dd90525520972a2c1f5d51f57df3
2019-11-13 20:55:40 -08:00
Xingying Cheng
5654eccfe2 Add pytorch_jni_lite for lite interpreter. (#29621)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29621

Add pytorch_jni_lite for lite interpreter.
ghstack-source-id: 93867325

Test Plan:
buck build xplat/caffe2/android:pytorch-jni

buck build xplat/caffe2/android:pytorch

buck install -r fb4a

Reviewed By: dreiss

Differential Revision: D18438343

fbshipit-source-id: 7d4dee11d352cc9a67339c45d9d7f4a2ba285ebc
2019-11-13 16:16:29 -08:00
Xingying Cheng
9c9c361f67 Separate out pytorch_jni into pytorch_jni_jit and pytorch_jni_common. (#29617)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29617

For the internal build we will use the mobile interpreter instead of the full JIT, so we need to separate the existing pytorch_jni.cpp into pytorch_jni_jit.cpp and pytorch_jni_common.cpp. pytorch_jni_common.cpp will be used both from pytorch_jni_jit.cpp (open source) and the future pytorch_jni_lite.cpp (internal).
ghstack-source-id: 93691214

Test Plan: buck build xplat/caffe2/android:pytorch

Reviewed By: dreiss

Differential Revision: D18387579

fbshipit-source-id: 26ab845c58a0959bc0fdf1a2b9a99f6ad6f2fc9c
2019-11-12 11:13:44 -08:00
David Reiss
c1140f20dc Rename PyTorch JNI library to pytorch_jni (#29412)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29412

Originally, this was going to be Android-only, so the name wasn't too
important.  But now that we're planning to distribute it with libtorch,
we should give it a more distinctive name.

Test Plan:
Ran tests according to
https://github.com/pytorch/pytorch/issues/6570#issuecomment-548537834

Reviewed By: IvanKobzarev

Differential Revision: D18405207

fbshipit-source-id: 0e6651cb34fb576438f24b8a9369e10adf9fecf9
2019-11-08 14:29:13 -08:00
Ivan Kobzarev
92b9de1428 Test application for profiling, CMake params for debug symbols (#28406)
Summary:
Reason:
To have a one-step build of a test android application, based on the current code state, that is ready for profiling with simpleperf, systrace, etc. to measure performance inside the application.

## Parameters to control debug symbols stripping
Introducing the CMakeLists parameter `ANDROID_DEBUG_SYMBOLS`, checked in `scripts/build_android.sh`, to allow not stripping symbols for pytorch (i.e. not adding the linker flag `-s`).

On gradle side stripping happens by default, and to prevent it we have to specify
```
android {
  packagingOptions {
       doNotStrip "**/*.so"
  }
}
```
which is now controlled by the new gradle property `nativeLibsDoNotStrip`

## Test_App
`android/test_app` - android app with one MainActivity that runs inference in a loop

`android/build_test_app.sh` - script that builds libtorch with debug symbols for the specified android abis and adds `NDK_DEBUG=1` and `-PnativeLibsDoNotStrip=true` to keep all debug symbols for profiling.
Script assembles all debug flavors:
```
└─ $ find . -type f -name *apk
./test_app/app/build/outputs/apk/mobilenetQuant/debug/test_app-mobilenetQuant-debug.apk
./test_app/app/build/outputs/apk/resnet/debug/test_app-resnet-debug.apk
```

## Different build configurations

The module used for inference can be set in `android/test_app/app/build.gradle` as BuildConfig parameters:
```
    productFlavors {
        mobilenetQuant {
            dimension "model"
            applicationIdSuffix ".mobilenetQuant"
            buildConfigField ("String", "MODULE_ASSET_NAME", buildConfigProps('MODULE_ASSET_NAME_MOBILENET_QUANT'))
            addManifestPlaceholders([APP_NAME: "PyMobileNetQuant"])
            buildConfigField ("String", "LOGCAT_TAG", "\"pytorch-mobilenet\"")
        }
        resnet {
            dimension "model"
            applicationIdSuffix ".resnet"
            buildConfigField ("String", "MODULE_ASSET_NAME", buildConfigProps('MODULE_ASSET_NAME_RESNET18'))
            addManifestPlaceholders([APP_NAME: "PyResnet"])
            buildConfigField ("String", "LOGCAT_TAG", "\"pytorch-resnet\"")
        }
```

In that case we can set up several apps on the same device for comparison: the packages are separated by `applicationIdSuffix` (e.g. 'org.pytorch.testapp.mobilenetQuant'), with different application names and logcat tags provided via `manifestPlaceholder` and another BuildConfig parameter:
```
─ $ adb shell pm list packages | grep pytorch
package:org.pytorch.testapp.mobilenetQuant
package:org.pytorch.testapp.resnet
```

In the future we can add other BuildConfig params, e.g. single/multi-threaded and other configurations for profiling.

At the moment there are 2 flavors, for resnet18 and for mobilenetQuantized,
which can be installed on a connected device:

```
cd android
```
```
gradle test_app:installMobilenetQuantDebug
```
```
gradle test_app:installResnetDebug
```

## Testing:
```
cd android
sh build_test_app.sh
adb install -r test_app/app/build/outputs/apk/mobilenetQuant/debug/test_app-mobilenetQuant-debug.apk
```

```
cd $ANDROID_NDK
python simpleperf/run_simpleperf_on_device.py record --app org.pytorch.testapp.mobilenetQuant -g --duration 10 -o /data/local/tmp/perf.data
adb pull /data/local/tmp/perf.data
python simpleperf/report_html.py
```

Simpleperf report has all symbols:
![Screenshot 2019-10-22 11 06 21](https://user-images.githubusercontent.com/6638825/67315740-0bc50100-f4bc-11e9-8f9e-2499be13d63e.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28406

Differential Revision: D18386622

Pulled By: IvanKobzarev

fbshipit-source-id: 3a751192bbc4bc3c6d7f126b0b55086b4d586e7a
2019-11-08 14:19:04 -08:00
Xingying Cheng
8a33f1150d Use nativeloader instead of system loader to load JNI library for soloader compatibility. (#29350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29350

ghstack-source-id: 93491099

Test Plan: P121597890

Reviewed By: dreiss

Differential Revision: D18352773

fbshipit-source-id: 712c3f5d10a3d4c815c5554bb62e1a95563ba7ff
2019-11-07 16:09:29 -08:00
David Reiss
42faf961c8 Update fbjni submodule to new upstream and latest version
Summary:
The central fbjni repository is now public, so point to it and
take the latest version, which includes support for host builds
and some condensed syntax.

Test Plan: CI

Differential Revision: D18217840

fbshipit-source-id: 454e3e081f7e3155704fed692506251c4018b2a1
2019-10-31 11:48:25 -07:00
David Reiss
b1bf595e54 Update generated test model
Summary:
The Java and Python code were updated, but the test currently fails
because the model was not regenerated.

Test Plan: Ran test.

Reviewed By: xcheng16

Differential Revision: D18217841

fbshipit-source-id: 002eb2d3ed0eaa14b3d7b087b621a6970acf1378
2019-10-31 11:03:20 -07:00
David Reiss
80e270a76c Add support for host build to pytorch_android native code (#27664)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27664

When ANDROID_ABI is not set, find libtorch headers and libraries from
the LIBTORCH_HOME build variable (which must be set by hand), place
output under a "host" directory, and use dynamic linking instead of
static.

This doesn't actually work without some local changes to fbjni, but I
want to get the changes landed to avoid unnecessary merge conflicts.

Test Plan: Imported from OSS

Differential Revision: D18210315

Pulled By: dreiss

fbshipit-source-id: 685a62de3c2a0a52bec7fd6fb95113058456bac8
2019-10-29 16:04:18 -07:00
David Reiss
34455c68b5 Remove unnecessary BUILD_DIR variable in Android CMake build (#27663)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27663

CMake sets CMAKE_BINARY_DIR and creates it automatically.  Using this
allows us to use the -B command-line flag to CMake to specify an
alternate output directory.

Test Plan: Imported from OSS

Differential Revision: D18210316

Pulled By: dreiss

fbshipit-source-id: ba2f6bd4b881ddd00de73fe9c33d82645ad5495d
2019-10-29 16:04:13 -07:00
David Reiss
c9423c30b3 Add host build for pytorch_android (#27662)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27662

This adds a new gradle subproject at pytorch_android/host and tweaks
the top-level build.gradle to only run some Android bits on the other
projects.

Referencing Java sources from inside the host directory feels a bit
hacky, but getting host and Android Gradle builds to coexist in the same
directory hit several roadblocks.  We can try a bigger refactor to
separate the Android-specific and non-Android-specific parts of the
code, but that seems overkill at this point for 4 Java files.

This doesn't actually run without some local changes to fbjni, but I
want to get the files landed to avoid unnecessary merge conflicts.

Test Plan: Imported from OSS

Differential Revision: D18210317

Pulled By: dreiss

fbshipit-source-id: dafb54dde06a5a9a48fc7b7065d9359c5c480795
2019-10-29 16:04:09 -07:00
Jiakai Liu
04bfc213ab remove AutoNonVariableTypeMode guard around forward() call (#28399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28399

This is also to address issue #26764

Turns out it's incorrect to wrap the entire forward() call with the
NonVariableTypeMode guard, as some JIT passes have an is_variable() check and
can be triggered within the forward() call, e.g.:
jit/passes/constant_propagation.cpp

Since we now toggle NonVariableTypeMode per method/op call, we can
remove the guard around forward().

Test Plan: - With stacked PRs, verified it can load and run previously failed models.

Differential Revision: D18055850

Pulled By: ljk53

fbshipit-source-id: 3074d0ed3c6e05dbfceef6959874e5916aea316c
2019-10-22 14:08:49 -07:00
Zachary DeVito
5136ed0e44 Remove attempToRecoverType (#26767)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26767

Now that we have tagged ivalues, we can accurately recover the type with
`ivalue.type()`. This removes the other half-implemented pathways that
were created because we didn't have tags.

Test Plan: Imported from OSS

Differential Revision: D17561191

Pulled By: zdevito

fbshipit-source-id: 26aaa134099e75659a230d8a5a34a86dc39a3c5c
2019-10-16 11:07:13 -07:00
David Reiss
4a28ab95d0 Clean up JavaDoc comments in pytorch_android
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27455

Test Plan: Imported from OSS

Differential Revision: D17800658

Pulled By: dreiss

fbshipit-source-id: dbd01d9fa5ac82c50daf54c2869dc18be233d8dd
2019-10-07 17:01:30 -07:00
David Reiss
1ffa81d772 Various cleanups to pytorch_android API (#27454)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27454

See detailed discussion at
https://github.com/pytorch/pytorch/issues/27350

Test Plan: Imported from OSS

Reviewed By: IvanKobzarev

Differential Revision: D17800480

Pulled By: dreiss

fbshipit-source-id: bf174e8b16231b89be771de0fa54c41e864a3eb0
2019-10-07 17:01:26 -07:00
David Reiss
b66df47a11 Refactor python_android test to separate Android-specific components (#27453)
Summary:
All of the test cases move into a base class that is extended by the
instrumentation test and a new "HostTests" class that can be run in
normal Java.  (Some changes to the build script and dependencies are
required before the host test can actually run.)

ghstack-source-id: fe1165b513241b92c5f4a81447f5e184b3bfc75e
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27453

Test Plan: Imported from OSS

Reviewed By: IvanKobzarev

Differential Revision: D17800410

fbshipit-source-id: 1184f0caebdfa219f4ccd1464c67826ac0220181
2019-10-07 17:01:22 -07:00
Ivan Kobzarev
3e20d9c0dc Module method destroy
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27090

Test Plan: Imported from OSS

Differential Revision: D17674096

Pulled By: IvanKobzarev

fbshipit-source-id: d1c0db3797730bff90db83259a38904e71f7941d
2019-09-30 15:51:42 -07:00
Ivan Kobzarev
3e480f8fb8 Fix fbjni packaging, exclude for publishing, include by default (#26995)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26995

The current setup always excludes fbjni, so pytorch_android:package cannot be used independently, for example for testing with `gradle pytorch_android:cAT`.

For publishing it works, as pytorch_android has a dep on fbjni that will also be published.

For other cases we have 2 fbjni.so files: one from the native build (CMakeLists.txt does add_subdirectory(fbjni_dir)) and one from the ':fbjni' dependency.
We need both of them, as ':fbjni' also contains java classes.

As a fix: keep excluding for publishing tasks (bintrayUpload, uploadArchives); otherwise pickFirst (as we have 2 sources of fbjni.so).

# Testing

gradle cAT works, fbjni.so included
gradle bintrayUpload (dryRun==true) - no fbjni.so

Test Plan: Imported from OSS

Differential Revision: D17637775

Pulled By: IvanKobzarev

fbshipit-source-id: edda56ba555678272249fe7018c1f3a8e179947c
2019-09-27 15:21:26 -07:00
Ivan Kobzarev
55fc377857 Check if QNNPACK is supported before set (#26935)
Summary:
ghstack-source-id: 0e873a56a879cab30b7fa1778e65d9cb89474f05
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26935
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26936

Differential Revision: D17617452

Pulled By: IvanKobzarev

fbshipit-source-id: 4dbcdc55044dd2050b28062baa8b58c8387a1e4e
2019-09-26 16:36:54 -07:00
Ivan Kobzarev
ed82a28cf0 QEngine::QNNPACK enabled, module.eval()
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26855

Test Plan: Imported from OSS

Differential Revision: D17589837

Pulled By: IvanKobzarev

fbshipit-source-id: 0084538e9b9d760a8728cdcd5723fc7fae5838c7
2019-09-25 18:11:08 -07:00
Jiakai Liu
8f54d0d6b6 update android/iOS build library packing (#26565)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26565

For OSS mobile build we should keep QNNPACK off and PYTORCH_QNNPACK on
as we don't include caffe2 ops that use third_party/QNNPACK.

Update android/iOS build script to include new libraries accordingly.

Test Plan: - CI build

Differential Revision: D17508918

Pulled By: ljk53

fbshipit-source-id: 0483d45646d4d503b4e5c1d483e4df72cffc6c68
2019-09-20 17:48:15 -07:00
Ivan Kobzarev
f7ba68e1f7 Support IValue string type (#26517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26517

Support IValue string kind

added 2 instrumented tests -> regenerated test.pt

# Test plan
Start android emulator
```
cd ./android/
gradle pytorch_android:cAT
```
tests passed

# Nits
Moved method IValue#getBool() - to have an order: bool, long, double, string

Test Plan: Imported from OSS

Differential Revision: D17513683

Pulled By: IvanKobzarev

fbshipit-source-id: d328f25772b61f54fb6fd3b2afacde3d7372f25c
2019-09-20 17:29:42 -07:00
Jiakai Liu
d6e3aed032 add eigen blas for mobile build (#26508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26508

Enable BLAS for pytorch mobile build using Eigen BLAS.
It's not most juicy optimization for typical mobile CV models as we are already
using NNPACK/QNNPACK for most ops there. But it's nice to have good fallback
implementation for other ops.

Test Plan:
- Create a simple matrix multiplication script model:
```
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.weights = torch.ones(1000, 1000)

    def forward(self, x):
        return torch.mm(x, self.weights)

n = Net()
module = torch.jit.trace_module(n, {'forward': torch.ones(1000, 1000)})
module.save('mm.pk')
```

- Before integrate with eigen blas:
```
adb shell 'cd /data/local/tmp; \
./speed_benchmark_torch \
--model=mm.pk \
--input_dims="1000,1000" \
--input_type=float \
--warmup=5 \
--iter=5'

Milliseconds per iter: 2218.52.
```

- After integrate with eigen blas:
```
adb shell 'cd /data/local/tmp; \
./speed_benchmark_torch_eigen \
--model=mm.pk \
--input_dims="1000,1000" \
--input_type=float \
--warmup=5 \
--iter=5'

Milliseconds per iter: 314.535.
```

- Improve MobileNetV2 single thread perf by ~5%:
```
adb shell 'cd /data/local/tmp; \
./speed_benchmark_torch \
--model=mobilenetv2.pk \
--input_dims="1,3,224,224" \
--input_type=float \
--warmup=5 \
--iter=20 \
--print_output=false \
--caffe2_threadpool_force_inline=true'

Milliseconds per iter: 367.055.

adb shell 'cd /data/local/tmp; \
./speed_benchmark_torch_eigen \
--model=mobilenetv2.pk \
--input_dims="1,3,224,224" \
--input_type=float \
--warmup=5 \
--iter=20 \
--print_output=false \
--caffe2_threadpool_force_inline=true'

Milliseconds per iter: 348.77.
```

Differential Revision: D17489587

fbshipit-source-id: efe542db810a900f680da7ec7e60f215f58db66e
2019-09-20 15:45:11 -07:00
Jiakai Liu
6fcbc37753 improve how pytorch_android cmake imports static lib (#26525)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26525

Create a util function to avoid boilerplate code as we are adding more
libraries.

Test Plan: - build CI;

Differential Revision: D17495394

Pulled By: ljk53

fbshipit-source-id: 9e19f96ede4867bdff5157424fa68b71e6cff8bf
2019-09-20 15:45:06 -07:00
Jiakai Liu
9f4174c496 expose USE_STATIC_DISPATCH macro to public headers
Summary:
USE_STATIC_DISPATCH needs to be exposed as we don't hide header files
containing it for iOS (yet). Otherwise it's error-prone to request all
external projects to set the macro correctly on their own.
Also remove redundant USE_STATIC_DISPATCH definition from other places.

Test Plan:
- build android gradle to confirm linker can still strip out dead code;
- integrate with demo app to confirm inference can run without problem;

Differential Revision: D17484260

Pulled By: ljk53

fbshipit-source-id: 653f597acb2583761b723eff8026d77518007533
2019-09-20 14:01:49 -07:00
Jiakai Liu
956b708437 turn off autograd mode in android JNI wrapper (#26477)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26477

- At inference time we need to turn off autograd mode and turn on no-variable
  mode, since we strip out these modules for the inference-only mobile build.
- Both flags are stored in thread-local variables, so we cannot simply
  set them to false globally.
- Add "autograd/grad_mode.h" header to all-in-one header 'torch/script.h'
  to reduce friction for iOS engs who might need do this manually in their
  project.

P.S. I tried to hide AutoNonVariableTypeMode in codegen but figured it's not
very trivial (e.g. there are manually written parts not covered by codegen).
Might try it again later.

Test Plan: - Integrate with Android demo app to confirm inference runs correctly.

Differential Revision: D17484259

Pulled By: ljk53

fbshipit-source-id: 06887c8b527124aa0cc1530e8e14bb2361acef31
2019-09-19 21:25:39 -07:00
Ivan Kobzarev
436c60a854 javadocs for Tensor, IValue, Module (#26149)
Summary:
At the moment this includes the https://github.com/pytorch/pytorch/pull/26219 changes. That PR is landing now; afterwards this PR will contain only javadocs.

Applied all dreiss comments from previous version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26149

Differential Revision: D17490720

Pulled By: IvanKobzarev

fbshipit-source-id: f340dee660d5ffe40c96b43af9312c09f85a000b
2019-09-19 16:50:43 -07:00
Jiakai Liu
6b4bbdda37 fix JNI wrapper for IValue interface change (#26448)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26448

Seems CI was broken by PR #25439 - fix based on interface change.

Test Plan: - build locally

Differential Revision: D17468987

Pulled By: ljk53

fbshipit-source-id: 3c1cb582c8d05357a94295b670b2ce61a7a5a4cd
2019-09-18 23:54:03 -07:00
Ivan Kobzarev
6387ffab65 Exclude libfbjni.so from pytorch_android not to have its duplicating (#26382)
Summary:
fbjni is used when linking `libpytorch.so` and is specified in `pytorch_android/CMakeLists.txt`; as a result it is built as a separate `libfbjni.so` and included in `pytorch_android.aar`.

We also have the java part of fbjni, connected to pytorch_android as a gradle dependency that contains `libfbjni.so`.

As a result, when we specify the gradle dep `'org.pytorch:pytorch_android'` (which has libfbjni.so) together with its transitive dep `'org.pytorch:pytorch_android_fbjni'` (which also has `libfbjni.so`), gradle reports an ambiguity error about this.

Fix: exclude libfbjni.so from pytorch_android.aar packaging and use `libfbjni.so` from the gradle dep `'org.pytorch:pytorch_android_fbjni'`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26382

Differential Revision: D17468723

Pulled By: IvanKobzarev

fbshipit-source-id: fcad648cce283b0ee7e8b2bab0041a2e079002c6
2019-09-18 18:40:48 -07:00
Ashkan Aliabadi
dc851ab5d4 Integrate forked QNNPACK into mobile PyTorch builds. (#25844)
Summary:
Enable forked QNNPACK builds in PyTorch mobile.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25844

Differential Revision: D17336458

Pulled By: AshkanAliabadi

fbshipit-source-id: 6ea09dd6c114b64313e9159bf7f17253bc87bfdb
2019-09-16 20:50:43 -07:00
Ivan Kobzarev
b07991f7f5 Fix error messages; tensor creation method names with type (#26219)
Summary:
After offline discussion with dzhulgakov :
- In future we will introduce creation of signed and unsigned byte dtype tensors, but java has only a signed byte, so we will have to add some separation for it in the method names (java types and tensor types cannot be cleanly mapped) => return the type in the method names

- fixes in error messages

- non-static method Tensor.numel()

- Change Tensor toString() to be more consistent with python

Update on Sep 16:

Type renaming on java side to uint8, int8, int32, float32, int64, float64
```
public abstract class Tensor {
  public static final int DTYPE_UINT8 = 1;
  public static final int DTYPE_INT8 = 2;
  public static final int DTYPE_INT32 = 3;
  public static final int DTYPE_FLOAT32 = 4;
  public static final int DTYPE_INT64 = 5;
  public static final int DTYPE_FLOAT64 = 6;
```
```
  public static Tensor newUInt8Tensor(long[] shape, byte[] data)
  public static Tensor newInt8Tensor(long[] shape, byte[] data)
  public static Tensor newInt32Tensor(long[] shape, int[] data)
  public static Tensor newFloat32Tensor(long[] shape, float[] data)
  public static Tensor newInt64Tensor(long[] shape, long[] data)
  public static Tensor newFloat64Tensor(long[] shape, double[] data)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26219

Differential Revision: D17406467

Pulled By: IvanKobzarev

fbshipit-source-id: a0d7d44dc8ce8a562da1a18bd873db762975b184
2019-09-16 18:27:16 -07:00
Ivan Kobzarev
448c53747a CircleCI android nightly (snapshot) build publishing (#26069)
Summary:
To publish android snapshots to sonatype repository:
1. set gradle properties SONATYPE_NEXUS_USERNAME, SONATYPE_NEXUS_PASSWORD, ANDROID_SIGN_KEY, ANDROID_SIGN_PASS
these variables are stored as context environment variables in 'org-member' circleCI context
2. gradle -p ~/workspace/android/ uploadArchives

Due to gradle bugs in version 5, the uploadArchives task works correctly only with gradle 4.10.3.
That is also the reason for the change `archiveClassifier.set('sources')` -> `classifier = 'sources'`, as archiveClassifier was introduced in version 5.

Registering nightly build job that publishes *-SNAPSHOT version of android api

Testing:
CircleCI successful snapshot publishing run https://circleci.com/gh/pytorch/pytorch/2786503?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
Corresponding published artifacts can be seen: https://oss.sonatype.org/#nexus-search;quick~pytorch_android
<img width="1316" alt="Screenshot 2019-09-16 09 36 14" src="https://user-images.githubusercontent.com/6638825/64976167-7f447480-d865-11e9-95c5-874c5cd62b6d.png">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26069

Differential Revision: D17406399

Pulled By: IvanKobzarev

fbshipit-source-id: c3dc1e68f02aacbb60d21f8355f676e6e5fc2897
2019-09-16 18:07:53 -07:00
Ivan Kobzarev
d250f01060 Tensor renaming to dtype, shape; support long, double (#26183)
Summary:
Applying dzhulgakov  review comments

org.pytorch.Tensor:
  - dims renamed to shape
  - typeCode to dtype
  - numElements to numel

newFloatTensor, newIntTensor... to newTensor(...)

Add support for dtype=long, double.
Re-sorted in code as byte, int, float, long, double.
For if-conditions the order is float, int, byte, long, double, as I expect the float and int branches to be used more often.

Tensor.toString() does not have data, only numel (data buffer capacity)
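
A minimal sketch of the renamed accessors, assuming `shape()`, `dtype()`, and `numel()` as the post-rename method names; the helper class is illustrative:

```
import org.pytorch.Tensor;

public class RenamedAccessorsSketch {
  public static String describe(Tensor t) {
    // shape() was dims, dtype() was typeCode, numel() was numElements.
    return "dtype=" + t.dtype() + " rank=" + t.shape().length + " numel=" + t.numel();
  }
}
```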
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26183

Differential Revision: D17374332

Pulled By: IvanKobzarev

fbshipit-source-id: ee93977d9c43c400b6c054b6286080321ccb81bc
2019-09-13 15:18:41 -07:00
Ivan Kobzarev
0ea59786e8 Use torch::from_blob instead of shareExternalPointer, nits (#25973)
Summary:
The main part is switching at::Tensor creation from `torch::empty(torch::IntArrayRef(...))->ShareExternalPointer(...)` to `torch::from_blob(...)`.
Removed the explicit `device CPU` setting, as `at::TensorOptions` defaults to a CPU device.
Also renamed local variables, removing the `input` prefix to make them shorter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25973

Differential Revision: D17356837

Pulled By: IvanKobzarev

fbshipit-source-id: 679e099b8aebd787dbf8ed422dae07a81243e18f
2019-09-13 13:40:11 -07:00
Jiakai Liu
ffee507d36 change gradle build to use static libtorch + gc-sections (#25984)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25984

Link static libtorch libraries into pytorch.so (API library for android)
with "-Wl,--gc-sections" flag to remove unused symbols in libtorch.

Test Plan:
- full gradle CI with stacked PR;
- will check final artifacts.tgz size change;

Differential Revision: D17312859

Pulled By: ljk53

fbshipit-source-id: 99584d15922867a7b3c3d661ba238a6f99f43db5
2019-09-12 15:12:45 -07:00
Ivan Kobzarev
6e4eeb1d17 Gradle tasks for publishing to bintray, jcenter, mavencentral etc. (#25351)
Summary:
Gradle tasks for publishing to bintray and jcenter, mavencentral; snapshot builds go to oss.sonatype.org

These gradle changes add the tasks:

bintrayUpload - publishing on bintray, in 'facebook' org
uploadArchives - uploading to maven repos

Gradle tasks are copied from facebook open sourced libraries like https://github.com/facebook/litho, https://github.com/facebookincubator/spectrum

To do the publishing we need to provide the following properties somehow (e.g. in ~/.gradle/gradle.properties):
```
signing.keyId=
signing.password=
signing.secretKeyRingFile=

bintrayUsername=
bintrayApiKey=
bintrayGpgPassword=

SONATYPE_NEXUS_USERNAME=
SONATYPE_NEXUS_PASSWORD=
```

android/libs/fbjni is a submodule; to be able to add publishing tasks to it (it needs to be published as a separate maven dependency), I created `android/libs/fbjni_local`, which has only a `build.gradle` with the release tasks.

The pytorch_android dependency on ':fbjni' changed from implementation -> api, as implementation is treated as a 'private' dependency, which translates to scope=runtime in the maven pom file; api works as 'compile'.

Testing:
it's already published on bintray with version 0.0.4 and can be used in gradle files as

```
repositories {
    maven {
        url  "https://dl.bintray.com/facebook/maven"
    }
}

dependencies {
    implementation 'com.facebook:pytorch_android:0.0.4'
    implementation 'com.facebook:pytorch_android_torchvision:0.0.4'
}
```

It was published in the com.facebook group.

I requested a sync to jcenter from bintray; that usually takes 2-3 days.

Versioning added version suffixes to the aar output files, and the circleCI android jobs started failing as they expected just pytorch_android.aar and pytorch_android_torchvision.aar, without any version.

To avoid that, I changed the circleCI android jobs to zip the *.aar files and publish them as a single artifact named artifacts.zip. I will add kostmo to check this part; if the circleCI jobs finish ok, everything works :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25351

Reviewed By: kostmo

Differential Revision: D17135886

Pulled By: IvanKobzarev

fbshipit-source-id: 64eebac670bbccaaafa1b04eeab15760dd5ecdf9
2019-08-30 17:52:34 -07:00
Ivan Kobzarev
0604b45f23 pytorch android circleci integration (#25286)
Summary:
Introducing circleCI jobs for pytorch_android gradle builds; the ultimate goal at the moment is to run:
```
gradle assembleRelease -p ~/workspace/android/pytorch_android assembleRelease
```

To assemble the android gradle build (aar) we need the libtorch-android shared library with headers for 4 android abis, so pytorch_android_gradle_build requires 4 jobs:
```
  - pytorch_android_gradle_build:
      requires:
        - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build
        - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build
        - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build
        - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build
```
All jobs use the same base docker_image; they are differentiated by committing docker images with different android_abi suffixes (as is done now for xla and namedtensor) in `&pytorch_linux_build_defaults`:
```
      if [[ ${BUILD_ENVIRONMENT} == *"namedtensor"* ]]; then
        export COMMIT_DOCKER_IMAGE=$output_image-namedtensor
      elif [[ ${BUILD_ENVIRONMENT} == *"xla"* ]]; then
        export COMMIT_DOCKER_IMAGE=$output_image-xla
      elif [[ ${BUILD_ENVIRONMENT} == *"-x86"* ]]; then
        export COMMIT_DOCKER_IMAGE=$output_image-android-x86
      elif [[ ${BUILD_ENVIRONMENT} == *"-arm-v7a"* ]]; then
        export COMMIT_DOCKER_IMAGE=$output_image-android-arm-v7a
      elif [[ ${BUILD_ENVIRONMENT} == *"-arm-v8a"* ]]; then
        export COMMIT_DOCKER_IMAGE=$output_image-android-arm-v8a
      elif [[ ${BUILD_ENVIRONMENT} == *"-x86_64"* ]]; then
        export COMMIT_DOCKER_IMAGE=$output_image-android-x86_64
      else
        export COMMIT_DOCKER_IMAGE=$output_image
      fi
```
The pytorch_android_gradle_build job copies the headers and the libtorch.so, libc10.so results from the libtorch android docker images, first to the workspace and then to the android_abi=x86 docker image, where the final gradle build runs by calling `.circleci/scripts/build_android_gradle.sh`.

For PR jobs we have only the `pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build` libtorch android build => it gets a separate gradle build `pytorch_android_gradle_build-x86_32` that does not do docker copying.
It calls the same `.circleci/scripts/build_android_gradle.sh`, which has only-x86_32 logic gated on BUILD_ENVIRONMENT:
`[[ "${BUILD_ENVIRONMENT}" == *-gradle-build-only-x86_32* ]]`
It also has filtering to run only for PRs, as other runs will have the full build. The filtering checks `-z "${CIRCLE_PULL_REQUEST:-}"`:
```
    - run:
        name: filter_run_only_on_pr
        no_output_timeout: "5m"
        command: |
          echo "CIRCLE_PULL_REQUEST: ${CIRCLE_PULL_REQUEST:-}"
          if [ -z "${CIRCLE_PULL_REQUEST:-}" ]; then
            circleci step halt
          fi
```

Updating docker images to the version with gradle, android_sdk, and openjdk; the jenkins job with them: https://ci.pytorch.org/jenkins/job/pytorch-docker-master/339/

pytorch_android_gradle_build successful run: https://circleci.com/gh/pytorch/pytorch/2604797#artifacts/containers/0
pytorch_android_gradle_build-x86_32 successful run: https://circleci.com/gh/pytorch/pytorch/2608945#artifacts/containers/0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25286

Reviewed By: kostmo

Differential Revision: D17115861

Pulled By: IvanKobzarev

fbshipit-source-id: bc88fd38b38ed0d0170d719fffa375772bdea142
2019-08-29 11:29:23 -07:00
Ivan Kobzarev
c0334015ed add to Tensor symmetric methods getDataAsIntArray, getDataAsByteArray (#25183)
Summary:
Tensor has getDataAsFloatArray(); since we also support Int and Byte tensors,
this adds symmetric methods for Int and Byte that throw
IllegalStateException if called on a tensor of an inappropriate type.
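
A hedged sketch of the symmetric accessors, using the int/byte factory names from this era of the Java API; the shapes and values are illustrative:

```
import org.pytorch.Tensor;

public class TypedAccessorsSketch {
  public static void example() {
    Tensor intTensor = Tensor.newIntTensor(new long[] {2}, new int[] {1, 2});
    int[] intData = intTensor.getDataAsIntArray();     // matches the tensor's type

    Tensor byteTensor = Tensor.newByteTensor(new long[] {2}, new byte[] {1, 2});
    byte[] byteData = byteTensor.getDataAsByteArray(); // matches the tensor's type

    // A mismatched call such as intTensor.getDataAsByteArray() throws IllegalStateException.
  }
}
```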
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25183

Reviewed By: dreiss

Differential Revision: D17052674

Pulled By: IvanKobzarev

fbshipit-source-id: 1d44944461ad008e202e382152cd0690c61124f4
2019-08-26 19:11:11 -07:00
Ivan Kobzarev
56245ffe05 Fix python lints for generate_test_torchscripts.py (#25107)
Summary:
Fix lints, checked with flake8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25107

Reviewed By: zrphercule

Differential Revision: D16991296

Pulled By: IvanKobzarev

fbshipit-source-id: 5b69d716e3c458dc2cfe5b668a390c7272b1c74f
2019-08-23 11:37:23 -07:00
Ivan Kobzarev
d62bca9792 jni-java wrapper for pytorchScript api (#25084)
Summary:
TLDR; initial commit of android java-jni wrapper of pytorchscript c++ api

The main idea is to provide a java interface for android developers to use torchscript modules.
The java API tries to repeat the semantics of the c++ and python torchscript APIs.

org.pytorch.Module (wrapper of torch::jit::script::Module)
 - static Module load(String path)
 - IValue forward(IValue... inputs)
 - IValue runMethod(String methodName, IValue... inputs)

org.pytorch.Tensor (semantic of at::Tensor)
 - newFloatTensor(long[] dims, float[] data)
 - newFloatTensor(long[] dims, FloatBuffer data)

 - newIntTensor(long[] dims, int[] data)
 - newIntTensor(long[] dims, IntBuffer data)

 - newByteTensor(long[] dims, byte[] data)
 - newByteTensor(long[] dims, ByteBuffer data)

org.pytorch.IValue (semantic of at::IValue)
 - static factory methods to create pytorchscript supported types

Examples of API usage can be found in PytorchInstrumentedTests.java:

Module module = Module.load(path);
IValue input = IValue.tensor(Tensor.newByteTensor(new long[]{1}, Tensor.allocateByteBuffer(1)));
IValue output = module.forward(input);
Tensor outputTensor = output.getTensor();

ThreadSafety:
The API is not thread-safe; all synchronization must be done on the caller side.

Mutability:
The org.pytorch.Tensor buffer is a DirectBuffer with native byte order and can be created with static factory methods that take a DirectBuffer.
At the moment org.pytorch.Tensor does not hold an at::Tensor on the jni side; it has long[] dimensions, a type, and a DirectByteBuffer blobData.

Input tensors are mutable (they can be modified and reused for the next inference);
the buffer values are read at the moment of the Module#forward or Module#runMethod call.
The buffers of input tensors are used directly by the input at::Tensor.

Output is copied from output at::Tensor and is immutable.

Dependencies:
The JNI level is implemented using the fbjni library, which was developed at Facebook
and has already been used and open-sourced in several open-source projects.
It is added to the repo as a submodule from a personal account so that the submodule can be switched
when fbjni is open-sourced separately.

ghstack-source-id: b39c848359a70d717f2830a15265e4aa122279c0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25084
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25105

Reviewed By: dreiss

Differential Revision: D16988107

Pulled By: IvanKobzarev

fbshipit-source-id: 41ca7c9869f8370b8504c2ef8a96047cc16516d4
2019-08-23 10:42:44 -07:00