Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48990
Introducing TensorImageUtils methods that prepare tensors in the ChannelsLast MemoryFormat.
ChannelsLast is preferred for performance.
To avoid API-breaking changes, an additional MemoryFormat parameter is added, defaulting to CONTIGUOUS.
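A minimal usage sketch of the new parameter (assuming the org.pytorch.MemoryFormat enum and a `bitmapToFloat32Tensor` overload that accepts it, as described above; the `TORCHVISION_NORM_*` constants are taken from the existing TensorImageUtils API):
```
import android.graphics.Bitmap;
import org.pytorch.MemoryFormat;
import org.pytorch.Tensor;
import org.pytorch.torchvision.TensorImageUtils;

final class ChannelsLastExample {
  static Tensor toChannelsLastTensor(Bitmap bitmap) {
    // Request channels-last (NHWC) data layout for the prepared tensor.
    // Omitting the MemoryFormat argument keeps the previous behavior (CONTIGUOUS).
    return TensorImageUtils.bitmapToFloat32Tensor(
        bitmap,
        TensorImageUtils.TORCHVISION_NORM_MEAN_RGB,
        TensorImageUtils.TORCHVISION_NORM_STD_RGB,
        MemoryFormat.CHANNELS_LAST);
  }
}
```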
Tested by checking test_app, which uses this call:
```
gradle -p android installMnetLocalBaseDebug -PABI_FILTERS=arm64-v8a
```
Test Plan: Imported from OSS
Reviewed By: jeffxtang
Differential Revision: D27173940
Pulled By: IvanKobzarev
fbshipit-source-id: 27788082d2c8b190323eadcf18de25d2c3b5e1f1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29455
- No longer need to load the native library.
- Shape is now private.
Test Plan: Ran test.
Reviewed By: IvanKobzarev
Differential Revision: D18405213
fbshipit-source-id: e1d1abcf2122332317693ce391e840904b69e135
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27359
Adding methods to TensorImageUtils:
```
bitmapToFloatBuffer(..., FloatBuffer outBuffer, int outBufferOffset)
imageYUV420CenterCropToFloat32Tensor(..., FloatBuffer outBuffer, int outBufferOffset)
```
This makes it possible to
- reuse a FloatBuffer across inference runs
- create a batch Tensor (containing several images/bitmaps)
Reusing the FloatBuffer in the example demo app (image classification), the profiler shows fewer memory allocations (previously every run created a new input tensor with a newly allocated FloatBuffer) and roughly 20 ms savings per run on my Pixel XL.
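A minimal sketch of the reuse pattern (assuming `Tensor.allocateFloatBuffer` and `Tensor.fromBlob(FloatBuffer, long[])` from org.pytorch, and that the leading parameters of `bitmapToFloatBuffer` are (bitmap, x, y, width, height, normMeanRGB, normStdRGB), i.e. the elided part of the signature above):
```
import android.graphics.Bitmap;
import java.nio.FloatBuffer;
import org.pytorch.Tensor;
import org.pytorch.torchvision.TensorImageUtils;

final class BufferReuseExample {
  private static final int W = 224;
  private static final int H = 224;

  // Allocated once and reused for every inference call; a batch of N images could be
  // built by writing image i at offset i * 3 * W * H of a larger buffer.
  private final FloatBuffer buffer = Tensor.allocateFloatBuffer(3 * W * H);

  Tensor toInputTensor(Bitmap bitmap) {
    // Leading parameters (crop rect, normalization) are assumed, see the note above.
    TensorImageUtils.bitmapToFloatBuffer(
        bitmap, 0, 0, W, H,
        TensorImageUtils.TORCHVISION_NORM_MEAN_RGB,
        TensorImageUtils.TORCHVISION_NORM_STD_RGB,
        buffer, 0);
    // Wrap the shared buffer; no new float storage is allocated per run.
    return Tensor.fromBlob(buffer, new long[] {1, 3, H, W});
  }
}
```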
Known open question:
At the moment every tensor element is written separately via `outBuffer.put()`, which is a native call crossing language boundaries.
An alternative would be to allocate a `float[]` on the Java side, fill it, and put it into `outBuffer` with a single call, reducing native calls but increasing memory allocation on the Java side.
Tested locally by eyeballing durations - did not notice a big difference - decided to go with fewer memory allocations.
It would be good to merge this into 1.3.0, but if not, the demo app can use snapshot dependencies with this change.
PR with integration to demo app:
https://github.com/pytorch/android-demo-app/pull/6
Test Plan: Imported from OSS
Differential Revision: D17758621
Pulled By: IvanKobzarev
fbshipit-source-id: b4f1a068789279002d7ecc0bc680111f781bf980
Summary:
- Normalization mean and std are now specified as parameters instead of being hardcoded (see the sketch after this list)
- imageYUV420CenterCropToFloat32Tensor previously worked only with square tensors (width == height); generalized to support width != height with all rotations and scalings
- javadocs
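A minimal sketch of the parameterized normalization (the `bitmapToFloat32Tensor(Bitmap, float[], float[])` overload is assumed from the TensorImageUtils API; the mean/std values are the standard ImageNet statistics that were previously hardcoded):
```
import android.graphics.Bitmap;
import org.pytorch.Tensor;
import org.pytorch.torchvision.TensorImageUtils;

final class NormalizationExample {
  // Standard ImageNet statistics, previously hardcoded inside TensorImageUtils.
  static final float[] MEAN_RGB = {0.485f, 0.456f, 0.406f};
  static final float[] STD_RGB = {0.229f, 0.224f, 0.225f};

  static Tensor toNormalizedTensor(Bitmap bitmap) {
    // Mean and std are now explicit parameters, so callers can supply their own statistics.
    return TensorImageUtils.bitmapToFloat32Tensor(bitmap, MEAN_RGB, STD_RGB);
  }
}
```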
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26690
Differential Revision: D17556006
Pulled By: IvanKobzarev
fbshipit-source-id: 63f3321ea2e6b46ba5c34f9e92c48d116f7dc5ce
Summary:
After offline discussion with dzhulgakov:
- In the future we will introduce creation of signed-byte and unsigned-byte dtype tensors, but Java has only a signed byte, so we will have to separate them in method names (Java types and tensor dtypes cannot be mapped one-to-one) => including the type in the method names
- Fixes in error messages
- Tensor.numel() is now a non-static method
- Changed Tensor.toString() to be more consistent with Python
Update on Sep 16:
Types renamed on the Java side to uint8, int8, int32, float32, int64, float64:
```
public abstract class Tensor {
public static final int DTYPE_UINT8 = 1;
public static final int DTYPE_INT8 = 2;
public static final int DTYPE_INT32 = 3;
public static final int DTYPE_FLOAT32 = 4;
public static final int DTYPE_INT64 = 5;
public static final int DTYPE_FLOAT64 = 6;
```
```
public static Tensor newUInt8Tensor(long[] shape, byte[] data)
public static Tensor newInt8Tensor(long[] shape, byte[] data)
public static Tensor newInt32Tensor(long[] shape, int[] data)
public static Tensor newFloat32Tensor(long[] shape, float[] data)
public static Tensor newInt64Tensor(long[] shape, long[] data)
public static Tensor newFloat64Tensor(long[] shape, double[] data)
```
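A minimal sketch using the typed factories listed above (shapes and data are illustrative; the methods are assumed to be static members of org.pytorch.Tensor, as in the listing):
```
import org.pytorch.Tensor;

final class TypedFactoryExample {
  static void create() {
    long[] shape = {2, 3};

    // The dtype is explicit in each factory name, since Java types and tensor dtypes
    // cannot be mapped one-to-one (e.g. Java has no unsigned byte).
    Tensor u8 = Tensor.newUInt8Tensor(shape, new byte[] {1, 2, 3, 4, 5, 6});
    Tensor f32 = Tensor.newFloat32Tensor(shape, new float[] {0f, 1f, 2f, 3f, 4f, 5f});
    Tensor i64 = Tensor.newInt64Tensor(shape, new long[] {0, 1, 2, 3, 4, 5});
  }
}
```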
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26219
Differential Revision: D17406467
Pulled By: IvanKobzarev
fbshipit-source-id: a0d7d44dc8ce8a562da1a18bd873db762975b184
Summary:
Applying dzhulgakov's review comments.
org.pytorch.Tensor:
- dims renamed to shape
- typeCode to dtype
- numElements to numel
- newFloatTensor, newIntTensor, ... to newTensor(...)
- Added support for dtype=long, double
Types resorted in code as byte, int, float, long, double.
In if conditions the order is float, int, byte, long, double, as I expect the float and int branches to be taken most often.
Tensor.toString() no longer includes the data, only numel (the data buffer capacity).
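A hypothetical sketch of the renamed surface (the `newTensor(long[] shape, float[] data)` overload signature is an assumption modeled on the shape-first typed factories introduced later; accessor return types are also assumed):
```
import org.pytorch.Tensor;

final class RenamedApiExample {
  static void inspect() {
    // Assumed overload: shape first, then data.
    Tensor t = Tensor.newTensor(new long[] {2, 3}, new float[] {0f, 1f, 2f, 3f, 4f, 5f});

    long[] shape = t.shape(); // was dims
    int dtype = t.dtype();    // was typeCode
  }
}
```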
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26183
Differential Revision: D17374332
Pulled By: IvanKobzarev
fbshipit-source-id: ee93977d9c43c400b6c054b6286080321ccb81bc
Summary:
Initial commit of pytorch_android_torchvision, which has utility methods for:
- android.media.Image in YUV_420_888 format (camera output) -> Tensor(Float) in torchvision format, normalized with the ImageNet mean/std (see the sketch below)
- Bitmap -> Tensor(Float) in torchvision format
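A minimal usage sketch for the camera path (the parameter list shown matches the later TensorImageUtils API with explicit normalization, added in PR #26690 above; at this initial revision the ImageNet mean/std were still hardcoded):
```
import android.media.Image;
import org.pytorch.Tensor;
import org.pytorch.torchvision.TensorImageUtils;

final class CameraExample {
  static Tensor toInputTensor(Image image, int rotateCWDegrees) {
    // image is expected to be in ImageFormat.YUV_420_888 (typical Camera2 output).
    return TensorImageUtils.imageYUV420CenterCropToFloat32Tensor(
        image,
        rotateCWDegrees,
        /* tensorWidth= */ 224,
        /* tensorHeight= */ 224,
        TensorImageUtils.TORCHVISION_NORM_MEAN_RGB,
        TensorImageUtils.TORCHVISION_NORM_STD_RGB);
  }
}
```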
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25185
Reviewed By: dreiss
Differential Revision: D17053008
Pulled By: IvanKobzarev
fbshipit-source-id: 6bf7a39615bf876999982b06925e7444700e284b