Commit Graph

22 Commits

Author SHA1 Message Date
Jiakai Liu
3b1c3996e1 remove RTTI check for TensorImpl shadow copy (#22773)
Summary:
We introduced RTTI in recent change: https://github.com/pytorch/pytorch/pull/21613

For internal mobile builds we don't enable '-frtti' yet. This diff replaces the
RTTI check with an alternative approach.

According to dzhulgakov we can compare two tensors' type_id directly in most cases -
which is stricter than comparing TensorImpl subclass types, as the TensorImpl -> type_id
mapping is 1-to-n, but it's more appropriate for this use case.

The only two cases where we can relax the direct type comparison (for legacy reasons) are:
1. CPUTensor <-> CUDATensor;
2. SparseCPUTensor <-> SparseCUDATensor;
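The comparison rule described above can be sketched in Python (the names and enum are illustrative stand-ins, not the actual C++ implementation):

```python
from enum import Enum, auto

class TensorTypeId(Enum):
    CPU = auto()
    CUDA = auto()
    SparseCPU = auto()
    SparseCUDA = auto()

# Pairs where direct type_id equality is relaxed for legacy reasons.
_COMPATIBLE_PAIRS = {
    frozenset({TensorTypeId.CPU, TensorTypeId.CUDA}),
    frozenset({TensorTypeId.SparseCPU, TensorTypeId.SparseCUDA}),
}

def compatible_for_shadow_copy(a, b):
    # Compare type_ids directly instead of using RTTI on TensorImpl
    # subclasses; fall back to the two whitelisted legacy pairs.
    return a == b or frozenset({a, b}) in _COMPATIBLE_PAIRS

compatible_for_shadow_copy(TensorTypeId.CPU, TensorTypeId.CUDA)        # True
compatible_for_shadow_copy(TensorTypeId.CPU, TensorTypeId.SparseCPU)   # False
```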
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22773

Differential Revision: D16277696

Pulled By: ljk53

fbshipit-source-id: 043e264fbacc37b7a11af2046983c70ddb62a599
2019-07-15 23:21:57 -07:00
Your Name
d632b1ff3c Expose is_mkldnn to python and register it as torchscript prim op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22386

Differential Revision: D16074722

Pulled By: bddppq

fbshipit-source-id: b9b2a05a894847640084f063fba68d9db4e6aec1
2019-07-01 12:31:59 -07:00
Junjie Bai
7d81e62562 Add mkldnn tests for running end to end resnet models
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22041

Differential Revision: D15928786

Pulled By: bddppq

fbshipit-source-id: 4b12e5bda2da13aba2d63d357a0a854d59317362
2019-06-20 22:42:49 -07:00
xiaobing.zhang
b6f542f8a1 Add aten mkldnn transpose (#21943)
Summary:
This PR:

1.  Makes mkldnn reshape share the same memory for plain-format tensors.

2.  Adds the mkldnn transpose operator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21943

Differential Revision: D15916063

Pulled By: bddppq

fbshipit-source-id: d1971c67341f277c1e80c1fa34e213b6c27f4062
2019-06-19 22:20:46 -07:00
xiaobing.zhang
c06ccbe663 Add aten mkldnn zero_ operator (#20573)
Summary:
### mkldnn backward ops list:
 - [ ] \(https://github.com/pytorch/pytorch/pull/20567) Add aten mkldnn conv2d backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20572) Add aten mkldnn batchnorm backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20573) Add aten mkldnn zero_ operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20575) Add mkldnn mul operator 💚
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20573

Differential Revision: D15820477

Pulled By: bddppq

fbshipit-source-id: 35d95f5b4e013c8db1911f52148550a2e40a2e68
2019-06-14 09:48:49 -07:00
xiaobing.zhang
b599bb3836 Add mkldnn mul operator (#20575)
Summary:
### mkldnn backward ops list:
 - [ ] \(https://github.com/pytorch/pytorch/pull/20567) Add aten mkldnn conv2d backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20570) Add aten mkldnn backward ops: relu, linear and reshape 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20572) Add aten mkldnn batchnorm backward operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20573) Add aten mkldnn zero_ operator 💛
 - [ ] \(https://github.com/pytorch/pytorch/pull/20575) Add mkldnn mul operator 💛
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20575

Differential Revision: D15799529

Pulled By: bddppq

fbshipit-source-id: 4887d8ef1a0e316ad9db199b657d9481fc13e486
2019-06-12 22:41:51 -07:00
Junjie Bai
5744fb3007 Add mkldnn softmax operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21516

Differential Revision: D15712759

Pulled By: bddppq

fbshipit-source-id: bf515135263156bea1a2b3e53a47edf697b8b1e2
2019-06-07 15:22:18 -07:00
Aapo Kyrola
b161832f10 support ceil mode by padding changes (#21310)
Summary:
Modify the MKLDNN pooling operation to support ceil mode by adjusting the right/bottom padding accordingly. This is done similarly to Caffe (see discussion https://github.com/pytorch/pytorch/pull/19205#discussion_r276903751).

To make this possible, I split the padding into left and right (top/bottom). This naming is confusing but actually follows mkldnn's own naming for pooling::compute(). We increase the right paddings so that the output matches the expected ceil-mode output size.

Strengthened the test case.
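A minimal 1-D sketch of the padding adjustment described above (the function name and signature are illustrative; the actual change lives in the MKLDNN pooling code):

```python
import math

def adjust_right_padding(in_size, kernel, stride, pad_l, pad_r):
    # Output size the pooling would produce in ceil mode.
    out_ceil = math.ceil((in_size + pad_l + pad_r - kernel) / stride) + 1
    # Extra right/bottom padding needed so that floor-mode arithmetic
    # produces the same output size.
    extra = (out_ceil - 1) * stride - (in_size + pad_l + pad_r - kernel)
    return pad_r + max(0, extra)

# 1-D example: input 5, kernel 2, stride 2, no padding.
# Floor mode gives (5 - 2) // 2 + 1 = 2 outputs; ceil mode expects 3,
# so the right padding grows by one element.
adjust_right_padding(5, 2, 2, 0, 0)  # -> 1
```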
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21310

Reviewed By: bddppq

Differential Revision: D15611664

Pulled By: akyrola

fbshipit-source-id: 46b40015dafef69a8fd5e7b2c261d8dbf448cd20
2019-06-06 14:47:35 -07:00
Cheng,Penghui
57f932a638 Enable 'empty' function for mkldnn
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21184

Differential Revision: D15625296

Pulled By: bddppq

fbshipit-source-id: 47d26798bcf48e227ffd813f299959a7b8993641
2019-06-04 14:16:13 -07:00
xiaobing.zhang
ebc8d7170e fix the bug for mkldnn clone (#20943)
Summary:
This PR fixes a bug when cloning an MKLDNN tensor; see issue https://github.com/pytorch/pytorch/issues/20895.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20943

Differential Revision: D15511516

Pulled By: mrshenli

fbshipit-source-id: 05b41d6c7eaf8703521f4c768b8f26ec8501dc5e
2019-05-27 12:09:52 -07:00
Will Feng
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove the `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in the Python API, which creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that a Variable always has AutogradMeta in its TensorImpl, but a Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` works even if `x` and `y` are of different TensorImpl type (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` doesn't work anymore if `x` and `y` are of different TensorImpl type, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl type.

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
Junjie Bai
70caa2efe2 Add mkldnn sigmoid operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20820

Reviewed By: dzhulgakov

Differential Revision: D15455866

fbshipit-source-id: 712b06dfbd441051dc284a1acdf94926df09bc1d
2019-05-23 12:51:57 -07:00
Junjie Bai
8dedb04c26 Enable torch.jit.trace for mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20800

Differential Revision: D15447892

fbshipit-source-id: 78e76523c5412c020a2bc22d6998ff7b36356720
2019-05-23 12:51:54 -07:00
Junjie Bai
63585c3b81 Add support for save and load mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20799

Reviewed By: wanchaol

Differential Revision: D15447891

fbshipit-source-id: e34de946c79282fb934a5c52ff1def41c7993c75
2019-05-23 12:51:50 -07:00
Junjie Bai
cb8ff2a2b4 Add mkldnn support for adaptive_avg_pool2d (#19818)
Summary:
AdaptiveAvgPool2d is used in torchvision resnet models https://github.com/pytorch/vision/blob/9a481d0/torchvision/models/resnet.py#L145

Fixes #19797
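For context, adaptive average pooling derives each output element's window from the input and output sizes so the windows tile the whole input. A 1-D Python sketch of that window computation (illustrative only, not the MKLDNN kernel):

```python
import math

def adaptive_avg_pool1d(values, out_size):
    """Output element i averages input[start:end) with
    start = floor(i * n / out) and end = ceil((i + 1) * n / out)."""
    n = len(values)
    out = []
    for i in range(out_size):
        start = (i * n) // out_size
        end = math.ceil((i + 1) * n / out_size)
        out.append(sum(values[start:end]) / (end - start))
    return out

adaptive_avg_pool1d([1.0, 2.0, 3.0, 4.0], 2)  # -> [1.5, 3.5]
```

With `out_size=1` this degenerates to global average pooling, which is how the resnet models use it.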
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19818

Differential Revision: D15112777

Pulled By: bddppq

fbshipit-source-id: 6c9b29c805d28356cda49c10c2cd3ce9d7a8b3f5
2019-04-30 15:00:34 -07:00
Junjie Bai
c9f380df02 Add aten mkldnn linear operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19210

Reviewed By: dzhulgakov

Differential Revision: D14901641

fbshipit-source-id: 8fa68b9941fd93cea0f313a828cba34c5c81ae11
2019-04-26 13:41:57 -07:00
Junjie Bai
48b81da4cb Add aten mkldnn view operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19209

Reviewed By: dzhulgakov

Differential Revision: D14894545

fbshipit-source-id: 69455184811de1d1444b5d494e4a9d8c83301431
2019-04-26 13:41:54 -07:00
Junjie Bai
61d5a8dded Add aten mkldnn add operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19207

Reviewed By: dzhulgakov

Differential Revision: D14889477

fbshipit-source-id: 2c5e5ea5dfc26a9c9a172c5fa2c6d7584b167e16
2019-04-26 13:41:51 -07:00
Junjie Bai
fb53c189b3 Add aten mkldnn batch_norm operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19206

Reviewed By: dzhulgakov

Differential Revision: D14887205

fbshipit-source-id: ea00c9e3205c449d08ab29535309164f951aab95
2019-04-26 13:41:48 -07:00
Junjie Bai
4864000e55 Add aten mkldnn ops: relu, max_pool2d and avg_pool2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19205

Reviewed By: dzhulgakov

Differential Revision: D14850598

fbshipit-source-id: 5bbd5909c06df9c980de680ffb81bf772766c0ba
2019-04-26 13:41:44 -07:00
Junjie Bai
3445020ca3 Add aten mkldnn conv2d operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19204

Reviewed By: dzhulgakov

Differential Revision: D14857513

fbshipit-source-id: 1172c9785e5a17a7d7360474551bdc7a511b3f2f
2019-04-26 13:41:41 -07:00
jgong5
3ad710b837 Add MKL-DNN Tensor (#17748)
Summary:
This is a minimalist PR to add an MKL-DNN tensor, per the discussion in GitHub issue https://github.com/pytorch/pytorch/issues/16038

Ops on MKL-DNN tensors will be supported in follow-up PRs to speed up the imperative path.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17748

Reviewed By: dzhulgakov

Differential Revision: D14614640

Pulled By: bddppq

fbshipit-source-id: c58de98e244b0c63ae11e10d752a8e8ed920c533
2019-04-08 21:41:38 -07:00