Commit Graph

224 Commits

Author SHA1 Message Date
Michael Suo
f5919dba45 refactoring of module/object (#22203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22203
ghimport-source-id: 6b3807ac8aa53df2fdd770b43d8e54b8f0d69c20

Test Plan: Imported from OSS

Differential Revision: D15998760

Pulled By: suo

fbshipit-source-id: dd51edbcb66561189ae9d94a129434092bcad01b
2019-07-04 17:12:04 -07:00
Michael Suo
3b2844eeea Make CompilationUnit own Functions (#22202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22202
ghimport-source-id: de6c963af1df76d2d6357155e64a5913ab879f76

Test Plan: Imported from OSS

Differential Revision: D15998761

Pulled By: suo

fbshipit-source-id: 5414a6424953738d823b265d20dc67dde6e5b2d8
2019-07-04 17:12:00 -07:00
Wanchao Liang
799633e4cd move casting ops from prim to aten
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22275

Test Plan: Imported from OSS

Differential Revision: D16060597

Pulled By: wanchaol

fbshipit-source-id: a11d8ad3b037e15bd670cc7cd3fefd4f0abd0bba
2019-07-03 22:22:28 -07:00
Sebastian Messmer
e68dc899d1 Fix compiler warnings (#22162)
Summary:
Fix various compiler warnings
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22162

Differential Revision: D16085339

Pulled By: smessmer

fbshipit-source-id: d36a4b334315f1a5942cac46443a7d166ca36d0d
2019-07-02 14:12:55 -07:00
Sebastian Messmer
6d5871300b Use concrete types on call sites for Dict/List (#22004)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22004

In the future, we want all dicts/lists to store information about the types they contain.
This is only possible if the creation API doesn't allow creating lists/dicts without type information.
This diff updates call sites that didn't specify type information so that they do.
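
As a sketch of the same requirement one level up, in TorchScript (not this diff's C++ call sites): an empty container literal carries no element type, so it has to be annotated explicitly.
```python
import torch
from typing import Dict, List

@torch.jit.script
def make_typed_containers() -> int:
    # Empty literals have no element type, so annotate them explicitly.
    xs = torch.jit.annotate(List[int], [])
    d = torch.jit.annotate(Dict[str, int], {})
    xs.append(1)
    d["one"] = 1
    return xs[0] + d["one"]
```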

Reviewed By: dzhulgakov

Differential Revision: D15906387

fbshipit-source-id: 64766a2534b52c221e8a5501a85eaad13812e7bd
2019-07-02 11:52:35 -07:00
Karl Ostmo
a845d02cd5 Revert D16088191: Added math.log2 and hypot
Differential Revision:
D16088191

Original commit changeset: 5d80c480243d

fbshipit-source-id: 12ea2617e3af5bf81b1f2a57f8633ca06a99db5b
2019-07-02 10:18:42 -07:00
Horace He
b76877728a Added math.log2 and hypot
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21512

Test Plan: Imported from OSS

Differential Revision: D16088191

Pulled By: Chillee

fbshipit-source-id: 5d80c480243d2644c96df26337cf65918d79443e
2019-07-02 06:28:34 -07:00
Your Name
d632b1ff3c Expose is_mkldnn to python and register it as torchscript prim op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22386

Differential Revision: D16074722

Pulled By: bddppq

fbshipit-source-id: b9b2a05a894847640084f063fba68d9db4e6aec1
2019-07-01 12:31:59 -07:00
Owen Anderson
7cc8f37f56 Reduce needless copying when returning lists of tensors in the JIT interpreter. (#21690)
Summary:
This fixes the JIT performance gap reported in https://twitter.com/VahidK/status/1138677898439561216
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21690

Differential Revision: D15783709

fbshipit-source-id: 23bb4acda6b60c27e95667e1d53c7d261a87167d
2019-06-28 19:00:05 -07:00
Horace He
ac39869370 Fixed list() not making a copy (#22093)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/22087
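
A minimal sketch of the fixed semantics (the function name is illustrative): `list(xs)` must produce an independent copy.
```python
import torch
from typing import List

@torch.jit.script
def copy_then_append(xs: List[int]) -> List[int]:
    ys = list(xs)   # after the fix: a real copy, not an alias of xs
    ys.append(42)   # so this append cannot mutate xs
    assert len(ys) == len(xs) + 1
    return ys
```
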
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22093

Differential Revision: D16036814

Pulled By: Chillee

fbshipit-source-id: 3c7106f907415ed0f600acaf45d2c61e1c60867a
2019-06-27 13:55:43 -07:00
davidriazati
be0631b6ee Add the rest of the dict API (#21979)
Summary:
This adds the rest of the `dict.???` methods that were missing
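
A sketch of the kind of usage this enables, assuming the usual Python methods such as `setdefault`, `keys`, and `get` are among those added:
```python
import torch
from typing import Dict

@torch.jit.script
def dict_methods(d: Dict[str, int]) -> int:
    d.setdefault("bias", 0)           # insert a default if the key is absent
    total = 0
    for k in d.keys():                # iterate over the keys
        total += d[k]
    return total + d.get("bias", -1)  # lookup with a default value
```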

Pull Request resolved: https://github.com/pytorch/pytorch/pull/21979

Pulled By: driazati

Differential Revision: D16023573

fbshipit-source-id: 3ea9bd905090e2a176af654a8ca98c7d965ea679
2019-06-27 11:08:18 -07:00
Horace He
c9626a11cc Made a += b for lists do an in place add (#21896)
Summary:
In talks with smessmer, we decided that it'd be better to put the logic in `list`, as optimal behavior requires knowing `.capacity()`

Results on my cpu (for the benchmark here: https://twitter.com/VahidK/status/1138674536679821312) now look like this:
```
Pytorch batch_gather took 0.018311 seconds.
Pytorch batch_gather jit took 0.013921 seconds.
Pytorch vectorized batch_gather took 0.001384 seconds.
```
Previously, `batch_gather jit` took 3x as long as `batch_gather`.

Some logic taken from https://github.com/pytorch/pytorch/pull/21690. Note that these two PRs are somewhat orthogonal: that PR handles this benchmark through alias analysis, while this PR specializes for `+=`.

Note that we can't jit the vectorized version as we think `torch.arange` returns a float tensor.
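
A sketch of the pattern that benefits (the function is illustrative): `+=` on a list now extends the left-hand side in place instead of allocating a fresh list on every iteration.
```python
import torch
from typing import List

@torch.jit.script
def gather_rows(t: torch.Tensor) -> List[torch.Tensor]:
    out = torch.jit.annotate(List[torch.Tensor], [])
    for i in range(t.size(0)):
        out += [t[i]]   # in-place extend; previously copied the whole list
    return out
```
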
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21896

Differential Revision: D15998628

Pulled By: Chillee

fbshipit-source-id: b0085960da4613578b94deb98ac62c0a4532a8c3
2019-06-27 10:59:24 -07:00
Wanchao Liang
3ba72a11db Revert D15999938: [jit] Add the rest of the dict API
Differential Revision:
D15999938

Original commit changeset: 7bc2a55e3f79

fbshipit-source-id: e377c00e990d6f058960936e69712b77851c06fa
2019-06-26 14:16:37 -07:00
davidriazati
af9e0085f2 Add the rest of the dict API (#21979)
Summary:
This adds the rest of the `dict.???` methods that were missing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21979

Pulled By: driazati

Differential Revision: D15999938

fbshipit-source-id: 7bc2a55e3f791015a0ff2e3731703075cf0770ee
2019-06-26 10:40:29 -07:00
Sebastian Messmer
de85abf226 Allow default construction of Dict/List (#22084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22084

For DictPtr/ListPtr, default construction was disallowed because it was ambiguous whether it should create an empty list or a nullptr.
But since we renamed them to Dict/List, we can now allow default construction without ambiguity.

Differential Revision: D15948098

fbshipit-source-id: 942a9235b51608d1870ee4a2f2f0a5d0d45ec6e6
2019-06-25 17:40:48 -07:00
Wanchao Liang
e0f5ab2c2e Tree based Iterator infrastructure: for in range/list/tensor/zip/enumerate (#21801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21801
ghimport-source-id: b019d3e9a6f9bf152991a01b40e424dff176ffaa

Test Plan: Imported from OSS

Differential Revision: D15948545

Pulled By: wanchaol

fbshipit-source-id: 6110a0f3ab08cbbb398441e8330f56083ecd2d99
2019-06-22 01:00:42 -07:00
Sebastian Messmer
275087383b ListPtr->List DictPtr->Dict step 2 (#21937)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21937

This changes call sites to use the new naming scheme

Reviewed By: zdevito

Differential Revision: D15892404

fbshipit-source-id: 8d32aa90a0ead1066688166478f299fde9c2c133
2019-06-19 18:02:05 -07:00
davidriazati
5eb25c3704 Support in membership checks (#21527)
Summary:
This PR adds support for `in` checks like `key in my_dict`

For now it leaves lists as a follow-up, due to the changes around `IValue` lists and the need for an `IValue` equality op.

For objects it uses the magic method `__contains__(self, key)`
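
A sketch of the dict case (the function is illustrative); for user-defined classes the same check dispatches to `__contains__`:
```python
import torch
from typing import Dict

@torch.jit.script
def lookup(d: Dict[str, int], key: str) -> int:
    out = -1
    if key in d:   # dict membership check added by this PR
        out = d[key]
    return out
```
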
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21527

Pulled By: driazati

Differential Revision: D15811203

fbshipit-source-id: 95745060394f8a9450efaaf8ab09d9af83bea01e
2019-06-18 09:49:12 -07:00
Bram Wasti
8aeb4ef4bf Add python string standard lib (#21807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21807
ghimport-source-id: dcb2c78b8facb90a323ab9212b7703e553354273

Test Plan: Imported from OSS

Differential Revision: D15835509

Pulled By: bwasti

fbshipit-source-id: bc8bc5ae5a4fb4a1581aa94485973ed87af4eaaf
2019-06-17 15:48:36 -07:00
Ailing Zhang
ff1172d705 high pri Jit builtins (#21451)
Summary:
bin/hex/oct/round/chr
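
A sketch of these builtins in script, assuming they follow their Python semantics:
```python
import torch

@torch.jit.script
def builtin_demo(x: int) -> str:
    # bin/hex/oct/chr as script builtins; round works on floats
    return bin(x) + " " + hex(x) + " " + oct(x) + " " + chr(65)
```
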
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21451

Differential Revision: D15702863

Pulled By: ailzhang

fbshipit-source-id: 9f69896b79e7584f12353e9f2ee2969dbe1ec6d6
2019-06-16 09:48:38 -07:00
Will Feng
4a2fc00db0 Revert D15830704: [jit] Add Python string standard lib
Differential Revision:
D15830704

Original commit changeset: e55a8c6bf910

fbshipit-source-id: 1ec953bfaabab0288e953f48cde0a32370ac3fc6
2019-06-14 20:52:58 -07:00
James Reed
4bcc72fe95 Support for NamedTuple (#21428)
Summary:
Resolves https://github.com/pytorch/lockdown/issues/18

This implements NamedTuple by taking advantage of the existing `names` field in `TupleType`.

TODO: This currently doesn't retain the NamedTuple-ness through serialization. As discussed with suo offline, we can probably add a way to define an anonymous NamedTuple in script (e.g. `NamedTuple('Foo', [('a', int), ('b', float), ('c', List[float])])`) and serialize that.
TODO: implement support for calling the constructor with kwargs
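
A sketch of the intended usage (the exact spelling supported at the time may differ; `Point` is illustrative):
```python
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: float
    y: float

@torch.jit.script
def norm1(p: Point) -> float:
    return abs(p.x) + abs(p.y)

print(norm1(Point(3.0, -4.0)))  # 7.0, constructed positionally (no kwargs yet)
```
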
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21428

Differential Revision: D15741564

Pulled By: jamesr66a

fbshipit-source-id: c077cbcea1880675ca6deb340a9ec78f824a136c
2019-06-14 16:45:56 -07:00
Bram Wasti
dddc65db9e Add Python string standard lib (#21059)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21059
ghimport-source-id: f813585cde1b275c134b19009a2f5c0b3d70fc6e

Reviewed By: jamesr66a

Differential Revision: D15830704

Pulled By: bwasti

fbshipit-source-id: e55a8c6bf910a163b9a5260235e315af9532b129
2019-06-14 13:34:42 -07:00
Sebastian Messmer
b527e48588 Use c10::List (#21177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21177

- Integrate c10::ListPtr into IValue and the c10 dispatcher.
- Streamline conversion to/from IValue. Before, we had IValue::to<> and kernel_functor.h had its own ivalue_to_arg_type and return_type_to_ivalue. They are now unified. Also, this means that nested types like Dicts of Lists of Optional of Dict of ... do work as expected now

Differential Revision: D15476433

fbshipit-source-id: bde9df80df20091aa8e6ae17ba7e90abd149b954
2019-06-12 13:58:24 -07:00
James Reed
c2a18a6702 Override print when python is present (#21625)
Summary:
This makes it so we can see the output of prim::Print in environments like IPython notebooks, which override sys.stdout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21625

Differential Revision: D15756793

Pulled By: jamesr66a

fbshipit-source-id: 7d9a14b2e229ed358e784318e9d862677db2c461
2019-06-11 22:58:22 -07:00
Will Feng
968114ae3d Revert D15769256: [jit] Add python string standard lib
Differential Revision:
D15769256

Original commit changeset: 1af487446361

fbshipit-source-id: 96bea4a49664dad68762bef75ae28e64c673f8b1
2019-06-11 16:54:43 -07:00
Bram Wasti
9241c4b3c6 Add python string standard lib (#21656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21656
ghimport-source-id: cc7d7f68e33e95a97f6274c50823138aa4bacabb

Differential Revision: D15769256

Pulled By: bwasti

fbshipit-source-id: 1af487446361d90d03dce004c3e2169a3e62667d
2019-06-11 15:23:23 -07:00
Kartikey Pandey
2378c120e6 Implements divmod function (#20979)
Summary:
This PR refers to issue #18627.
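
A sketch of the builtin in script (the function is illustrative):
```python
import torch

@torch.jit.script
def div_mod_roundtrip(a: int, b: int) -> int:
    q, r = divmod(a, b)  # q = a // b, r = a % b
    return q * b + r     # reconstructs a
```
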
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20979

Differential Revision: D15743929

Pulled By: wanchaol

fbshipit-source-id: 967fc3fd519501e427176e10b112c8be1390540b
2019-06-10 15:00:56 -07:00
eellison
8a88d33103 Uninitialized Ivalue (#21387)
Summary:
Create an uninitialized IValue. This will be needed for Breaks & Continues, to match up if-block outputs for values that are guaranteed not to be used but need to escape the block scope. It is not exposed to users.

Was previously part of final returns but I was asked to make a separate PR for it.
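
A sketch of the kind of control flow that motivates it (valid once early returns land): on the early-return path the if-blocks still need an output slot for `y`, even though it can never be read.
```python
import torch

@torch.jit.script
def early_exit(x: int) -> int:
    if x > 0:
        return x  # `y` is never defined on this path,
    else:         # yet both branches must emit an output for it;
        y = -x    # an uninitialized IValue fills that dead slot
    return y
```
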
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21387

Differential Revision: D15745124

Pulled By: eellison

fbshipit-source-id: ae6a6f766b4a70a71b9033987a630cfbf044e296
2019-06-10 14:51:24 -07:00
Nikolay Korovaiko
30d6933016 BailOut Graphs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21381

Differential Revision: D15724412

Pulled By: Krovatkin

fbshipit-source-id: 18e4a1916c7cd1baea76953d0087d6257e58c55b
2019-06-10 11:49:38 -07:00
Zachary DeVito
13edda417d Prepare interpreter for function calling (#21558)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21558
ghimport-source-id: a8a19dbefea869ca1401e5afea6c02f31f95b99a

Reviewed By: suo

Differential Revision: D15729491

Pulled By: zdevito

fbshipit-source-id: 9629664608a2379a2ddcafaf741fa8463c4fb917
2019-06-09 15:28:13 -07:00
Zachary DeVito
d71501259b Revert D15572818: Prepare interpreter for function calling
Differential Revision:
D15572818

Original commit changeset: 3a9b5f053664

fbshipit-source-id: b932411e8e88c7414c8db332d6049fe4e26bd83e
2019-06-07 22:20:54 -07:00
Zachary DeVito
c53e4d012d Prepare interpreter for function calling (#21185)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21185
ghimport-source-id: 6b9cb92d1f1f59bb980dcfa0d29dfe985ee955d1

Reviewed By: jamesr66a

Differential Revision: D15572818

Pulled By: zdevito

fbshipit-source-id: 3a9b5f053664c09212b97f1391d8d006337b5550
2019-06-07 20:56:46 -07:00
Horace He
7e300fbb21 Added degrees, radians, ldexp (#21131)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21131
ghimport-source-id: 62b9cb71a17f9c9a7999a6e33c2d8b840ce097ff

Differential Revision: D15563184

Pulled By: Chillee

fbshipit-source-id: e2c47fb9f9c0fe9f039cfd001c5e6d5b455e034c
2019-06-05 19:17:02 -07:00
Horace He
f8202d85a0 Added frexp, isinf, isnan, isfinite (#21130)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21130
ghimport-source-id: fa771086da13deed232e142db6f940439bcc67bc

Differential Revision: D15563186

Pulled By: Chillee

fbshipit-source-id: fe33dbc454af2a9626ad810a5304300eb17d7530
2019-06-05 18:46:39 -07:00
Horace He
ba2bdf8d0e Added factorial (#21129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21129
ghimport-source-id: a676dd33c4d0b2b60c3e9ce725bda0abeb22375f

Differential Revision: D15563183

Pulled By: Chillee

fbshipit-source-id: 641cae34c181a16c772665f5f7ed01c96a67ea9c
2019-06-05 11:51:03 -07:00
davidriazati
f172fadd80 Make warnings be UserWarnings with source file info (#21231)
Summary:
Redo of #15201, this makes `warnings.warn` calls match their Python
behavior
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21231

Pulled By: driazati

Differential Revision: D15605266

fbshipit-source-id: 5931fd720b0c40d52dd492fbd1f5a76abefaab5c
2019-06-05 11:09:11 -07:00
Horace He
92b76df8f6 Finished trigonometric functions (#21128)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21128
ghimport-source-id: d566de103f2aefc59e6423181de325d8f42620f4

Differential Revision: D15563190

Pulled By: Chillee

fbshipit-source-id: ad2e09cac5c7dae9978a7bd61098c2828620cdc4
2019-06-04 17:59:09 -07:00
Horace He
7309cb60fd Finished the high-priority functions (#21127)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21127
ghimport-source-id: 609021958e76ea01299f62b9491038005e6b4f27

Differential Revision: D15563189

Pulled By: Chillee

fbshipit-source-id: 5c6155a69fff7447689ef012ea303dc358d50486
2019-06-04 17:59:05 -07:00
Horace He
622588d8fd Added remainder of high-priority trigonometric math ops (#21126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21126
ghimport-source-id: e310f3cfb28436b99ad038691887ca82068ca2c9

Differential Revision: D15563191

Pulled By: Chillee

fbshipit-source-id: 7135ddd5bc9eebc818694fa8b67eaade907fa8a1
2019-06-04 17:59:02 -07:00
Horace He
6938de8851 made floor/ceil return ints (#21124)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21124
ghimport-source-id: e3e45bd50c9af1ee03fd58f2f4d631ce23d9612e

Differential Revision: D15563187

Pulled By: Chillee

fbshipit-source-id: 6504a41da883a8287d64db20d40cf958edb7404c
2019-06-04 10:32:16 -07:00
Lucas Hendren
770089c2b8 math module support: isnan, asinh, atanh, cosh, sinh, and tanh (#19337)
Summary:
driazati and eellison, please review. This PR is for #19026; specifically: isnan, asinh, atanh, cosh, sinh, and tanh.
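
A sketch of these math-module functions in script, assuming they mirror Python's `math`:
```python
import math

import torch

@torch.jit.script
def hyperbolic_sum(x: float) -> float:
    out = 0.0
    if not math.isnan(x):
        out = math.sinh(x) + math.cosh(x) + math.tanh(x)
    return out
```
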
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19337

Differential Revision: D15580932

Pulled By: driazati

fbshipit-source-id: 38513fa59088e038264f9f6f0d6374a13a165589
2019-06-03 10:54:42 -07:00
Horace He
80020306ef Added base parameter to math.log (#21151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21151
ghimport-source-id: 76dc0852022a87a000888a787de1391f71923074

Differential Revision: D15563185

Pulled By: Chillee

fbshipit-source-id: 6ed7cc32ed7c103f360022b97f6df47ccd0403e7
2019-05-30 13:32:52 -07:00
Wanchao Liang
2cd1c78632 Revert D15523444: [jit] move casting ops from prim to aten
Differential Revision:
D15523444

Original commit changeset: 642342bf1cce

fbshipit-source-id: 29de1c7e19cbb3273230c280346e786e61d2d445
2019-05-29 13:42:05 -07:00
Wanchao Liang
a0111aaf0d move casting ops from prim to aten (#21002)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21002
ghimport-source-id: 4c88a54a3ecb76c5ca3c2c328b749350860a166d

Differential Revision: D15523444

Pulled By: wanchaol

fbshipit-source-id: 642342bf1ccea83c88897bc023979a32ee01addf
2019-05-29 12:36:47 -07:00
Horace He
dd903eb645 Add start and step parameters for range in torchscript (#20795)
Summary:
Fixes #18440

I calculate a derived index from `start,stop,step` as `start + step*index`. When `start=0` and `step=1` (the defaults/`range(n)`), this is the same behavior as before.

Unfortunately, it seems that we do not optimize out operations like `x*1` or `x+0`. That means we're doing lots of redundant operations when we don't need to. EDIT: More specifically, it seems like we only do this optimization for (tensor, scalar): https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/passes/peephole.cpp#L128

The most annoying part of this code is calculating the number of iterations, given `start, stop, step`. I ended up going with the formula `(abs(stop-start) + abs(step)-1)//abs(step)`. Other intuitively appealing formulas like `(stop-start + step -1)//step` don't work for negative numbers.

I tried using `SymbolicVariable` for the calculations, but it seems that `SymbolicVariable` only emits ops for tensors, not the integers we have here.
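
A plain-Python check of that trip-count formula against `range`, for the non-empty case where the sign of `step` points from `start` toward `stop`:
```python
def trip_count(start: int, stop: int, step: int) -> int:
    # Formula from the commit message; assumes step's sign points
    # from start toward stop (the non-empty case).
    return (abs(stop - start) + abs(step) - 1) // abs(step)

assert trip_count(0, 10, 1) == len(range(0, 10, 1))    # 10
assert trip_count(0, 10, 3) == len(range(0, 10, 3))    # 4
assert trip_count(10, 0, -3) == len(range(10, 0, -3))  # 4
```
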
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20795

Differential Revision: D15446869

Pulled By: Chillee

fbshipit-source-id: 6085545ace04e25985c6ac870226f7a651f670d5
2019-05-29 12:31:29 -07:00
Horace He
2ba608b4a0 Fixed gcd to use 64 bit integers (#21041)
Summary:
Not much to say. Fixes implementation introduced here: https://github.com/pytorch/pytorch/pull/19115
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21041

Differential Revision: D15528801

Pulled By: Chillee

fbshipit-source-id: bacd709eb711ca00156bd70480d6051b437517ed
2019-05-28 16:20:55 -07:00
Wanchao Liang
0885dd28c8 refactor register_prim_ops (#21001)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21001
ghimport-source-id: f1b8e3999bf18fb0f3b857a13c3e3f609e1e4b4e

Differential Revision: D15523445

Pulled By: wanchaol

fbshipit-source-id: c1e29b0985bde580703a1fca9df46da773826df6
2019-05-28 14:11:04 -07:00
Thomas Viehmann
17941f9979 JIT: Eliminate SumToSize by using Optional Lists (#18697)
Summary:
This PR eliminates unneeded grad_sum_to_size calls and, in particular, speeds up the LSTM backward by allowing better fusion.

It consists of two parts:
- In AutoDiff, record broadcasting sizes only if the broadcast output size is different from the input size, otherwise record None.
- The specialization of Optional arguments (#18407) then allows us to eliminate `_grad_sum_to_size(t, None)` in the peephole optimization step.

Thus, in the LSTM case, no SumToSize remain in the crucial fusion group. The trick here is that we can specialize on the runtime information from the forward.

I'm testing that different broadcasting situations lead to different graphs.

I didn't move all symbolic_script _grad_sum_to_size to the new logic, but it might be better to do this incrementally, anyway.
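
A plain-Python sketch of the recording rule described above (the helper name is hypothetical):
```python
from typing import List, Optional

def record_sum_to_size(out_size: List[int], in_size: List[int]) -> Optional[List[int]]:
    # Record a size to sum back to only when broadcasting actually changed
    # the shape; returning None lets the peephole pass later erase the whole
    # _grad_sum_to_size(t, None) call.
    if out_size == in_size:
        return None
    return in_size

assert record_sum_to_size([2, 3], [2, 3]) is None    # no broadcast: eliminable
assert record_sum_to_size([2, 3], [1, 3]) == [1, 3]  # broadcast: keep the sum
```
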
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18697

Differential Revision: D15482076

Pulled By: wanchaol

fbshipit-source-id: 7f89367e35b8729910077c95c02bccefc8678afb
2019-05-24 11:24:17 -07:00
Will Feng
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in Python API, which creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as its `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` works even if `x` and `y` are of different TensorImpl type (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` doesn't work anymore if `x` and `y` are of different TensorImpl type, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl type.
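
A minimal sketch of the now-disallowed pattern:
```python
import torch

x = torch.tensor([1.0, 2.0])        # dense: impl is TensorImpl
y = torch.sparse_coo_tensor(
    torch.tensor([[0, 1]]),
    torch.tensor([1.0, 1.0]),
    (2,),
)                                   # sparse: impl is SparseTensorImpl

x.data = y  # worked before this PR; errors afterwards (mismatched TensorImpl types)
```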

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00