pytorch/torch/csrc/jit/script
Will Feng 8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work (https://github.com/pytorch/pytorch/issues/13638), this PR makes the following changes:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove the `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in the Python API, which creates a new `Variable` that shares the same storage and tensor metadata as the original `Variable`, but with a completely new autograd history (see the sketch below).
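
For reference, here is a minimal Python sketch of the `tensor.data` behavior that `Variable.variable_data()` is described as matching: the result shares storage with the original but starts with a fresh autograd history.
```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.data                # shares storage and tensor metadata with x

y[0] = 5.0                # in-place change is visible through x (shared storage)
print(x)                  # tensor([5., 1., 1.], requires_grad=True)
print(y.requires_grad)    # False: y carries no autograd history
```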

After this PR, Variable no longer wraps a Tensor internally; both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that a Variable always has AutogradMeta in its TensorImpl, while a Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` worked even if `x` and `y` were of different TensorImpl types (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). After this PR, `x.data = y` no longer works if `x` and `y` are of different TensorImpl types, because the underlying implementation `variable.set_data(tensor)` no longer works when `variable` and `tensor` have different TensorImpl types.
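
A minimal sketch of the newly failing pattern (the exact exception type and message are assumptions and may differ):
```python
import torch

x = torch.randn(2, 2)                        # dense CPU tensor (TensorImpl)
y = torch.sparse_coo_tensor(                 # sparse CPU tensor (SparseTensorImpl)
    torch.tensor([[0, 1], [0, 1]]), torch.tensor([1., 2.]), (2, 2))

try:
    x.data = y          # worked before this PR; expected to fail afterwards
except Exception as e:  # exact exception type is an assumption
    print("x.data = y raised:", e)
```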

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients into `x` will change the tensor that `x.grad` points to. This is best illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
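
For downstream code that cached the gradient tensor, one possible adaptation (a sketch, not something prescribed by this PR) is to re-read `params.grad` after calling `backward()` instead of relying on a reference saved beforehand:
```python
# Continuing the example above: fetch the gradient after backward(),
# since accumulation may have replaced the tensor params.grad refers to.
params.backward(torch.tensor([1.5, 1.5]))
grad_now = params.grad
```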
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
builtin_functions.cpp Cleanup includes in torch/csrc/* (#19924) 2019-05-06 14:03:18 -07:00
builtin_functions.h First class modules in the compiler, round 2 (#19167) 2019-04-11 13:55:48 -07:00
class_type.cpp @torch.jit.script(fn) now is a torch.jit.Function (#19721) 2019-04-25 15:53:00 -07:00
compilation_unit.h Replace AT_CHECK with TORCH_CHECK [shard 10/10] 2019-05-15 07:35:37 -07:00
compiler.cpp make magic methods work with casts too (#20654) 2019-05-21 14:23:06 -07:00
compiler.h Turn resolver into a class (#19236) 2019-04-19 13:01:59 -07:00
edit_distance.cpp Print out operator suggestions for unknown builtin op (#15183) 2019-01-04 13:04:44 -08:00
edit_distance.h Cleanup includes in torch/csrc/jit/script/* (#19921) 2019-05-06 13:24:22 -07:00
error_report.h Remove SourceLocation (respin) (#20333) 2019-05-09 16:17:33 -07:00
final_returns.cpp Use SmallVector to allocate Compound operands inline. (#20762) 2019-05-21 16:37:52 -07:00
final_returns.h clang format world (#15524) 2018-12-26 06:55:01 -08:00
function_schema_parser.cpp Memory format support for contiguous and is_contiguous (#20455) 2019-05-16 07:18:24 -07:00
function_schema_parser.h Allow registering ops without specifying the full schema (#19286) 2019-04-18 02:04:46 -07:00
init.cpp Fix bug in exporting node with multiple outputs by scripting 2019-05-22 16:29:36 -07:00
init.h Canonicalize all includes in PyTorch. (#14849) 2018-12-08 19:38:30 -08:00
jit_exception.cpp C++ changes toward libtorch and libcaffe2 unification (#19554) 2019-04-26 01:38:10 -07:00
jit_exception.h C++ changes toward libtorch and libcaffe2 unification (#19554) 2019-04-26 01:38:10 -07:00
lexer.cpp Move function schema parser to ATen/core build target (#19282) 2019-04-18 01:03:37 -07:00
lexer.h Cleanup includes in torch/csrc/jit/script/* (#19921) 2019-05-06 13:24:22 -07:00
logging.cpp C++ changes toward libtorch and libcaffe2 unification (#19554) 2019-04-26 01:38:10 -07:00
logging.h Cleanup includes in torch/csrc/jit/script/* (#19921) 2019-05-06 13:24:22 -07:00
module_python.h Extract Python-specific SugaredValues to a separate file from init.cpp. (#19986) 2019-04-30 19:38:23 -07:00
module.cpp Remove Variable::Impl and DifferentiableViewImpl (#17072) 2019-05-23 21:09:04 -07:00
module.h Add support for __getstate__/__setstate__ on module (#20242) 2019-05-17 14:43:14 -07:00
parse_string_literal.h clang format world (#15524) 2018-12-26 06:55:01 -08:00
parser.cpp fix parsing bugs (#20246) 2019-05-07 19:35:51 -07:00
parser.h Attribute serialization (#17423) 2019-03-18 18:18:22 -07:00
python_sugared_value.cpp Misc error message improvements (#19369) 2019-05-17 15:30:58 -07:00
python_sugared_value.h Add _enable_recursive_script to try to script all Python functions (#19578) 2019-05-17 14:50:45 -07:00
python_tree_views.cpp Cleanup includes in torch/csrc/jit/script/* (#19921) 2019-05-06 13:24:22 -07:00
python_tree_views.h clang format world (#15524) 2018-12-26 06:55:01 -08:00
resolver.h Namespace isolation for classes (#19903) 2019-05-07 22:48:31 -07:00
schema_matching.cpp Use python type string for user facing error msgs (#20657) 2019-05-17 15:04:53 -07:00
schema_matching.h Index into a tuple with non constant integer (#20081) 2019-05-06 14:23:16 -07:00
schema_type_parser.cpp Mark values entering containers as wildcards (#20556) 2019-05-22 16:50:06 -07:00
schema_type_parser.h Move function schema parser to ATen/core build target (#19282) 2019-04-18 01:03:37 -07:00
script_type_parser.cpp Misc error message improvements (#19369) 2019-05-17 15:30:58 -07:00
script_type_parser.h Cleanup includes in torch/csrc/jit/script/* (#19921) 2019-05-06 13:24:22 -07:00
slot.h First class modules in the compiler, round 2 (#19167) 2019-04-11 13:55:48 -07:00
strtod.cpp Fix strtod for MSVC (#20490) 2019-05-15 07:40:44 -07:00
strtod.h Fixing function schema parser for Android (#19281) 2019-04-17 23:50:17 -07:00
sugared_value.cpp Use python type string for user facing error msgs (#20657) 2019-05-17 15:04:53 -07:00
sugared_value.h make magic methods work with casts too (#20654) 2019-05-21 14:23:06 -07:00
tree_views.h Cleanup includes in torch/csrc/jit/script/* (#19921) 2019-05-06 13:24:22 -07:00
tree.h Use SmallVector to allocate Compound operands inline. (#20762) 2019-05-21 16:37:52 -07:00