Wei Yang
807de9a1e3
fix segfault when grad to a hook fn is None (#12028)
...
Summary:
- fixes https://github.com/pytorch/pytorch/issues/11751 by checking whether a grad is a Python None object before extracting its cdata (see the sketch after the examples below)
- behavior before and after the fix:
pre-fix
```
>>> a = torch.randn(5, requires_grad=True)
>>> a_list = a.unbind()
>>> a0 = a_list[0]
>>> @a0.register_hook
...: def hook(grad):
...:     print(grad)
>>> a_list[0].backward()
tensor(1.)
>>> print('a_list[0]', a_list[0].grad, a.grad)
('a_list[0]', None, tensor([1., 0., 0., 0., 0.]))
>>> a_list[1].backward() # segfault
```
post-fix
```
>>> a = torch.randn(5, requires_grad=True)
>>> a_list = a.unbind()
>>> a0 = a_list[0]
>>> @a0.register_hook
...: def hook(grad):
...:     print(grad)
>>> a_list[0].backward()
tensor(1.)
>>> print(a_list[0].grad, a.grad)
(None, tensor([1., 0., 0., 0., 0.]))
>>> a_list[1].backward()
None
>>> print(a_list[1].grad, a.grad)
(None, tensor([1., 1., 0., 0., 0.]))
```
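The crash came from unconditionally reading the cdata field of whatever PyObject the gradient hook machinery handed over, even when that object was Py_None. Below is a minimal sketch of the guard the summary describes; the struct layout and the unpack_grad helper are illustrative assumptions, not the actual PyTorch source.
```cpp
// Sketch only (not the real PyTorch code): unwrap a PyObject* that may be
// Py_None before touching its cdata. `unpack_grad` and the THPVariable
// layout shown here are hypothetical stand-ins.
#include <Python.h>

struct Variable { /* stand-in for torch::autograd::Variable */ };

struct THPVariable {
  PyObject_HEAD
  Variable cdata;  // the wrapped C++ Variable
};

// Pre-fix behavior: blindly casting `obj` and reading `cdata` segfaults when
// the object is Py_None. Post-fix: treat None as "no gradient".
static Variable unpack_grad(PyObject* obj) {
  if (obj == Py_None) {
    return Variable();  // empty/undefined gradient instead of a crash
  }
  return reinterpret_cast<THPVariable*>(obj)->cdata;
}
```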
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12028
Differential Revision: D10034094
Pulled By: weiyangfb
fbshipit-source-id: 3f2135325fa7d338b920f57752057e4f6a6c0b1d
2018-09-25 19:10:25 -07:00
Sam Gross
1290e586fb
Use at::Tensor based autograd Variable (#2676)
...
Variable is now a subclass of at::Tensor backed by a VariableImpl* pImpl. The implementation of the ATen functions is defined in the auto-generated VariableType.h/cpp file.
Currently, only functions that fall through to the base type, such as sizes() and isCuda(), are implemented. Differentiable ops like add() and mul() will be added in a subsequent PR.
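As a rough illustration of the pImpl arrangement described above: a Variable-style subclass of Tensor might look like the sketch below. The member and type names are assumptions for illustration, not the real definitions in torch/csrc/autograd/variable.h.
```cpp
// Illustrative sketch: a Tensor subclass whose state lives behind a
// VariableImpl pImpl, mirroring the arrangement described in the commit.
#include <memory>

struct Tensor {                         // stand-in for at::Tensor
  virtual ~Tensor() = default;
  virtual long long dim() const { return 0; }
};

struct VariableImpl {                   // holds autograd bookkeeping
  Tensor data;                          // the underlying tensor
  bool requires_grad = false;
};

struct Variable : Tensor {              // Variable "is a" Tensor...
  std::shared_ptr<VariableImpl> pImpl;  // ...backed by a VariableImpl

  // Fall-through functions simply delegate to the wrapped tensor, which is
  // what the auto-generated VariableType code does for non-differentiable ops.
  long long dim() const override { return pImpl->data.dim(); }
};
```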
2017-09-12 11:36:01 -04:00