Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65573
When we remove mutation on
```
x = [0, 1, 3, 4]
x[-2] = 4
```
we have a safety check that the new (normalized) index will be in bounds of the list. In practice this should always be the case; otherwise you would get a runtime error. Within that check (not within the actual adjustment), we were using the wrong length of inputs, preventing the optimization from firing.
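A rough pure-Python sketch of the normalization and bounds check involved (the real pass is implemented in C++; `normalize_index` is a hypothetical helper used only for illustration):
```
def normalize_index(index: int, list_len: int) -> int:
    # Convert a possibly negative index into its non-negative equivalent.
    norm = index if index >= 0 else index + list_len
    # Safety check: the normalized index must be in bounds. In practice this
    # always holds, since an out-of-bounds index would already be a runtime
    # error; the bug was using the wrong length here, so the check failed
    # spuriously and the optimization never fired.
    if not (0 <= norm < list_len):
        raise IndexError("list index out of range")
    return norm

x = [0, 1, 3, 4]
x[normalize_index(-2, len(x))] = 4   # equivalent to x[-2] = 4
```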
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31797469
Pulled By: eellison
fbshipit-source-id: 02a1686b9f6016eb5aeb87ed342c043c203dcd0e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65573
When we remove mutation on
```
x = [0, 1, 3, 4]
x[-2] = 4
```
we have a safety check that the new (normalized) index will be in bounds of the list. In practice this should always be the case; otherwise you would get a runtime error. Within that check (not within the actual adjustment), we were using the wrong length of inputs, preventing the optimization from firing.
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31732417
Pulled By: eellison
fbshipit-source-id: dd734254c0212ca459c1c135da262974de5299be
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39111
In our present alias analysis, we consider any Value that enters another container as entering the heap, and thus as aliasing all other heap values of the same type. There are a number of advantages to this approach:
- it is not too hard to maintain the AliasDb implementation
- it is much easier from an op schema perspective - there are many composite list ops registered internally and externally that would be tricky to register and get right if we did something more complicated
- It limits the size of the AliasDb, because a container of size 10 only contains a single memory dag element instead of 10 elements.
The downside is that we are unable to handle the simple and extremely common case of a list of tensors being used in an ATen op.
In an example like:
```
def foo(input):
    x = torch.tensor([1, 2, 3, 4])
    y = [x, x]
    input.add_(1)
    return torch.cat(y)
```
we will consider x to be written to. Any write to any wildcard element (an element that enters a tuple, an element that is taken from a list) will mark x as written to. This limits our ability to create a functional subset and fuse graphs; as a result, 4 of the TorchVision classification models could not be functionalized.
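As a hedged illustration (a standalone script assuming the function above is compiled with TorchScript; the comments describe the conservative aliasing behavior, not an exact dump of the analysis):
```
import torch

@torch.jit.script
def foo(inp):
    x = torch.tensor([1, 2, 3, 4])
    y = [x, x]      # x enters a list, so it is treated as a wildcard element
    inp.add_(1)     # under the coarse analysis, this write conservatively
                    # marks x as written to as well
    return torch.cat(y)

# Inspect the compiled IR for the function.
print(foo.graph)
```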
Test Plan: Imported from OSS
Reviewed By: SplitInfinity
Differential Revision: D23828003
Pulled By: eellison
fbshipit-source-id: 9109fcb6f2ca20ca897cae71683530285da9d537
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41503
Fix for https://github.com/pytorch/pytorch/issues/41192
We can map fill_ and zero_ to their functional equivalents full_like and zeros_like.
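A small runnable sketch of the intended equivalence (a standalone illustration, not the pass itself):
```
import torch

x = torch.arange(4, dtype=torch.float)

# In-place ops and the functional counterparts they can be mapped to:
filled_inplace = x.clone().fill_(2.0)        # mutating
filled_functional = torch.full_like(x, 2.0)  # functional equivalent

zeroed_inplace = x.clone().zero_()           # mutating
zeroed_functional = torch.zeros_like(x)      # functional equivalent

assert torch.equal(filled_inplace, filled_functional)
assert torch.equal(zeroed_inplace, zeroed_functional)
```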
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D22629269
Pulled By: eellison
fbshipit-source-id: f1c62684dc55682c0b3845022e0461ec77d07179