Summary:
In a case like the one below, if x0 and x1 are both unaliased and only have a single use, then we can rewrite the mutation to x2 without breaking observable semantics. This PR makes torchvision.models.alexnet functionalizable.
```
if cond:
    x0 = op()
else:
    x1 = op()
x2.add_(1)
```
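The rewrite can be illustrated with a plain-Python sketch (no TorchScript involved; `op_a` and `op_b` are hypothetical stand-ins for `op`): an in-place update on an unaliased, single-use value can be replaced by an out-of-place rebind without changing observable results.

```python
# Hypothetical stand-ins for the two branch ops.
def op_a():
    return [1, 2, 3]

def op_b():
    return [4, 5, 6]

def mutating(cond):
    x = op_a() if cond else op_b()
    for i in range(len(x)):   # models x.add_(1): in-place mutation
        x[i] += 1
    return x

def functional(cond):
    x = op_a() if cond else op_b()
    x = [v + 1 for v in x]    # models x = x.add(1): safe because x is unaliased
    return x

assert mutating(True) == functional(True) == [2, 3, 4]
assert mutating(False) == functional(False) == [5, 6, 7]
```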
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37694
Differential Revision: D21428275
Pulled By: eellison
fbshipit-source-id: 1e2a39a8fb3819f1f225b7c345e986b3a3db253f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33297
Allowing mutated values as graph inputs but not outputs has the effect of buffering up all mutated values as inputs to the graph. Just as we allow values which escape scope as graph inputs but not graph outputs, we should also allow values that get mutated. In both cases, the contract is that the functional graph cannot write to its graph inputs.
Without this patch, a single write to the Tensor wildcard set would disable all optimization.
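A minimal plain-Python sketch of the contract (hypothetical names, not the real IR): a value that is mutated elsewhere in the program may enter a functional graph as an input, but the functional graph itself must never write to its inputs.

```python
def functional_graph(a, b):
    # Contract: may read its inputs, must never write to them.
    return [x * y for x, y in zip(a, b)]

def program():
    buf = [1, 2, 3]
    out = functional_graph(buf, [10, 10, 10])  # buf enters as a graph input
    buf[0] = 99                                # the mutation stays outside the graph
    return out, buf

assert program() == ([10, 20, 30], [99, 2, 3])
```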
Test Plan: Imported from OSS
Differential Revision: D20607175
Pulled By: eellison
fbshipit-source-id: c698e7cf3374e501cd5d835663991026a113ec6b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33199
Remove list appends when we can match them with a list construction. This helps create larger functional graphs.
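A sketch of the rewrite in plain Python (function names are illustrative): append calls that can be matched with the preceding list construction are folded into the construction itself, eliminating the mutating append nodes.

```python
def before(a, b):
    l = []
    l.append(a)   # mutating append
    l.append(b)   # mutating append
    return l

def after(a, b):
    l = [a, b]    # appends folded into the list construction
    return l

assert before(1, 2) == after(1, 2) == [1, 2]
```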
Test Plan: Imported from OSS
Differential Revision: D20603187
Pulled By: eellison
fbshipit-source-id: a60e933b457479d40960994d8ffdf39ef49eaf6e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33186
This helps create larger functional graphs. It has the potential to increase memory use, so in order to turn this on by default we would probably also need a buffer-reuse pass.
This is currently O(n * |Removed Nodes|) because we have to rebuild the alias DB each time we make a change. This pass is critical to creating functional graphs, so this might be a compelling use case for building incremental updates to the alias DB.
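A toy model (hypothetical and greatly simplified, not the real AliasDb) of where the O(n * |Removed Nodes|) cost comes from: every removal triggers a full O(n) rebuild of the alias database instead of an incremental update.

```python
def rebuild_alias_db(nodes):
    # Stand-in for alias DB construction: a full O(n) pass over the graph.
    return {n: set() for n in nodes}

def run_pass(nodes, to_remove):
    nodes = list(nodes)
    db = rebuild_alias_db(nodes)
    for n in to_remove:                 # |Removed Nodes| iterations...
        nodes.remove(n)
        db = rebuild_alias_db(nodes)    # ...each paying an O(n) rebuild
    return nodes

assert run_pass(["a", "b", "c", "d"], ["b", "d"]) == ["a", "c"]
```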
Test Plan: Imported from OSS
Differential Revision: D20603189
Pulled By: eellison
fbshipit-source-id: 105db52bf38e02188ca6df6d36294466d3309a0a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33020
This is a pass to create functional blocks. The other PRs in the stack help avoid some of the limitations that are often found in graphs. It's possible that this would work well with a graph that is frozen. Follow-up work items that will help this pass:
- We don't currently have any capacity in alias analysis to tell whether a Value that came from the wildcard set "re-escapes" back into the wildcard set.
- More comments on the semantics of the graph and correctness conditions
- We could consider using a dynamic DAG if the perf of this is a limitation.
- Potentially make Functional Graphs into Functional Blocks instead, so that we do not repeatedly copy constants and the IR reads more easily.
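As a rough illustration of what the pass aims at (hypothetical op representation, not the real TorchScript IR): side-effect-free nodes are grouped into a functional block while mutating nodes are kept outside it.

```python
# Each op is modeled as a (name, mutates) pair; the real pass works on IR
# nodes and must also respect data dependencies, which this toy version
# ignores.

def partition(ops):
    functional, impure = [], []
    for name, mutates in ops:
        (impure if mutates else functional).append(name)
    return functional, impure

ops = [("add", False), ("mul", False), ("add_", True), ("relu", False)]
pure, rest = partition(ops)
assert pure == ["add", "mul", "relu"]
assert rest == ["add_"]
```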
Test Plan: Imported from OSS
Differential Revision: D20603188
Pulled By: eellison
fbshipit-source-id: 6822a6e65f4cc2676f8f6445fe8aa1cb858ebeeb