This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.
In jit tests:
- Add and use a common `raise_on_run_directly` method for when a user directly runs a test file that should not be run that way; it prints the file the user should have run instead.
- Raise a RuntimeError for tests which have been disabled (not run).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/clee2000
This is a technical revert of 6d36bbde7e to reconcile it with e50478c02592597f12b8490ec5496f76c7d8b8cc (which is the same change with lint fixes applied).
Should be skipped during import
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72616
Following the logic [here](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/WrapDimUtils.h.html#_ZN2atL19legacy_cat_wrap_dimElN3c108ArrayRefINS_6TensorEEE)
The prior version was checking whether `dim` was not None, when it should have been checking whether it is None. Strangely, shape analysis still worked because the negative indexing just wrapped around; however, it would lead to errors when executing shape functions. In a follow-up I will extend shape function testing to actually invoke the shape functions as well, to catch this type of bug.
This wasn't caught in the NNC opinfo tests because NNC was already failing for single-node cat :'(
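A minimal sketch of the corrected check, assuming a simplified standalone helper; the names and structure below are illustrative and mirror the linked `legacy_cat_wrap_dim` logic rather than the actual shape-function source:
```
from typing import List, Optional

def cat_wrap_dim(dim: int, tensor_sizes: List[List[int]]) -> int:
    # Wrap a (possibly negative) cat dim against the first tensor whose shape
    # is not the legacy "empty" shape [0].
    out_dim: Optional[int] = None
    for sizes in tensor_sizes:
        if sizes == [0]:
            continue  # legacy behavior: shape-[0] tensors don't pick the rank
        if out_dim is None:  # the fix: this condition previously tested `is not None`
            out_dim = dim if dim >= 0 else dim + len(sizes)
    # If every tensor had the legacy empty shape, leave dim unchanged.
    return out_dim if out_dim is not None else dim
```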
Test Plan: Imported from OSS
Reviewed By: Krovatkin
Differential Revision: D34117930
Pulled By: eellison
fbshipit-source-id: 2c60430d7144dc828a6a4789e0015b83153f7a32
(cherry picked from commit 3ee820753f)
Summary:
Needed for NNC dynamic shape fusion. Previously, when creating a partially evaluated graph for symbolic shape compute, if an input wasn't used we wouldn't compute it, which led to failures when NNC expected that value to be passed in.
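As a rough illustration (the function and argument names are assumptions, not the actual NNC interface), the partially evaluated shape graph receives one value per fused-graph input, so an unused input still has to remain a graph input:
```
from typing import List

# Hypothetical partially evaluated shape graph: y_sizes is unused after
# partial evaluation, but it must stay in the signature because the caller
# supplies a value for every fused-graph input positionally.
def partial_shape_graph(x_sizes: List[int], y_sizes: List[int], dim: int) -> List[int]:
    return [x_sizes[0], x_sizes[1]]

# An NNC-style caller still passes all three values; dropping y_sizes from the
# graph's inputs would shift the remaining arguments and break this call.
print(partial_shape_graph([5, 3], [7, 3], 0))  # [5, 3]
```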
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68281
Reviewed By: navahgar
Differential Revision: D32401365
Pulled By: eellison
fbshipit-source-id: 97a684e5f1faed5df77c8fd69f9623cdba0781f9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66098
`cat` is somewhat special-cased right now because currently the only list-of-Tensor inputs we handle are ones where the list is constructed in the JIT IR graph. While that is generally true for fusion (e.g. it is why we have ConstantChunk), it may not be true for shape analysis in general, so I'm waiting a bit before generalizing.
Test Plan: Imported from OSS
Reviewed By: navahgar, anjali411
Differential Revision: D31797467
Pulled By: eellison
fbshipit-source-id: ca761e214dfd7f3bba8d189f3b3f42ffec064f63
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66097
Adds logic to generate runtime shapes for nodes with multiple outputs. It generalizes the existing flow (look at a node, get its shape graph, inline it, and map the output to the new value in the stitched shape compute graph) so that it loops over multiple outputs.
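A hedged sketch of the generalized stitching step (the actual pass is C++ over JIT IR; these names and the dict-based mapping are hypothetical):
```
def map_node_outputs(node_outputs, inlined_shape_graph_outputs, output_to_shape_value):
    # Hypothetical sketch: instead of mapping only a single output, pair each
    # output of the node with the corresponding output of its inlined shape
    # compute graph and record the association used when stitching graphs.
    assert len(node_outputs) == len(inlined_shape_graph_outputs)
    for node_out, shape_out in zip(node_outputs, inlined_shape_graph_outputs):
        output_to_shape_value[node_out] = shape_out
```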
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31797468
Pulled By: eellison
fbshipit-source-id: 2c182b71a46b36d33f23ad35b89790a4a5d4471c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65575
This is needed for lowering an NNC model to mobile. It is also the last class of unhandled ops that NNC fuses, and we need to integrate this for computing output symbolic shapes.
The graph with two dynamic shape inputs produces:
```
graph(%x.1 : Tensor(SS(-2), 2, 3),
%y.1 : Tensor(SS(-3), 2, 3)):
%5 : int = prim::Constant[value=0]()
%4 : Tensor[] = prim::ListConstruct(%x.1, %y.1)
%6 : Tensor(SS(-4), 2, 3) = aten::cat(%4, %5) # /private/home/eellison/pytorch/test/jit/test_symbolic_shape_analysis.py:290:19
return (%6)
```
With a partial eval graph of
```
Done with partial evaluation
graph(%129 : int[],
%130 : int[],
%dim.14 : int):
%738 : int = prim::Constant[value=3]()
%737 : int = prim::Constant[value=2]()
%132 : int = prim::Constant[value=0]()
%392 : int = aten::__getitem__(%129, %132) # <string>:339:44
%417 : int = aten::__getitem__(%130, %132) # <string>:339:44
%cat_dim_size.48 : int = aten::add(%392, %417) # <string>:339:29
%result_size.5 : int[] = prim::ListConstruct(%cat_dim_size.48, %737, %738)
return (%result_size.5)
```
To handle cat, I essentially make the cat shape op variadic,
replacing
```
torch.cat([x, y])
...
def cat_shape_op(tensors: List[List[int]], dim: int):
    ...
    op(tensors)
```
with
```
def cat_shape_op(x: List[int], y: List[int], dim: int):
    tensors = [x, y]
    op(tensors)
```
This reuses the existing partial-evaluation path for input Tensor properties and avoids having to add special handling to optimize out `len(tensors)` calls in the IR.
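To make that last point concrete, here is a rough sketch (argument names are illustrative) of why the variadic form lets `len(tensors)` fold to a constant without extra IR handling:
```
from typing import List

def cat_shape_op(x: List[int], y: List[int], dim: int) -> int:
    # The list is built inside the shape op from a fixed number of arguments,
    # so its length is statically known (2 here) and len(tensors) can be
    # constant-folded by ordinary partial evaluation.
    tensors = [x, y]
    return len(tensors)

print(cat_shape_op([5, 2, 3], [7, 2, 3], 0))  # 2
```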
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31797471
Pulled By: eellison
fbshipit-source-id: 62c794533d5fabfd3fad056d7e5fe3e8781b22c5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66098
`cat` is somewhat special-cased right now because currently the only list-of-Tensor inputs we handle are ones where the list is constructed in the JIT IR graph. While that is generally true for fusion (e.g. it is why we have ConstantChunk), it may not be true for shape analysis in general, so I'm waiting a bit before generalizing.
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31732415
Pulled By: eellison
fbshipit-source-id: 7f513cea355f1e4c1d2ca7c32c06690a9bdcb050
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66097
Adds logic to generate runtime shapes for nodes with multiple outputs. It generalizes the existing flow (look at a node, get its shape graph, inline it, and map the output to the new value in the stitched shape compute graph) so that it loops over multiple outputs.
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31732418
Pulled By: eellison
fbshipit-source-id: 767698d031b1daf002678a025b270e0ede429061
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65575
This is needed for lowering an NNC model to mobile. It is also the last class of unhandled ops that NNC fuses, and we need to integrate this for computing output symbolic shapes.
The graph with two dynamic shape inputs produces:
```
graph(%x.1 : Tensor(SS(-2), 2, 3),
%y.1 : Tensor(SS(-3), 2, 3)):
%5 : int = prim::Constant[value=0]()
%4 : Tensor[] = prim::ListConstruct(%x.1, %y.1)
%6 : Tensor(SS(-4), 2, 3) = aten::cat(%4, %5) # /private/home/eellison/pytorch/test/jit/test_symbolic_shape_analysis.py:290:19
return (%6)
```
With a partial eval graph of
```
Done with partial evaluation
graph(%129 : int[],
%130 : int[],
%dim.14 : int):
%738 : int = prim::Constant[value=3]()
%737 : int = prim::Constant[value=2]()
%132 : int = prim::Constant[value=0]()
%392 : int = aten::__getitem__(%129, %132) # <string>:339:44
%417 : int = aten::__getitem__(%130, %132) # <string>:339:44
%cat_dim_size.48 : int = aten::add(%392, %417) # <string>:339:29
%result_size.5 : int[] = prim::ListConstruct(%cat_dim_size.48, %737, %738)
return (%result_size.5)
```
To handle cat, I essentially make the cat shape op variadic,
replacing
```
torch.cat([x, y])
...
def cat_shape_op(tensors: List[List[int]], dim: int):
    ...
    op(tensors)
```
with
```
def cat_shape_op(x: List[int], y: List[int], dim: int):
    tensors = [x, y]
    op(tensors)
```
This reuses the existing partial-evaluation path for input Tensor properties and avoids having to add special handling to optimize out `len(tensors)` calls in the IR.
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D31732416
Pulled By: eellison
fbshipit-source-id: 6d93ddf62c34846ec238159f75229632515530b7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63516
How to review: check that the generated inputs are a good representation of the op semantics; that should be sufficient for correctness. As a bonus, you can also double-check the op size semantics by going to https://codebrowser.bddppq.com/pytorch/pytorch/, typing in native::{op_name}, and looking at the op implementation.
Test Plan: Imported from OSS
Reviewed By: driazati
Differential Revision: D30738143
Pulled By: eellison
fbshipit-source-id: c7cd01cb2c8a13cb2664415f3d98aedec19a8e07
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61975
Propagate symbolic dimensions through size calls. We do this by associating SymbolicSizes with integer inputs, looking through their constructors for `x.size(1)` or `x.size()` nodes.
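A rough Python rendering of the idea (the real pass operates on JIT IR in C++; the helper names and node accessors below are hypothetical):
```
# Hypothetical sketch: when an integer input to the stitched shape compute
# graph is produced by `x.size(dim)`, look up the symbolic size recorded for
# that dimension of x and associate it with the integer, so output shapes can
# be expressed in terms of the same symbol (e.g. SS(-2)).
def symbolic_dim_for_int_input(int_input, producer_of, symbolic_sizes_of):
    node = producer_of(int_input)              # hypothetical: producing node
    if node is None or node.kind != "aten::size":
        return None
    tensor, dim = node.inputs                  # x and the constant dim index
    if dim is None:
        return None                            # x.size() with no dim: handled per element
    return symbolic_sizes_of(tensor)[dim]      # e.g. SS(-2)
```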
Test Plan: Imported from OSS
Reviewed By: gchanan
Differential Revision: D30196948
Pulled By: eellison
fbshipit-source-id: 377fc1d2f6d396c52dc0e87fa814b15720f1414e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56966
This PR adds a toggle to shape analysis that avoids inlining complete tensor shapes as constants into the shape compute graph, which serves as a good stress test of the partial evaluation pipeline.
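A minimal sketch of what the toggle changes, assuming a hypothetical unary shape function (not the actual pass):
```
from typing import List

def unary_shape(self_sizes: List[int]) -> List[int]:
    # Shape function for an elementwise op: output shape equals input shape.
    return list(self_sizes)

# With complete shapes inlined as constants, the call folds away entirely at
# partial-evaluation time.
print(unary_shape([2, 3]))  # [2, 3]

# With the toggle enabled, self_sizes stays an input of the shape compute
# graph, so the partial evaluator must carry it through symbolically instead
# of constant-folding it.
```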
Test Plan: Imported from OSS
Reviewed By: bdhirsh
Differential Revision: D28444664
Pulled By: eellison
fbshipit-source-id: a62e424515a8837a4b596546efa93af5e8e61f10