This PR allows users to specify integer values (as well as None) for dimensions in `dynamic_shapes`, for example:
```
import torch
from torch.export import export

class Foo(torch.nn.Module):
    def forward(self, x, y, z):
        ...

foo = Foo()
inputs = (torch.randn(4, 6), torch.randn(5, 4), torch.randn(3, 3))
for dynamic_shapes in [
    None,                              # everything static (the default)
    ((4, 6), (5, 4), (3, 3)),          # ints pin each dim to its input size
    ((None, 6), None, {0: 3, 1: 3}),   # None and int specs mixed freely
]:
    _ = export(foo, inputs, dynamic_shapes=dynamic_shapes)
```
All of the above should produce the same ExportedProgram.
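Int and None specs are also expected to compose with the existing symbolic `Dim` API within a single `dynamic_shapes` tuple. A minimal sketch of mixing the two styles (the `dx` dim here is illustrative, not from the PR):
```
from torch.export import Dim

dx = Dim("dx", min=2, max=16)  # symbolic dim for x's first dimension
# x's dim 0 is dynamic; every other dim is pinned static via ints / None
ep = export(foo, inputs, dynamic_shapes=({0: dx}, (5, 4), None))
```
In this case only the symbolic dim for `x` should survive into the resulting `range_constraints`.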
This is done by temporarily creating static dim constraints during analysis, where `vr.lower == vr.upper`. These constraints are then deleted during `_process_constraints()` and do not show up in the final ExportedProgram's `range_constraints`.
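As a sanity check from the outside, the public `range_constraints` attribute on the resulting `ExportedProgram` should come back empty when every dim is pinned statically (the expected output below is an assumption, not captured from a run):
```
ep = export(foo, inputs, dynamic_shapes=((None, 6), None, {0: 3, 1: 3}))
# All dims were pinned statically, so no range constraints should remain.
print(ep.range_constraints)  # expect: {}
```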
Additionally, `export()` fails if the shapes are mis-specified, for example:
```
_ = export(foo, inputs, dynamic_shapes=((5, None), None, None))
```
leads to `torch._dynamo.exc.UserError: Static shape constraint of 5 does not match input size of 4, for L['x'].size()[0]`
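Since the failure is raised as `torch._dynamo.exc.UserError`, callers that want to validate a spec programmatically can catch it; a sketch (the exact message wording may differ across versions):
```
from torch._dynamo.exc import UserError

try:
    _ = export(foo, inputs, dynamic_shapes=((5, None), None, None))
except UserError as e:
    # e.g. "Static shape constraint of 5 does not match input size of 4, ..."
    print(e)
```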
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121860
Approved by: https://github.com/avikchaudhuri