pytorch/torch/_dynamo/variables
Pian Pawakapan 4c007073e6 [dynamic shapes] DynamicInts prototype (#162194)
Initial prototype for dynamic int inputs: users can run `torch.compile(f)(DynamicInt(4))`, compiling dynamically and using the underlying hint at runtime.

Current behavior:
- Works in eager mode as well (largely by subclassing `int`), whether as a scalar input to torch functions or with numpy/math/etc. For example, `x = DynamicInt(3); torch.randn(x); torch.add(y, z, alpha=x); np.arange(x)` all behave as if `x = 3`.
- Arithmetic ops return new DynamicInts rather than static ints; `DynamicInt(3) * 2 == DynamicInt(6)`. This goes through SymNode magic methods, though coverage may not be 100% - for example, I had to explicitly override floordiv to avoid casting to int. Non-magic-method ops (e.g. `math.cos(x)`) don't necessarily preserve this. The alternative was to cast to int on every operation, but I opted for this behavior so dynamism propagates through non-compiled regions.
- Doesn't ban fullgraph=False; DynamicInt objects might be leaked back to the user, but this seems fine, since they can be cast to ints when needed.
- Dynamo only allocates one symbol per DynamicInt; specifying the same DynamicInt for multiple inputs leads to input deduplication, with a guard installed.
- We don't raise on int specialization (in allowlist/maybe_mark_dynamic style) - but this would be an easy change if needed.
- DynamicInts as nn.Module attributes are handled.
- We don't guard on the DynamicInt id; e.g. users can do the following without recompiling (maybe we should guard?):
```python
x = DynamicInt(4)
f(x)
f(1)
f(DynamicInt(3))  # same as f(3)
```
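The int-subclassing and magic-method behavior described above can be illustrated with a minimal, hypothetical sketch (`DynInt` here is a stand-in for illustration only, not the real `DynamicInt`, which routes through SymNode):

```python
# Hypothetical sketch: why subclassing int gives eager compatibility, and why
# some ops (like floordiv) need explicit overrides to keep returning the
# subclass instead of decaying to a plain int.

class DynInt(int):  # stand-in for DynamicInt; the real one wraps a SymNode
    def __mul__(self, other):
        # return a new DynInt so dynamism propagates through arithmetic
        return DynInt(int(self) * int(other))

    def __floordiv__(self, other):
        # int.__floordiv__ would return a plain int, so override explicitly
        return DynInt(int(self) // int(other))


x = DynInt(3)
y = x * 2    # DynInt(6): propagates via the overridden __mul__
z = y // 2   # DynInt(3): only because __floordiv__ is overridden
w = x + 1    # plain int 4: __add__ is not overridden in this sketch
```

Because `DynInt` is an `int`, non-magic-method consumers like `math.cos(x)` simply see the value 3 and return a plain float, matching the caveat above about dynamism not surviving such ops.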

Follow-up work:
- Specifying shape constraints, either at the int-level, e.g.
```python
DynamicInt(64, name="s0", constraints=["s0 % 32 == 0", "s0 <= 1024"])
```
or at the compilation level, e.g. something like
```python
s0 = DynamicInt(64, name="s0")
s1 = DynamicInt(128, name="s1")
with some_compiler_config.dynamic_int_constraints(["s1 == 2*s0", "s0 % 32 == 0"]):
    f(s0, s1)
```
This should subsume the need for specifying derived SymInts?
- SymFloat support - currently backed floats seem to be specialized by the tensorify-float pass, and there's no handling in Inductor.
- Propagating dynamism in tensor constructors, e.g. `x = DynamicInt(4); torch.randn(x)` could annotate `_dynamo_dynamic_indices`.

Differential Revision: D81698719

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162194
Approved by: https://github.com/bobrenjc93
2025-09-18 23:26:28 +00:00
| File | Latest commit | Date |
| --- | --- | --- |
| `__init__.py` | [dynamo] rename set_fullgraph to error_on_graph_break (#161739) | 2025-09-04 01:15:06 +00:00 |
| `base.py` | [dynamo][guards] More small guard optimizations (#159345) | 2025-07-29 18:36:49 +00:00 |
| `builder.py` | [dynamic shapes] DynamicInts prototype (#162194) | 2025-09-18 23:26:28 +00:00 |
| `builtin.py` | redirect iter(range) to range.__iter__() (#161803) | 2025-09-04 02:33:03 +00:00 |
| `constant.py` | [dynamo][vllm] Support typing.get_type_hints (#161362) | 2025-08-27 09:55:31 +00:00 |
| `ctx_manager.py` | [dynamo] rename set_fullgraph to error_on_graph_break (#161739) | 2025-09-04 01:15:06 +00:00 |
| `dicts.py` | Offload set method execution to CPython when possible (#160763) | 2025-09-03 18:26:05 +00:00 |
| `distributed.py` | [dynamo][dist] trace DeviceMesh's get_local_rank and get_rank as constants (#160805) | 2025-08-20 01:12:24 +00:00 |
| `functions.py` | [dynamo] Use relaxed CLOSURE_MATCH guard then ID_MATCH (#162247) | 2025-09-07 01:25:52 +00:00 |
| `higher_order_ops.py` | [dynamo][hop] Introduce Local Map HOP (#161458) | 2025-09-17 09:32:38 +00:00 |
| `iter.py` | Fixes for collections.NamedTuple (#159367) | 2025-08-18 17:32:59 +00:00 |
| `lazy.py` | [dynamo] Avoid recompiling over unused objects (#156891) | 2025-07-09 20:14:34 +00:00 |
| `lists.py` | fixing graph break for namedtuple._replace (#160139) | 2025-09-18 14:32:36 +00:00 |
| `misc.py` | [dynamo][vllm] Support typing.get_type_hints (#161362) | 2025-08-27 09:55:31 +00:00 |
| `nn_module.py` | [dynamo] Trace nn.Module __delattr__ (#159969) | 2025-08-06 23:43:19 +00:00 |
| `optimizer.py` | [Dynamo] Don't guard data ptrs by default with mark_static_address (#162208) | 2025-09-12 07:15:10 +00:00 |
| `script_object.py` | [dynamo] Replace unimplemented with unimplemented_v2 in torch/_dynamo/variables/script_object.py (#159343) | 2025-08-01 21:30:41 +00:00 |
| `sdpa.py` | [Dynamo][Misc] Apply typing hints for codegen (#150289) | 2025-04-04 14:26:22 +00:00 |
| `tensor.py` | Turn on capture_scalar_outputs when fullgraph=True (#163121) | 2025-09-18 21:24:15 +00:00 |
| `torch_function.py` | [dynamo] Be consistent with UserMethodVariable source (#160155) | 2025-08-09 04:16:14 +00:00 |
| `torch.py` | Revert "[dynamo] Constant fold torch.autograd._profiler_enabled (#158482)" | 2025-09-09 00:21:05 +00:00 |
| `user_defined.py` | NamedTuple: Allow side effects for dynamic attributes (#161645) | 2025-09-09 19:42:02 +00:00 |