pytorch/torch/_dynamo/variables
David Berard 7c38b76efe Make offsets dynamic by default (#113734)
Copied from @ezyang's #113693.

The motivation for this change is that we'd like to guard on the storage offset in Inductor, so it can make assumptions about data alignment.

create_symbolic_sizes_strides_storage_offset() creates the sizes, strides, and storage offset for fake tensors; each can be either a plain integer or a SymInt. This PR changes the storage offset to always be dynamic. In variables/builder.py, we also remove a conditional so that all tensors get added to tracked_fakes: the storage offset is now dynamic even when the rest of the logic in builder.py would treat the tensor as static, and without tracking those tensors we run into this issue:

1e260c851b/torch/fx/experimental/symbolic_shapes.py (L892-L895)
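To make the effect concrete, here is a minimal sketch (not part of the PR) of the behavior this enables: two views of the same storage that differ only in their storage offset should be served by a single compiled graph once the offset is traced as a dynamic SymInt. The backend="eager" choice and the unique_graphs counter check are illustrative assumptions, not part of the change itself.

```python
import torch
from torch._dynamo.utils import counters

@torch.compile(backend="eager")
def f(x):
    return x * 2

base = torch.randn(16)
# Two views of the same storage that differ only in storage_offset()
# (offsets 2 and 4 are chosen to sidestep 0/1 specialization).
a = base[2:10]
b = base[4:12]

f(a)
f(b)

# With the storage offset treated as a dynamic SymInt, the second call
# should reuse the first graph instead of recompiling on the new offset.
print(counters["stats"]["unique_graphs"])  # expected to stay at 1 if no recompile occurred
```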

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113734
Approved by: https://github.com/ezyang
2023-11-17 07:57:21 +00:00
__init__.py [dynamo] Add LazyVariableTracker (#111306) 2023-11-07 19:55:19 +00:00
base.py [dynamo] Refactor OrderedDict to dict (#113234) 2023-11-08 09:27:08 +00:00
builder.py Make offsets dynamic by default (#113734) 2023-11-17 07:57:21 +00:00
builtin.py [dynamo] chore: Fallback on const_handler instead of special-casing on ConstantVariable (#113893) 2023-11-17 07:46:58 +00:00
constant.py [dynamo] Remove VariableTracker.propagate (#111726) 2023-11-07 19:55:19 +00:00
ctx_manager.py [contextlib] Wrapping a function with set_grad_enabled will consume its global mutation (#113359) 2023-11-09 19:16:20 +00:00
dicts.py Revert "Support tensors as Dict keys (#111196)" 2023-11-15 23:08:00 +00:00
distributed.py Make FakeProcessGroup traceable (#113314) 2023-11-10 16:03:38 +00:00
functions.py Revert "Support tensors as Dict keys (#111196)" 2023-11-15 23:08:00 +00:00
higher_order_ops.py [HigherOrderOp][BE] change _make_inlined check callable() (#113881) 2023-11-17 02:44:12 +00:00
iter.py [dynamo] Remove VariableTracker.add_options (#111725) 2023-11-07 19:55:19 +00:00
lazy.py [dynamo] Remove VariableTracker.propagate (#111726) 2023-11-07 19:55:19 +00:00
lists.py [dynamo] Remove VariableTracker.propagate (#111726) 2023-11-07 19:55:19 +00:00
misc.py [dynamo] Fix allow_in_graph decorator doesn't work on autograd.Function (#113510) 2023-11-16 22:44:46 +00:00
nn_module.py [dynamo] Remove VariableTracker.propagate (#111726) 2023-11-07 19:55:19 +00:00
optimizer.py [dynamo] Eagerly install guards (#111415) 2023-11-07 19:55:19 +00:00
tensor.py [dynamo] Fix incorrectly casting SymNode to int when input is bool (#113871) 2023-11-16 23:24:57 +00:00
torch_function.py [dynamo] Eagerly install guards (#111415) 2023-11-07 19:55:19 +00:00
torch.py graph break on intermediate leaves that require grad (#113277) 2023-11-16 02:47:45 +00:00
user_defined.py Revert "Support tensors as Dict keys (#111196)" 2023-11-15 23:08:00 +00:00