pytorch/torch/_dynamo/variables
jmaczan 3401665110 Patch the flex_attention._get_mod_type to not use inspect.signature when computing num_positional_args (an alternative fix for flex attention graph break on create_block_mask) (#164923)
The initial fix for inspect.signature did not take the right approach (https://github.com/pytorch/pytorch/pull/164349#pullrequestreview-3306614010). As @williamwen42 suggests (https://github.com/pytorch/pytorch/pull/164349#issuecomment-3379222885), for now we can simply get rid of the `inspect.signature` call in flex_attention to resolve this high-priority issue (https://github.com/pytorch/pytorch/issues/164247#issuecomment-3378673179). In this PR I did exactly that: I limited the scope of the fix to computing `num_positional_args` in `flex_attention._get_mod_type` from the properties returned by `NestedUserFunctionVariable.const_getattr` (some were missing, so I added them).

Fixes #164247

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164923
Approved by: https://github.com/williamwen42
2025-10-14 18:29:15 +00:00
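The idea in the patch above can be sketched as follows. This is a hedged illustration, not the actual PyTorch code: the helper name `num_positional_args` and the treatment of defaulted parameters are assumptions, since the PR body does not show the implementation. The point is that the count can be derived from plain function attributes (`__code__.co_argcount`, `__defaults__`), which Dynamo can expose as constants via `const_getattr`, instead of calling `inspect.signature`:

```python
# Hypothetical sketch (assumed names, not the actual patch): count a
# callable's required positional arguments from code-object attributes
# instead of inspect.signature, which Dynamo cannot trace through.
def num_positional_args(fn):
    # co_argcount counts all positional parameters; subtracting the number
    # of defaults leaves the required positional parameters.
    return fn.__code__.co_argcount - len(fn.__defaults__ or ())

# A flex-attention-style score_mod takes five positional arguments.
def score_mod(score, b, h, q_idx, kv_idx):
    return score

print(num_positional_args(score_mod))  # 5
```

With this approach, `_get_mod_type` only needs attributes that `NestedUserFunctionVariable.const_getattr` can return, avoiding the graph break that `inspect.signature` caused on `create_block_mask`.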
__init__.py [user-streams] Move stream code to streams module (#163027) 2025-10-14 05:43:19 +00:00
base.py [dynamo][guards] More small guard optimizations (#159345) 2025-07-29 18:36:49 +00:00
builder.py [user-streams] Handle aliasing properly (#163028) 2025-10-14 05:43:19 +00:00
builtin.py [user-streams] Move stream code to streams module (#163027) 2025-10-14 05:43:19 +00:00
constant.py More ruff SIM fixes (#164695) 2025-10-09 03:24:50 +00:00
ctx_manager.py [user-streams] Move stream code to streams module (#163027) 2025-10-14 05:43:19 +00:00
dicts.py [1/N] Use "is" in python type comparison (#165037) 2025-10-10 12:36:50 +00:00
distributed.py Fix replacement reconstruct (#164937) 2025-10-09 15:31:23 +00:00
functions.py Patch the flex_attention._get_mod_type to not use inspect.signature when computing num_positional_args (an alternative fix for flex attention graph break on create_block_mask) (#164923) 2025-10-14 18:29:15 +00:00
higher_order_ops.py [2/N] More ruff SIM fixes (#165031) 2025-10-14 14:22:54 +00:00
iter.py [dynamo, nested graph breaks] move cell codegen before side effects codegen (#160601) 2025-10-08 22:02:52 +00:00
lazy.py [dynamo] Avoid recompiling over unused objects (#156891) 2025-07-09 20:14:34 +00:00
lists.py [1/N] Use "is" in python type comparison (#165037) 2025-10-10 12:36:50 +00:00
misc.py More ruff SIM fixes (#164695) 2025-10-09 03:24:50 +00:00
nn_module.py More ruff SIM fixes (#164695) 2025-10-09 03:24:50 +00:00
optimizer.py [Dynamo] Don't guard data ptrs by default with mark_static_address (#162208) 2025-09-12 07:15:10 +00:00
script_object.py [dynamo] Replace unimplemented with unimplemented_v2 in torch/_dynamo/variables/script_object.py (#159343) 2025-08-01 21:30:41 +00:00
sdpa.py More ruff SIM fixes (#164695) 2025-10-09 03:24:50 +00:00
streams.py [user-cuda-streams] Add fork/join custom ops (#162900) 2025-10-14 05:43:19 +00:00
tensor.py Enable ruff rule E721 (#165162) 2025-10-13 01:48:55 +00:00
torch_function.py [dynamo] Be consistent with UserMethodVariable source (#160155) 2025-08-09 04:16:14 +00:00
torch.py [export][dynamo] Fallback to slowpath for MultiHeadAttention for strict export (#164721) 2025-10-09 03:25:15 +00:00
user_defined.py [2/N] Use "is" in python type comparison (#165142) 2025-10-10 15:36:44 +00:00