pytorch/torch
clr 33daaad7d0 dynamo: Handle objects in graph that do not support weakref (#163168)
We are seeing crashes of the form
```
Traceback (most recent call last):
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/symbolic_convert.py", line 1487, in run
    while self.step():
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/symbolic_convert.py", line 1348, in step
    self.dispatch_table[inst.opcode](self, inst)
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/symbolic_convert.py", line 2437, in LOAD_ATTR
    self._load_attr(inst)
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/symbolic_convert.py", line 2425, in _load_attr
    result = BuiltinVariable(getattr).call_function(
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/builtin.py", line 1347, in call_function
    return handler(tx, args, kwargs)
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/builtin.py", line 967, in <lambda>
    tx, [v.realize() for v in args], kwargs
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/builtin.py", line 967, in <listcomp>
    tx, [v.realize() for v in args], kwargs
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/lazy.py", line 72, in realize
    self._cache.realize()
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/lazy.py", line 33, in realize
    self.vt = builder.VariableBuilder(tx, self.source)(self.value)
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/builder.py", line 445, in __call__
    vt = self._wrap(value)
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/variables/builder.py", line 1043, in _wrap
    torch._dynamo.utils.store_user_object_weakref(value)
  File "/packages/aps_ads_vm/launcher_multiapp-inplace#link-tree/torch/_dynamo/utils.py", line 4694, in store_user_object_weakref
    user_obj_id_to_weakref[obj_id] = weakref.ref(obj)
torch._dynamo.exc.InternalTorchDynamoError: TypeError: cannot create weak reference to 'torch.Event' object
```

This pull request makes Dynamo graph break gracefully instead of crashing outright.

I've added a test which reproduces the issue. There is a side discussion about
how torch.Event support ever worked here, since it appears you cannot take a
weakref to a torch.Event.
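
The failing call is `weakref.ref(obj)`, which raises `TypeError` for types whose instances don't support weak references (as `torch.Event` apparently doesn't). A minimal sketch of the graceful-fallback idea, assuming a hypothetical helper name (`safe_weakref` is illustrative, not the actual patch):

```python
import weakref

# Classes that define __slots__ without a "__weakref__" entry cannot be
# weakly referenced, which mimics the torch.Event failure mode here.
class NoWeakref:
    __slots__ = ("value",)

# Ordinary classes get a __weakref__ slot automatically.
class SupportsWeakref:
    pass

def safe_weakref(obj):
    # Return a weakref when the object supports one, else None so the
    # caller can fall back to a graph break instead of crashing with
    # an InternalTorchDynamoError.
    try:
        return weakref.ref(obj)
    except TypeError:
        return None

print(safe_weakref(SupportsWeakref()) is not None)  # True
print(safe_weakref(NoWeakref()) is None)            # True
```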

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163168
Approved by: https://github.com/Lucaskabela, https://github.com/jansel
2025-09-22 22:11:09 +00:00
_awaits
_C [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_C_flatbuffer
_custom_op [BE]: ruff PLC0207 - use maxsplit kwarg (#160107) 2025-08-08 03:14:59 +00:00
_decomp support unbacked softmax / logsoftmax (#162216) 2025-09-18 15:43:20 +00:00
_dispatch
_dynamo dynamo: Handle objects in graph that do not support weakref (#163168) 2025-09-22 22:11:09 +00:00
_export Improve fake tensor leakage detection in export by not relying on gc too much (#163516) 2025-09-22 22:04:24 +00:00
_functorch Enable logging for absolute memory estimation (#158799) 2025-09-22 18:36:49 +00:00
_higher_order_ops [export] Fix wrap_with_set_grad_enabled retracing (#163295) 2025-09-21 22:54:40 +00:00
_inductor Replace Literal[None] with None in typing (#163489) 2025-09-22 22:10:08 +00:00
_lazy
_library Replace Literal[None] with None in typing (#163489) 2025-09-22 22:10:08 +00:00
_logging Add compile_id: Optional[CompileID] to torch._logging._internal.trace_structured_artifact (#160440) 2025-08-13 06:28:23 +00:00
_numpy
_prims [Bugfix] Match eager stride semantics for cloned tensors with preserve_format in compile (#163017) 2025-09-19 19:41:33 +00:00
_prims_common are_strides_like_channels_last_or_false (#162354) 2025-09-16 00:49:05 +00:00
_refs Better decomp for torch.eye (#163386) 2025-09-22 21:52:37 +00:00
_strobelight
_subclasses Improve fake tensor leakage detection in export by not relying on gc too much (#163516) 2025-09-22 22:04:24 +00:00
_vendor
accelerator Add unified memory APIs for torch.accelerator (#152932) 2025-08-08 17:41:22 +00:00
amp [Easy][AMP] Refactor the AMP logic for getting dtype (#162796) 2025-09-21 06:32:35 +00:00
ao remove allow-untyped-defs from ./torch/ao/quantization/pt2e/duplicate_dq_pass.py (#163470) 2025-09-22 20:29:09 +00:00
autograd [ONNX] Refactor torchscript based exporter (#161323) 2025-09-02 16:10:30 +00:00
backends Revert "[ROCm] SDPA fix mem fault when dropout is enabled (#154864)" 2025-08-26 20:03:59 +00:00
compiler Simplify PrecompileContext to no longer be a CacheArtifactManager (#162886) 2025-09-20 01:24:37 +00:00
contrib
cpu Replace _device_t with torch.types.Device in torch/cpu/__init__.py (#161031) 2025-08-21 00:22:43 +00:00
csrc [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
cuda Replace Literal[None] with None in typing (#163489) 2025-09-22 22:10:08 +00:00
distributed [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
distributions
export Improve fake tensor leakage detection in export by not relying on gc too much (#163516) 2025-09-22 22:04:24 +00:00
fft
func
futures
fx Improve fake tensor leakage detection in export by not relying on gc too much (#163516) 2025-09-22 22:04:24 +00:00
headeronly Add CUDA_KERNEL_ASSERT_PRINTF, a more flexible CUDA_KERNEL_ASSERT_MSG (#160129) 2025-09-16 00:23:48 +00:00
jit Deprecate Lite Interpreter (#163289) 2025-09-18 23:56:21 +00:00
legacy
lib
linalg Revert "Add __init__.pyi to torch/linalg (#160750)" 2025-09-02 16:53:55 +00:00
masked Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. [attempt2] (#160869) 2025-09-08 22:59:13 +00:00
monitor
mps
mtia [BE] Add Documentation for Device APIs (#162834) 2025-09-16 17:01:06 +00:00
multiprocessing Allow parallel start NUMA binding (#161576) 2025-08-28 01:15:58 +00:00
nativert Update placement utils and weights to handle meta device (#162842) 2025-09-17 08:12:32 +00:00
nested Add NestedTensor dispatch for _is_any_true/_is_all_true (#162096) 2025-09-22 20:22:44 +00:00
nn [2/n] Support module.to("cuda:0") in FakeTensorMode on cuda-less machine (#163433) 2025-09-22 20:16:32 +00:00
numa Allow parallel start NUMA binding (#161576) 2025-08-28 01:15:58 +00:00
onnx [mypy] add some import ignores to onnx (#163133) 2025-09-17 09:32:38 +00:00
optim [optim] override SWALR.state_dict and load_state_dict (#163122) 2025-09-17 18:17:26 +00:00
package [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) 2025-08-07 00:09:56 +00:00
profiler removed duplicate imports (#161685) 2025-08-31 16:21:49 +00:00
quantization [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) 2025-08-07 00:09:56 +00:00
signal [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) 2025-08-07 00:09:56 +00:00
sparse Use computed buffer sizes of torch for cusparseLt metadata (#163125) 2025-09-19 22:12:40 +00:00
special [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) 2025-08-07 00:09:56 +00:00
testing [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
utils remove allow-untyped-defs from ./torch/utils/data/datapipes/iter/fileopener.py (#163469) 2025-09-22 20:29:09 +00:00
xpu Add a new API torch.xpu.can_device_access_peer for Intel GPU (#162705) 2025-09-16 18:00:22 +00:00
__config__.py
__future__.py
__init__.py Turn on capture_dynamic_output_shape_ops when fullgraph=True (#163123) 2025-09-18 21:24:15 +00:00
_appdirs.py
_classes.py
_compile.py Replace Literal[None] with None in typing (#163489) 2025-09-22 22:10:08 +00:00
_custom_ops.py
_environment.py
_guards.py fix incorrect interaction between DDPOptimizer and donated buffers (#160745) 2025-09-04 21:57:27 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Use computed buffer sizes of torch for cusparseLt metadata (#163125) 2025-09-19 22:12:40 +00:00
_namedtensor_internals.py
_ops.py [BE] Slight improvements to documentation in python_dispatch (#162963) 2025-09-21 01:45:46 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py
_tensor_docs.py Add missing optional for tensor ops (#159028) 2025-07-25 04:36:55 +00:00
_tensor_str.py Fix max_width computation in _tensor_str._Formatter (#126859) 2025-08-01 15:05:41 +00:00
_tensor.py torchdim Python port (#160236) 2025-09-21 03:01:04 +00:00
_thread_safe_fork.py
_torch_docs.py Update docs for quantile to be clearer for nearest (#162423) 2025-09-09 18:04:12 +00:00
_utils_internal.py Add DISABLE_JUSTKNOBS to torch/_utils_internal.py and use it for dynamo _maybe_set_eval_frame (#162298) 2025-09-15 23:00:39 +00:00
_utils.py
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Fix type checking for persistent loads in the weights-only unpickler (#161661) 2025-09-01 19:57:19 +00:00
CMakeLists.txt [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py unify broadcast_shapes functions and avoid duplicates (#160251) 2025-08-16 00:54:32 +00:00
header_only_apis.txt [Reland] Migrate ScalarType to headeronly (#159911) 2025-08-06 07:36:37 +00:00
hub.py Allow torch.hub.load with unauthorized GITHUB_TOKEN (#159896) 2025-08-14 18:15:49 +00:00
library.h Using std::make_unique<T>() instead of unique<T>(new T()) (#160723) 2025-08-19 10:25:47 +00:00
library.py Replace Literal[None] with None in typing (#163489) 2025-09-22 22:10:08 +00:00
overrides.py Fully native DTensor.__new__ (#162508) 2025-09-21 18:36:05 +00:00
py.typed
quasirandom.py
random.py
return_types.py
script.h
serialization.py added class or module info for functions blocked by weight-only load (#159935) 2025-08-12 20:52:25 +00:00
storage.py
torch_version.py
types.py
version.py.tpl