Previously, when we lowered the backward graph ahead of time (AOT) due to symints, the post-grad passes would leave the bw_module in a non-runnable state, which broke compiled autograd when it tried to trace at runtime. To work around this, we had inductor operate on a deepcopy of bw_module. But as https://github.com/pytorch/pytorch/issues/153993 shows, deepcopying real tensors fails under fake mode because of the device mismatch between fake tensors (on the "meta" device) and real tensors. Disabling fake mode around the deepcopy avoids these errors. This change is a strict improvement over the status quo, but it does reveal that the deepcopy can theoretically cause OOMs.

Fixes https://github.com/pytorch/pytorch/issues/153993

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153999
Approved by: https://github.com/jamesjwu, https://github.com/bdhirsh
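For illustration, here is a minimal sketch of the workaround, assuming the deepcopy runs while a FakeTensorMode dispatch mode is active. The helper name is hypothetical; `unset_fake_temporarily` is PyTorch's context manager for temporarily disabling the active fake mode. The actual PR code may differ in structure.

```python
import copy

from torch._subclasses.fake_tensor import unset_fake_temporarily


def deepcopy_bw_module_for_inductor(bw_module):
    # Hypothetical helper: real tensor constants in bw_module would trip
    # fake mode's device checks ("meta" vs. real device) during deepcopy.
    # Temporarily unsetting fake mode sidesteps that, at the cost of
    # materializing a real copy of the module (the theoretical OOM risk
    # the PR description mentions).
    with unset_fake_temporarily():
        return copy.deepcopy(bw_module)
```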
- _activation_checkpointing/
- _aot_autograd/
- __init__.py
- aot_autograd.py
- apis.py
- autograd_function.py
- batch_norm_replacement.py
- benchmark_utils.py
- compile_utils.py
- compilers.py
- config.py
- deprecated.py
- eager_transforms.py
- functional_call.py
- fx_minifier.py
- make_functional.py
- partitioners.py
- pyfunctorch.py
- python_key.py
- pytree_hacks.py
- top_operators_github_usage.py
- utils.py
- vmap.py