pytorch/torch/_decomp
Gufan Yin e6ba4d0725 Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910)
Summary:
Original commit changeset: d6d62d0c96dd

Original Phabricator Diff: D84468451 and D84613184

D84468451 caused a CUDA OutOfMemoryError in a model.

Test Plan:
D84468451 was found through a bisect. Also double-checked on recent trunk commit 9866939225248c2adc307be7a804b26db0b9b555: f815887517

With this diff, which backs out D84468451 and D84613184: f816114560

Differential Revision: D85025378

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165910
Approved by: https://github.com/clee2000
2025-10-21 16:36:38 +00:00
__init__.py Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910) 2025-10-21 16:36:38 +00:00
decompositions_for_jvp.py Enable all PIE rules on ruff (#165814) 2025-10-18 07:36:18 +00:00
decompositions_for_rng.py [5/N] Apply ruff UP035 rule (#164423) 2025-10-02 07:31:11 +00:00
decompositions.py Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910) 2025-10-21 16:36:38 +00:00
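
For context on what "decompose in functionalization/proxy tensor" refers to, below is a minimal sketch, assuming a recent PyTorch build, of how a decomposition table from `torch._decomp` is applied while tracing with `make_fx`. The traced function `f` and the choice of `native_layer_norm` are illustrative assumptions, not taken from the reverted patch, and API details may vary across versions.

```python
# Minimal sketch (illustrative, not from the reverted patch): applying a
# torch._decomp decomposition table during proxy-tensor tracing.
import torch
from torch._decomp import get_decompositions
from torch.fx.experimental.proxy_tensor import make_fx

aten = torch.ops.aten

def f(x, weight, bias):
    # Illustrative function; layer_norm lowers to aten.native_layer_norm.
    return torch.nn.functional.layer_norm(x, (x.shape[-1],), weight, bias)

# Build a table mapping the selected aten ops to reference Python impls.
decomp_table = get_decompositions([aten.native_layer_norm])

# make_fx traces f with proxy tensors; ops present in the table are
# decomposed into simpler aten ops as the graph is built.
gm = make_fx(f, decomposition_table=decomp_table)(
    torch.randn(2, 8), torch.randn(8), torch.randn(8)
)
print(gm.graph)  # primitive ops (mean/variance/rsqrt etc.) instead of
                 # an opaque native_layer_norm call
```

With the table supplied, `native_layer_norm` appears in the traced graph as its primitive constituents; without it, the op is kept opaque. The reverted change adjusted when such decompositions fire during functionalization/proxy-tensor tracing relative to autograd's behavior.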