pytorch/torch/csrc/lazy/ts_backend
Gufan Yin e6ba4d0725 Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910)
Summary:
Original commit changeset: d6d62d0c96dd

Original Phabricator Diff: D84468451 and D84613184

D84468451 caused a CUDA OutOfMemoryError in a model.
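
For context, a minimal sketch of what "decomposing in proxy tensor" refers to, using the public make_fx tracer and torch._decomp tables (this is an illustration of the mechanism, not the backed-out change itself; aten.silu is just an example op with a registered decomposition):

    import torch
    from torch.fx.experimental.proxy_tensor import make_fx
    from torch._decomp import get_decompositions

    def f(x):
        return torch.nn.functional.silu(x)

    # Without a decomposition table, the traced graph keeps aten.silu
    # as a single node.
    g_plain = make_fx(f)(torch.randn(4))

    # With a table, aten.silu is expanded during tracing (roughly
    # x * sigmoid(x)). The backed-out change governed when such
    # decompositions fire relative to what autograd would have done.
    decomps = get_decompositions([torch.ops.aten.silu])
    g_decomposed = make_fx(f, decomposition_table=decomps)(torch.randn(4))

    print(g_plain.graph)
    print(g_decomposed.graph)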

Test Plan:
D84468451 was identified through bisection. Also double-checked on recent trunk 9866939225248c2adc307be7a804b26db0b9b555: f815887517

With this diff, which backs out D84468451 and D84613184: f816114560

Differential Revision: D85025378

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165910
Approved by: https://github.com/clee2000
2025-10-21 16:36:38 +00:00
ops
config.cpp
config.h
dynamic_ir.cpp
dynamic_ir.h
ir_builder.h
tensor_aten_ops.cpp
tensor_aten_ops.h
ts_autograd_functions.cpp
ts_autograd_functions.h
ts_backend_impl.cpp
ts_backend_impl.h
ts_eager_fallback.cpp
ts_eager_fallback.h
ts_lowering_context.cpp
ts_lowering_context.h
ts_native_functions.cpp
ts_node_lowering.cpp
ts_node_lowering.h
ts_node.cpp
ts_node.h