pytorch/torch/csrc/jit/codegen
jjsjann123 dd6dd03ff2 Enable output allocation cache (#86100)
Cherry-picked from devel branch: https://github.com/csarofeen/pytorch/pull/2010

Turns on the accidentally disabled output allocation cache [#2002](https://github.com/csarofeen/pytorch/issues/2002).
Updates the safety check for the allocation cache by iterating over every IterDomain on the outputs and enabling cache re-use only when no extent value is a consumer of fusion inputs (i.e. output sizes do not depend on scalar inputs).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86100
Approved by: https://github.com/csarofeen
2022-10-10 23:31:21 +00:00
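The safety condition in the commit above can be sketched abstractly: walk the producers of each output IterDomain's extent and permit cache re-use only when no extent reaches a fusion input. This is a minimal illustrative sketch, not nvFuser's actual API; the helper names, the producer-map representation, and the string-based values are all assumptions for illustration.

```python
# Hypothetical sketch of the check described in the commit message:
# output allocation can be cached only when no output extent depends
# on a fusion (scalar) input. Names and data structures are
# illustrative, not nvFuser's real implementation.

def extent_depends_on_inputs(extent, producers, fusion_inputs):
    """Walk the producer graph from `extent`; return True if any
    fusion input is reachable (i.e. the extent is data-dependent)."""
    stack, seen = [extent], set()
    while stack:
        val = stack.pop()
        if val in seen:
            continue
        seen.add(val)
        if val in fusion_inputs:
            return True
        stack.extend(producers.get(val, []))
    return False

def can_reuse_allocation_cache(output_extents, producers, fusion_inputs):
    # Cache re-use is safe only when every output IterDomain extent is
    # independent of the fusion's scalar inputs.
    return not any(
        extent_depends_on_inputs(e, producers, fusion_inputs)
        for e in output_extents
    )

# Static extent: safe to re-use the cached allocation.
print(can_reuse_allocation_cache(
    ["ext_static"], {}, {"n_input"}))          # True

# Extent produced from a scalar fusion input: must re-allocate.
print(can_reuse_allocation_cache(
    ["ext_dyn"], {"ext_dyn": ["n_input"]}, {"n_input"}))  # False
```

A depth-first walk like this is enough because dependence is transitive: if any value on the path from an extent back to its producers is a fusion input, the output size can change between runs and the cached buffer may be the wrong shape.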
cuda Enable output allocation cache (#86100) 2022-10-10 23:31:21 +00:00
fuser [ROCm] enable jiterator (#77982) 2022-08-15 16:04:09 +00:00
onednn simple c10 implementation for std::call_once (#78051) 2022-06-28 15:47:03 +00:00