pytorch/aten
Brian Hirsh ddf4f7bc89 fix inference_mode with torch.compile (#101219)
It looks like inference_mode wasn't playing well with functionalization.

If you run torch.compile on a function whose inputs are tensors created outside of inference mode, then when we create functional tensor wrappers for those inputs during compilation, the wrappers must properly mirror whether or not the original tensor is an inference tensor.
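The invariant can be sketched in plain Python (an illustrative analogy, not the actual C++ FunctionalTensorWrapper implementation; the class names `PlainTensor` and `FunctionalWrapper` here are hypothetical):

```python
# Hypothetical sketch: a functional wrapper must copy the
# "inference tensor" property of the tensor it wraps, rather
# than defaulting it, so downstream checks see the right value.

class PlainTensor:
    def __init__(self, is_inference: bool):
        self.is_inference = is_inference

class FunctionalWrapper:
    def __init__(self, wrapped: PlainTensor):
        self.wrapped = wrapped
        # Mirror the inference-ness of the original tensor
        # onto the wrapper.
        self.is_inference = wrapped.is_inference

regular = PlainTensor(is_inference=False)      # created normally
inference = PlainTensor(is_inference=True)     # created under inference_mode

assert FunctionalWrapper(regular).is_inference is False
assert FunctionalWrapper(inference).is_inference is True
```

If the wrapper ignored the flag, autograd-related checks on the wrapper would disagree with the underlying inference tensor, which is the kind of mismatch this change guards against.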

Hopefully fixes https://github.com/pytorch/pytorch/issues/101151

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101219
Approved by: https://github.com/albanD, https://github.com/ezyang
2023-05-24 14:58:40 +00:00
conda PyTorch -> C++17 (#98209) (#100557) 2023-05-19 00:49:08 +00:00
src fix inference_mode with torch.compile (#101219) 2023-05-24 14:58:40 +00:00
tools Run C++ tests on CI with run_test.py (#99956) 2023-05-09 21:24:12 +00:00
CMakeLists.txt Remove non-existing third_party/catch from CMake (#95420) 2023-02-24 08:00:07 +00:00