pytorch/torch/csrc/inductor
Yang Chen 4d0ae7c9da [inductor] support _scaled_dot_product_flash_attention fallback (#110085)
Summary:
This PR adds support for the _scaled_dot_product_flash_attention fallback kernel.
Note that in abi_compatible mode, outputs are retrieved by passing
output-argument pointers rather than by relying on std::get.

It also fixes a dynamic-shapes issue where we wrongly queried undefined
dynamic symbols.

Test Plan: ci

Reviewed By: frank-wei

Differential Revision: D49620191

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110085
Approved by: https://github.com/desertfire
2023-09-27 00:09:56 +00:00
aoti_runtime [aotinductor] Rename aot_runtime to aoti_runtime (#110007) 2023-09-26 00:46:54 +00:00
aoti_torch [inductor] support _scaled_dot_product_flash_attention fallback (#110085) 2023-09-27 00:09:56 +00:00
inductor_ops.cpp [inductor] Fix inputs with existing offsets (#108168) 2023-08-29 23:47:03 +00:00
inductor_ops.h [inductor] Fix inputs with existing offsets (#108168) 2023-08-29 23:47:03 +00:00