pytorch/test/cpp/aoti_inference
Bin Bao 687c15c0b3 [AOTI][BE] Change test_aoti_inference to one-pass build (#164277)
Summary: Fixes https://github.com/pytorch/pytorch/issues/159400. Currently, test_aoti_abi_check and test_aoti_inference have to be built in two passes: first build PyTorch with the regular `python setup.py develop`, then rebuild with `CMAKE_FRESH=1 BUILD_AOT_INDUCTOR_TEST=1 python setup.py develop`. This is cumbersome. Fix by rewriting CMakeLists.txt for test_aoti_inference as a one-pass build that runs AOTI to compile the models at test time. Also update the CI test script to drop the two-pass build. Since test_aoti_abi_check is not AOTI-specific, it is no longer guarded by BUILD_AOT_INDUCTOR_TEST.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164277
Approved by: https://github.com/janeyx99
2025-10-28 17:43:22 +00:00
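For reference, the before/after build flows described in the commit summary can be sketched as shell commands. This is a sketch based on the summary above; the post-change invocation (whether BUILD_AOT_INDUCTOR_TEST=1 is still passed, and which flags CI uses) is an assumption, not confirmed by the commit message.

```shell
# Before #164277: two-pass build.
# Pass 1: regular PyTorch build.
python setup.py develop
# Pass 2: force a fresh CMake configure just to build the AOTI test binaries.
CMAKE_FRESH=1 BUILD_AOT_INDUCTOR_TEST=1 python setup.py develop

# After #164277: a single build suffices (flag usage here is an assumption);
# AOTI model compilation now happens at test run time instead of build time.
BUILD_AOT_INDUCTOR_TEST=1 python setup.py develop
```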
aoti_custom_class.cpp [AOTI] Fix #140546 and support AOTI package load for Intel GPU. (#140664) 2024-12-10 05:05:08 +00:00
aoti_custom_class.h
CMakeLists.txt [AOTI][BE] Change test_aoti_inference to one-pass build (#164277) 2025-10-28 17:43:22 +00:00
compile_model.py
generate_lowered_cpu.py [AOTInductor] Add standalone test for compilation from ExportedProgram (#142327) 2024-12-10 06:50:09 +00:00
standalone_compile.sh [AOTInductor] Add standalone test for compilation from ExportedProgram (#142327) 2024-12-10 06:50:09 +00:00
standalone_test.cpp [AOTInductor] Add standalone test for compilation from ExportedProgram (#142327) 2024-12-10 06:50:09 +00:00
test.cpp [AOTI][BE] Change test_aoti_inference to one-pass build (#164277) 2025-10-28 17:43:22 +00:00
test.py [AOTInductor] Add test for enabling CUDACachingAllocator for AOTInductor's Weight (#159279) 2025-07-29 02:52:10 +00:00