pytorch/test/cpp/aoti_inference
Mu-Chu Lee 2291199e9b [AOTInductor] Use CudaCachingAllocator for memory allocation (#162893)
Summary:
Use c10::cuda::CUDACachingAllocator for AOTInductor's initial constant buffer allocation.
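
To make the change concrete, here is a minimal sketch, not the code from this PR, of routing a constant-buffer allocation through the CUDA caching allocator rather than a bare cudaMalloc. The buffer size, device index, and placeholder constant values are assumptions for illustration.

```cpp
// Minimal sketch: allocate a constant buffer via the CUDA caching allocator.
// Requires a PyTorch C++ build (c10/cuda) and the CUDA runtime to link.
#include <cstddef>
#include <vector>

#include <c10/cuda/CUDACachingAllocator.h>
#include <c10/cuda/CUDAGuard.h>
#include <cuda_runtime.h>

int main() {
  constexpr std::size_t kNumFloats = 1024;  // hypothetical constant count
  constexpr std::size_t kNbytes = kNumFloats * sizeof(float);

  // Pin subsequent allocations to device 0 (illustrative choice).
  c10::cuda::CUDAGuard device_guard(/*device_index=*/0);

  // Carve the constant buffer out of the caching allocator's pool so the
  // memory is tracked by PyTorch and recycled on free.
  void* constant_blob = c10::cuda::CUDACachingAllocator::raw_alloc(kNbytes);

  // Copy some host-side constant data into the buffer (placeholder values).
  std::vector<float> host_constants(kNumFloats, 1.0f);
  cudaMemcpy(
      constant_blob, host_constants.data(), kNbytes, cudaMemcpyHostToDevice);

  // ... a model would read its constants from constant_blob here ...

  // Return the block to the caching allocator instead of calling cudaFree.
  c10::cuda::CUDACachingAllocator::raw_delete(constant_blob);
  return 0;
}
```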

Test Plan:
Activate the test under test/cpp/aoti_inference/test.cpp.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162893
Approved by: https://github.com/desertfire
2025-09-17 17:08:20 +00:00
aoti_custom_class.cpp
aoti_custom_class.h
CMakeLists.txt [AOTI] Fix AOT inductor CMake build dependency order (#157557) 2025-07-04 14:33:36 +00:00
compile_model.py
generate_lowered_cpu.py
standalone_compile.sh
standalone_test.cpp
test.cpp [AOTInductor] Use CudaCachingAllocator for memory allocation (#162893) 2025-09-17 17:08:20 +00:00
test.py [AOTInductor] Add test for enabling CUDACachingAllocator for AOTInductor's Weight (#159279) 2025-07-29 02:52:10 +00:00