ollama/llama/llama.cpp
Daniel Hiltgen 850da848c5
logs: fix bogus "0 MiB free" log line (#12590)
On the llama runner, after the recent GGML bump, a new log line incorrectly reported "0 MiB free" because our patch removed memory information from the device props. This change adjusts the llama.cpp code to fetch the actual free memory of the active device.
2025-10-14 11:26:28 -07:00
common Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
include Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
src logs: fix bogus "0 MiB free" log line (#12590) 2025-10-14 11:26:28 -07:00
tools/mtmd Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) 2025-10-13 15:26:18 -07:00
vendor Update GGML to b6646 (#12245) 2025-10-02 14:47:10 -07:00
.rsync-filter update vendored llama.cpp and ggml (#11823) 2025-08-14 14:42:58 -07:00
LICENSE next build (#8539) 2025-01-29 15:03:38 -08:00