From 303be9304c3770f0cca07aab93e902fbf71b8fe2 Mon Sep 17 00:00:00 2001
From: Daniel Hiltgen
Date: Tue, 7 Oct 2025 16:21:07 -0700
Subject: [PATCH] docs: improve accuracy of LLM library docs (#12530)

---
 docs/troubleshooting.md | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 1706aee7..18c014d1 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -38,13 +38,7 @@ Join the [Discord](https://discord.gg/ollama) for help interpreting the logs.
 
 ## LLM libraries
 
-Ollama includes multiple LLM libraries compiled for different GPUs and CPU vector features. Ollama tries to pick the best one based on the capabilities of your system. If this autodetection has problems, or you run into other problems (e.g. crashes in your GPU) you can workaround this by forcing a specific LLM library. `cpu_avx2` will perform the best, followed by `cpu_avx` and the slowest but most compatible is `cpu`. Rosetta emulation under MacOS will work with the `cpu` library.
-
-In the server log, you will see a message that looks something like this (varies from release to release):
-
-```
-Dynamic LLM libraries [rocm_v6 cpu cpu_avx cpu_avx2 cuda_v12 rocm_v5]
-```
+Ollama includes multiple LLM libraries compiled for different GPU libraries and versions. Ollama tries to pick the best one based on the capabilities of your system. If this autodetection has problems, or you run into other problems (e.g. crashes on your GPU), you can work around this by forcing a specific LLM library.
 
 **Experimental LLM Library Override**
 
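
The override that the **Experimental LLM Library Override** heading introduces is the `OLLAMA_LLM_LIBRARY` environment variable, which bypasses autodetection when set before starting the server. A minimal sketch, assuming `cpu` is still a valid library name in your release (available names vary from release to release and are printed in the server log):

```
# Force the portable CPU library instead of the autodetected one.
# Library names vary by release; check the server log for what is available.
OLLAMA_LLM_LIBRARY="cpu" ollama serve
```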