* Fix Vulkan PCI ID and ID handling
Intel GPUs may not report PCI IDs, which was leading to incorrect overlap
detection. Switch to using the existing PCI IDs; AMD GPUs claim not to report
PCI IDs but actually do, so try anyway, as this is required for ADLX to find
the GPUs on Windows. Numeric IDs lead to scheduling problems, so this also
switches Vulkan to UUID-based IDs. The GPU discovery patches have been
squashed into a single patch to simplify future rebases.
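As a rough illustration of the intended ID handling, here is a minimal, self-contained Go sketch; the `deviceInfo` struct and `sameDevice` helper are hypothetical stand-ins, not the actual discovery types:

```go
package main

import "fmt"

type deviceInfo struct {
	Backend string // e.g. "vulkan", "rocm", "cuda"
	UUID    string // stable identifier, preferred for scheduling and overlap checks
	PCIID   string // may be missing on some Intel GPUs; AMD reports it despite claims
}

// sameDevice reports whether two entries seen by different backends refer to
// the same physical GPU, preferring UUIDs and falling back to PCI IDs.
func sameDevice(a, b deviceInfo) bool {
	if a.UUID != "" && b.UUID != "" {
		return a.UUID == b.UUID
	}
	if a.PCIID != "" && b.PCIID != "" {
		return a.PCIID == b.PCIID
	}
	return false // not enough information to declare an overlap
}

func main() {
	vk := deviceInfo{Backend: "vulkan", UUID: "GPU-1234", PCIID: "0000:03:00.0"}
	rocm := deviceInfo{Backend: "rocm", UUID: "GPU-1234"}
	fmt.Println(sameDevice(vk, rocm)) // true: the same physical GPU seen twice
}
```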
* review comments
On main, the `RENDERER` and `PARSER` fields from the `Modelfile` don't
get propagated to a new model created with a `req.From` parameter. This
is easily triggered via `ollama run qwen3-coder`, then running some save
command like `/save qwen3-coder-custom`.
Added a regression test for this, and the fix now opens the config for the
"from" model in order to use its renderer/parser as defaults for the new
model (sketched below). This fixes both the CLI and API-based creates.
Fixes: https://github.com/ollama/ollama/issues/12792
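A minimal sketch of the defaulting behavior, assuming hypothetical config structs and field names (the real types in the create path differ):

```go
package main

import "fmt"

type modelConfig struct {
	Renderer string
	Parser   string
}

// applyFromDefaults fills in renderer/parser on the new model's config from
// the "from" model's config when the request doesn't set them explicitly.
func applyFromDefaults(newCfg, fromCfg modelConfig) modelConfig {
	if newCfg.Renderer == "" {
		newCfg.Renderer = fromCfg.Renderer
	}
	if newCfg.Parser == "" {
		newCfg.Parser = fromCfg.Parser
	}
	return newCfg
}

func main() {
	from := modelConfig{Renderer: "qwen3-coder", Parser: "qwen3-coder"}
	created := applyFromDefaults(modelConfig{}, from)
	fmt.Printf("%+v\n", created) // inherits the "from" model's renderer/parser
}
```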
Currently, checking the length of prompts for embeddings to ensure they fit
in the context window (and truncating them if needed) occurs in two places:
the Ollama server and the runner. This can lead to inconsistencies in both
the checks and the reported number of tokens processed. Since we have to do
this processing in the runner anyway, this consolidates all of the logic
there.
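A sketch of the kind of single check the runner can own; `fitToContext` is a hypothetical helper, not the actual runner API:

```go
package main

import (
	"errors"
	"fmt"
)

// fitToContext truncates tokens to numCtx when truncation is allowed,
// otherwise it errors; the returned count is what gets reported upstream.
func fitToContext(tokens []int, numCtx int, truncate bool) ([]int, int, error) {
	if len(tokens) <= numCtx {
		return tokens, len(tokens), nil
	}
	if !truncate {
		return nil, 0, errors.New("input length exceeds the context window")
	}
	return tokens[:numCtx], numCtx, nil
}

func main() {
	tokens := make([]int, 5000)
	kept, n, err := fitToContext(tokens, 4096, true)
	fmt.Println(len(kept), n, err) // 4096 4096 <nil>
}
```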
If we create a memory layout that should fit based on reported free VRAM but
allocation still fails, we start applying a backoff that reduces the assumed
free VRAM by an exponentially growing percentage (1%, 2%, 4%, ...). However,
the points chosen tend to be too dense at the beginning and too sparse at the
end, so this switches to an incremental backoff (10%, 20%, 30%, ...).
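The difference between the two schedules is easiest to see side by side; the function names below are illustrative only:

```go
package main

import "fmt"

// exponentialBackoff is the old schedule: the reduction doubles each attempt.
func exponentialBackoff(attempt int) float64 {
	return 0.01 * float64(int(1)<<attempt) // 1%, 2%, 4%, 8%, ...
}

// incrementalBackoff is the new schedule: evenly spaced reductions.
func incrementalBackoff(attempt int) float64 {
	return 0.10 * float64(attempt+1) // 10%, 20%, 30%, ...
}

func main() {
	for attempt := 0; attempt < 7; attempt++ {
		fmt.Printf("attempt %d: exponential -%.0f%%, incremental -%.0f%%\n",
			attempt, 100*exponentialBackoff(attempt), 100*incrementalBackoff(attempt))
	}
}
```

The even spacing gives usable retry points across the whole range instead of clustering them near full free VRAM.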
* DRY out the runner lifecycle code
Now that discovery uses the runners as well, this unifies the runner spawning
code into a single place. It also unifies the GPU discovery types with the
newer ml.DeviceInfo type.
* win: make incremental builds better
Place build artifacts in discrete directories so incremental builds don't have to start fresh
* Adjust sort order to consider iGPUs
* handle CPU inference OOM scenarios
* review comments
We currently short-circuit generation of the cache mask and just generate an
empty tensor of the correct size. However, in some cases this can also skip a
cast operation, which can result in the worst-case graph not being fully
worst case. We don't actually need the fast path for mask generation, so it's
better to just use the normal code path.
Currently, we only record the time for the last batch when processing the
prompt. This results in unrealistically high prompt eval rates for the old
llama runner (a sketch of the fix follows the before/after timings below).
Before:
total duration: 31.273112939s
load duration: 4.97054657s
prompt eval count: 32768 token(s)
prompt eval duration: 235.137439ms
prompt eval rate: 139356.80 tokens/s
eval count: 1873 token(s)
eval duration: 18.173182374s
eval rate: 103.06 tokens/s
After:
total duration: 30.024798033s
load duration: 4.758588663s
prompt eval count: 32768 token(s)
prompt eval duration: 7.779621548s
prompt eval rate: 4212.03 tokens/s
eval count: 1769 token(s)
eval duration: 17.148014223s
eval rate: 103.16 tokens/s
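A hedged sketch of the accounting change, with hypothetical field names: the duration of every batch is accumulated rather than overwritten by the last one.

```go
package main

import (
	"fmt"
	"time"
)

type timings struct {
	promptEvalDuration time.Duration
	promptEvalCount    int
}

// recordBatch is called once per processed prompt batch.
func (t *timings) recordBatch(batchStart time.Time, tokens int) {
	// Before: t.promptEvalDuration = time.Since(batchStart) // kept only the last batch
	t.promptEvalDuration += time.Since(batchStart) // after: include every batch
	t.promptEvalCount += tokens
}

func main() {
	var t timings
	for i := 0; i < 64; i++ { // stand-in for a long prompt in batches of 512
		start := time.Now()
		time.Sleep(time.Millisecond) // stand-in for actual prompt processing
		t.recordBatch(start, 512)
	}
	rate := float64(t.promptEvalCount) / t.promptEvalDuration.Seconds()
	fmt.Printf("%d token(s) at %.2f tokens/s\n", t.promptEvalCount, rate)
}
```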
Users on Windows without GPUs are reporting errors relating to
cudaDriverGetVersion with the device set to -1. This ensures we only grab the
driver once we're enumerating actual devices.
When loading the dynamic libraries, if something goes wrong, report some
details. Unfortunately this won't explain which dependencies are missing, but
this breadcrumb in the logs should help us diagnose GPU discovery failures.
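A small, self-contained sketch of the breadcrumb; `loadGPULibrary` and the path are stand-ins for whatever loader and libraries are actually involved:

```go
package main

import (
	"errors"
	"log/slog"
)

// loadGPULibrary is a hypothetical dlopen-style loader that fails here to
// demonstrate the logging.
func loadGPULibrary(path string) (uintptr, error) {
	return 0, errors.New("stand-in failure while loading the library")
}

func main() {
	path := "/usr/lib/ollama/libggml-cuda.so" // illustrative path only
	if _, err := loadGPULibrary(path); err != nil {
		// The error usually won't name the missing dependency, but the path
		// and error text give a breadcrumb for diagnosing discovery failures.
		slog.Warn("unable to load GPU library", "path", path, "error", err)
	}
}
```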
* test: harden scheduler tests
This removes reschedDelay, which was stale code, and adds a new configurable
timeout for waitForVRAMRecovery so tests can set the timeout to be very short
to avoid the scheduler getting stuck and hitting a test timeout.
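A minimal sketch of the configurable timeout, assuming a hypothetical scheduler struct; only waitForVRAMRecovery is named in the change itself:

```go
package main

import (
	"fmt"
	"time"
)

type scheduler struct {
	vramRecoveryTimeout time.Duration // tests set this very short
}

// waitForVRAMRecovery waits until VRAM is reported free again or gives up
// after the configured timeout instead of blocking the scheduler.
func (s *scheduler) waitForVRAMRecovery(recovered <-chan struct{}) bool {
	select {
	case <-recovered:
		return true
	case <-time.After(s.vramRecoveryTimeout):
		return false
	}
}

func main() {
	s := &scheduler{vramRecoveryTimeout: 5 * time.Millisecond} // test-style setting
	fmt.Println(s.waitForVRAMRecovery(make(chan struct{})))    // false: timed out quickly
}
```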
* test: tune tests for partial loads
Give stress tests more time when the model is split between CPU/GPU
Adds a temporary global flag that causes renderers to always render images as
[img]. In a follow-up change, we will consider making this the default, after
which the flag could eventually be removed.
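A hedged sketch of the toggle; the variable and function names are illustrative, not the actual renderer API:

```go
package main

import "fmt"

// renderImagesAsTags is the temporary global flag: when set, renderers emit a
// plain "[img]" placeholder instead of the image payload.
var renderImagesAsTags = true

func renderImage(imageData []byte) string {
	if renderImagesAsTags {
		return "[img]"
	}
	return fmt.Sprintf("<image %d bytes>", len(imageData)) // stand-in for the normal path
}

func main() {
	fmt.Println(renderImage(make([]byte, 1024))) // prints "[img]" while the flag is set
}
```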
* change initial status to take prefill into consideration
* Add separate strings for content and thinking builder
* thinking tests
* remove whitespace from string before closing think tag
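A tiny sketch of the trimming in the last bullet, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// closeThinking strips trailing whitespace from the accumulated thinking
// content before appending the closing tag.
func closeThinking(thinking string) string {
	return strings.TrimRight(thinking, " \t\r\n") + "</think>"
}

func main() {
	fmt.Println(closeThinking("step-by-step reasoning   \n"))
}
```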