Mirror of https://github.com/zebrajr/ollama.git, synced 2025-12-06 12:19:56 +01:00
* DRY out the runner lifecycle code: now that discovery uses the runners as well, this unifies the runner spawning code into a single place. It also unifies the GPU discovery types with the newer ml.DeviceInfo.
* win: make incremental builds better. Place build artifacts in discrete directories so incremental builds don't have to start fresh.
* Adjust the sort order to consider iGPUs (a rough illustration follows below).
* Handle CPU inference OOM scenarios.
* Review comments.
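
The change notes above only name the behavior; as a rough, self-contained sketch (not the repository's actual code), the Go example below shows one way to order candidate devices so that discrete GPUs come before integrated ones, with free memory breaking ties. The `Device` struct and its fields are hypothetical stand-ins for the `ml.DeviceInfo` type mentioned in the notes.

```go
// Sketch: prefer discrete GPUs over iGPUs when sorting candidate devices,
// with larger free memory winning inside each group. The Device type here
// is a simplified, hypothetical stand-in, not ollama's ml.DeviceInfo.
package main

import (
	"fmt"
	"sort"
)

type Device struct {
	Name       string
	Integrated bool   // true for iGPUs that share system memory
	FreeMemory uint64 // bytes available for model weights
}

// sortDevices orders devices so discrete GPUs come first, then iGPUs,
// and within each group devices with more free memory come first.
func sortDevices(devs []Device) {
	sort.SliceStable(devs, func(i, j int) bool {
		if devs[i].Integrated != devs[j].Integrated {
			return !devs[i].Integrated // discrete before integrated
		}
		return devs[i].FreeMemory > devs[j].FreeMemory
	})
}

func main() {
	devs := []Device{
		{Name: "igpu0", Integrated: true, FreeMemory: 8 << 30},
		{Name: "dgpu1", Integrated: false, FreeMemory: 12 << 30},
		{Name: "dgpu0", Integrated: false, FreeMemory: 24 << 30},
	}
	sortDevices(devs)
	for _, d := range devs {
		fmt.Printf("%s integrated=%v free=%d GiB\n", d.Name, d.Integrated, d.FreeMemory>>30)
	}
}
```

Running this prints the discrete GPUs first (largest free memory leading), then the iGPU, which matches the intent of considering iGPUs last in the scheduling order.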
| File |
|---|
| build_darwin.sh |
| build_docker.sh |
| build_linux.sh |
| build_windows.ps1 |
| env.sh |
| install.sh |
| push_docker.sh |
| tag_latest.sh |