ollama/scripts
Daniel Hiltgen 3258a89b6e
DRY out the runner lifecycle code (#12540)
* DRY out the runner lifecycle code

Now that discovery uses the runners as well, this unifies the runner spawning code in a single place. It also unifies the GPU discovery types with the newer ml.DeviceInfo.

* win: make incremental builds better

Place build artifacts in discrete directories so incremental builds don't have to start fresh

* Adjust sort order to consider iGPUs

* Handle CPU inference OOM scenarios

* Address review comments
2025-10-23 11:20:02 -07:00
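The "single place" idea in the commit above can be sketched as follows. This is a hypothetical illustration, not Ollama's actual API: the names runnerConfig, spawnRunner, and the simplified DeviceInfo struct are invented here to show the pattern of scheduling and discovery sharing one runner constructor instead of each carrying its own copy of the lifecycle code.

```go
package main

import "fmt"

// DeviceInfo stands in for the unified ml.DeviceInfo type mentioned in the
// commit; fields here are illustrative only.
type DeviceInfo struct {
	ID      string
	Library string // e.g. "cuda", "rocm", "cpu"
}

// runnerConfig collects everything needed to start a runner subprocess.
type runnerConfig struct {
	ModelPath string
	Device    DeviceInfo
	NumGPU    int
}

type runner struct {
	cfg runnerConfig
}

// spawnRunner is the single shared entry point: both GPU discovery and
// inference scheduling go through here, so the lifecycle logic lives in
// exactly one place.
func spawnRunner(cfg runnerConfig) *runner {
	return &runner{cfg: cfg}
}

func main() {
	// Discovery probes a device with a lightweight runner...
	probe := spawnRunner(runnerConfig{Device: DeviceInfo{ID: "GPU-0", Library: "cuda"}})
	// ...and inference later spawns a full runner through the same helper,
	// reusing the DeviceInfo that discovery produced.
	infer := spawnRunner(runnerConfig{ModelPath: "/models/llama", Device: probe.cfg.Device, NumGPU: 32})
	fmt.Println(infer.cfg.Device.Library, infer.cfg.NumGPU)
}
```

The benefit of the consolidation is that a fix to spawn behavior (environment setup, OOM handling, sort order of devices) lands in one function rather than being duplicated across call sites.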
build_darwin.sh Align versions for local builds (#9635) 2025-03-14 15:44:08 -07:00
build_docker.sh Update ROCm (6.3 linux, 6.2 windows) and CUDA v12.8 (#9304) 2025-02-25 13:47:36 -08:00
build_linux.sh CI: Set up temporary opt-out Vulkan support (#12614) 2025-10-15 14:18:01 -07:00
build_windows.ps1 DRY out the runner lifecycle code (#12540) 2025-10-23 11:20:02 -07:00
env.sh build: avoid unbounded parallel builds (#12319) 2025-09-18 14:57:01 -07:00
install.sh fix: own lib/ollama directory 2025-03-03 13:01:18 -08:00
push_docker.sh change github.com/jmorganca/ollama to github.com/ollama/ollama (#3347) 2024-03-26 13:04:17 -07:00
tag_latest.sh CI: clean up naming, fix tagging latest (#6832) 2024-09-16 16:18:41 -07:00