mirror of
https://github.com/zebrajr/localGPT.git
synced 2025-12-06 12:20:53 +01:00
- Modified OllamaClient to read the OLLAMA_HOST environment variable
- Updated docker-compose.yml to pass OLLAMA_HOST to the backend service
- Changed docker.env to use the Docker gateway IP (172.18.0.1:11434)
- Configured the Ollama service to bind to 0.0.0.0:11434 for container access
- Added a test script to verify Ollama connectivity from within the container
- All backend tests now pass, including chat functionality

Co-Authored-By: PromptEngineer <jnfarooq@outlook.com>
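The first change above has the client resolve its base URL from the environment rather than a hard-coded host. A minimal sketch of that pattern, assuming illustrative names (`OllamaClient`, `api_url`) that may differ from the repo's actual code:

```python
import os


class OllamaClient:
    """Sketch of an Ollama client that resolves its host from OLLAMA_HOST.

    This mirrors the commit's described behavior; the class and method
    names here are assumptions, not the repository's actual API.
    """

    def __init__(self, host=None):
        # Explicit argument wins; otherwise fall back to the OLLAMA_HOST
        # env var, then to Ollama's default local address.
        self.host = (host or os.environ.get("OLLAMA_HOST", "http://localhost:11434")).rstrip("/")

    def api_url(self, path):
        # Join the base host and an API path without doubling slashes.
        return f"{self.host}/{path.lstrip('/')}"
```

With `OLLAMA_HOST=http://172.18.0.1:11434` set (as in the docker.env below), `OllamaClient().api_url("api/tags")` would target the host-side Ollama instance through the Docker gateway.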
12 lines
456 B
Bash
# Docker environment configuration

# Set this to use a local Ollama instance running on the host
# Note: Using the Docker gateway IP instead of host.docker.internal for Linux compatibility
OLLAMA_HOST=http://172.18.0.1:11434

# Alternative: Use containerized Ollama (uncomment and run with --profile with-ollama)
# OLLAMA_HOST=http://ollama:11434

# Other configuration
NODE_ENV=production
NEXT_PUBLIC_API_URL=http://localhost:8000
RAG_API_URL=http://rag-api:8001
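The commit also says docker-compose.yml was updated to pass OLLAMA_HOST through to the backend. A hedged sketch of what that wiring could look like, assuming a `backend` service name and this `docker.env` file (the repo's actual compose file is not shown here):

```yaml
# Illustrative fragment, not the repository's actual docker-compose.yml.
services:
  backend:
    env_file:
      - docker.env          # loads OLLAMA_HOST and the other variables above
    environment:
      # Allow a shell-level override, falling back to the gateway IP default.
      - OLLAMA_HOST=${OLLAMA_HOST:-http://172.18.0.1:11434}
```

For the gateway-IP route to work, the host's Ollama service must listen on 0.0.0.0:11434 (as the commit configures), since binding only to 127.0.0.1 would make it unreachable from inside the container.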