# localGPT Backend

Simple Python backend that connects your frontend to Ollama for local LLM chat.
## Prerequisites

1. **Install Ollama** (if not already installed):

   ```bash
   # Visit https://ollama.ai or run:
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

2. **Start Ollama**:

   ```bash
   ollama serve
   ```

3. **Pull a model** (optional; the server will suggest one if needed):

   ```bash
   ollama pull llama3.2
   ```
## Setup

1. **Install Python dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

2. **Test the Ollama connection**:

   ```bash
   python ollama_client.py
   ```

3. **Start the backend server**:

   ```bash
   python server.py
   ```

The server will run on http://localhost:8000.
## API Endpoints

### Health Check

```
GET /health
```

Returns server status and available models.
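If you want to script the check, here is a minimal Python sketch (assuming the `requests` package is installed and the server is running on the default port; the exact response fields depend on `server.py`):

```python
# Probe the backend health endpoint and print the JSON it returns
# (typically the server status and the list of available models).
import requests

resp = requests.get("http://localhost:8000/health", timeout=5)
resp.raise_for_status()
print(resp.json())
```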
### Chat

```
POST /chat
Content-Type: application/json

{
  "message": "Hello!",
  "model": "llama3.2:latest",
  "conversation_history": []
}
```

Returns:

```json
{
  "response": "Hello! How can I help you?",
  "model": "llama3.2:latest",
  "message_count": 1
}
```
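The same request from Python, as a sketch that mirrors the payload and response shown above (uses the `requests` package; not part of the backend itself):

```python
# Send a single chat message to the backend and print the reply.
import requests

payload = {
    "message": "Hello!",
    "model": "llama3.2:latest",
    "conversation_history": [],
}
resp = requests.post("http://localhost:8000/chat", json=payload, timeout=60)
resp.raise_for_status()
data = resp.json()
print(data["response"])       # the model's reply
print(data["message_count"])  # running message count for the conversation
```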
## Testing

Test the chat endpoint:

```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "model": "llama3.2:latest"}'
```
## Frontend Integration

Your React frontend should connect to:

- Backend: http://localhost:8000
- Chat endpoint: http://localhost:8000/chat
## What's Next

This simple backend is ready for:
- ✅ Real-time chat with local LLMs
- 🔜 Document upload for RAG
- 🔜 Vector database integration
- 🔜 Streaming responses
- 🔜 Chat history persistence