localGPT Backend

A simple Python backend that connects your frontend to Ollama for local LLM chat.

Prerequisites

  1. Install Ollama (if not already installed):

    # Visit https://ollama.ai or run:
    curl -fsSL https://ollama.ai/install.sh | sh
    
  2. Start Ollama:

    ollama serve
    
  3. Pull a model (optional; the server will suggest one if needed):

    ollama pull llama3.2
    
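If you want to confirm Ollama is reachable before wiring up the backend, here is a minimal sketch using only the Python standard library. It assumes Ollama's default API port 11434; GET /api/tags lists the locally pulled models:

import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default;
# /api/tags returns the models that have been pulled so far.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp).get("models", [])

if models:
    print("Ollama is up. Pulled models:")
    for m in models:
        print(" -", m["name"])
else:
    print("Ollama is up, but no models are pulled yet (try: ollama pull llama3.2)")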

Setup

  1. Install Python dependencies:

    pip install -r requirements.txt
    
  2. Test Ollama connection:

    python ollama_client.py
    
  3. Start the backend server:

    python server.py
    

The server will run on http://localhost:8000

API Endpoints

Health Check

GET /health

Returns server status and available models.
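For a quick check from Python (standard library only; the exact fields in the JSON response depend on server.py):

import json
import urllib.request

# Fetch the backend's health endpoint and pretty-print whatever JSON it returns.
with urllib.request.urlopen("http://localhost:8000/health") as resp:
    print(json.dumps(json.load(resp), indent=2))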

Chat

POST /chat
Content-Type: application/json

{
  "message": "Hello!",
  "model": "llama3.2:latest",
  "conversation_history": []
}

Returns:

{
  "response": "Hello! How can I help you?",
  "model": "llama3.2:latest",
  "message_count": 1
}
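As a sketch of a multi-turn exchange from Python: the example below assumes conversation_history takes a list of {"role", "content"} messages (the format Ollama's chat API uses); check server.py for the exact shape it expects.

import json
import urllib.request

def chat(message, history, model="llama3.2:latest"):
    # POST the new message plus prior turns to the backend's /chat endpoint.
    payload = json.dumps({
        "message": message,
        "model": model,
        "conversation_history": history,
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:8000/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

history = []
reply = chat("Hello!", history)
history += [{"role": "user", "content": "Hello!"},
            {"role": "assistant", "content": reply}]
print(reply)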

Testing

Test the chat endpoint:

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "model": "llama3.2:latest"}'

Frontend Integration

Your React frontend should connect to:

  • Backend: http://localhost:8000
  • Chat endpoint: http://localhost:8000/chat

What's Next

This simple backend is ready for:

  • ✅ Real-time chat with local LLMs
  • 🔜 Document upload for RAG
  • 🔜 Vector database integration
  • 🔜 Streaming responses
  • 🔜 Chat history persistence