- Updated Smart Routing description to remove mention of 'graph queries'
- Changed from 'RAG, direct LLM, or graph queries' to 'RAG and direct LLM responses'
- Addresses PR feedback that graph-related features are not yet enabled
Co-Authored-By: PromptEngineer <jnfarooq@outlook.com>
- Document Query Decomposition with API examples (illustrated by the hedged sketch after this list)
- Add Answer Verification system documentation
- Document Contextual Enrichment during indexing
- Add Late Chunking configuration details
- Document Multimodal Support with vision model integration
- Add detailed dependency information in installation section
- Improve batch processing documentation with real config examples
- Update Advanced Features section with comprehensive API examples
Co-Authored-By: PromptEngineer <jnfarooq@outlook.com>
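As a purely illustrative aside, a request exercising query decomposition and answer verification through such a backend might look like the sketch below. The base URL, endpoint path, payload flags, and response keys are all hypothetical placeholders, not the documented API.

```python
# Hypothetical sketch only: the endpoint, flags, and response fields below are
# illustrative placeholders, not the actual localGPT backend interface.
import requests

BASE_URL = "http://localhost:8000"  # assumed backend address

payload = {
    "query": "Compare Q1 and Q2 revenue growth and explain the main drivers.",
    "decompose": True,  # hypothetical flag: break the question into sub-queries
    "verify": True,     # hypothetical flag: run answer verification on the result
}

resp = requests.post(f"{BASE_URL}/chat", json=payload, timeout=120)
resp.raise_for_status()
result = resp.json()

# Hypothetical response shape: the generated sub-queries plus a verified answer.
for sub in result.get("sub_queries", []):
    print("sub-query:", sub)
print("answer:", result.get("answer"))
```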
- Fix incorrect repository URL in Docker deployment section
- Update API endpoints to match actual backend implementation
- Add session-based chat endpoints and management
- Document real index management endpoints
- Update model configuration with actual OLLAMA_CONFIG and EXTERNAL_MODELS (see the sketch after this list)
- Replace generic pipeline configs with actual default/fast configurations
- Add system launcher documentation and service architecture
- Improve health monitoring and logging documentation
- Update environment variables to match code implementation
Co-Authored-By: PromptEngineer <jnfarooq@outlook.com>
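For orientation only, model-configuration dictionaries of this kind often look roughly like the sketch below. OLLAMA_CONFIG and EXTERNAL_MODELS are the names referenced above, but every key and value shown here is an assumed placeholder rather than the project's actual settings.

```python
# Assumed shape only: OLLAMA_CONFIG / EXTERNAL_MODELS are the names referenced
# in the commit above; the keys and values are placeholders, not real settings.
OLLAMA_CONFIG = {
    "host": "http://localhost:11434",       # assumed default Ollama address
    "generation_model": "qwen2.5:7b",       # placeholder model tag
    "embedding_model": "nomic-embed-text",  # placeholder embedding model
}

EXTERNAL_MODELS = {
    "openai": {
        "api_key_env": "OPENAI_API_KEY",    # key read from the environment, never hard-coded
        "model": "gpt-4o-mini",             # placeholder model name
    },
}
```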
- Replaced existing localGPT codebase with multimodal RAG implementation
- Includes full-stack application with backend, frontend, and RAG system
- Added Docker support and comprehensive documentation
- Enhanced with multimodal capabilities for document processing
- Preserved git history for localGPT while integrating new functionality
Right now, AutoGPTQ 0.3 is installed from requirements.txt, and users are then asked to uninstall it and install AutoGPTQ 0.2.2 from source. It is much simpler to pin the version to 0.2.2 in requirements.txt in the first place.
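For example, assuming the package is listed under its PyPI name auto-gptq, the relevant requirements.txt line would simply become:

```
auto-gptq==0.2.2
```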
Also fix some README.md issues.