Commit Graph

337 Commits

Author SHA1 Message Date
kevin
8d0fe21541 Added feature to save Q&A to CSV based on user flag 2023-10-22 20:43:46 +01:00
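The commit above (and its earlier revisions further down the log) describes saving each question/answer pair to a CSV file when a user flag is set. A minimal stdlib sketch of that pattern — the flag name, directory, and file name here are assumptions, not the actual identifiers in localGPT:

```python
import csv
from datetime import datetime
from pathlib import Path

def log_qa_to_csv(question: str, answer: str, save_qa: bool = False,
                  log_dir: str = "local_chat_history") -> None:
    """Append a timestamped Q&A pair to a CSV file when the flag is set."""
    if not save_qa:
        return
    # Creating the directory first mirrors the "fixing no folder error" commit below.
    Path(log_dir).mkdir(parents=True, exist_ok=True)
    log_file = Path(log_dir) / "qa_log.csv"
    write_header = not log_file.exists()
    with open(log_file, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp", "question", "answer"])
        writer.writerow([datetime.now().isoformat(), question, answer])
```

Appending with a header-on-first-write check keeps the log valid across multiple runs without truncating earlier sessions.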
Kevin Machado Gamboa
5af3d770c7
Merge branch 'PromtEngineer:main' into feature/save-qa-logging 2023-10-22 20:27:32 +01:00
kevin
46c9559c7e fixing no folder error and import 2023-10-22 20:27:12 +01:00
PromptEngineer
1aa6d14790
Merge pull request #580 from datendrache/crawl
Datendrache; crawl and ingest
2023-10-19 21:27:34 -07:00
PromptEngineer
573c90353c
Merge pull request #581 from Dafterfly/run-offline-without-crashing
Update run_localGPT.py to be able to run without an internet connection
2023-10-13 22:20:57 -07:00
PromptEngineer
c45b23b98a
Merge pull request #579 from mark-greene/main
Change markdown files to use the UnstructuredMarkdownLoader
2023-10-13 22:19:14 -07:00
Dafterfly
c59db5fd97
Update run_localGPT.py to be able to run without an internet connection 2023-10-13 03:10:54 +02:00
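One common way to make a script tolerate a missing internet connection, as this PR's title describes, is to probe connectivity once and fall back to locally cached model files when the hub is unreachable. A sketch of that idea — the host, the fallback mechanism, and the `local_files_only` wiring are assumptions; the actual change in run_localGPT.py may differ:

```python
import socket

def has_internet(host: str = "huggingface.co", port: int = 443,
                 timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the model hub host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A model loader could then fall back to the local cache when offline, e.g.:
# model_kwargs = {"local_files_only": not has_internet()}
```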
Datendrache
f590539ba0 Datendrache; crawl and ingest 2023-10-12 12:41:17 -06:00
Mark
738d7485df Change markdown files to use the UnstructuredMarkdownLoader 2023-10-12 13:14:58 -04:00
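localGPT's ingestion selects a document loader by file extension, and this commit routes `.md` files to `UnstructuredMarkdownLoader` instead of treating them as plain text. A sketch of that extension-to-loader dispatch, with loaders represented by name strings so the example stays self-contained — the map contents besides the `.md` entry are illustrative assumptions:

```python
from pathlib import Path

# Hypothetical extension-to-loader map in the spirit of ingest.py's dispatch;
# only the ".md" entry reflects this commit, the rest are placeholders.
DOCUMENT_MAP = {
    ".txt": "TextLoader",
    ".md": "UnstructuredMarkdownLoader",  # previously handled as plain text
    ".pdf": "PDFMinerLoader",
    ".csv": "CSVLoader",
}

def loader_for(file_path: str) -> str:
    """Pick the loader registered for a file's extension."""
    ext = Path(file_path).suffix.lower()
    try:
        return DOCUMENT_MAP[ext]
    except KeyError:
        raise ValueError(f"No loader registered for extension {ext!r}")
```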
PromptEngineer
279dfbb45c
Merge pull request #555 from DaneCode/patch-1
Update requirements.txt
2023-10-09 21:09:20 -07:00
kevin
b0d76516ac fixing unrecognized character & 2023-10-08 13:16:41 +01:00
kevin
ac9418ee22 Added feature to save Q&A to CSV based on user flag 2023-10-07 17:18:54 +01:00
kevin
d85fa97783 Added feature to save the user questions and model answers to CSV based on flag 2023-10-07 17:04:59 +01:00
Dane Thompson
ea2fd28851
Update requirements.txt
INFO: pip is looking at multiple versions of onnx to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install protobuf==3.20.0 and unstructured-inference because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested protobuf==3.20.0
    onnx 1.14.1 depends on protobuf>=3.20.2

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
2023-10-04 11:03:26 -05:00
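Following pip's first suggestion in the log above, the conflict can be resolved by loosening the exact pin so onnx's `protobuf>=3.20.2` constraint is satisfiable. A hypothetical requirements.txt fragment (the surrounding entries are assumptions):

```
# before: protobuf==3.20.0 conflicted with onnx 1.14.1 (needs protobuf>=3.20.2)
protobuf>=3.20.2
onnx==1.14.1
unstructured-inference
```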
PromptEngineer
15109efffc
Merge pull request #549 from marook/add-llama-cpp-to-docker
add llama-cpp-python to Dockerfile
2023-10-03 22:38:51 -07:00
PromptEngineer
d7fef20dde Mistral-7B Support Added
- Support for Mistral-7B Added to localGPT.
- Bug fix in the API code, will not delete the existing DB.
- Optimized the streamlit UI.
2023-10-02 15:31:18 -07:00
Markus Peröbner
417b7e606e add llama-cpp-python to Dockerfile
because it's not present in requirements.txt but is required for loading the LLaMa 2 model.
2023-10-02 08:44:48 +02:00
PromptEngineer
15e96488b6 API Update
- Updated the API code to use the prompt template.
- Removed unused code from run_local_API.py
- Standardized the API endpoint in the localGPTUI
2023-09-25 18:54:19 -07:00
PromptEngineer
db1b36ebf6
Merge pull request #496 from simi/patch-1
Update README.md
2023-09-20 01:06:22 -07:00
Josef Šimánek
1e933b66b4
Update README.md
- fix cpu ingest example
2023-09-19 18:19:07 +02:00
PromptEngineer
9d83fca31e
Merge pull request #485 from KonradHoeffner/pr-spellcheck
spellcheck README.md
2023-09-18 18:28:58 -07:00
Konrad Höffner
9b3b58034b spellcheck README.md 2023-09-18 10:22:18 +02:00
PromptEngineer
a219bb91d0
Update README.md 2023-09-17 11:44:46 -07:00
PromptEngineer
4f9fb00b3a Added model path
Added the path to download the model.
2023-09-17 01:41:41 -07:00
PromptEngineer
9577b4abd1 Default model changed to gguf
- Default model changed to llama-2-7B-chat model in gguf format
2023-09-16 00:48:30 -07:00
PromptEngineer
03bc158945
Merge pull request #479 from PromtEngineer/gguf_support_model_update
GGUF Support and Llama-Cpp-Python GPU support
2023-09-15 23:07:37 -07:00
PromptEngineer
38244ed2c5
Merge branch 'main' into gguf_support_model_update 2023-09-15 23:04:45 -07:00
PromptEngineer
16f949ed93 run_localGPT.py updated
- Moved all model loading code out of the run_localGPT.py file.
- Removed llamacpp from requirements.txt. It needs to be installed separately to ensure it supports GPU.

2023-09-15 22:59:21 -07:00
PromptEngineer
53ac9227cf
Update README.md 2023-09-15 22:41:42 -07:00
PromptEngineer
25202dd153
Merge pull request #478 from Dafterfly/main
Allow the number of gpu layers and the number of batches to be configured in constants.py
2023-09-15 22:15:01 -07:00
PromptEngineer
0ba1749f91
Update README.md 2023-09-15 22:07:10 -07:00
PromptEngineer
c5551913f0
Update README.md 2023-09-15 20:59:06 -07:00
PromptEngineer
1e24bc23d1
Update README.md 2023-09-14 20:34:47 -07:00
.
23525d4439 allow the number of gpu layers and the number of batches to be configured in constants.py for v2 script 2023-09-15 01:46:16 +02:00
PromptEngineer
545b735e06
Update README.md 2023-09-14 16:30:30 -07:00
PromptEngineer
4c43332952
Update README.md 2023-09-14 16:05:30 -07:00
.
5756d31a48 Give an example of values that work 2023-09-15 00:53:40 +02:00
.
c08cff479c allow the number of gpu layers and the number of batches to be configured in constants.py 2023-09-15 00:49:18 +02:00
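This pair of commits moves the GPU-layer and batch-size settings into constants.py so they can be tuned without editing the script. A sketch of what such a fragment might look like — the constant names, example values, and the kwargs-builder are assumptions, not the actual localGPT code:

```python
# Hypothetical constants.py fragment in the spirit of this change; the example
# values echo the "values that work" follow-up commit but are assumptions here.
N_GPU_LAYERS = 100   # number of transformer layers offloaded to the GPU
N_BATCH = 512        # tokens processed per batch

def build_llamacpp_kwargs(context_length: int = 4096) -> dict:
    """Collect llama.cpp generation settings from module-level constants."""
    return {
        "n_ctx": context_length,
        "n_gpu_layers": N_GPU_LAYERS,
        "n_batch": N_BATCH,
    }
```

Centralizing the values means the loading code reads them in one place, and users with less VRAM only need to lower `N_GPU_LAYERS` in constants.py.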
PromptEngineer
b91bc69f7e
Update README.md 2023-09-14 01:06:05 -07:00
PromptEngineer
fe9663c603
Update README.md 2023-09-14 00:55:57 -07:00
PromptEngineer
5e4c0c73a4
Update README.md 2023-09-13 22:16:15 -07:00
PromptEngineer
121e35a303
Default back to GGML model constants.py 2023-09-13 09:35:49 -07:00
PromptEngineer
a49c29a0a1 Major code update to run_localGPT
This includes the following updates.

- Support for GGUF models.
- Ability to enable/disable chat history
- Set parameters in constants.py
- Prompt Template for Llama-2 (this works well) and generic template for other models.
- Major rewrite of the main run_localGPT.py as run_localGPT_v2.py (This will replace the original version after testing by the community).
- and more :)
2023-09-13 01:04:40 -07:00
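Among the updates above is a prompt template for Llama-2 plus a generic fallback for other models. The Llama-2 chat format wraps the system and user text in `[INST]`/`<<SYS>>` delimiters; a sketch of both templates follows — the exact strings used in run_localGPT_v2.py are assumptions:

```python
# Llama-2 chat delimiters; the fallback template below is a plain-text guess.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def llama2_prompt(system_prompt: str, user_prompt: str) -> str:
    """Wrap system and user text in Llama-2's instruction delimiters."""
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_prompt} {E_INST}"

def generic_prompt(system_prompt: str, user_prompt: str) -> str:
    """Plain fallback template for models without special chat markup."""
    return f"{system_prompt}\n\nQuestion: {user_prompt}\nAnswer:"
```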
PromptEngineer
0f6b4d1d90
Merge pull request #454 from KonradHoeffner/pr-fix-docker-llama-version
use correct llama-cpp-python version in Dockerfile
2023-09-12 00:12:22 -07:00
Konrad Höffner
b92dcf7a4c use correct llama-cpp-python version in Dockerfile 2023-09-05 12:22:10 +02:00
PromptEngineer
5ab5e1921a
Merge pull request #450 from hauntedness/resume_download
add resume_download as default to hf_hub_download
2023-09-04 17:49:24 -07:00
hauntedness
b2fa076153 When downloading a large model from Hugging Face, a ReadTimeoutError often occurs; add resume_download so an interrupted download continues instead of restarting from scratch 2023-09-03 22:07:28 +08:00
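In `huggingface_hub`, passing `resume_download=True` to `hf_hub_download` lets a retried call continue from the partial file rather than starting over. A sketch of the retry-with-resume idea behind this commit, with the download function injected as a callable so the example stays self-contained (the wrapper and its names are assumptions, not localGPT code):

```python
def download_with_resume(download_fn, repo_id: str, filename: str,
                         max_retries: int = 3) -> str:
    """Retry a resumable download; each retry continues the partial file."""
    last_error = None
    for _ in range(max_retries):
        try:
            return download_fn(repo_id=repo_id, filename=filename,
                               resume_download=True)
        except TimeoutError as err:  # stand-in for requests' ReadTimeoutError
            last_error = err
    raise last_error
```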
PromptEngineer
80a3ffcfe5 adding example constitution.pdf file 2023-08-29 00:47:14 -07:00
PromptEngineer
0b3fa1d8a2
Merge pull request #388 from karthikcs/kcs-upgrade-chroma
Upgrade to latest ChromaDB version
2023-08-28 22:46:55 -07:00
PromptEngineer
afcf32335b
Merge pull request #369 from ChrisAylen/issue-368-Enhancement-Enable-folder-recursion-in-ingest.py
added folder recursion to ingest.py
2023-08-28 22:40:46 -07:00
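The folder-recursion enhancement above makes ingestion descend into subdirectories of the source folder instead of reading only its top level. A sketch of recursive source-document discovery in that spirit — the function name and extension list are assumptions, not the actual ingest.py code:

```python
import os

def collect_source_files(source_dir: str,
                         extensions=(".txt", ".md", ".pdf")) -> list:
    """Walk source_dir recursively and return paths whose extension matches."""
    paths = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            if os.path.splitext(name)[1].lower() in extensions:
                paths.append(os.path.join(root, name))
    return sorted(paths)
```

`os.walk` visits every nested directory, so documents organized into subfolders are picked up without flattening the source tree.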