This is a simple tool to help you create an AI-powered chatbot for your documentation.
What it uses:
- LangChain: for embedding and similarity calculation
- OpenAI GPT-3.5 Turbo: for conversation summarization and response generation
- Streamlit: for the web interface
First, all documents are embedded with LangChain + OpenAI Embeddings to build a vector store.
When a user question arrives, its embedding is computed and the K most similar document paragraphs are retrieved as the background-information part of the prompt.
At the same time, the latest K messages of the conversation history, together with a summary, are retrieved as the historical-messages part of the prompt.
The user's original message is then appended as the final question, and the prompt is sent to OpenAI's model for an answer.
Finally, the question-answer pair is recorded in Memory storage.
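The flow above can be sketched in plain Python. This is an illustration only: the helper names are hypothetical, and the bag-of-words "embedding" is a toy stand-in for the OpenAI embeddings and LangChain vector store the service actually uses.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real service uses OpenAI embeddings.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def top_k(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Retrieve the K document paragraphs most similar to the question.
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(question: str, docs: list[str], history: list[str]) -> str:
    # Assemble the prompt: background paragraphs, conversation history,
    # then the user's original message as the final question.
    background = "\n".join(top_k(question, docs))
    hist = "\n".join(history)
    return (f"Background:\n{background}\n\n"
            f"History:\n{hist}\n\n"
            f"Question: {question}")
```

In the deployed service, the prompt produced this way is sent to OpenAI's model, and the resulting question-answer pair is appended to the memory that feeds the history on the next turn.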
This is a screenshot of the demo:
Follow the steps below to deploy the service.
- Make sure you have installed `docker` and `docker-compose` on your machine.
- Clone this repository.
- Run `cd scripts && bash prepare.sh` to prepare the environment.
- You will be asked to edit the `.env` file. Fill in each field with the correct information. You will also need to edit other files accordingly, including the prompt template.
- Place your documentation files (text files only) in the `data/docs` folder.
- Run `cd scripts && bash install.sh` to install the service.
The service will be deployed at `localhost:<WEB_PORT>`.
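As a rough sketch, a filled-in `.env` might look like the fragment below. Only `WEB_PORT` and `MAGIC_WORD` are mentioned in this document; the other field name is an illustrative assumption, so check the actual template produced by `prepare.sh`.

```
# Port the web interface listens on (8501 is Streamlit's default)
WEB_PORT=8501
# Optional: inputs containing this word bypass the knowledge base
MAGIC_WORD=opensesame
# Assumed field, not confirmed by this README: your OpenAI API key
OPENAI_API_KEY=sk-...
```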
If you defined `MAGIC_WORD` in the `.env` file, then when a user inputs the magic word, the assistant will allow the user to ask any question.
Note: In this mode, the assistant won't use the knowledge base.
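The magic-word gate can be sketched as below. The function name and placement are hypothetical; in the real service the check would live in the request handler.

```python
def build_context(question: str, knowledge_base: list[str],
                  magic_word: "str | None") -> list[str]:
    # If the user's input contains the configured magic word, skip the
    # knowledge base entirely so the assistant may answer any question.
    if magic_word and magic_word in question:
        return []          # unrestricted mode: no background documents
    return knowledge_base  # normal mode: ground answers in the docs
```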