This project integrates the Llama 2 AI model with Telegram to provide real-time chatbot responses, offering seamless interaction, smart replies, and scalable deployment.
- Python: You need to have Python installed on your system.
- Python packages: `telegram`, `json`, `os`, and `asyncio` (`json`, `os`, and `asyncio` ship with the Python standard library, so only the `telegram` package needs installing).
- Llama C++ library: Ensure you've properly installed and set up the `llama_cpp` library. The provided code assumes you have the Llama model stored at `./llama-2-7b.Q4_K_M.gguf`.
- Clone this repository:

  ```bash
  git clone https://github.com/yihong1120/Llama2-Telegram-Bot.git
  cd Llama2-Telegram-Bot
  ```
- Install the necessary Python packages:

  ```bash
  pip install -r requirements.txt
  ```
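If you need to recreate `requirements.txt` yourself, the imports listed in the prerequisites suggest it would minimally contain the following (the exact package names are an assumption based on the PyPI distributions that provide `telegram` and `llama_cpp`, not taken from the repository):

```text
python-telegram-bot
llama-cpp-python
```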
- Replace the `TOKEN` placeholder in the code with your Telegram bot token. This is essential for the bot to function.
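Hard-coding the token works, but a common alternative is to read it from an environment variable so it never lands in version control. A minimal sketch (the variable name `TELEGRAM_BOT_TOKEN` and the `get_token` helper are illustrative, not part of this repository):

```python
import os

def get_token() -> str:
    """Read the Telegram bot token from the environment instead of
    hard-coding it where the TOKEN placeholder sits."""
    token = os.environ.get("TELEGRAM_BOT_TOKEN", "")
    if not token:
        raise RuntimeError("Set TELEGRAM_BOT_TOKEN before starting the bot.")
    return token
```

You would then pass `get_token()` wherever the script currently uses `TOKEN`.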
- Run the script:

  ```bash
  python app.py
  ```
Upon execution, the bot will start listening to incoming messages. Users can start a conversation with the bot on Telegram. The bot will then respond to user messages using the Llama model.
The chatbot keeps track of the last 20 messages per user so that it has relevant context when generating responses. Conversations are saved as JSON files, named according to the user's ID.
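The rolling 20-message window and per-user JSON persistence described above can be sketched as follows (the function names and message format are illustrative; the actual implementation lives in `app.py`):

```python
import json
import os

HISTORY_LIMIT = 20  # the bot keeps only the last 20 messages per user

def append_message(history, role, text, limit=HISTORY_LIMIT):
    # Add the new message, then trim to the most recent `limit` entries
    # so the model always sees a bounded, relevant context.
    history.append({"role": role, "text": text})
    return history[-limit:]

def save_history(user_id, history, directory="."):
    # Conversations are saved as JSON files named after the user's ID.
    path = os.path.join(directory, f"{user_id}.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(history, f, ensure_ascii=False)
    return path

def load_history(user_id, directory="."):
    # Returning an empty list for unknown users starts a fresh conversation.
    path = os.path.join(directory, f"{user_id}.json")
    if not os.path.exists(path):
        return []
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Trimming on every append keeps memory per user constant regardless of how long the conversation runs.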
Feel free to fork this repository and make modifications. Pull requests are welcome!