ollama-bot
Interact with Ollama LLMs using the LXMFy bot framework.
Setup
curl -o .env https://raw.githubusercontent.com/lxmfy/ollama-bot/main/.env-example
Edit .env with your Ollama API URL, model, and LXMF address.
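A minimal .env might look like the following sketch; the exact keys are defined in .env-example, so treat these variable names as illustrative:

```
# Illustrative keys only; check .env-example for the real names
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
LXMF_ADDRESS=<destination hash of your LXMF identity>
```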
Installation and Running
Using Makefile
Requires poetry and make to be installed.
make install
make run
Using pipx
pipx install git+https://github.com/lxmfy/ollama-bot.git
lxmfy-ollama-bot
Using Poetry directly
poetry install
poetry run lxmfy-ollama-bot
Docker
Using Makefile
make docker-pull
make docker-run
Using Docker directly
First, pull the latest image:
docker pull ghcr.io/lxmfy/ollama-bot:latest
Then, run the bot, mounting your .env file:
docker run -d \
--name ollama-bot \
--restart unless-stopped \
--network host \
-v $(pwd)/.env:/app/.env \
ghcr.io/lxmfy/ollama-bot:latest
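If you prefer Docker Compose, the same setup can be expressed as a minimal docker-compose.yml. This is a sketch based on the flags above, not a file shipped with the repo:

```yaml
services:
  ollama-bot:
    image: ghcr.io/lxmfy/ollama-bot:latest
    restart: unless-stopped
    network_mode: host        # same as --network host above
    volumes:
      - ./.env:/app/.env      # mount your configured .env
```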
Commands
Command prefix: /
/help - show help message
/about - show bot information
Chat
Send any message without the / prefix to chat with the AI model.
The bot will automatically respond using the configured Ollama model.
Note: The bot only uses Ollama's /api/generate endpoint, so it won't remember your previous messages; each prompt is answered without conversation history.
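In other words, each incoming message maps to one stateless request to Ollama, roughly like this sketch (a simplified illustration of a /api/generate call, not the bot's actual code; the function and variable names are made up):

```python
import requests

def ask_ollama(api_url: str, model: str, prompt: str) -> str:
    """Send a single stateless prompt to Ollama's /api/generate endpoint."""
    resp = requests.post(
        f"{api_url}/api/generate",  # e.g. http://localhost:11434/api/generate
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # With stream=False, the full completion is returned under "response"
    return resp.json()["response"]
```

Because no prior messages are included in the request, each reply is generated from the current prompt (plus any configured system prompt) alone.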
