A Dockerized Telegram bot that summarizes recent messages in a group
- 🤖 Telegram bot interface
- ⚙️ Command-based interaction
- 📝 Summarize recent messages in group chats
- 🧠 AI-powered summarization
- 🔌 Customizable LLM integration (OpenAI, Ollama, Cloudflare AI, llama.cpp)
- 🎤 Voice message transcription using whisper.cpp or Cloudflare AI Whisper
- 🐳 Docker containerized for easy deployment
| Variable | Description | Default |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | Your Telegram bot token (required) | - |
| `WHITELISTED_CHATS` | Comma-separated list of allowed chat IDs (optional) | - |
| `LLM_PROVIDER` | Explicit LLM provider selection (optional). Valid options: `openai`, `cloudflare`, `ollama`, `llama.cpp`. If not set, auto-detects based on configured credentials (priority: OpenAI, Cloudflare, Ollama, llama.cpp). | - |
| `OPENAI_API_KEY` | Your OpenAI API key (optional, for OpenAI integration) | - |
| `OPENAI_BASE_URL` | Custom OpenAI API base URL (optional, for OpenAI-compatible APIs) | - |
| `OPENAI_MODEL` | OpenAI model to use (optional, for OpenAI integration) | `gpt-4o-mini` |
| `OLLAMA_URI` | URI of the Ollama server (optional) | `http://localhost:11434` |
| `OLLAMA_MODEL` | Model to use with Ollama (optional) | `llama3.1` |
| `CLOUDFLARE_ACCOUNT_ID` | Cloudflare account ID (optional, for Cloudflare AI LLM and STT) | - |
| `CLOUDFLARE_AUTH_KEY` | Cloudflare authorization key (optional, for Cloudflare AI LLM and STT) | - |
| `CLOUDFLARE_MODEL` | Cloudflare model name (optional, for Cloudflare AI) | - |
| `LLAMA_CPP_MODEL_PATH` | Path to your GGUF model file (optional, for local llama.cpp inference) | - |
| `STT_PROVIDER` | Explicit STT provider selection (optional). Valid options: `whisper.cpp`, `cloudflare`. If not set, auto-detects based on configured credentials (whisper.cpp prioritized if available). | - |
| `WHISPER_CPP_MODEL_PATH` | Path to your Whisper GGML model file (optional, for local voice transcription with whisper.cpp) | - |
| `CRON_SCHEDULE` | Cron schedule for automatic summaries, in cron syntax (optional). Set to `never` to disable. | `59 23 * * *` |
| `REDIS_URL` | URL of the Redis server (optional) | `redis://localhost:6379` |
| `MSG_LENGTH_LIMIT` | Minimum message length to trigger automatic summarization | `1000` |
- Open Telegram and search for @BotFather.
- Send the `/newbot` command and follow the instructions.
- Choose a name and username for your bot.
- Save the bot token provided by BotFather.
- Send `/setprivacy` to @BotFather.
- Select your bot from the list.
- Choose "Disable" to allow the bot to read group messages.
- This is required for the bot to access and summarize group messages.
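If you later want to restrict the bot with `WHITELISTED_CHATS`, you need each group's numeric chat ID. One way is the Bot API `getUpdates` method; the pipeline below is just a sketch, and the canned JSON stands in for a real response:

```shell
# Sketch: extract chat IDs from a Telegram getUpdates response.
# Live usage (after sending any message in the group):
#   curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates"
# Group chat IDs are negative numbers. A canned response stands in here.
RESPONSE='{"ok":true,"result":[{"message":{"chat":{"id":-1001234567890,"type":"supergroup"}}}]}'
echo "$RESPONSE" | grep -o '"id":-\{0,1\}[0-9]*' | cut -d: -f2 | sort -u
# → -1001234567890
```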
- For OpenAI: Get your API key from the OpenAI Platform.
- For Ollama: No API key needed, just ensure Ollama is running locally.
- For Cloudflare AI: Get your account ID and auth key from the Cloudflare Dashboard.
- For llama.cpp: Download a GGUF model file (e.g., from Hugging Face) and set the path to it. This enables fully local inference without external API calls.
- For Voice Transcription: Choose one of the following STT (Speech-to-Text) options:
  - whisper.cpp (local): Download a Whisper GGML model from the whisper.cpp models repository and set the path. Available models: `ggml-tiny.bin`, `ggml-base.bin`, `ggml-small.bin`, `ggml-medium.bin`, `ggml-large-v3.bin`. Larger models are more accurate but require more memory (large-v3 needs ~4 GB RAM).
  - Cloudflare AI Whisper: Uses the `@cf/openai/whisper-large-v3-turbo` model. Just configure `CLOUDFLARE_ACCOUNT_ID` and `CLOUDFLARE_AUTH_KEY` (the same credentials used for the LLM). No local model download required.
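For the local whisper.cpp option, a model could be fetched like this; the Hugging Face mirror of the whisper.cpp models is assumed, and the target directory is illustrative:

```shell
# Download a Whisper GGML model for local transcription (illustrative paths).
MODEL=ggml-base.bin
mkdir -p models
curl -L -o "models/${MODEL}" \
  "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/${MODEL}"
# Then set in .env: WHISPER_CPP_MODEL_PATH=/path/to/models/ggml-base.bin
```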
- Create a `.env` file in the project root with your configuration:

```shell
TELEGRAM_BOT_TOKEN=your_bot_token_here

# LLM Configuration (choose one provider)
# LLM_PROVIDER=llama.cpp  # Optional: force specific provider (openai, cloudflare, ollama, llama.cpp)
LLAMA_CPP_MODEL_PATH=/path/to/models/your-model.gguf

# STT Configuration for voice transcription (choose one provider)
# STT_PROVIDER=whisper.cpp  # Optional: force specific provider (whisper.cpp, cloudflare)
WHISPER_CPP_MODEL_PATH=/path/to/models/ggml-base.bin

# Or use Cloudflare AI for both LLM and STT
# CLOUDFLARE_ACCOUNT_ID=your_account_id_here
# CLOUDFLARE_AUTH_KEY=your_auth_key_here
# LLM_PROVIDER=cloudflare
# STT_PROVIDER=cloudflare
```
- Start the bot:

```shell
docker compose -f docker/docker-compose.yml up -d
```

- Stop the bot:

```shell
docker compose -f docker/docker-compose.yml down
```

- Add your bot to any Telegram group where you want to use it.
- The bot will automatically start monitoring messages in the group.
- If you want to restrict the bot to specific groups, add the group IDs to the `WHITELISTED_CHATS` environment variable, separated by commas, and restart the bot.
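A sketch of that restart, assuming the compose file path used above: recreating the container makes it pick up the updated `.env`.

```shell
# Recreate the container so it reads the updated .env, then follow the logs.
docker compose -f docker/docker-compose.yml up -d --force-recreate
docker compose -f docker/docker-compose.yml logs -f
```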
- Send `/summary` in the group chat.
- The bot will analyze all available messages in the chat and provide a concise summary. The message history is automatically cleared after 8 hours of inactivity.
SummaryGram is made with ♥ by derogab and is released under the MIT license.
If you like this project or directly benefit from it, please consider buying me a coffee:
🔗 bc1qd0qatgz8h62uvnr74utwncc6j5ckfz2v2g4lef
⚡️ derogab@sats.mobi
💶 Sponsor on GitHub
