OpenAI-compatible and Gemini-native LLM queries for groups, channels and private chats on Telegram.
- Clone the repo into a working folder: `git clone https://github.com/dhjw/telegram-chatbot && cd telegram-chatbot`
- Set up and enter a virtualenv (optional): `python -m venv venv`, then `source ./venv/bin/activate` (Linux) or `.\venv\Scripts\activate` (Windows)
- Install the requirements into the current environment: `pip install -r requirements.txt`
- Get a bot token from @BotFather
- Get API keys from Google, OpenAI and xAI (you will need credits except for Google; OpenAI credits expire in a year and the mini models are cheap, so $5 is enough to start)
- Copy `config.example.json` to `config.json` and configure it
- Search for your bot and add it to your groups/channels (click the profile name > Add to group), or open a private chat with it
- When you are happy with the active groups, use BotFather's `/setjoingroups` command to prevent the bot from being added to more
- Use the hidden `/id` command to find the `chat_id` for each chat, then add them to `allow_chat_ids` in config.json as an array, e.g. `[-12312312, 123123]`
- When you restart the bot, all other chats will be ignored
- Be careful not to break the strict JSON config file (a quick validation check follows this list)
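If you want a quick sanity check before restarting the bot, Python's built-in `json.tool` module will confirm the file is still valid JSON. This is an optional check, not part of the bot itself:

```sh
# Validate config.json after editing; on failure it prints the line and
# column of the problem instead of the success message.
python -m json.tool config.json > /dev/null && echo "config.json is valid"
```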
There are a few ways to run the bot.
- Activate the venv, as in Setup, and run `python ./bot.py`, or make it executable with `chmod +x ./bot.py` and run it directly: `./bot.py`
- Run it from the venv without activating it: `/path/to/venv/bin/python /path/to/bot.py` (a background-run sketch follows this list)
- If you're not using a venv, run it with the system python3, or make it executable and run it directly
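To keep the bot running after you close the terminal, one low-effort option is `nohup`; a process manager such as systemd, or a tmux/screen session, works just as well. The paths below are placeholders, so point them at your actual venv and checkout:

```sh
# Run the bot in the background and append its output to a log file.
# /path/to/venv and /path/to/telegram-chatbot are placeholders.
nohup /path/to/venv/bin/python /path/to/telegram-chatbot/bot.py >> bot.log 2>&1 &
```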
Clone the repo into a temporary folder and copy its contents over your install, keeping your existing config.json and venv:
```sh
cd /path/to/parent
git clone https://github.com/dhjw/telegram-chatbot tmp
rm -rf tmp/.git
cp -rf tmp/. telegram-chatbot/
rm -rf tmp
```
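If requirements.txt changed in the update, reinstall the dependencies into the existing venv, then restart the bot. This assumes the venv was created inside the checkout as in Setup:

```sh
# Refresh dependencies in the existing venv after an update.
cd telegram-chatbot
./venv/bin/pip install -r requirements.txt
```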
- gemini-2.5-flash is noticeably faster than gemini-2.5-pro and has 250 free requests per day instead of 100
- Gemini grounding + live search (big free tier)
- Grok 3 live search (expensive)
- Media input support
- Treat replies as new requests automatically
- Image generation