A full-stack, edge-deployed AI support agent with a built-in human-in-the-loop dashboard. The AI automatically handles user queries via Telegram, but a human operator can seamlessly take over the conversation in real time through a web interface powered by WebSockets.
Built on Cloudflare Workers with Hono, Durable Objects, and D1 — zero cold starts, globally distributed, and serverless.
Author: Sadiq Ahmed · sadiq@sadiq.is-a.dev
- AI-Powered Auto-Replies — Incoming Telegram messages are answered by an LLM automatically (configurable via any OpenAI-compatible API).
- Human-in-the-Loop Dashboard — Operator web UI to monitor all conversations and jump into any chat in real-time.
- Real-Time WebSocket Sync — Messages from Telegram are broadcast instantly to the dashboard; operator replies are sent back to Telegram.
- Hibernatable Durable Objects — WebSocket connections persist across DO evictions using Cloudflare's hibernation API, keeping costs near zero during idle periods.
- Smart AI / Human Toggle — AI replies can be toggled globally, or set to activate only when the human operator is inactive (auto-detected via WebSocket presence).
- Human-Like Typing Simulation — AI responses are delivered with randomized typing delays and "typing..." indicators to feel natural.
- Chat Search & Pagination — Search across all conversations and paginate through message history.
- Unread Message Indicators — Dashboard shows unseen messages with visual badges.
- Mobile-Responsive Dashboard — The operator UI works on phones with a native-app-like experience.
- Basic Auth Protected — Dashboard and API endpoints are secured behind HTTP Basic Authentication stored in D1.
- Webhook Secret Validation — Telegram webhook requests are verified using a secret token.
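The hibernation feature above relies on Cloudflare's hibernatable WebSocket API: the Durable Object hands each socket to the runtime via `ctx.acceptWebSocket()`, and the runtime re-instantiates the object and invokes `webSocketMessage()` when traffic arrives, so the object can be evicted while connections stay open. The sketch below illustrates the pattern with locally stubbed types so it stays self-contained; the repo's real implementation is `src/ChatHandler.ts` and may differ in detail.

```typescript
// Sketch of the hibernatable-WebSocket pattern (method names follow Cloudflare's
// hibernation API; the interfaces are local stubs, not the real Workers types).

interface HibernatableWS { send(data: string): void }
interface DOContext {
  acceptWebSocket(ws: HibernatableWS): void; // runtime keeps the socket alive across evictions
  getWebSockets(): HibernatableWS[];         // sockets survive hibernation, in-memory state does not
}

class ChatHandlerSketch {
  constructor(private ctx: DOContext) {}

  // On upgrade, hand the socket to the runtime instead of calling ws.accept():
  // the DO can then hibernate at near-zero cost while the connection stays open.
  register(ws: HibernatableWS): void {
    this.ctx.acceptWebSocket(ws);
  }

  // The runtime wakes the DO and calls this when a message arrives,
  // even after hibernation — no long-lived in-memory listener is needed.
  webSocketMessage(ws: HibernatableWS, message: string): void {
    for (const peer of this.ctx.getWebSockets()) {
      if (peer !== ws) peer.send(message); // broadcast to the other dashboards
    }
  }
}
```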
┌──────────────┐ Webhook ┌────────────────────────────────────────────┐
│ Telegram │ ──────────────▶ │ Cloudflare Worker (Hono) │
│ User │ ◀────────────── │ │
└──────────────┘ Bot API Reply │ ┌─────────┐ ┌──────┐ ┌──────────────┐ │
│ │ LLM API │ │ D1 │ │Durable Object│ │
│ │ (Groq) │ │(SQLite│ │ (ChatHandler)│ │
│ └─────────┘ └──────┘ └──────┬───────┘ │
│ │ │
└────────────────────────────────┼──────────┘
│
WebSocket
│
┌──────────▼──────────┐
│ Operator Dashboard │
│ (Browser / Web) │
└─────────────────────┘
- A Telegram user sends a message → Telegram delivers it to the Worker via webhook.
- The Worker upserts the user in D1, then decides whether to generate an AI reply based on the current state (`use_ai_reply`, operator activity).
- If AI is enabled, the Worker calls the configured LLM endpoint (OpenAI-compatible) with conversation history for context.
- The AI reply is sent back to Telegram with a simulated typing delay.
- Simultaneously, the incoming message (and AI response, if any) is broadcast via WebSocket to all connected operator dashboards.
- From the dashboard, a human operator can send a reply — which goes to the Durable Object → Telegram Bot API → user, and is logged in D1.
- When the operator's WebSocket disconnects, their last-active timestamp is recorded. The AI resumes auto-replying after a configurable inactivity window (default: 5 seconds).
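The AI/human hand-off described above can be reduced to a small decision function. This is an illustrative sketch, not the repo's actual code: the function name `shouldAiReply` and the field shapes are assumptions, though the state keys and the 5-second default window come from this README.

```typescript
// Sketch of the AI-vs-human reply decision. Names are illustrative;
// the real logic lives in src/helpers (states.ts / common.ts).

interface AppState {
  use_ai_reply: number;                // 1 = AI auto-replies enabled
  use_ai_only_when_inactive: number;   // 1 = AI replies only when operator is offline
  admin_last_active_at: number | null; // epoch ms of last operator WebSocket activity
}

const OPERATOR_INACTIVITY_MS = 5_000; // default inactivity window from this README

function shouldAiReply(state: AppState, now: number): boolean {
  if (state.use_ai_reply !== 1) return false;              // AI globally disabled
  if (state.use_ai_only_when_inactive !== 1) return true;  // AI always replies
  if (state.admin_last_active_at === null) return true;    // operator never connected
  // AI takes over only after the operator has been idle past the window
  return now - state.admin_last_active_at >= OPERATOR_INACTIVITY_MS;
}
```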
ai-support-agent/
├── public/
│ ├── scripts/
│ │ └── chat.js # Client-side WebSocket logic for the dashboard
│ └── robots.txt
├── scripts/
│ ├── models-gen.mjs # Auto-generates models/index.ts barrel file
│ └── webhook-setup.js # Interactive Telegram webhook registration
├── src/
│ ├── app.ts # Hono app entry — routes & webhook handler
│ ├── ChatHandler.ts # Durable Object — WebSocket server & chat actions
│ ├── configs.ts # All configurable constants (LLM, pagination, prompts)
│ ├── utils.ts # JSON helpers, base64, IP utilities
│ ├── database/
│ │ ├── factory.ts # Drizzle ORM D1 connection factory
│ │ └── migrations/ # Drizzle-generated SQL migrations
│ ├── helpers/
│ │ ├── chats.ts # Chat & message history queries
│ │ ├── common.ts # askLLM(), chatServer(), pagination utilities
│ │ ├── states.ts # App state (KV-like) get/set/init via D1
│ │ └── telegram.ts # Telegram Bot API helpers (send, typing, humanReply)
│ ├── middlewares/
│ │ ├── BasicAuth.ts # HTTP Basic Auth middleware (credentials in D1)
│ │ └── TgSecretCheck.ts # Telegram webhook secret verification
│ ├── models/
│ │ ├── BasicAuthCredentials.ts
│ │ ├── ChatLog.ts
│ │ ├── ChatUser.ts
│ │ ├── State.ts
│ │ └── index.ts # Auto-generated barrel export
│ ├── pages/
│ │ └── chat.tsx # Hono JSX — operator dashboard HTML
│ └── types/
│ └── webhookupdate.ts # Telegram webhook payload types
├── drizzle.config.ts
├── wrangler.jsonc # Cloudflare Worker configuration
├── tsconfig.json
└── package.json
| Layer | Technology |
|---|---|
| Runtime | Cloudflare Workers |
| Framework | Hono (with JSX for server-rendered pages) |
| Database | Cloudflare D1 (SQLite) |
| ORM | Drizzle ORM |
| Real-Time | Durable Objects with Hibernatable WebSockets |
| AI / LLM | Any OpenAI-compatible API (default: Groq) |
| Bot Platform | Telegram Bot API (webhooks) |
| Package Mgr | Bun |
- Bun (v1.0+)
- Wrangler CLI (installed as a dev dependency)
- A Cloudflare account with Workers & D1 enabled
- A Telegram Bot Token
- An API key for an OpenAI-compatible LLM provider (e.g., Groq)
```sh
git clone https://github.com/your-username/ai-support-agent.git
cd ai-support-agent
bun install
```

Create the D1 database:

```sh
bunx wrangler d1 create ai-support-agent
```

Copy the `database_id` from the output.
Open `wrangler.jsonc` and fill in the placeholders (an example snippet with the required fields appears at the end of this README).
Tip: For production, use `wrangler secret put` instead of plaintext vars for sensitive values like `LLM_API_KEY` and `TELEGRAM_TOKEN`.
Generate and apply the schema:

```sh
bun run db:generate   # generates SQL from Drizzle models
bun run db:migrate    # applies migrations locally
```

For production:

```sh
bunx wrangler d1 migrations apply ai-support-agent --remote
```

Next, register the Telegram webhook:

```sh
bun run webhook-set
```

This interactive script will:
- Ask for your bot token and worker endpoint (e.g., `https://ai-support-agent.<your-subdomain>.workers.dev/webhook-x`)
- Register the webhook with Telegram
- Output a webhook secret — copy this into `wrangler.jsonc` → `TELEGRAM_WEBHOOK_SECRET`
Insert a Basic Auth user directly into D1:

```sh
bunx wrangler d1 execute ai-support-agent --local --command \
  "INSERT INTO basic_auth_credentials (user, password) VALUES ('admin', 'your-secure-password');"
```

For production, use `--remote` instead of `--local`.
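These credentials are what the browser sends on every dashboard request: the Basic scheme is simply `base64(user + ":" + password)` in the `Authorization` header. The helpers below are an illustrative sketch of that encoding and its inverse; the repo's actual middleware is `src/middlewares/BasicAuth.ts`.

```typescript
// Sketch: HTTP Basic Auth header construction and parsing (RFC 7617 scheme).
// Illustrative only — the real middleware lives in src/middlewares/BasicAuth.ts.

function basicAuthHeader(user: string, password: string): string {
  const token = Buffer.from(`${user}:${password}`, "utf8").toString("base64");
  return `Basic ${token}`;
}

function parseBasicAuth(header: string): { user: string; password: string } | null {
  const [scheme, token] = header.split(" ");
  if (scheme !== "Basic" || !token) return null;
  const decoded = Buffer.from(token, "base64").toString("utf8");
  const i = decoded.indexOf(":"); // split on the FIRST colon: passwords may contain ':'
  if (i < 0) return null;
  return { user: decoded.slice(0, i), password: decoded.slice(i + 1) };
}
```

In a Workers environment, `btoa()`/`atob()` can be used instead of `Buffer`.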
```sh
bun run dev
```

The worker runs locally at `http://localhost:8787`.

- Dashboard: `http://localhost:8787/chats` (requires Basic Auth)
- Webhook: `POST http://localhost:8787/webhook-x`
```sh
bun run deploy
```

All tunable constants live in `src/configs.ts`:
| Constant | Default | Description |
|---|---|---|
| `LLM_ENDPOINT` | `https://api.groq.com/openai/v1/chat/completions` | OpenAI-compatible chat completions endpoint |
| `LLM_MODEL` | `openai/gpt-oss-120b` | Model identifier sent to the LLM API |
| `LLM_MAX_CHAT_HISTORY` | `15` | Max previous messages sent as context to the LLM |
| `LLM_SYSTEM_PROMPT` | (see file) | System prompt defining the AI agent's personality |
| `MAX_MESSAGE_HISTORY` | `20` | Messages per page in the dashboard |
| `MAX_CHAT_HISTORY` | `20` | Chats per page in the sidebar |
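To illustrate how `LLM_MAX_CHAT_HISTORY` bounds the context window, a request body for an OpenAI-compatible `/chat/completions` endpoint might be assembled as below. This is a sketch: the helper name `buildLlmRequest` and the placeholder prompt are assumptions, while the constant names and defaults come from the table above (the repo's actual `askLLM()` lives in `src/helpers/common.ts`).

```typescript
// Sketch: build an OpenAI-compatible chat-completions request body, keeping
// only the most recent LLM_MAX_CHAT_HISTORY turns as context.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const LLM_MODEL = "openai/gpt-oss-120b";
const LLM_MAX_CHAT_HISTORY = 15;
const LLM_SYSTEM_PROMPT = "You are a helpful support agent."; // placeholder prompt

function buildLlmRequest(history: ChatMessage[], userText: string) {
  const recent = history.slice(-LLM_MAX_CHAT_HISTORY); // drop the oldest turns
  return {
    model: LLM_MODEL,
    messages: [
      { role: "system", content: LLM_SYSTEM_PROMPT },
      ...recent,
      { role: "user", content: userText },
    ],
  };
}
```

The resulting object would be `POST`ed as JSON to `LLM_ENDPOINT` with a `Bearer` key.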
These are stored in D1 and can be toggled via the API:

| State Key | Default | Description |
|---|---|---|
| `use_ai_reply` | `1` | `1` = AI auto-replies enabled, `0` = disabled |
| `use_ai_only_when_inactive` | `1` | `1` = AI replies only when the operator is offline |
| `admin_last_active_at` | — | Timestamp of last operator WebSocket activity |
Toggle AI via the API:

```sh
# Disable AI replies
curl -u admin:password "https://your-worker.dev/chat/options?use_ai_reply=0"

# Enable AI only when operator is inactive
curl -u admin:password "https://your-worker.dev/chat/options?use_ai_only_when_inactive=1"
```

| Method | Path | Auth | Description |
|---|---|---|---|
| `POST` | `/webhook-x` | TG Secret | Telegram webhook endpoint |
| `GET` | `/chats` | Basic Auth | Operator dashboard (HTML) |
| `GET` | `/chat/options` | Basic Auth | Toggle AI reply settings (query params) |
| `GET` | `/chat/socket` | Basic Auth | WebSocket upgrade for real-time chat |
Once connected to `/chat/socket`, the client communicates via JSON messages:

| Action | Direction | Payload | Description |
|---|---|---|---|
| `loadChats` | Client → Server | `{ action, page?, limit?, search? }` | Fetch paginated chat list |
| `loadMessages` | Client → Server | `{ action, chat_id, page?, limit? }` | Fetch message history for a chat |
| `sendMessage` | Client → Server | `{ action, chat_user_id, chat_id, text }` | Send a reply to a Telegram user |
| `receiveMessage` | Server → Client | `{ action, data: { chat_id, message, ... } }` | Broadcast incoming Telegram message |
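A minimal client for this protocol could look like the sketch below. The action names and payload shapes follow the table above; the `encode`/`decode` helpers are illustrative (the repo's real client is `public/scripts/chat.js`), and the browser `WebSocket` wiring is shown in comments.

```typescript
// Sketch: typed builders for the JSON action messages in the table above.

type ClientAction =
  | { action: "loadChats"; page?: number; limit?: number; search?: string }
  | { action: "loadMessages"; chat_id: string; page?: number; limit?: number }
  | { action: "sendMessage"; chat_user_id: number; chat_id: string; text: string };

function encode(msg: ClientAction): string {
  return JSON.stringify(msg);
}

function decode(raw: string): { action: string } & Record<string, any> {
  return JSON.parse(raw);
}

// Usage in the dashboard (browser):
//   const ws = new WebSocket("wss://your-worker.dev/chat/socket");
//   ws.onopen = () => ws.send(encode({ action: "loadChats", page: 1 }));
//   ws.onmessage = (ev) => {
//     const msg = decode(ev.data);
//     if (msg.action === "receiveMessage") { /* render incoming message */ }
//   };
```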
Four tables managed by Drizzle ORM:

`chat_users` — Telegram users who have messaged the bot

| Column | Type | Notes |
|---|---|---|
| `id` | INTEGER | Primary key, auto-increment |
| `chat_id` | TEXT | Unique, Telegram chat ID |
| `user_id` | TEXT | Telegram user ID |
| `username` | TEXT | Telegram username |
| `first_name` / `last_name` | TEXT | User display name |
| `last_message` | TEXT | Preview of last message |
| `last_message_at` | TEXT | Timestamp (epoch ms) |
| `last_message_is_client` | INTEGER | `1` = from user, `0` = reply |
| `last_message_seen_at` | TEXT | Null if unread |
`chat_logs` — Full message history

| Column | Type | Notes |
|---|---|---|
| `id` | INTEGER | Primary key, auto-increment |
| `chat_user_id` | INTEGER | FK → `chat_users.id` |
| `message` | TEXT | User's message |
| `response` | TEXT | AI or operator reply |
| `is_ai_response` | INTEGER | `1` = AI-generated, `0` = human |
| `created_at` | TEXT | Timestamp (epoch ms) |

`states` — Key-value configuration store

| Column | Type | Notes |
|---|---|---|
| `id` | INTEGER | Primary key |
| `key` | TEXT | Unique config key |
| `value` | TEXT | Config value |

`basic_auth_credentials` — Dashboard login credentials

| Column | Type | Notes |
|---|---|---|
| `id` | INTEGER | Primary key |
| `user` | TEXT | Unique username |
| `password` | TEXT | Plaintext password |
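The dashboard's unread badge (listed under Features) can be derived directly from two `chat_users` columns above: the last message came from the Telegram user and the operator has not yet viewed it. A tiny illustrative helper (the function name is hypothetical; the column names match the schema):

```typescript
// Sketch: derive the unread-badge flag from chat_users columns.
// Column names follow the schema table above; the helper itself is illustrative.

interface ChatUserRow {
  last_message_is_client: number;      // 1 = last message came from the Telegram user
  last_message_seen_at: string | null; // null until the operator views the chat
}

function isUnread(row: ChatUserRow): boolean {
  return row.last_message_is_client === 1 && row.last_message_seen_at === null;
}
```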
| Command | Description |
|---|---|
| `bun run dev` | Start local development server (Wrangler) |
| `bun run deploy` | Deploy to Cloudflare Workers (minified) |
| `bun run db:generate` | Auto-generate model barrel + Drizzle SQL migrations |
| `bun run db:migrate` | Apply migrations to local D1 |
| `bun run webhook-set` | Interactive Telegram webhook setup |
| `bun run cf-typegen` | Generate TypeScript types for Cloudflare bindings |
- Webhook verification — Every incoming Telegram request is validated against the `TELEGRAM_WEBHOOK_SECRET` header.
- Dashboard auth — All operator endpoints (`/chats`, `/chat/socket`, `/chat/options`) require HTTP Basic Auth.
- Secrets management — For production, use `wrangler secret put` instead of storing sensitive values as plaintext vars in `wrangler.jsonc` (less critical if the file is never exposed).
- Passwords — `basic_auth_credentials` stores plaintext passwords. Consider hashing for production use (planned for a future release).
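Telegram delivers the webhook secret in the `X-Telegram-Bot-Api-Secret-Token` request header (per the Bot API's `setWebhook` `secret_token` option), so the verification is essentially a header comparison. The sketch below is illustrative; the repo's real middleware is `src/middlewares/TgSecretCheck.ts`.

```typescript
// Sketch: verify Telegram's webhook secret header.
// Header name is from the Telegram Bot API docs; the function shape is illustrative —
// the real check lives in src/middlewares/TgSecretCheck.ts.

function isValidTelegramRequest(
  headers: Map<string, string>, // lowercase header names, as Workers normalizes them
  expectedSecret: string,
): boolean {
  const got = headers.get("x-telegram-bot-api-secret-token");
  return got !== undefined && got === expectedSecret;
}
```

In production one might prefer a constant-time comparison (e.g. Node's `crypto.timingSafeEqual`) over `===` to avoid timing side channels.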
Sadiq Ahmed
- 🌐 Website: sadiq.is-a.dev
- 📧 Email: sadiq@sadiq.is-a.dev
- 🐙 GitHub: @sadiq-bd
Copyright © 2026 Sadiq Ahmed. All rights reserved.
This project is open source. See LICENSE for details.

Example `wrangler.jsonc` placeholders:

```jsonc
{
  "d1_databases": [
    {
      "database_name": "ai-support-agent",
      "database_id": "<your-d1-database-uuid>" // from step 2
    }
  ],
  "vars": {
    "LLM_API_KEY": "<your-llm-api-key>",           // e.g. Groq API key
    "TELEGRAM_TOKEN": "<your-telegram-bot-token>", // from BotFather
    "TELEGRAM_WEBHOOK_SECRET": "<generated-later>" // from step 5
  }
}
```