A fast, streaming-first AI agent framework built in Rust — connect any platform to any LLM with built-in memory, skills, and self-evolution.
- Multi-platform channels — Telegram, Discord, OpenAI-compatible HTTP API
- Streaming responses — SSE streaming for real-time token delivery
- Tool system — shell, web, filesystem, cron, search, message, spawn
- Agent loop — context management, memory, hooks, and context compaction
- Sub-agent spawning — parallel agent tasks via tokio JoinSet
- Cron scheduling — tick-based scheduler with JSON state persistence
- Health checks — registry-based checks with auto-restart and exponential backoff
- Skill system — TOML manifests, hot-reload, `SkillCompiler`, runtime skill injection
- Tiered memory — `MemoryStore` trait with `HotStore` (L1 in-memory) and `WarmStore` (L2 LanceDB vectors)
- Learning & evolution — `LearningEvent` bus, event processors, prompt assembly from observations
- Provider resilience — automatic retry with exponential backoff on 429s
- SSRF protection — network allowlist/denylist, URL validation, sandboxed exec
- Native daemon mode — double-fork daemonization, PID file with flock, signal handling (SIGTERM/SIGINT/SIGHUP), graceful shutdown with log flushing, log rotation (daily)
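The retry-with-backoff behavior noted under provider resilience follows a standard pattern: double the wait after each 429, up to a cap. A minimal sketch, where the constant values and the names `backoff_delay`, `BASE_DELAY_MS`, and `MAX_DELAY_MS` are illustrative assumptions rather than kestrel's actual API:

```rust
use std::time::Duration;

// Hypothetical tuning values for illustration; the real crate may differ.
const BASE_DELAY_MS: u64 = 500;
const MAX_DELAY_MS: u64 = 30_000;

/// Delay before retry number `attempt` (0-based): doubles each time, capped.
fn backoff_delay(attempt: u32) -> Duration {
    let ms = BASE_DELAY_MS.saturating_mul(1u64 << attempt.min(6));
    Duration::from_millis(ms.min(MAX_DELAY_MS))
}

fn main() {
    // A 429 response would trigger waits of 500 ms, 1 s, 2 s, 4 s, ...
    for attempt in 0..5 {
        println!("retry {attempt}: wait {:?}", backoff_delay(attempt));
    }
}
```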
```
                ┌──────────────────────────────┐
                │          CLI (clap)          │
                │  agent · gateway · serve ·   │
                │ daemon · heartbeat · setup · │
                │            status            │
                └──────────────┬───────────────┘
                               │
        ┌──────────────────────┼───────────────────────┐
        │                      │                       │
┌───────▼──────┐    ┌──────────▼──────────┐    ┌───────▼──────┐
│   Telegram   │    │       Gateway       │    │  API Server  │
│  (polling)   │    │  (ChannelManager)   │    │    (Axum)    │
└───────┬──────┘    └──────────┬──────────┘    └───────┬──────┘
┌───────┴──────┐               │                       │
│   Discord    │               │                       │
│ (WebSocket)  │               │                       │
└───────┬──────┘               │                       │
        │                      │                       │
        └──────────────────────┼───────────────────────┘
                               │
                InboundMessage │ Bus (tokio broadcast)
                               │
                      ┌────────▼────────┐
                      │   Agent Loop    │
                      │  ┌────────────┐ │
                      │  │  Context   │ │
                      │  │   Memory   │ │
                      │  │   Skills   │ │
                      │  │   Hooks    │ │
                      │  └─────┬──────┘ │
                      └────────┼────────┘
                               │
                  ┌────────────┼────────────┐
                  │            │            │
          ┌───────▼──────┐  ┌──▼───────┐ ┌──▼────────────┐
          │  Providers   │  │  Tools   │ │  Sub-agents   │
          │              │  │          │ │               │
          │ · OpenAI     │  │ · shell  │ │ · parallel    │
          │ · Anthropic  │  │ · web    │ │   spawning    │
          │ · DeepSeek   │  │ · fs     │ │ · isolated    │
          │ · Groq       │  │ · cron   │ │   contexts    │
          │ · Ollama     │  │ · search │ │               │
          └──────────────┘  │ · spawn  │ └───────────────┘
                            └──────────┘
                               │
               OutboundMessage │ Bus
                               │
                      ┌────────▼────────┐
                      │    Channel →    │
                      │  User Response  │
                      └─────────────────┘

── Evolution Layer ────────────────────────────────────
LearningEvent → EventBus → Processors → (SkillCreate / MemoryUpdate / PromptAdjust)

── Foundation Layer ───────────────────────────────────
kestrel-core · kestrel-config · kestrel-bus
kestrel-session · kestrel-security · kestrel-providers
kestrel-cron · kestrel-heartbeat · kestrel-daemon
kestrel-memory · kestrel-skill · kestrel-learning
```
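The inbound and outbound buses in the architecture fan each message out to every subscriber; kestrel uses tokio's broadcast channel for this. The std-only sketch below shows the same fan-out semantics without pulling in tokio. The `Bus` type and the fields of `InboundMessage` are hypothetical, chosen only to illustrate the pattern:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical message shape; the real InboundMessage fields are not shown in
// this README and may differ.
#[derive(Clone, Debug)]
struct InboundMessage {
    platform: String,
    text: String,
}

// Minimal fan-out bus: every subscriber receives a clone of each published
// message, mirroring tokio::sync::broadcast semantics in std-only form.
struct Bus {
    subscribers: Vec<Sender<InboundMessage>>,
}

impl Bus {
    fn new() -> Self {
        Bus { subscribers: Vec::new() }
    }

    fn subscribe(&mut self) -> Receiver<InboundMessage> {
        let (tx, rx) = channel();
        self.subscribers.push(tx);
        rx
    }

    fn publish(&self, msg: InboundMessage) {
        for tx in &self.subscribers {
            let _ = tx.send(msg.clone()); // ignore disconnected subscribers
        }
    }
}

fn main() {
    let mut bus = Bus::new();
    let agent_rx = bus.subscribe();
    let logger_rx = bus.subscribe();

    bus.publish(InboundMessage {
        platform: "telegram".into(),
        text: "hello".into(),
    });

    // Both subscribers observe the same message.
    assert_eq!(agent_rx.recv().unwrap().text, "hello");
    assert_eq!(logger_rx.recv().unwrap().text, "hello");
    println!("fan-out ok");
}
```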
```bash
cargo build --release
kestrel setup
# Edit ~/.kestrel/config.yaml with your API keys

# Interactive agent (one-shot)
kestrel agent "Summarize the latest commits"

# Start gateway (Telegram + Discord)
kestrel gateway

# Start API server
kestrel serve --port 8080

# Periodic health checking
kestrel heartbeat

# Show system status
kestrel status

# Start as daemon (background, double-fork, PID file + flock)
kestrel daemon start

# Check status (auto-cleans stale PID files from crashed instances)
kestrel daemon status

# Stop gracefully (SIGTERM, configurable grace period)
kestrel daemon stop

# Restart (stop + re-exec)
kestrel daemon restart
```

The environment variable `KESTREL_HOME` overrides the default config directory (`~/.kestrel`).
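The `KESTREL_HOME` override can be sketched as a simple two-step lookup. The helper name `config_dir` and its signature are illustrative assumptions, not kestrel-config's actual function:

```rust
use std::env;
use std::path::PathBuf;

/// Illustrative resolution of the config directory: KESTREL_HOME wins when
/// set, otherwise fall back to ~/.kestrel under the user's home directory.
fn config_dir(kestrel_home: Option<&str>, home: &str) -> PathBuf {
    match kestrel_home {
        Some(dir) => PathBuf::from(dir),
        None => PathBuf::from(home).join(".kestrel"),
    }
}

fn main() {
    let dir = config_dir(
        env::var("KESTREL_HOME").ok().as_deref(),
        &env::var("HOME").unwrap_or_default(),
    );
    println!("config dir: {}", dir.display());
}
```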
```yaml
# ~/.kestrel/config.yaml
providers:
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
    base_url: https://api.openai.com/v1  # optional: point to any OpenAI-compatible API
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-sonnet-4-6

channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
    enabled: true
  discord:
    token: ${DISCORD_BOT_TOKEN}
    enabled: true

agent:
  model: gpt-4o
  temperature: 0.7
  max_tokens: 4096
  streaming: true

security:
  network:
    deny:
      - "10.0.0.0/8"
      - "172.16.0.0/12"
      - "192.168.0.0/16"

daemon:
  pid_file: ~/.kestrel/kestrel.pid
  log_dir: ~/.kestrel/logs
  working_directory: /
  grace_period_secs: 30
```

| Command | Description |
|---|---|
| `agent` | Interactive agent — send a message and get a response |
| `gateway` | Start the gateway — connect to Telegram, Discord, etc. |
| `serve` | OpenAI-compatible HTTP API server (Axum) |
| `heartbeat` | Periodic health checking with auto-restart |
| `health` | Show health check status |
| `cron list` | List all cron jobs |
| `cron status` | Show status of a specific cron job |
| `config validate` | Validate the config.yaml schema |
| `config migrate` | Migrate Python kestrel config to kestrel format |
| `setup` | Interactive configuration wizard |
| `status` | Show current configuration and system status |
| `daemon start/stop/restart/status` | Native Unix daemon: double-fork, PID file (flock), SIGTERM/SIGINT/SIGHUP, log rotation |
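The `security.network.deny` CIDR list in the configuration above is the input to the SSRF check: before any outbound request, the resolved IP is tested against each denied range. A self-contained sketch of that containment test, where `in_cidr` and `is_denied` are illustrative names rather than kestrel-security's API:

```rust
use std::net::Ipv4Addr;

/// True if `ip` falls inside `cidr` (e.g. "10.0.0.0/8"). Illustrative only;
/// real code would return errors instead of panicking on malformed input.
fn in_cidr(ip: Ipv4Addr, cidr: &str) -> bool {
    let (net, bits) = cidr.split_once('/').expect("cidr like 10.0.0.0/8");
    let net: Ipv4Addr = net.parse().expect("valid IPv4 network");
    let bits: u32 = bits.parse().expect("valid prefix length");
    // Build the network mask from the prefix length and compare prefixes.
    let mask = if bits == 0 { 0 } else { u32::MAX << (32 - bits) };
    (u32::from(ip) & mask) == (u32::from(net) & mask)
}

fn is_denied(ip: Ipv4Addr, deny: &[&str]) -> bool {
    deny.iter().any(|cidr| in_cidr(ip, cidr))
}

fn main() {
    let deny = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"];
    assert!(is_denied("192.168.1.5".parse().unwrap(), &deny));
    assert!(!is_denied("93.184.216.34".parse().unwrap(), &deny));
    println!("denylist check ok");
}
```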
| Crate | Description |
|---|---|
| `kestrel-core` | Error types, constants, core types (MessageType, Platform) |
| `kestrel-config` | YAML config loading, schema validation, path resolution |
| `kestrel-bus` | Tokio broadcast-based async message bus |
| `kestrel-session` | SQLite-backed session and conversation store |
| `kestrel-security` | Network allowlist/denylist, command approval, SSRF protection |
| `kestrel-providers` | LLM provider trait — OpenAI-compatible and Anthropic SSE streaming |
| `kestrel-tools` | Tool registry + builtins (shell, web, fs, search, cron, spawn, message) |
| `kestrel-agent` | Agent loop, context builder, memory, skills, hooks, sub-agents |
| `kestrel-cron` | Tick-based cron scheduler with JSON state persistence |
| `kestrel-heartbeat` | Health check registry, periodic task monitoring, auto-restart |
| `kestrel-channels` | Platform adapters — Telegram, Discord — via ChannelManager |
| `kestrel-api` | OpenAI-compatible HTTP API server (Axum) |
| `kestrel-daemon` | Unix daemon: double-fork, PID file (flock), signal handling, file logging |
| `kestrel-memory` | MemoryStore trait, HotStore (L1 in-memory), WarmStore/LanceDB (L2 vectors) |
| `kestrel-skill` | Skill trait, TOML manifests, SkillRegistry, SkillCompiler |
| `kestrel-learning` | LearningEvent bus, event processors, prompt assembly |
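The two-tier memory described for `kestrel-memory` (HotStore as L1, WarmStore as L2) implies a recall path that checks the fast tier first and promotes misses. The sketch below uses the names from the crate table, but the trait shape and the promote-on-recall policy are assumptions for illustration, with a plain `HashMap` standing in for the LanceDB-backed store:

```rust
use std::collections::HashMap;

// Assumed trait shape; the real MemoryStore trait may differ.
trait MemoryStore {
    fn get(&self, key: &str) -> Option<String>;
    fn put(&mut self, key: &str, value: &str);
}

/// L1: fast in-memory cache.
struct HotStore(HashMap<String, String>);
/// L2: stands in for the LanceDB-backed vector store.
struct WarmStore(HashMap<String, String>);

impl MemoryStore for HotStore {
    fn get(&self, key: &str) -> Option<String> { self.0.get(key).cloned() }
    fn put(&mut self, key: &str, value: &str) { self.0.insert(key.into(), value.into()); }
}

impl MemoryStore for WarmStore {
    fn get(&self, key: &str) -> Option<String> { self.0.get(key).cloned() }
    fn put(&mut self, key: &str, value: &str) { self.0.insert(key.into(), value.into()); }
}

struct TieredMemory { hot: HotStore, warm: WarmStore }

impl TieredMemory {
    /// Check L1 first; on a miss, fall back to L2 and promote the hit into L1.
    fn recall(&mut self, key: &str) -> Option<String> {
        if let Some(v) = self.hot.get(key) {
            return Some(v);
        }
        let v = self.warm.get(key)?;
        self.hot.put(key, &v);
        Some(v)
    }
}

fn main() {
    let mut mem = TieredMemory {
        hot: HotStore(HashMap::new()),
        warm: WarmStore(HashMap::new()),
    };
    mem.warm.put("user.timezone", "UTC+2");
    assert!(mem.hot.get("user.timezone").is_none()); // not yet in L1
    assert_eq!(mem.recall("user.timezone").as_deref(), Some("UTC+2"));
    assert!(mem.hot.get("user.timezone").is_some()); // promoted after recall
    println!("tiered recall ok");
}
```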
| Metric | Value |
|---|---|
| Rust source files | 126 |
| Lines of Rust code | ~62,800 |
| Crates | 16 |
| Minimum Rust version | 1.75 |
```bash
# Build everything
cargo build --workspace

# Run all tests
cargo test --workspace

# Lint (must pass with 0 warnings)
cargo clippy --workspace -- -D warnings

# Format check
cargo fmt --all --check

# Quick compile check
cargo check
```

- Thin harness, fat skills — Harness handles the loop, files, context, and safety. Complexity lives in skill files.
- Latent vs deterministic — Judgment goes to the model; parsing and validation stay in code. Never mix the two.
- Context engineering — JIT loading, compaction, and structured notes to stay within the context window.
- Fewer, better tools — Consolidated operations with token-efficient returns and poka-yoke defaults.
- LanceDB over SQLite FTS5 — Semantic vector search for memory and session recall.
- TOML over YAML — Rust-native parsing for skill manifests and configuration.
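The "latent vs deterministic" principle above can be made concrete: the model's free-text judgment is always passed through strict parsing and validation before use. A minimal sketch, where `parse_priority` is an invented example function, not part of kestrel:

```rust
/// Deterministic side of the split: never trust raw model text. Accept only
/// an integer priority in 1..=5; everything else is rejected in code.
fn parse_priority(llm_output: &str) -> Result<u8, String> {
    let n: u8 = llm_output
        .trim()
        .parse()
        .map_err(|_| format!("not a number: {llm_output:?}"))?;
    if (1..=5).contains(&n) {
        Ok(n)
    } else {
        Err(format!("priority out of range: {n}"))
    }
}

fn main() {
    assert_eq!(parse_priority("  3 "), Ok(3));       // model output, validated
    assert!(parse_priority("urgent").is_err());       // judgment without a number
    assert!(parse_priority("9").is_err());            // number outside the schema
    println!("validation ok");
}
```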
- Fork the repository and create a feature branch.
- Ensure `cargo test --workspace` and `cargo clippy --workspace` pass with zero warnings.
- Add tests for any new functionality — assertions must be deterministic (no LLM output in test expectations).
- Add `///` doc comments on all `pub` functions.
- Open a pull request against `main`.
CI runs format checks, clippy, build, tests, and a security audit on every push.
This project is licensed under the MIT License.