Professional documentation for the BaseAgent autonomous coding assistant
BaseAgent is a high-performance autonomous agent designed for the Term Challenge. It leverages LLM-driven decision making with advanced context management and cost optimization techniques.
- Overview - What is BaseAgent and core design principles
- Installation - Prerequisites and setup instructions
- Quick Start - Your first task in 5 minutes
- Architecture - Technical architecture and system design
- Configuration - All configuration options explained
- Usage Guide - Command-line interface and options
- Tools Reference - Available tools and their parameters
- Context Management - Token management and compaction
- Best Practices - Optimal usage patterns
- Chutes API Integration - Using Chutes as your LLM provider
```mermaid
graph TB
    subgraph User["User Interface"]
        CLI["CLI (agent.py)"]
    end
    subgraph Core["Core Engine"]
        Loop["Agent Loop"]
        Context["Context Manager"]
        Cache["Prompt Cache"]
    end
    subgraph LLM["LLM Layer"]
        Client["LiteLLM Client"]
        Provider["Provider (Chutes/OpenRouter)"]
    end
    subgraph Tools["Tool System"]
        Registry["Tool Registry"]
        Shell["shell_command"]
        Files["read_file / write_file"]
        Search["grep_files / list_dir"]
    end
    CLI --> Loop
    Loop --> Context
    Loop --> Cache
    Loop --> Client
    Client --> Provider
    Loop --> Registry
    Registry --> Shell
    Registry --> Files
    Registry --> Search
```
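The flow in the diagram can be sketched as a minimal agent loop: the loop sends the conversation to the LLM client, dispatches any tool call through the registry, and feeds the result back. This is an illustrative sketch only; names such as `ToolRegistry` and `agent_loop` and the reply shape are assumptions, not BaseAgent's actual API.

```python
# Illustrative sketch of the CLI -> Loop -> Registry flow shown above.
# Names and reply shapes are assumptions; BaseAgent's real modules may differ.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolRegistry:
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def dispatch(self, name: str, args: dict) -> str:
        # Look up the tool by name and call it with the LLM-supplied arguments.
        return self.tools[name](**args)

def agent_loop(llm, registry: ToolRegistry, task: str, max_steps: int = 10) -> str:
    """Send the conversation to the LLM; execute any tool call it returns."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = llm(messages)              # provider call (e.g. via LiteLLM)
        if reply.get("tool") is None:      # no tool requested -> final answer
            return reply["content"]
        result = registry.dispatch(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": result})
    return "max steps reached"
```

A stubbed `llm` callable that first requests a tool and then returns a final answer is enough to exercise the loop end to end without a real provider.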
- Fully Autonomous - No user confirmation required; makes decisions independently
- LLM-Driven - All decisions made by the language model, not hardcoded logic
- Prompt Caching - 90%+ cache hit rate for significant cost reduction
- Context Management - Intelligent pruning and compaction for long tasks
- Self-Verification - Automatic validation before task completion
- Multi-Provider - Supports Chutes AI, OpenRouter, and any LiteLLM-compatible provider
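High cache hit rates like the one claimed above generally depend on keeping the prompt prefix byte-identical across turns, so the provider can reuse previously computed state. The sketch below shows that idea under stated assumptions: `SYSTEM_PROMPT`, `build_messages`, and `cache_key` are hypothetical names, not BaseAgent's actual implementation.

```python
# Sketch of prefix-stable message construction, one common way to keep
# provider-side prompt caches warm. All names here are illustrative.
import hashlib

SYSTEM_PROMPT = "You are BaseAgent..."  # kept byte-identical across turns

def build_messages(history: list[dict], user_turn: str) -> list[dict]:
    """Stable prefix (system prompt + prior history) first; only the tail changes."""
    return [{"role": "system", "content": SYSTEM_PROMPT},
            *history,
            {"role": "user", "content": user_turn}]

def cache_key(messages: list[dict]) -> str:
    """Hash of the serialized prefix; identical prefixes can reuse cached state."""
    blob = "".join(m["role"] + m["content"] for m in messages[:-1])
    return hashlib.sha256(blob.encode()).hexdigest()
```

Because only the final user turn varies, two consecutive requests with the same history produce the same prefix hash, which is the property prompt caching exploits.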
```
baseagent/
├── agent.py              # Entry point
├── src/
│   ├── core/
│   │   ├── loop.py       # Main agent loop
│   │   └── compaction.py # Context management
│   ├── llm/
│   │   └── client.py     # LLM client (litellm)
│   ├── config/
│   │   └── defaults.py   # Configuration
│   ├── tools/            # Tool implementations
│   ├── prompts/
│   │   └── system.py     # System prompt
│   └── output/
│       └── jsonl.py      # JSONL event emission
├── rules/                # Development guidelines
├── astuces/              # Implementation techniques
└── docs/                 # This documentation
```
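The `src/output/jsonl.py` module emits events as JSON Lines: one JSON object per line, so a consumer can stream or `tail -f` the output. A minimal sketch of such an emitter might look like the following; the `emit` function and record fields are assumptions for illustration, not the module's actual interface.

```python
# Hypothetical sketch of JSONL event emission (one JSON object per line).
# The function name and record fields are assumptions, not BaseAgent's API.
import json
import sys
import time

def emit(event_type: str, stream=sys.stdout, **payload) -> None:
    """Write a single timestamped event as one line of JSON."""
    record = {"ts": time.time(), "type": event_type, **payload}
    stream.write(json.dumps(record) + "\n")
    stream.flush()  # flush so consumers see events immediately
```

Writing one object per line (rather than one large JSON array) keeps the log appendable and lets each line be parsed independently even if the process is killed mid-run.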
MIT License - See LICENSE for details.