# BaseAgent Documentation

Professional documentation for the BaseAgent autonomous coding assistant.

BaseAgent is a high-performance autonomous agent built for the Term Challenge. It combines LLM-driven decision making with advanced context management and cost-optimization techniques.


## Table of Contents

- Getting Started
- Core Concepts
- Reference
- LLM Providers

## Quick Navigation

| Document | Description |
| --- | --- |
| Overview | High-level introduction and design principles |
| Installation | Step-by-step setup guide |
| Quick Start | Get running in minutes |
| Architecture | Technical deep-dive with diagrams |
| Configuration | Environment variables and settings |
| Usage | CLI commands and examples |
| Tools | Complete tools reference |
| Context Management | Memory and token optimization |
| Best Practices | Tips for optimal performance |
| Chutes Integration | Chutes API setup and usage |

## Architecture at a Glance

```mermaid
graph TB
    subgraph User["User Interface"]
        CLI["CLI (agent.py)"]
    end

    subgraph Core["Core Engine"]
        Loop["Agent Loop"]
        Context["Context Manager"]
        Cache["Prompt Cache"]
    end

    subgraph LLM["LLM Layer"]
        Client["LiteLLM Client"]
        Provider["Provider (Chutes/OpenRouter)"]
    end

    subgraph Tools["Tool System"]
        Registry["Tool Registry"]
        Shell["shell_command"]
        Files["read_file / write_file"]
        Search["grep_files / list_dir"]
    end

    CLI --> Loop
    Loop --> Context
    Loop --> Cache
    Loop --> Client
    Client --> Provider
    Loop --> Registry
    Registry --> Shell
    Registry --> Files
    Registry --> Search
```
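The diagram's control flow can be sketched in a few lines of Python. This is a hypothetical, simplified illustration, not the actual implementation in `src/core/loop.py`: the tool registry, the `action` dict shape, and the `stub_llm` function are all invented here for demonstration.

```python
# Hypothetical sketch of the diagram's control flow: the loop asks the
# LLM for a decision, dispatches tool calls through a registry, and
# stops when the model declares the task finished.
TOOLS = {
    "shell_command": lambda args: f"ran: {args['cmd']}",  # stand-in tool
}

def agent_loop(llm, task, max_steps=10):
    """Feed the task to the LLM and dispatch tool calls until it finishes."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(messages)               # LLM returns a decision dict
        if action["type"] == "finish":
            return action["result"]
        tool = TOOLS[action["tool"]]         # look up tool in the registry
        output = tool(action["args"])
        messages.append({"role": "tool", "content": output})
    raise RuntimeError("step budget exhausted")

# Stub LLM: issues one shell_command, then finishes.
def stub_llm(messages):
    if len(messages) == 1:
        return {"type": "tool", "tool": "shell_command", "args": {"cmd": "ls"}}
    return {"type": "finish", "result": "done"}
```

In the real agent, `llm` would be the LiteLLM client and the registry would hold the full tool set (`shell_command`, `read_file`, `write_file`, `grep_files`, `list_dir`).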

## Key Features

- **Fully Autonomous** - No user confirmation required; makes decisions independently
- **LLM-Driven** - All decisions made by the language model, not hardcoded logic
- **Prompt Caching** - 90%+ cache hit rate for significant cost reduction
- **Context Management** - Intelligent pruning and compaction for long tasks
- **Self-Verification** - Automatic validation before task completion
- **Multi-Provider** - Supports Chutes AI, OpenRouter, and litellm-compatible providers
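To make the "pruning and compaction" feature concrete, here is a minimal sketch of one common strategy: keep the system message plus the most recent messages that fit a token budget, dropping the middle. The function name, signature, and crude 4-chars-per-token heuristic are assumptions for illustration; the actual logic lives in `src/core/compaction.py`.

```python
def compact(messages, budget, count_tokens=lambda m: len(m["content"]) // 4):
    """Illustrative compaction sketch (not the real compaction.py logic):
    keep the first (system) message, then walk backwards from the newest
    message, keeping as many recent messages as the token budget allows."""
    head = messages[:1]
    used = sum(count_tokens(m) for m in head)
    tail = []
    for m in reversed(messages[1:]):
        cost = count_tokens(m)
        if used + cost > budget:
            break  # budget exhausted; older middle messages are dropped
        tail.append(m)
        used += cost
    return head + list(reversed(tail))  # restore chronological order
```

Recency-biased truncation like this preserves the instructions (system message) and the immediate working context while bounding prompt size on long tasks.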

## Project Structure

```text
baseagent/
├── agent.py                 # Entry point
├── src/
│   ├── core/
│   │   ├── loop.py          # Main agent loop
│   │   └── compaction.py    # Context management
│   ├── llm/
│   │   └── client.py        # LLM client (litellm)
│   ├── config/
│   │   └── defaults.py      # Configuration
│   ├── tools/               # Tool implementations
│   ├── prompts/
│   │   └── system.py        # System prompt
│   └── output/
│       └── jsonl.py         # JSONL event emission
├── rules/                   # Development guidelines
├── astuces/                 # Implementation techniques
└── docs/                    # This documentation
```
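The `src/output/jsonl.py` module handles JSONL event emission: one JSON object per line, which downstream harnesses can stream and parse incrementally. A minimal sketch of that pattern follows; the `emit_event` name and the `ts`/`event` field names are illustrative assumptions, not the module's actual schema.

```python
import json
import sys
import time

def emit_event(kind, payload, stream=sys.stdout):
    """Hypothetical sketch of JSONL event emission: serialize one event
    per line so consumers can parse the stream incrementally.
    Field names here are illustrative, not the real jsonl.py schema."""
    record = {"ts": time.time(), "event": kind, **payload}
    stream.write(json.dumps(record) + "\n")
    stream.flush()  # events should be visible to consumers immediately
```

Because each line is a complete JSON document, a consumer can `json.loads` each line as it arrives without buffering the whole stream.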

## License

MIT License - See LICENSE for details.