Closed-loop AI agent workflows.
Phase-based execution with validation feedback -- each phase is a control loop where the agent iterates until tests pass, linters are clean, or whatever your definition of done is.
AI agents are powerful but undisciplined. They rush to implement, skip analysis, and produce brittle code. Macrocycle fixes this by forcing your agent through structured phases with automated validation feedback loops.
The control loop is the product. Integrations (Sentry, GitHub, etc.) are left to the IDE or tools like Claude Code.
pipx install macrocycle

macrocycle init # Initialize .macrocycle/
macrocycle run fix "ValueError in process_request" # Run workflow
macrocycle run fix "..." --until analyze # Stop after a phase
macrocycle list # List workflows
macrocycle status # Latest run info

Each workflow is a graph of phases. Each phase is a closed-loop control system:
Phase = Control Loop

 ┌──────────┐     ┌──────────┐     ┌──────────┐
 │ Setpoint │────>│Controller│────>│ Actuator │
 │ (success │     │  (Phase  │     │(AI Agent)│
 │ criteria)│     │ Executor)│     └────┬─────┘
 └──────────┘     └────▲─────┘          │
                       │           ┌────▼─────┐
                  ┌────┴─────┐     │  Plant   │
                  │  Error   │     │(Codebase)│
                  │  Signal  │     └────┬─────┘
                  └────▲─────┘          │
                       │           ┌────▼─────┐
                       └───────────┤  Sensor  │
                                   │ (pytest, │
                                   │ ruff...) │
                                   └──────────┘
- Execute steps (LLM prompts, shell commands)
- Validate via a shell command (the sensor)
- If validation fails, feed the error back into the next iteration
- Repeat until convergence or iteration budget exhausted
- Route to next phase
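The loop above can be sketched in a few lines of Python. This is an illustrative sketch only, not Macrocycle's internals: `run_steps` stands in for the step/agent executor, and the dict fields mirror the workflow JSON format shown below.

```python
import subprocess

def run_phase(phase, context, run_steps):
    """Sketch of one phase as a control loop.

    `phase` is a dict shaped like the workflow JSON; `run_steps` is a
    callable standing in for the LLM/command step executor.
    """
    feedback = ""  # error signal fed back into the next iteration
    output = None
    for iteration in range(phase.get("max_iterations", 1)):
        output = run_steps(phase["steps"], context, feedback)
        validation = phase.get("validation")
        if validation is None:
            return output, True  # no sensor: nothing to converge on
        result = subprocess.run(
            validation["command"], shell=True, capture_output=True, text=True
        )
        if result.returncode == 0:
            return output, True  # converged: sensor reports success
        feedback = result.stdout + result.stderr  # feed the error back
    return output, False  # iteration budget exhausted
```

On success the phase's output routes to the next phase; on exhaustion the run stops with the last error signal preserved.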
Workflows live in .macrocycle/workflows/ as JSON:
{
  "id": "fix",
  "name": "Fix Issue",
  "agent": {"engine": "cursor"},
  "phases": [
    {
      "id": "analyze",
      "steps": [{"id": "impact", "type": "llm", "prompt": "Analyze: {{INPUT}}"}],
      "on_complete": "implement"
    },
    {
      "id": "implement",
      "steps": [
        {"id": "code", "type": "llm", "prompt": "Implement: {{PHASE_OUTPUT:analyze}}"},
        {"id": "test", "type": "command", "command": "pytest --tb=short"}
      ],
      "validation": {"command": "pytest -q"},
      "max_iterations": 5,
      "context": ["analyze"],
      "on_complete": "review"
    },
    {
      "id": "review",
      "steps": [{"id": "review", "type": "llm", "prompt": "Review changes..."}],
      "validation": {"command": "pytest && ruff check ."},
      "max_iterations": 3,
      "context": ["implement"]
    }
  ]
}

Step types: llm (AI prompt) / command (shell command)
Variables: {{INPUT}} / {{PHASE_OUTPUT:id}} / {{STEP_OUTPUT:id}} / {{ITERATION}} / {{VALIDATION_OUTPUT}}
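Variable expansion can be pictured as plain string substitution. A minimal sketch (not Macrocycle's implementation; the flat context-dict shape is an assumption):

```python
import re

def expand(prompt, ctx):
    """Replace {{VAR}} and {{VAR:id}} placeholders from a context dict.

    `ctx` maps names like "INPUT", "ITERATION", or "PHASE_OUTPUT:analyze"
    to strings; unknown placeholders are left untouched.
    """
    return re.sub(
        r"\{\{([A-Z_]+(?::[\w-]+)?)\}\}",
        lambda m: ctx.get(m.group(1), m.group(0)),
        prompt,
    )
```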
Agent config cascade: Workflow -> Phase -> Step (use cheaper models for iteration-heavy phases)
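For example, a phase-level `agent` block overrides the workflow default. A hypothetical fragment (the `model` key is an assumed field for illustration; only `engine` appears in the schema above):

```json
{
  "id": "fix",
  "agent": {"engine": "cursor"},
  "phases": [
    {
      "id": "implement",
      "agent": {"engine": "cursor", "model": "cheap-model"},
      "steps": [{"id": "code", "type": "llm", "prompt": "Implement: {{INPUT}}"}]
    }
  ]
}
```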
.macrocycle/
  workflows/fix.json
  runs/
    20260312_143052_fix/
      input.txt
      manifest.json       # Checkpoint for crash recovery
      analyze/output.md
      implement/output.md
      review/output.md
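The manifest checkpoint makes crash recovery possible: on restart, a run can skip phases already recorded as complete. A sketch under an assumed manifest shape (a `completed_phases` list; the real schema may differ):

```python
import json
from pathlib import Path

def completed_phases(run_dir):
    """Return phase ids already completed in a previous run.

    Assumes a manifest shaped like {"completed_phases": ["analyze", ...]};
    returns an empty list when no checkpoint exists yet.
    """
    manifest = Path(run_dir) / "manifest.json"
    if not manifest.exists():
        return []
    return json.loads(manifest.read_text()).get("completed_phases", [])
```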