# AgentRunKit

A lightweight Swift 6 framework for building LLM-powered agents with type-safe tool calling.

Zero dependencies · Full `Sendable` · Async/await · Cloud + Local · MCP
## Quick Start

```swift
import AgentRunKit

// Tool parameters: a Codable type describing the tool's JSON input.
// (The exact conformances required are an assumption; check the API reference.)
struct WeatherParams: Codable, Sendable {
    let city: String
}

let client = OpenAIClient(apiKey: "sk-...", model: "gpt-5.4", baseURL: OpenAIClient.openAIBaseURL)

let weatherTool = try Tool<WeatherParams, String, EmptyContext>(
    name: "get_weather",
    description: "Get the current weather"
) { params, _ in
    "72°F and sunny in \(params.city)"
}

let agent = Agent(client: client, tools: [weatherTool])
let result = try await agent.run(userMessage: "What's the weather in SF?", context: EmptyContext())

if let content = result.content {
    print(content)
}
```

`result.content` is optional: completed runs return the finish-tool content, while structural terminal reasons such as hitting the iteration limit or exhausting the token budget surface through `result.finishReason` with no final content.
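That optionality can be handled directly at the call site. A minimal sketch using only the members shown in the quick-start (`result.content` and `result.finishReason`):

```swift
if let content = result.content {
    print(content)
} else {
    // No final content: the run ended for a structural reason
    // (e.g. iteration limit or token budget), recorded in finishReason.
    print("Run terminated early: \(result.finishReason)")
}
```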
## Documentation

Full documentation, including guides and an API reference, is available on the Swift Package Index.
## Installation

Add AgentRunKit to your `Package.swift`:

```swift
dependencies: [
    .package(url: "https://github.com/Tom-Ryder/AgentRunKit.git", from: "1.20.1")
]
```

Then add the product to your target:

```swift
.target(name: "YourApp", dependencies: ["AgentRunKit"])
```

For on-device inference, additional targets are available:

- `AgentRunKitMLX` for MLX on Apple Silicon (requires `mlx-swift-lm` as a dependency)
- `AgentRunKitFoundationModels` for Apple Foundation Models (iOS 26+ / macOS 26+, no external dependencies)
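Wiring an on-device product into a target might look like the following manifest fragment; the product names mirror the target names above, but everything else (target name, layout) is illustrative:

```swift
// Sketch of a Package.swift target declaration; product names taken
// from the list above, the rest is an assumption about your project.
.target(
    name: "YourApp",
    dependencies: [
        .product(name: "AgentRunKit", package: "AgentRunKit"),
        // Optional on-device backends:
        .product(name: "AgentRunKitMLX", package: "AgentRunKit"),
        .product(name: "AgentRunKitFoundationModels", package: "AgentRunKit")
    ]
)
```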
## Features

- Agent loop with configurable iteration limits and token budgets
- Streaming with `AsyncThrowingStream` and an `@Observable` SwiftUI wrapper
- Type-safe tools with compile-time JSON schema validation
- Sub-agent composition with depth control and streaming propagation
- Context management: automatic compaction, pruning, and token budgets
- Structured output with JSON schema constraints
- Multimodal input: images, audio, video, and PDF
- Text-to-speech with concurrent chunking and MP3 concatenation
- MCP client: stdio transport, tool discovery, JSON-RPC
- Extended thinking / reasoning model support
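The streaming feature above pairs naturally with `for try await`. A hedged sketch in which the `runStream` method and the `.text` event case are hypothetical names, not confirmed AgentRunKit API:

```swift
// Hypothetical: `runStream` and `.text` are illustrative names only;
// consult the API reference for the actual streaming entry point.
for try await event in agent.runStream(userMessage: "Summarise this file", context: EmptyContext()) {
    if case let .text(chunk) = event {
        print(chunk, terminator: "")  // render tokens as they arrive
    }
}
```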
## Providers

| Provider | Description |
|---|---|
| `OpenAIClient` | OpenAI and compatible APIs (OpenRouter, Groq, Together, Ollama) |
| `AnthropicClient` | Anthropic Messages API |
| `GeminiClient` | Google Gemini API |
| `VertexAnthropicClient` | Anthropic models on Google Vertex AI |
| `VertexGoogleClient` | Google models on Vertex AI |
| `ResponsesAPIClient` | OpenAI Responses API with same-substrate continuity replay |
| `FoundationModelsClient` | Apple on-device models (macOS 26+ / iOS 26+) |
| `MLXClient` | On-device inference via MLX on Apple Silicon |
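Since every client in the table plugs into the same `Agent` initializer, switching providers is a construction-time change. A sketch assuming `AnthropicClient` mirrors the `OpenAIClient` initializer from the quick-start (parameter names are an assumption):

```swift
// Parameter names assumed by analogy with OpenAIClient;
// verify against the API reference before use.
let anthropic = AnthropicClient(apiKey: "sk-ant-...", model: "claude-sonnet-4-5")
let agent = Agent(client: anthropic, tools: [weatherTool])
```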
## Requirements

| Requirement | Minimum version |
|---|---|
| iOS | 18.0 |
| macOS | 15.0 |
| Swift | 6.0 |
| Xcode | 16 |
## License

MIT License. See LICENSE for details.
