A comprehensive starter app demonstrating RunAnywhere SDK capabilities - privacy-first, on-device AI for iOS and macOS.
This starter app showcases all the core capabilities of the RunAnywhere SDK:
- 🤖 Chat (LLM) - On-device text generation with streaming support
- 🛠️ Tool Calling - Function calling with structured tool definitions
- 👁️ Vision (VLM) - Image understanding with Vision Language Models
- 🎨 Image Generation (Diffusion) - On-device image generation via CoreML Stable Diffusion
- 🎤 Speech to Text (STT) - On-device speech recognition using Whisper
- 🔊 Text to Speech (TTS) - On-device voice synthesis using Piper
- 🎯 Voice Pipeline - Full voice agent: Speak → Transcribe → Generate → Speak
All AI processing runs entirely on-device with no data sent to external servers.
| Platform | Min Version | Architecture | Status |
|---|---|---|---|
| iOS | 17.0+ | arm64 | Fully supported |
| iOS Simulator | 17.0+ | arm64 | Fully supported |
| macOS | 14.0+ | arm64 (Apple Silicon) | Fully supported |
- iOS 17.0+ / macOS 14.0+
- Xcode 15.0+
- Swift 5.9+
- Apple Silicon Mac (for macOS target)
```bash
open Swift-Starter-Example.xcodeproj
```

This project is pre-configured to fetch the RunAnywhere SDK directly from GitHub:
https://github.com/RunanywhereAI/runanywhere-sdks
Version: 0.19.1+
The following SDK products are included:
- `RunAnywhere` - Core SDK (unified API for all AI capabilities)
- `RunAnywhereLlamaCPP` - LLM and VLM text generation backend (llama.cpp with Metal GPU)
- `RunAnywhereONNX` - Speech-to-text, text-to-speech, VAD (Sherpa-ONNX)
When you open the project, Xcode will automatically fetch and resolve the packages from GitHub.
In Xcode:
- Select the project in the navigator
- Go to Signing & Capabilities
- Select your Team
- Update the Bundle Identifier if needed
- iPhone / iPad: Select a simulator or connected device, press `Cmd + R`
- Mac (My Mac): Select "My Mac" in the destination picker, press `Cmd + R`
Note: The first build may take a few minutes as Xcode downloads the SDK and its dependencies from GitHub. For best AI inference performance, run on a physical device.
This app uses the RunAnywhere Swift SDK v0.19.1 from GitHub releases:
| Module | Import | Description |
|---|---|---|
| Core SDK | `import RunAnywhere` | Unified API for all AI capabilities |
| LlamaCPP | `import LlamaCPPRuntime` | LLM/VLM text generation (Metal GPU accelerated) |
| ONNX | `import ONNXRuntime` | STT/TTS/VAD via Sherpa-ONNX |
| Capability | Model | Framework | Size |
|---|---|---|---|
| LLM (Chat) | LFM2 350M Q4_K_M | LlamaCPP | ~250MB |
| VLM (Vision) | SmolVLM 256M Instruct | LlamaCPP | ~300MB |
| STT | Sherpa Whisper Tiny (English) | ONNX | ~75MB |
| TTS | Piper (US English - Lessac Medium) | ONNX | ~65MB |
| Diffusion | Stable Diffusion 1.5 CoreML Palettized | CoreML | ~1.5GB |
Models are downloaded on-demand and cached locally on the device. No internet required after initial download.
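For example, a feature screen can trigger the download the first time it is opened. Below is a minimal sketch using the `loadSTTModel` call and model identifier shown later in this README; the helper function itself is illustrative, not part of the SDK:

```swift
import RunAnywhere

// Illustrative helper: downloads the Whisper STT model on first use,
// after which it is served from the local cache.
func ensureSTTModelLoaded() async {
    do {
        try await RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")
    } catch {
        print("Failed to prepare STT model: \(error)")
    }
}
```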
```
Swift-Starter-Example/
├── Swift_Starter_ExampleApp.swift   # App entry point & SDK initialization
├── ContentView.swift                # Main content view wrapper
├── Info.plist                       # Privacy permissions (mic, camera, photos)
├── Theme/
│   └── AppTheme.swift               # Colors, fonts, and styling
├── Services/
│   └── ModelService.swift           # AI model management & registration
├── Views/
│   ├── HomeView.swift               # Home screen with feature cards
│   ├── ChatView.swift               # LLM chat interface with streaming
│   ├── ToolCallingView.swift        # Tool calling demo (weather, calc, time)
│   ├── VisionView.swift             # VLM image understanding
│   ├── ImageGenerationView.swift    # Stable Diffusion image generation
│   ├── SpeechToTextView.swift       # Speech recognition with audio visualizer
│   ├── TextToSpeechView.swift       # Voice synthesis with rate control
│   └── VoicePipelineView.swift      # Full voice agent pipeline
└── Components/
    ├── FeatureCard.swift            # Reusable feature card
    ├── ModelLoaderView.swift        # Model download/load UI with progress
    ├── AudioVisualizer.swift        # Audio level visualization
    └── ChatMessageBubble.swift      # Chat message with metrics display
```
```swift
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime

// Initialize SDK (call once at app launch)
try RunAnywhere.initialize(environment: .development)

// Register backends
LlamaCPP.register()  // For LLM/VLM text generation
ONNX.register()      // For STT, TTS, VAD
```
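In the sample app this setup lives in Swift_Starter_ExampleApp.swift. A minimal sketch of wiring it into a SwiftUI entry point follows; the struct name and error handling are illustrative, not the sample app's exact code:

```swift
import SwiftUI
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime

@main
struct StarterApp: App {
    init() {
        // One-time SDK setup at launch, mirroring the calls above.
        do {
            try RunAnywhere.initialize(environment: .development)
            LlamaCPP.register()
            ONNX.register()
        } catch {
            // In a real app, surface this to the user instead of only logging it.
            print("RunAnywhere SDK initialization failed: \(error)")
        }
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```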
```swift
// Streaming generation with metrics
let result = try await RunAnywhere.generateStream(
    prompt,
    options: LLMGenerationOptions(maxTokens: 256, temperature: 0.8)
)

for try await token in result.stream {
    print(token, terminator: "")
}

let metrics = try await result.result.value
print("Speed: \(metrics.tokensPerSecond) tok/s")
```
```swift
// Register tools
RunAnywhere.registerTool(
    name: "get_weather",
    description: "Get weather for a location",
    parameters: ["location": .string("City name")]
) { args in
    return "72°F and sunny in \(args["location"] ?? "unknown")"
}

// Generate with tools
let result = try await RunAnywhere.generateWithTools(
    "What's the weather in San Francisco?",
    options: ToolCallingOptions(maxTokens: 256)
)
```
```swift
// Load VLM model
try await RunAnywhere.loadVLMModel(model)

// Process image with prompt
let result = try await RunAnywhere.processImageStream(
    VLMImage(image: uiImage),
    prompt: "Describe this image in detail.",
    maxTokens: 300
)

for try await token in result.stream {
    print(token, terminator: "")
}
```
```swift
// Load diffusion model
try await RunAnywhere.loadDiffusionModel(model)

// Generate image
let result = try await RunAnywhere.generateImage(
    prompt: "A serene mountain landscape at sunset",
    options: DiffusionOptions(steps: 20, guidanceScale: 7.5)
) { update in
    print("Step \(update.currentStep)/\(update.totalSteps)")
    return true // continue
}
```
```swift
// Load STT model (once)
try await RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")

// Transcribe audio (Data from microphone)
let text = try await RunAnywhere.transcribe(audioData)
```
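Where `audioData` comes from is up to the app; the sample app's SpeechToTextView records it from the microphone. Below is a rough sketch of capturing microphone samples into `Data` with AVAudioEngine. Note that the exact sample format `transcribe` expects (for example 16 kHz mono PCM) is an assumption here, so check the SDK documentation or the sample app before relying on it:

```swift
import AVFoundation

// Illustrative recorder (not part of the SDK): accumulates raw Float32 microphone
// samples into Data. Convert to the format the SDK expects before calling transcribe.
final class MicRecorder {
    private let engine = AVAudioEngine()
    private(set) var audioData = Data()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0) // device's native format
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
            self?.audioData.append(Data(bytes: channel, count: byteCount))
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```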
```swift
// Load TTS voice (once)
try await RunAnywhere.loadTTSVoice("vits-piper-en_US-lessac-medium")

// Speak text (synthesis + playback)
try await RunAnywhere.speak("Hello, world!", options: TTSOptions(rate: 1.0))
```
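VoicePipelineView chains these capabilities into the full voice agent. A rough sketch using only the calls shown above follows; the real view adds voice activity detection, streaming playback, and UI state:

```swift
// Illustrative pipeline (not the sample app's exact code):
// transcribe the user's speech, generate a reply, then speak it back.
func runVoiceTurn(audioData: Data) async throws {
    // 1. Speech to text
    let userText = try await RunAnywhere.transcribe(audioData)

    // 2. Text generation: collect the streamed tokens into one reply
    let generation = try await RunAnywhere.generateStream(
        userText,
        options: LLMGenerationOptions(maxTokens: 256, temperature: 0.8)
    )
    var reply = ""
    for try await token in generation.stream {
        reply += token
    }

    // 3. Text to speech
    try await RunAnywhere.speak(reply, options: TTSOptions(rate: 1.0))
}
```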
To add the RunAnywhere SDK to a new Swift project:

- In Xcode: File > Add Package Dependencies...
- Enter: `https://github.com/RunanywhereAI/runanywhere-sdks`
- Select "Up to Next Major Version": `0.19.1`
- Add all three products: `RunAnywhere`, `RunAnywhereLlamaCPP`, `RunAnywhereONNX`
Or add it to your Package.swift manifest:

```swift
dependencies: [
    .package(url: "https://github.com/RunanywhereAI/runanywhere-sdks", from: "0.19.1")
],
targets: [
    .target(
        name: "YourApp",
        dependencies: [
            .product(name: "RunAnywhere", package: "runanywhere-sdks"),
            .product(name: "RunAnywhereLlamaCPP", package: "runanywhere-sdks"),
            .product(name: "RunAnywhereONNX", package: "runanywhere-sdks"),
        ]
    ),
]
```

The app requires the following permissions (configured in Info.plist):
| Permission | Purpose | Required for |
|---|---|---|
| `NSMicrophoneUsageDescription` | Recording audio | STT, Voice Pipeline |
| `NSSpeechRecognitionUsageDescription` | Speech recognition | STT |
| `NSCameraUsageDescription` | Camera access | VLM (Vision) |
| `NSPhotoLibraryUsageDescription` | Photo library access | VLM, Diffusion |
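Before starting a recording or camera capture, the app can request access explicitly so the permission prompts appear at a sensible moment. A minimal sketch using the standard AVFoundation request API (plain Apple API, unrelated to the RunAnywhere SDK):

```swift
import AVFoundation

// Requests microphone and camera access; the system shows the
// usage-description strings from Info.plist in its permission prompts.
func requestCapturePermissions() async -> (mic: Bool, camera: Bool) {
    let mic = await AVCaptureDevice.requestAccess(for: .audio)
    let camera = await AVCaptureDevice.requestAccess(for: .video)
    return (mic, camera)
}
```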
If Xcode fails to resolve the SDK packages:

- In Xcode: File > Packages > Reset Package Caches
- Clean build: Product > Clean Build Folder (Cmd+Shift+K)
- Close and reopen the project
Ensure all three SDK products are added to your target:
- Select your target in Xcode
- Go to General > Frameworks, Libraries, and Embedded Content
- Verify `RunAnywhere`, `RunAnywhereLlamaCPP`, and `RunAnywhereONNX` are all listed
If you see a `CodeSign failed` error when running on Mac:
- Clean build: Product > Clean Build Folder (Cmd+Shift+K)
- Rebuild: Xcode will re-sign the embedded frameworks
If model downloads fail, check network connectivity. Models are downloaded from:
- HuggingFace (LLM, VLM, Diffusion models)
- GitHub (RunanywhereAI/sherpa-onnx for STT/TTS models)
All AI processing happens entirely on-device. No data is ever sent to external servers. This ensures:
- Complete data privacy
- Offline functionality (after model download)
- Low latency responses
- No API costs
MIT License - See LICENSE for details.