A comprehensive Android starter app demonstrating the capabilities of the RunAnywhere SDK: privacy-first, on-device AI for Android, built with Kotlin and Jetpack Compose.
This starter app showcases all major capabilities of the RunAnywhere SDK:
- Chat (LLM)
  - On-device text generation using SmolLM2 360M
  - Real-time chat interface with message history
  - Powered by llama.cpp backend
- Speech-to-Text (STT)
  - Real-time speech recognition using Whisper Tiny
  - Microphone permission handling
  - Voice activity detection
  - Powered by Sherpa-ONNX backend
- Text-to-Speech (TTS)
  - Natural voice synthesis using Piper TTS
  - Sample texts and custom input
  - High-quality US English voice (Lessac)
  - Powered by Sherpa-ONNX backend
- Voice Pipeline
  - Complete voice conversation pipeline
  - Combines STT → LLM → TTS
  - Real-time conversation flow
  - Status indicators for each stage
Prerequisites:

- Android Studio: Hedgehog (2023.1.1) or later
- Minimum SDK: API 26 (Android 8.0)
- Target SDK: API 35 (Android 15)
- Kotlin: 2.0.21 or later
- Java: 17
Setup:

1. Clone the repository

   ```bash
   git clone <repository-url>
   cd starter_apps/kotlinstarterexample
   ```

2. Open in Android Studio
   - Open Android Studio
   - Select "Open an Existing Project"
   - Navigate to the `kotlinstarterexample` folder
   - Click "OK"

3. Sync Gradle
   - Android Studio will automatically sync Gradle
   - If not, click "Sync Now" in the notification bar

4. Run the app
   - Connect an Android device or start an emulator
   - Click the "Run" button (▶️) in Android Studio
   - Select your device/emulator
   - The app will build and install
On first launch:

- Home Screen: You'll see four feature cards
- Load Models: Each feature requires downloading AI models:
  - LLM: ~400 MB (SmolLM2 360M)
  - STT: ~75 MB (Whisper Tiny)
  - TTS: ~20 MB (Piper TTS)
- Grant Permissions: STT and Voice Pipeline require microphone permission (see the sketch after this list)
- Start Using: Once models are loaded, all features are ready!
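The microphone flow uses Android's standard runtime-permission API. A minimal Compose sketch of the pattern (names like `MicPermissionGate` are illustrative, not the app's actual code; the manifest must also declare `android.permission.RECORD_AUDIO`):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.compose.rememberLauncherForActivityResult
import androidx.activity.result.contract.ActivityResultContracts
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.*
import androidx.compose.ui.platform.LocalContext
import androidx.core.content.ContextCompat

// Shows its content only once RECORD_AUDIO is granted; otherwise shows a request button.
@Composable
fun MicPermissionGate(onGranted: @Composable () -> Unit) {
    val context = LocalContext.current
    var granted by remember {
        mutableStateOf(
            ContextCompat.checkSelfPermission(
                context, Manifest.permission.RECORD_AUDIO
            ) == PackageManager.PERMISSION_GRANTED
        )
    }
    val launcher = rememberLauncherForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { isGranted -> granted = isGranted }

    if (granted) {
        onGranted()
    } else {
        Button(onClick = { launcher.launch(Manifest.permission.RECORD_AUDIO) }) {
            Text("Grant microphone permission")
        }
    }
}
```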
Project structure:

```
app/src/main/java/com/runanywhere/kotlin_starter_example/
├── MainActivity.kt              # App entry point
├── services/
│   └── ModelService.kt          # Model management (download, load, unload)
└── ui/
    ├── theme/                   # App theme and colors
    │   ├── Theme.kt
    │   └── Type.kt
    ├── components/              # Reusable UI components
    │   ├── FeatureCard.kt
    │   └── ModelLoaderWidget.kt
    └── screens/                 # Feature screens
        ├── HomeScreen.kt
        ├── ChatScreen.kt
        ├── SpeechToTextScreen.kt
        ├── TextToSpeechScreen.kt
        └── VoicePipelineScreen.kt
```
Tech stack:

- Jetpack Compose: Modern declarative UI
- Material 3: Latest Material Design
- Navigation Compose: Screen navigation
- Coroutines & Flow: Asynchronous operations
- ViewModel: State management (see the sketch after this list)
- RunAnywhere SDK v0.16.0-test.39: On-device AI
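`ModelService.kt` and `ModelLoaderWidget.kt` combine these pieces to drive download/load progress in the UI. As a minimal sketch of that pattern (the state names and `ModelLoaderViewModel` are illustrative, not the app's actual classes):

```kotlin
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow

// Illustrative lifecycle states for a single on-device model
sealed interface ModelUiState {
    data object NotLoaded : ModelUiState
    data class Downloading(val progress: Float) : ModelUiState
    data object Ready : ModelUiState
    data class Error(val message: String) : ModelUiState
}

// Compose screens collect `state` and render a progress bar, button, etc.
class ModelLoaderViewModel : ViewModel() {
    private val _state = MutableStateFlow<ModelUiState>(ModelUiState.NotLoaded)
    val state: StateFlow<ModelUiState> = _state.asStateFlow()

    fun onProgress(progress: Float) { _state.value = ModelUiState.Downloading(progress) }
    fun onReady() { _state.value = ModelUiState.Ready }
    fun onError(message: String) { _state.value = ModelUiState.Error(message) }
}
```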
The app uses three RunAnywhere packages:
```kotlin
// build.gradle.kts (app module)
dependencies {
    // Core SDK
    implementation("ai.runanywhere:runanywhere-kotlin:0.16.0-test.39")

    // Backends
    implementation("ai.runanywhere:runanywhere-llamacpp:0.16.0-test.39") // LLM
    implementation("ai.runanywhere:runanywhere-onnx:0.16.0-test.39")     // STT/TTS
}
```

The SDK is initialized in `MainActivity.kt`:

```kotlin
// MainActivity.kt
RunAnywhere.initialize(environment = SDKEnvironment.DEVELOPMENT)
ModelService.registerDefaultModels()
```

Models are registered in `ModelService.kt`:
```kotlin
// LLM Model
RunAnywhere.registerModel(
    id = "smollm2-360m-instruct-q8_0",
    name = "SmolLM2 360M Instruct Q8_0",
    url = "https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct-GGUF/resolve/main/smollm2-360m-instruct-q8_0.gguf",
    framework = InferenceFramework.LLAMA_CPP,
    memoryRequirement = 400_000_000
)

// STT Model
RunAnywhere.registerModel(
    id = "sherpa-onnx-whisper-tiny.en",
    name = "Sherpa Whisper Tiny (ONNX)",
    url = "https://github.com/RunanywhereAI/sherpa-onnx/releases/download/runanywhere-models-v1/sherpa-onnx-whisper-tiny.en.tar.gz",
    framework = InferenceFramework.ONNX,
    category = ModelCategory.SPEECH_RECOGNITION
)

// TTS Model
RunAnywhere.registerModel(
    id = "vits-piper-en_US-lessac-medium",
    name = "Piper TTS (US English - Medium)",
    url = "https://github.com/RunanywhereAI/sherpa-onnx/releases/download/runanywhere-models-v1/vits-piper-en_US-lessac-medium.tar.gz",
    framework = InferenceFramework.ONNX,
    category = ModelCategory.SPEECH_SYNTHESIS
)
```

Chat:

```kotlin
val response = RunAnywhere.chat("Explain AI in simple terms")
```

Speech-to-Text (`recordAudio()` is a placeholder helper; see the sketch after the Voice Pipeline example):

```kotlin
val audioData: ByteArray = recordAudio()
val transcription = RunAnywhere.transcribe(audioData)
```

Text-to-Speech:

```kotlin
RunAnywhere.speak("Hello, world!")
```

Voice Pipeline:

```kotlin
RunAnywhere.startVoiceSession().collect { event ->
    when (event) {
        is VoiceSessionEvent.Listening -> updateUI("Listening...")
        is VoiceSessionEvent.Transcribed -> updateUI("You: ${event.text}")
        is VoiceSessionEvent.Thinking -> updateUI("Thinking...")
        is VoiceSessionEvent.Responded -> updateUI("AI: ${event.text}")
        is VoiceSessionEvent.Speaking -> updateUI("Speaking...")
    }
}
```
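The `recordAudio()` helper used in the Speech-to-Text snippet is not part of the SDK. One way to implement it with the platform `AudioRecord` API (a sketch assuming 16 kHz mono 16-bit PCM, the format Whisper-family models typically expect; whether `transcribe` accepts raw PCM is an assumption to verify against the SDK docs):

```kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Captures a fixed number of seconds of 16 kHz mono PCM from the microphone.
@SuppressLint("MissingPermission") // RECORD_AUDIO must already be granted
fun recordAudio(seconds: Int = 5): ByteArray {
    val sampleRate = 16_000
    val minBuf = AudioRecord.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT
    )
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC,
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        minBuf
    )
    val out = ByteArray(sampleRate * 2 * seconds) // 16-bit samples = 2 bytes each
    recorder.startRecording()
    var offset = 0
    while (offset < out.size) {
        val read = recorder.read(out, offset, out.size - offset)
        if (read <= 0) break
        offset += read
    }
    recorder.stop()
    recorder.release()
    return out
}
```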
Model sizes:

- LLM (SmolLM2 360M): ~400 MB
- STT (Whisper Tiny): ~75 MB
- TTS (Piper): ~20 MB
- Total: ~495 MB
Performance (a rough measurement sketch follows this list):

- LLM: 5-15 tokens/sec (device dependent)
- STT: Real-time transcription
- TTS: Real-time synthesis
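Actual LLM throughput varies widely by device. As a crude wall-clock check (a sketch only: it assumes `chat` is a suspend function returning the full response `String`, and approximates token count from words, since the real count depends on the model's tokenizer):

```kotlin
import kotlin.system.measureTimeMillis

// Rough estimate only: ~1.3 tokens per whitespace-separated word.
suspend fun estimateTokensPerSecond(prompt: String): Double {
    var response = ""
    val elapsedMs = measureTimeMillis {
        response = RunAnywhere.chat(prompt) // assumed suspend, returning String
    }
    val approxTokens = response.trim().split(Regex("\\s+")).size * 1.3
    return approxTokens / (elapsedMs / 1000.0)
}
```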
Device requirements (a quick runtime check is sketched after this list):

- RAM: Minimum 2GB recommended
- Storage: 1GB free space for models
- CPU: ARMv8 64-bit recommended (supports ARMv7)
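Since a full model set needs roughly 1 GB of storage and 2 GB of RAM, it can be worth checking both before starting a download. A sketch using the platform APIs (this check is not necessarily in the starter app; the thresholds mirror the list above):

```kotlin
import android.app.ActivityManager
import android.content.Context
import android.os.StatFs

// Returns true if the device meets the rough storage/RAM requirements above.
fun meetsModelRequirements(context: Context): Boolean {
    // Free bytes on the internal volume where downloaded models are stored
    val freeBytes = StatFs(context.filesDir.path).availableBytes

    // Total device RAM as reported by the system
    val memInfo = ActivityManager.MemoryInfo()
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    am.getMemoryInfo(memInfo)

    return freeBytes >= 1_000_000_000L && memInfo.totalMem >= 2_000_000_000L
}
```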
To use different models, update ModelService.kt:
```kotlin
companion object {
    const val LLM_MODEL_ID = "your-model-id"
    const val STT_MODEL_ID = "your-stt-model-id"
    const val TTS_MODEL_ID = "your-tts-model-id"

    fun registerDefaultModels() {
        RunAnywhere.registerModel(
            id = LLM_MODEL_ID,
            name = "Your Model Name",
            url = "your-model-url",
            framework = InferenceFramework.LLAMA_CPP
        )
        // ... register other models
    }
}
```

All UI colors and themes are defined in:

- `ui/theme/Theme.kt` - Color palette
- `ui/theme/Type.kt` - Typography
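To change the palette, edit the color scheme passed to `MaterialTheme`. A minimal Material 3 sketch (the color values and `CustomAppTheme` name are placeholders; the app's real scheme lives in `Theme.kt`):

```kotlin
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.lightColorScheme
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.Color

// Placeholder brand colors; replace with your own palette
private val CustomColorScheme = lightColorScheme(
    primary = Color(0xFF006A60),
    secondary = Color(0xFF4A635F)
)

@Composable
fun CustomAppTheme(content: @Composable () -> Unit) {
    MaterialTheme(
        colorScheme = CustomColorScheme,
        content = content
    )
}
```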
Models won't download:

- Check internet connection
- Verify URLs in `ModelService.kt`
- Check device storage space

Build errors:

- Ensure minimum SDK 26
- Check Gradle sync completed successfully
- Verify all dependencies are downloaded

Microphone not working:

- Go to Settings → Apps → RunAnywhere Kotlin → Permissions
- Enable "Microphone" permission

Slow performance:

- Use a device with at least 2GB RAM
- Close other apps to free memory
- Consider using smaller models
All AI processing happens 100% on-device:
- ✅ No data sent to servers
- ✅ No internet required (after model download)
- ✅ Complete privacy
- ✅ Works offline
See the LICENSE file for details.
For issues and questions:
- GitHub Issues: runanywhere-sdks/issues
- Documentation: RunAnywhere Docs
Built with ❤️ using RunAnywhere SDK v0.16.0-test.39