Conversation

@YoungY620 (Collaborator)

Related Issue

N/A

Description

Motivation

Add support for locally deployed models to enable more users to try Kimi CLI. LM Studio was chosen for its ease of use and broad accessibility.

(To be honest, I really need this. 🆘 I found no other way than to change the code myself.)

Changes

  1. New lm_studio provider (llm.py)

    Added a dedicated provider for LM Studio. A separate provider is needed because of an httpx compatibility issue: the client must be constructed with an explicit AsyncHTTPTransport to work with LM Studio (a sketch follows the list below).

  2. LM Studio option in /setup (setup.py)

    Added "LM Studio (Local)" to the setup wizard with auto-detection of loaded models.

Usage

Run /setup and select "LM Studio (Local)", or configure manually:

[providers.local]
type = "lm_studio"
base_url = "http://localhost:1234/v1"
api_key = "any"  # LM Studio does not validate API key, any non-empty value works

[models.my-model]
provider = "local"
model = "model-name"
max_context_size = 32768

The default context size (32768) for LM Studio was below the required RESERVED_TOKENS (50000), causing an AssertionError on startup.

Follow-up changes:
- Import RESERVED_TOKENS from kimisoul to stay in sync
- Set the default local context size to 131072 (128k)
- Add minimum context size validation with a user-friendly warning (see the sketch after this list)
- Auto-correct values below the minimum instead of failing
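
A minimal sketch of that guard. Only RESERVED_TOKENS comes from the description; the exact import path, constant names, and helper name are assumptions:

from kimisoul import RESERVED_TOKENS  # assumed import path; keeps the limit in sync with the agent core

DEFAULT_LOCAL_CONTEXT_SIZE = 131072  # 128k default for local models
MIN_CONTEXT_SIZE = RESERVED_TOKENS   # anything smaller trips the startup assertion

def normalize_context_size(configured: int | None) -> int:
    # Fall back to the default, and auto-correct values that are too small
    # instead of failing with an AssertionError.
    if configured is None:
        return DEFAULT_LOCAL_CONTEXT_SIZE
    if configured < MIN_CONTEXT_SIZE:
        print(
            f"Warning: max_context_size={configured} is below the minimum of "
            f"{MIN_CONTEXT_SIZE}; using {MIN_CONTEXT_SIZE} instead."
        )
        return MIN_CONTEXT_SIZE
    return configured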
@YoungY620 requested review from Copilot and stdrc and removed the request for Copilot on January 1, 2026 at 10:55
@YoungY620 changed the title from "Feature/add lmstudio support" to "feat(llm): support local llm (lm_studio)" on Jan 1, 2026