
feat: searchable model dropdown + fix missing providers in /models #474

Open
TheDarkSkyXD wants to merge 8 commits into spacedriveapp:main from TheDarkSkyXD:feat/model-select-dropdown

Conversation

@TheDarkSkyXD
Contributor

Summary

  • Upgraded ModelSelect component to a proper searchable dropdown with chevron indicator, keyboard navigation (Arrow Up/Down, Enter, Escape), selected model info badge below the input, loading spinner, and error/empty states
  • Fixed configured_providers() in models.rs — the /models endpoint was returning empty results for users connected via Anthropic OAuth, Ollama, Nvidia, or GitHub Copilot because those providers were missing from the detection logic

Details

Frontend (interface/src/components/ModelSelect.tsx)

  • Dropdown chevron arrow (rotates when open) so users know it's a select, not just a text input
  • Search icon appears when dropdown is open
  • Keyboard navigation: Arrow Up/Down to browse, Enter to select, Escape to close
  • Selected model info shown below input when closed (provider / name / context window)
  • "active" badge on the currently selected model in the dropdown
  • Loading state while models are being fetched
  • Error/empty state messages ("Failed to load", "No providers", "No matches")
  • Works identically on macOS, Windows, and Docker (browser-rendered React/CSS)
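
The keyboard behavior listed above can be modeled outside React as a small pure reducer. This is an illustrative sketch, not the component's actual code: names like `flatList` and `highlightIndex` follow the PR description, but the wiring is assumed.

```typescript
type Key = "ArrowDown" | "ArrowUp" | "Enter" | "Escape";

interface NavState {
  open: boolean;
  highlightIndex: number;   // -1 = nothing highlighted
  committed: string | null; // model id committed via Enter
}

// Pure reducer modeling the dropdown's keyboard handling: arrows move the
// highlight with clamping, Enter commits the highlighted model id, and
// Escape closes without committing anything.
function navigate(state: NavState, key: Key, flatList: string[]): NavState {
  switch (key) {
    case "ArrowDown":
      return {
        ...state,
        open: true,
        highlightIndex: Math.min(state.highlightIndex + 1, flatList.length - 1),
      };
    case "ArrowUp":
      return { ...state, highlightIndex: Math.max(state.highlightIndex - 1, 0) };
    case "Enter":
      if (state.highlightIndex >= 0 && state.highlightIndex < flatList.length) {
        return { open: false, highlightIndex: -1, committed: flatList[state.highlightIndex] };
      }
      return { ...state, open: false };
    case "Escape":
      return { ...state, open: false, highlightIndex: -1 };
  }
}
```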

Backend (src/api/models.rs)

  • Added Anthropic OAuth detection via crate::auth::credentials_path() — users signed in via OAuth were getting no models because only anthropic_key in config.toml was checked
  • Added Ollama detection (ollama_base_url/OLLAMA_BASE_URL, ollama_key/OLLAMA_API_KEY)
  • Added Nvidia detection (nvidia_key/NVIDIA_API_KEY)
  • Added GitHub Copilot detection (github_copilot_key/GITHUB_COPILOT_API_KEY)
  • Aligns configured_providers() in models.rs with get_providers() in providers.rs

Test plan

  • Connect a provider via API key → model dropdown shows available models
  • Connect Anthropic via OAuth → model dropdown shows Anthropic models
  • Open agent config → routing tab → verify all 6 model slots (channel, branch, worker, compactor, cortex, voice) show searchable dropdowns
  • Test keyboard navigation in the dropdown (arrow keys, enter, escape)
  • Test search filtering by model name, ID, and provider name
  • Verify on macOS, Windows, and Docker deployments

🤖 Generated with Claude Code

… missing providers

Upgrade ModelSelect to a proper searchable dropdown with chevron indicator,
keyboard navigation (arrow keys + enter), selected model info badge, and
loading/error states. Fix configured_providers in models.rs to detect
Anthropic OAuth, Ollama, Nvidia, and GitHub Copilot so the /models endpoint
returns models for all connected providers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai bot commented Mar 22, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Added keyboard navigation, highlighting, scroll-into-view, improved open/close/commit semantics, and explicit loading/error/empty states to ModelSelect UI. Backend provider discovery and routing were extended to recognize Anthropic OAuth, Ollama, Nvidia, and GitHub Copilot provider inputs.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Frontend — Model selection UI: `interface/src/components/ModelSelect.tsx` | Introduced highlightIndex, flatList/providerOrder, keyboard navigation (ArrowUp/ArrowDown/Enter/Escape), scroll-into-view via listRef, mouse/keyboard sync, changed focus/blur/commit semantics (including suppressBlurCommitRef), expanded search matching to include provider labels, explicit loading/error/empty branches, added UI elements (search icon, chevron, selected-model badge). |
| Backend — Provider mapping & discovery: `src/api/models.rs`, `src/api/providers.rs`, `src/llm/routing.rs` | Expanded direct_provider_mapping and configured_providers to include nvidia, ollama (collapsed variants), and github-copilot; treat Anthropic as enabled when the OAuth credentials file exists; added ollama routing defaults/prefix mapping. No public API signature changes. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • jamiepine
🚥 Pre-merge checks: ✅ 3 passed

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title accurately summarizes the two main changes: a searchable model dropdown UI component and fixing missing provider detection for the /models endpoint. |
| Description check | ✅ Passed | The description is directly related to the changeset, providing clear summaries of frontend and backend changes with specific implementation details and a test plan. |
| Docstring coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%. |




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
interface/src/components/ModelSelect.tsx (1)

143-152: ⚠️ Potential issue | 🟠 Major

Don't commit the search draft when the popup is dismissed.

handleInputChange() still pushes every keystroke through onChange(), so the new outside-click / Escape / chevron-close paths persist whatever the user typed even when they were only filtering. Typing a few characters to search and then dismissing the popup now overwrites the previously selected model with a partial ID. Keep filter as draft state and only call onChange() on explicit selection or a deliberate raw-ID commit path.

Also applies to: 175-182, 191-197, 249-255

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 269-274: The visible provider badges use PROVIDER_LABELS but the
search/filter uses the raw m.provider, so include the human-friendly label when
matching: change any filter/matching logic that checks m.provider (and where
selectedModel is compared to value) to also compute providerLabel =
PROVIDER_LABELS[m.provider] ?? m.provider and test the query against
providerLabel (in addition to m.provider and model name/id). Update both the
dropdown filter and the selected-model display matching code (references:
PROVIDER_LABELS, selectedModel, value and the model list filtering code where
m.provider is used) so searches for labels like "Google Gemini" or "Z.ai (GLM)"
will match.
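
A minimal sketch of the label-aware matching this comment asks for. The `PROVIDER_LABELS` entries below are examples only, not the component's full table:

```typescript
// Filter matching that tests the query against the human-friendly
// provider label as well as the raw slug, model name, and id.
const PROVIDER_LABELS: Record<string, string> = {
  google: "Google Gemini",
  zhipu: "Z.ai (GLM)",
};

interface ModelInfo { id: string; name: string; provider: string; }

function matchesQuery(m: ModelInfo, query: string): boolean {
  const q = query.toLowerCase();
  const providerLabel = PROVIDER_LABELS[m.provider] ?? m.provider;
  return [m.id, m.name, m.provider, providerLabel].some((s) =>
    s.toLowerCase().includes(q),
  );
}
```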

In `@src/api/models.rs`:
- Around line 341-345: The provider presence checks that push names like
"ollama", "nvidia", and "github-copilot" are misleading because get_models()
filters by ModelInfo.provider but
fetch_models_dev()/direct_provider_mapping()/extra_models() currently emit
different provider IDs; update either direct_provider_mapping()/extra_models()
to include entries for "ollama", "nvidia", and "github-copilot" (so the catalog
emits matching provider IDs) or change the has_key block to push the provider
IDs that the catalog actually emits (keeping ModelInfo.provider alignment with
get_models()); locate the checks around has_key("ollama_base_url", ...),
has_key("ollama_key", ...), and the functions fetch_models_dev(),
direct_provider_mapping(), and extra_models() and make the provider IDs
consistent across them.
- Around line 367-369: The provider check is using the wrong config key
("github_copilot_key"/"GITHUB_COPILOT_API_KEY") so valid Copilot configs are
skipped; update the has_key call in configured_providers (the block that
currently does providers.push("github-copilot")) to check the canonical config
key used by the config loader (the llm provider key wired in src/config/load.rs)
and its corresponding env var instead of
"github_copilot_key"/"GITHUB_COPILOT_API_KEY" so configured Copilot instances
are detected correctly.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3909c248-8c41-40e1-8ee2-75e709ce0b9f

📥 Commits

Reviewing files that changed from the base of the PR and between 6f81f39 and fabbed3.

📒 Files selected for processing (2)
  • interface/src/components/ModelSelect.tsx
  • src/api/models.rs

Comment on lines +367 to +369
if has_key("github_copilot_key", "GITHUB_COPILOT_API_KEY") {
providers.push("github-copilot");
}

⚠️ Potential issue | 🟠 Major

Use the canonical GitHub Copilot config key here.

src/config/load.rs:1-50 wires this provider through llm.n / env n, not github_copilot_key / GITHUB_COPILOT_API_KEY. With the current check, valid Copilot configs will still be skipped by configured_providers().

🐛 Proposed fix
-    if has_key("github_copilot_key", "GITHUB_COPILOT_API_KEY") {
+    if has_key("n", "n") {
         providers.push("github-copilot");
     }


@vsumner vsumner left a comment


Nice direction overall. The searchable dropdown is a real UX improvement, and wiring Anthropic OAuth into /models is the right kind of fix. I left two follow-ups that look important before merge:

  • the new provider checks still do not line up with the provider IDs emitted by the model catalog, so /models can stay empty for some of the newly added providers
  • Anthropic OAuth is now recognized in /models, but /providers still will not mark Anthropic configured, so the Settings UI can drift from the actual selector behavior

Suggested verification after a fix: run the provider/status path and the /models path together for Anthropic OAuth plus at least one of nvidia or github-copilot, so both endpoints agree on what the user can actually select.

I did not run the interface locally for this pass; this review is based on the current diff and the provider catalog/runtime paths.

if has_key("ollama_base_url", "OLLAMA_BASE_URL")
|| has_key("ollama_key", "OLLAMA_API_KEY")
{
providers.push("ollama");

@vsumner vsumner Mar 22, 2026


Nice catch adding these provider checks. One thing to tighten up before merge: get_models() filters by ModelInfo.provider, but fetch_models_dev() still only emits direct provider IDs for entries covered by direct_provider_mapping(). Right now nvidia and github-copilot are missing there, and the live catalog exposes ollama-cloud rather than ollama, so these blocks can still mark a provider configured while /models returns an empty list. I’d either add matching mappings or extra models for these providers, or push the provider IDs the catalog already emits, so the filter and config detection stay aligned.

if !providers.contains(&"anthropic") {
if config_path
.parent()
.is_some_and(|instance_dir| crate::auth::credentials_path(instance_dir).exists())

This moves Anthropic OAuth support in the right direction. The follow-up I’d make before merge is to mirror the same credentials-file check in /providers as well. Settings uses that endpoint to decide whether Anthropic is configured, so with the current diff an OAuth user can get Anthropic models in the selector while the Providers screen still shows Anthropic as unconfigured.

…rs, improve ModelSelect

- Add nvidia and ollama/ollama-cloud to direct_provider_mapping() so
  models.dev catalog entries get correct provider IDs matching
  configured_providers()
- Add GitHub Copilot models to extra_models() since they are not in
  models.dev (accessed via token exchange)
- Mirror Anthropic OAuth credentials check in /providers endpoint so
  Settings UI correctly shows Anthropic as configured for OAuth users
- Stop committing search draft on every keystroke in ModelSelect;
  only commit on explicit selection or Enter
- Add PROVIDER_LABELS matching to ModelSelect filter so searching by
  display names like "Google Gemini" works

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/api/providers.rs (1)

350-352: ⚠️ Potential issue | 🔴 Critical

Missing definition for anthropic_oauth_configured variable.

The variable anthropic_oauth_configured is used on lines 426 and 452 but is never defined. It should be defined using crate::auth::credentials_path(&instance_dir) (Anthropic's OAuth credentials), not the OpenAI path. Currently, only openai_oauth_configured is defined on line 352.

🐛 Proposed fix
     let instance_dir = (**state.instance_dir.load()).clone();
     let secrets_store = state.secrets_store.load();
+    let anthropic_oauth_configured = crate::auth::credentials_path(&instance_dir).exists();
     let openai_oauth_configured = crate::openai_auth::credentials_path(&instance_dir).exists();
🧹 Nitpick comments (1)
interface/src/components/ModelSelect.tsx (1)

315-316: Consider caching flat indices in a Map for O(1) lookup.

flatList.indexOf(model) is O(n) and called for every model during render, making the overall complexity O(n²). With typical model counts this is likely fine, but if the list grows large, a precomputed Map would improve performance.

♻️ Optional optimization
  const flatList = useMemo(() => {
    const items: ModelInfo[] = [];
    for (const p of sortedProviders) {
      for (const m of grouped[p]) {
        items.push(m);
      }
    }
    return items;
  }, [sortedProviders, grouped]);

+ const flatIndexMap = useMemo(() => {
+   const map = new Map<string, number>();
+   flatList.forEach((m, i) => map.set(m.id, i));
+   return map;
+ }, [flatList]);

Then use flatIndexMap.get(model.id) ?? -1 instead of flatList.indexOf(model).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e14fea03-e9e2-452a-a917-738ed9299ecc

📥 Commits

Reviewing files that changed from the base of the PR and between fabbed3 and abbb3ea.

📒 Files selected for processing (3)
  • interface/src/components/ModelSelect.tsx
  • src/api/models.rs
  • src/api/providers.rs

The previous commit used anthropic_oauth_configured but never defined
it. Add the definition using crate::auth::credentials_path() to match
the same check used in configured_providers() in models.rs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@vsumner vsumner left a comment


Thanks for addressing the earlier backend issues. The provider-ID alignment for nvidia/ollama and the Anthropic OAuth parity between /models and /providers both look much better now. I took another pass on the current head and found two follow-ups that still seem worth fixing before merge:

  • ModelSelect now preserves draft search text correctly, but it also stops propagating free-form model IDs unless the user presses Enter first
  • GitHub Copilot is now surfaced, but the endpoint is still backed by a small hardcoded subset even though the upstream Copilot catalog is available and larger

I did not rerun the UI locally for this pass; this is based on the current diff, the consuming forms, and a live check of the models.dev catalog.

e.preventDefault();
if (highlightIndex >= 0 && highlightIndex < flatList.length) {
handleSelect(flatList[highlightIndex].id);
} else if (filter.trim()) {

@vsumner vsumner Mar 23, 2026


The draft-state change fixes the accidental overwrite path, but it also introduces a new behavior regression for manual model IDs. handleInputChange() no longer calls onChange(), so parent state is only updated on explicit selection or Enter. In both the provider modal and agent config, save/test still read the parent value, not the local draft. That means a user can type a custom model ID, click Test model or save without pressing Enter, and the request will still use the previous model. I’d either restore an explicit raw-ID commit path on blur/button actions, or thread the draft value up so the parent stays in sync for manual entry.
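One way to restore the manual-ID path without reintroducing the accidental-overwrite bug is a blur-time commit gated by a suppress flag, which is roughly what the follow-up commit describes with suppressBlurCommitRef. The function below is an illustrative model under that description, not the component's code:

```typescript
// Blur-commit model: a non-empty typed draft is committed when the input
// loses focus (e.g. the user clicks Save/Test without pressing Enter),
// unless a preceding Escape set the suppress flag, in which case the
// draft is dropped and the previous value is kept.
function blurCommit(value: string, draft: string, suppressed: boolean): string {
  if (suppressed) return value; // Escape already discarded the draft
  const trimmed = draft.trim();
  return trimmed.length > 0 ? trimmed : value;
}
```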

reasoning: false,
input_audio: false,
},
// GitHub Copilot - models accessed via Copilot token exchange, not on models.dev


This closes the empty-list bug for Copilot, but it still leaves /models?provider=github-copilot backed by a hardcoded five-model snapshot. The live models.dev catalog currently exposes a much larger github-copilot set, and direct_provider_mapping() still does not map that provider through the fetched catalog. Unless the intent is to deliberately curate a tested subset, this will drift quickly and still falls short of the PR goal of returning the provider's available models.

…, catalog-backed GitHub Copilot

- Add onBlur handler to ModelSelect so typed custom model IDs are
  committed when the input loses focus (e.g. clicking Save/Test
  without pressing Enter). Escape suppresses the commit via ref.
- Add github-copilot to direct_provider_mapping() so /models pulls
  from the live models.dev catalog (25 models) instead of the
  hardcoded 5-model snapshot that drifts over time.
- Remove hardcoded GitHub Copilot entries from extra_models() to
  avoid duplicates with catalog results.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 192-202: handleBlur currently commits the pre-selection filter
value on input blur causing duplicate onChange when handleSelect or the
chevron-close path programmatically blurs the input; mirror the Escape-key
pattern by setting suppressBlurCommitRef.current = true before calling
input.blur() in both the handleSelect() paths (where handleSelect() then
immediately blurs) and the chevron-close/menu-close handler so handleBlur
returns early and does not fire a second onChange(). Update references to
suppressBlurCommitRef, handleBlur, and handleSelect accordingly.

In `@src/api/models.rs`:
- Line 123: Mapping "ollama" and "ollama-cloud" to provider = "ollama" in
src/api/models.rs exposes providers before the runtime can route them; update
the routing logic in src/llm/routing.rs to recognize and handle the "ollama"
provider and model id prefix (e.g., "ollama/<model>"). Specifically, add a match
arm/branch in the provider matching logic (the function that routes on provider
strings and model id prefixes) to treat provider == "ollama" (and any
"ollama/..." ids) the same way as other supported providers so requests are
dispatched to the Ollama runtime handler.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bd08a631-ca02-4dee-a35a-f0f73a340007

📥 Commits

Reviewing files that changed from the base of the PR and between 9e0dd31 and 1b3bd60.

📒 Files selected for processing (2)
  • interface/src/components/ModelSelect.tsx
  • src/api/models.rs

TheDarkSkyXD and others added 2 commits March 25, 2026 14:07
- Set suppressBlurCommitRef before blur in Enter and chevron-close
  handlers to prevent handleBlur from firing a second onChange()
  after handleSelect already committed the value.
- Add "ollama" arm to defaults_for_provider() and provider_to_prefix()
  in routing.rs so ollama models are fully routable.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…shot

- Add minimax-cn and moonshotai-cn to direct_provider_mapping() so
  their models come from the live models.dev catalog.
- Remove extra_models() entirely — all providers now use the catalog.
  No more hardcoded model lists that drift over time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@TheDarkSkyXD
Contributor Author

Fixes applied in latest commits

1b3bd60 — Address vsumner's second review

  • ModelSelect onBlur commit: Added onBlur handler so typed custom model IDs are committed when the input loses focus (e.g. clicking Save/Test without pressing Enter). Uses a suppressBlurCommitRef to prevent Escape from accidentally committing draft text.
  • GitHub Copilot catalog: Added "github-copilot" to direct_provider_mapping() so /models?provider=github-copilot pulls from the live models.dev catalog (25 models) instead of the old hardcoded 5-model snapshot.

f8b1429 — Address CodeRabbit review

  • Double onChange fix (Critical): Set suppressBlurCommitRef.current = true before blur() in both the Enter key handler and chevron-close button. This prevents handleBlur from firing a second onChange() after handleSelect already committed the value.
  • Ollama routing: Added "ollama" arm to defaults_for_provider() (default: ollama/llama3) and provider_to_prefix() in routing.rs so ollama models are fully routable.
  • GitHub Copilot config key: CodeRabbit's suggestion to use has_key("n", "n") was incorrect — the actual config uses github_copilot_key / GITHUB_COPILOT_API_KEY per config/load.rs and config/types.rs. No change needed.

9d33013 — Remove all hardcoded models

  • Added minimax-cn and moonshotai-cn to direct_provider_mapping() so their models come from the live catalog.
  • Removed extra_models() entirely — all providers now use the models.dev catalog. No more hardcoded model lists that drift over time.

Provider coverage after these fixes

| Provider | Catalog mapping | Models |
|---|---|---|
| GitHub Copilot | "github-copilot" | 25 (was 5 hardcoded) |
| NVIDIA | "nvidia" | ~75 (was 0) |
| Ollama | "ollama" \| "ollama-cloud" | live catalog (was 0) |
| MiniMax CN | "minimax-cn" | 6 (was 1 hardcoded) |
| Moonshot | "moonshotai" \| "moonshotai-cn" | 6 Kimi models (was 1 deprecated) |

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
interface/src/components/ModelSelect.tsx (2)

17-37: ⚠️ Potential issue | 🟡 Minor

Add nvidia to the provider label/order tables.

This PR makes /models surface nvidia, but PROVIDER_LABELS and providerOrder still omit it. The dropdown will therefore render the raw nvidia slug in headings/badges and sort that provider after every known entry, so the new backend support feels only partially wired through the UI.

🧭 Suggested follow-up
   zhipu: "Z.ai (GLM)",
   ollama: "Ollama",
+  nvidia: "NVIDIA",
   "opencode-zen": "OpenCode Zen",
   "opencode-go": "OpenCode Go",
   minimax: "MiniMax",
   "minimax-cn": "MiniMax CN",
   "github-copilot": "GitHub Copilot",
@@
   "openai",
   "openai-chatgpt",
   "github-copilot",
   "ollama",
+  "nvidia",
   "deepseek",
   "xai",

Also applies to: 45-65


249-289: ⚠️ Potential issue | 🟠 Major

Expose combobox/listbox semantics for the custom keyboard navigation.

The new ArrowUp/ArrowDown/highlight behavior is implemented visually, but the input and popup never advertise combobox/listbox state to assistive tech. Screen readers will not know that the menu opened or which option is currently highlighted, so the control remains effectively inaccessible despite the keyboard handling.

Also applies to: 308-392

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/components/ModelSelect.tsx` around lines 249 - 289, The input
and popup lack ARIA combobox/listbox semantics so screen readers can't detect
open state or the highlighted option; update the Input and the popup/list
rendering to expose correct attributes: give the input role="combobox",
aria-haspopup="listbox", aria-expanded={open}, aria-controls pointing to the
listbox id, and set aria-activedescendant to the id of the currently highlighted
option (derived from highlightIndex); ensure the popup container uses
role="listbox" with a stable id and each option element uses role="option", a
stable id, and aria-selected={highlightIndex === index}; keep updates in the
same components/functions (Input usage, inputRef handlers, the popup render code
that uses setOpen/setHighlightIndex/highlightIndex/filter) so keyboard
navigation and assistive tech reflect the visual state.
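
A hedged sketch of the ARIA wiring the comment asks for, expressed as plain prop objects rather than JSX. The ids and the `open`/`highlightIndex` names mirror the state described in the review; the actual prop and state names in ModelSelect.tsx may differ:

```typescript
// Illustrative ids; the real component should use stable ids of its own.
const LISTBOX_ID = "model-select-listbox";
const optionId = (index: number) => `model-option-${index}`;

// Attributes to spread onto the text input.
function comboboxProps(open: boolean, highlightIndex: number) {
  return {
    role: "combobox",
    "aria-haspopup": "listbox",
    "aria-expanded": open,
    "aria-controls": LISTBOX_ID,
    // Points assistive tech at the visually highlighted option.
    "aria-activedescendant":
      open && highlightIndex >= 0 ? optionId(highlightIndex) : undefined,
  };
}

// Attributes to spread onto each option row in the popup (the popup
// container itself gets role="listbox" and id={LISTBOX_ID}).
function optionProps(index: number, highlightIndex: number) {
  return {
    id: optionId(index),
    role: "option",
    "aria-selected": index === highlightIndex,
  };
}
```

With this shape, ArrowUp/ArrowDown only needs to update `highlightIndex` and the announced active descendant follows automatically.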

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 274a5bb3-c02a-49de-8a9c-3bbd2bdc07d8

📥 Commits

Reviewing files that changed from the base of the PR and between 1b3bd60 and f8b1429.

📒 Files selected for processing (2)
  • interface/src/components/ModelSelect.tsx
  • src/llm/routing.rs

Comment on lines +192 to 202
const handleBlur = () => {
  if (suppressBlurCommitRef.current) {
    suppressBlurCommitRef.current = false;
    return;
  }
  // Commit typed custom model ID when input loses focus (e.g. clicking Save/Test)
  const trimmed = filter.trim();
  if (trimmed && trimmed !== value) {
    onChange(trimmed);
  }
};

⚠️ Potential issue | 🟠 Major

Don't persist partial search text as the model ID.

Lines 199-200 and 236-238 commit filter.trim() whenever nothing is highlighted. That means typing haiku or anthropic just to search, then pressing Enter or tabbing away, overwrites the saved value with that partial query even if the dropdown still contains real model matches. The normal blur path also leaves the popup state intact, so tabbing away can strand stale results on screen. Reserve the raw-ID path for an exact model ID or an explicit custom-ID mode; otherwise keep the previous value and just close/reset the menu.

Also applies to: 231-239

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/components/ModelSelect.tsx` around lines 192-202, the current
blur/commit logic in handleBlur (and the similar commit path around the
Enter/tab handler) uses filter.trim() to persist whatever the user typed even
when it’s only a search query; update handleBlur and the Enter/tab commit path
to only call onChange(trimmed) when the trimmed filter exactly matches a known
model ID or when the component is in explicit custom-ID mode, otherwise keep the
existing value and just close/reset the menu; use suppressBlurCommitRef as-is to
prevent duplicate commits but ensure you clear/close the popup state (reset
filter/dropdown) without persisting when the selection was only a search; refer
to handleBlur, suppressBlurCommitRef, filter, value, onChange and the Enter/tab
commit handler to make these changes.
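
A minimal sketch of the guarded commit the review suggests: persist the typed text only when it exactly matches a known model ID, or when the control is in an explicit custom-ID mode. `knownIds`, `allowCustomId`, and the callback names here are illustrative, not the component's real API:

```typescript
// Hypothetical extraction of the blur/Enter commit path from ModelSelect.tsx.
function commitOnBlur(
  filter: string,
  value: string,
  knownIds: Set<string>,
  allowCustomId: boolean,
  onChange: (id: string) => void,
  closeMenu: () => void,
) {
  const trimmed = filter.trim();
  const isExactMatch = knownIds.has(trimmed);
  if (trimmed && trimmed !== value && (isExactMatch || allowCustomId)) {
    onChange(trimmed); // persist only real IDs or explicit custom entries
  }
  closeMenu(); // always reset popup state so stale results don't linger
}
```

Typing `haiku` just to search and then tabbing away now leaves the saved value untouched and closes the menu, while pasting a full model ID still commits it.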

            ..RoutingConfig::default()
        }
    }
    "ollama" => RoutingConfig::for_model("ollama/llama3".into()),

⚠️ Potential issue | 🟠 Major

Don't hardcode a local Ollama model as the default route.

Line 394 assumes every Ollama instance has ollama/llama3, but Ollama model availability is host-specific. defaults_for_provider() is consumed directly by src/config/providers.rs:280-295 and src/config/onboarding.rs:266-271, so this value can be persisted into config and then fail every request for users who have Ollama enabled but have not pulled that exact model. Prefer deriving the default from the discovered /models catalog, or leave Ollama routing unset until the user picks an installed model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/llm/routing.rs` at line 394, the mapping that hardcodes "ollama" =>
RoutingConfig::for_model("ollama/llama3".into()) should be removed: instead make
defaults_for_provider() return no default for Ollama (i.e., leave routing unset)
or derive the default dynamically from the Ollama /models catalog before
persisting; update the code that builds the provider defaults (the branch
producing RoutingConfig::for_model for "ollama") to either query the discovered
model list and pick a present model or return None/empty routing so users must
select an installed Ollama model; touch RoutingConfig::for_model usage and
defaults_for_provider() to ensure downstream calls in src/config/providers.rs
and src/config/onboarding.rs handle the unset case safely.
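
A hedged sketch of the catalog-derived alternative: return `None` when no installed Ollama model is known, so callers must let the user pick one instead of persisting a model that may not be pulled. `RoutingConfig` and the catalog lookup below are simplified stand-ins for the real types in src/llm/routing.rs:

```rust
// Simplified stand-in for the RoutingConfig in src/llm/routing.rs.
#[derive(Debug, PartialEq)]
struct RoutingConfig {
    model: String,
}

impl RoutingConfig {
    fn for_model(model: String) -> Self {
        RoutingConfig { model }
    }
}

/// Derive the Ollama default from the discovered model catalog instead of
/// hardcoding "ollama/llama3". Returns None when nothing is installed, so
/// downstream config code must handle the unset case.
fn ollama_default(discovered_models: &[String]) -> Option<RoutingConfig> {
    discovered_models
        .first()
        .map(|m| RoutingConfig::for_model(format!("ollama/{m}")))
}

fn main() {
    // No models pulled -> no default persisted.
    assert_eq!(ollama_default(&[]), None);
    // A discovered model becomes the default route.
    let cfg = ollama_default(&["llama3".to_string()]).unwrap();
    assert_eq!(cfg.model, "ollama/llama3");
}
```

The `None` branch is the important part: it forces src/config/providers.rs and src/config/onboarding.rs to treat Ollama routing as unset rather than writing a possibly-missing model into config.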

