feat: searchable model dropdown + fix missing providers in /models#474
TheDarkSkyXD wants to merge 8 commits into spacedriveapp:main from
Conversation
… missing providers

Upgrade ModelSelect to a proper searchable dropdown with chevron indicator, keyboard navigation (arrow keys + enter), selected model info badge, and loading/error states. Fix configured_providers in models.rs to detect Anthropic OAuth, Ollama, Nvidia, and GitHub Copilot so the /models endpoint returns models for all connected providers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Reviews paused
It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Added keyboard navigation, highlighting, scroll-into-view, improved open/close/commit semantics, and explicit loading/error/empty states to the ModelSelect UI. Backend provider discovery and routing were extended to recognize Anthropic OAuth, Ollama, Nvidia, and GitHub Copilot provider inputs.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
interface/src/components/ModelSelect.tsx (1)
143-152: ⚠️ Potential issue | 🟠 Major
Don't commit the search draft when the popup is dismissed.

`handleInputChange()` still pushes every keystroke through `onChange()`, so the new outside-click / Escape / chevron-close paths persist whatever the user typed even when they were only filtering. Typing a few characters to search and then dismissing the popup now overwrites the previously selected model with a partial ID. Keep `filter` as draft state and only call `onChange()` on explicit selection or a deliberate raw-ID commit path.

Also applies to: 175-182, 191-197, 249-255
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/components/ModelSelect.tsx` around lines 143 - 152, The component currently commits every keystroke via onChange in handleInputChange so outside-click / Escape / chevron-close handlers (the containerRef outside-click handler, the Escape handler, and the chevron-close handler referenced in the comment) end up persisting a partial ID; change the behavior so filter is treated as draft state: stop calling onChange from handleInputChange and only update the local filter state and highlight state there, and in the outside-click/Escape/chevron-close code (where you call setOpen(false), setFilter(""), setHighlightIndex(-1)) ensure you do NOT call onChange — instead only call onChange in the explicit selection handler (e.g., the item click/selection function) or in the deliberate "commit raw ID" path; apply the same change to the other mentioned handlers to prevent accidental commits.
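The draft-state separation the comment asks for can be sketched as plain state transitions. This is an illustrative sketch, not the component's actual code; names like `typeInFilter`, `dismiss`, and `select` are hypothetical stand-ins for the handlers mentioned above.

```typescript
type ChangeHandler = (value: string) => void;

interface SelectState {
  committed: string; // the value the parent sees via onChange
  draft: string;     // local filter text while the popup is open
}

// Typing updates only the local draft; onChange is never called here.
function typeInFilter(state: SelectState, text: string): SelectState {
  return { ...state, draft: text };
}

// Outside click / Escape / chevron-close discard the draft entirely.
function dismiss(state: SelectState): SelectState {
  return { ...state, draft: "" };
}

// Only an explicit selection commits and notifies the parent.
function select(
  state: SelectState,
  modelId: string,
  onChange: ChangeHandler,
): SelectState {
  onChange(modelId);
  return { committed: modelId, draft: "" };
}
```

With this shape, typing a few characters to search and then dismissing the popup leaves `committed` untouched, which is exactly the behavior the review asks for.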
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 269-274: The visible provider badges use PROVIDER_LABELS but the
search/filter uses the raw m.provider, so include the human-friendly label when
matching: change any filter/matching logic that checks m.provider (and where
selectedModel is compared to value) to also compute providerLabel =
PROVIDER_LABELS[m.provider] ?? m.provider and test the query against
providerLabel (in addition to m.provider and model name/id). Update both the
dropdown filter and the selected-model display matching code (references:
PROVIDER_LABELS, selectedModel, value and the model list filtering code where
m.provider is used) so searches for labels like "Google Gemini" or "Z.ai (GLM)"
will match.
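A minimal sketch of the label-aware matching described above. The `PROVIDER_LABELS` entries here are illustrative examples, not the component's full table, and `matchesQuery` is a hypothetical helper name.

```typescript
const PROVIDER_LABELS: Record<string, string> = {
  google: "Google Gemini",
  zhipu: "Z.ai (GLM)",
};

interface ModelInfo {
  id: string;
  name: string;
  provider: string;
}

// Match the query against id, name, raw provider slug, and the
// human-friendly provider label, so "Google Gemini" finds google models.
function matchesQuery(m: ModelInfo, query: string): boolean {
  const q = query.trim().toLowerCase();
  if (!q) return true;
  const providerLabel = PROVIDER_LABELS[m.provider] ?? m.provider;
  return [m.id, m.name, m.provider, providerLabel].some((s) =>
    s.toLowerCase().includes(q),
  );
}
```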
In `@src/api/models.rs`:
- Around line 341-345: The provider presence checks that push names like
"ollama", "nvidia", and "github-copilot" are misleading because get_models()
filters by ModelInfo.provider but
fetch_models_dev()/direct_provider_mapping()/extra_models() currently emit
different provider IDs; update either direct_provider_mapping()/extra_models()
to include entries for "ollama", "nvidia", and "github-copilot" (so the catalog
emits matching provider IDs) or change the has_key block to push the provider
IDs that the catalog actually emits (keeping ModelInfo.provider alignment with
get_models()); locate the checks around has_key("ollama_base_url", ...),
has_key("ollama_key", ...), and the functions fetch_models_dev(),
direct_provider_mapping(), and extra_models() and make the provider IDs
consistent across them.
- Around line 367-369: The provider check is using the wrong config key
("github_copilot_key"/"GITHUB_COPILOT_API_KEY") so valid Copilot configs are
skipped; update the has_key call in configured_providers (the block that
currently does providers.push("github-copilot")) to check the canonical config
key used by the config loader (the llm provider key wired in src/config/load.rs)
and its corresponding env var instead of
"github_copilot_key"/"GITHUB_COPILOT_API_KEY" so configured Copilot instances
are detected correctly.
---
Outside diff comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 143-152: The component currently commits every keystroke via
onChange in handleInputChange so outside-click / Escape / chevron-close handlers
(the containerRef outside-click handler, the Escape handler, and the
chevron-close handler referenced in the comment) end up persisting a partial ID;
change the behavior so filter is treated as draft state: stop calling onChange
from handleInputChange and only update the local filter state and highlight
state there, and in the outside-click/Escape/chevron-close code (where you call
setOpen(false), setFilter(""), setHighlightIndex(-1)) ensure you do NOT call
onChange — instead only call onChange in the explicit selection handler (e.g.,
the item click/selection function) or in the deliberate "commit raw ID" path;
apply the same change to the other mentioned handlers to prevent accidental
commits.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3909c248-8c41-40e1-8ee2-75e709ce0b9f
📒 Files selected for processing (2)
interface/src/components/ModelSelect.tsx, src/api/models.rs
```rust
if has_key("github_copilot_key", "GITHUB_COPILOT_API_KEY") {
    providers.push("github-copilot");
}
```
Use the canonical GitHub Copilot config key here.
src/config/load.rs:1-50 wires this provider through llm.n / env n, not github_copilot_key / GITHUB_COPILOT_API_KEY. With the current check, valid Copilot configs will still be skipped by configured_providers().
🐛 Proposed fix
```diff
- if has_key("github_copilot_key", "GITHUB_COPILOT_API_KEY") {
+ if has_key("n", "n") {
      providers.push("github-copilot");
  }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if has_key("n", "n") {
    providers.push("github-copilot");
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/models.rs` around lines 367 - 369, The provider check is using the
wrong config key ("github_copilot_key"/"GITHUB_COPILOT_API_KEY") so valid
Copilot configs are skipped; update the has_key call in configured_providers
(the block that currently does providers.push("github-copilot")) to check the
canonical config key used by the config loader (the llm provider key wired in
src/config/load.rs) and its corresponding env var instead of
"github_copilot_key"/"GITHUB_COPILOT_API_KEY" so configured Copilot instances
are detected correctly.
vsumner left a comment
Nice direction overall. The searchable dropdown is a real UX improvement, and wiring Anthropic OAuth into /models is the right kind of fix. I left two follow-ups that look important before merge:
- the new provider checks still do not line up with the provider IDs emitted by the model catalog, so /models can stay empty for some of the newly added providers
- Anthropic OAuth is now recognized in /models, but /providers still will not mark Anthropic configured, so the Settings UI can drift from the actual selector behavior
Suggested verification after a fix: run the provider/status path and the /models path together for Anthropic OAuth plus at least one of nvidia or github-copilot, so both endpoints agree on what the user can actually select.
I did not run the interface locally for this pass; this review is based on the current diff and the provider catalog/runtime paths.
```rust
if has_key("ollama_base_url", "OLLAMA_BASE_URL")
    || has_key("ollama_key", "OLLAMA_API_KEY")
{
    providers.push("ollama");
```
Nice catch adding these provider checks. One thing to tighten up before merge: get_models() filters by ModelInfo.provider, but fetch_models_dev() still only emits direct provider IDs for entries covered by direct_provider_mapping(). Right now nvidia and github-copilot are missing there, and the live catalog exposes ollama-cloud rather than ollama, so these blocks can still mark a provider configured while /models returns an empty list. I’d either add matching mappings or extra models for these providers, or push the provider IDs the catalog already emits, so the filter and config detection stay aligned.
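The alignment problem this comment describes can be made mechanically checkable. Below is a hypothetical sketch (in TypeScript for illustration; `unroutableProviders` is not an existing function in the codebase): any configured provider ID the catalog never emits means `/models?provider=<id>` would come back empty despite the provider being marked configured.

```typescript
// Return configured provider IDs that the model catalog never emits.
function unroutableProviders(
  configured: string[],
  catalogProviderIds: Set<string>,
): string[] {
  return configured.filter((id) => !catalogProviderIds.has(id));
}
```

For example, if configured_providers() pushes `ollama` while the live catalog only emits `ollama-cloud`, this check flags `ollama` as unroutable.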
```rust
if !providers.contains(&"anthropic") {
    if config_path
        .parent()
        .is_some_and(|instance_dir| crate::auth::credentials_path(instance_dir).exists())
```
This moves Anthropic OAuth support in the right direction. The follow-up I’d make before merge is to mirror the same credentials-file check in /providers as well. Settings uses that endpoint to decide whether Anthropic is configured, so with the current diff an OAuth user can get Anthropic models in the selector while the Providers screen still shows Anthropic as unconfigured.
…rs, improve ModelSelect

- Add nvidia and ollama/ollama-cloud to direct_provider_mapping() so models.dev catalog entries get correct provider IDs matching configured_providers()
- Add GitHub Copilot models to extra_models() since they are not in models.dev (accessed via token exchange)
- Mirror Anthropic OAuth credentials check in /providers endpoint so Settings UI correctly shows Anthropic as configured for OAuth users
- Stop committing search draft on every keystroke in ModelSelect; only commit on explicit selection or Enter
- Add PROVIDER_LABELS matching to ModelSelect filter so searching by display names like "Google Gemini" works

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/api/providers.rs (1)
350-352: ⚠️ Potential issue | 🔴 Critical
Missing definition for the `anthropic_oauth_configured` variable.

The variable `anthropic_oauth_configured` is used on lines 426 and 452 but is never defined. It should be defined using `crate::auth::credentials_path(&instance_dir)` (Anthropic's OAuth credentials), not the OpenAI path. Currently, only `openai_oauth_configured` is defined on line 352.

🐛 Proposed fix

```diff
  let instance_dir = (**state.instance_dir.load()).clone();
  let secrets_store = state.secrets_store.load();
+ let anthropic_oauth_configured = crate::auth::credentials_path(&instance_dir).exists();
  let openai_oauth_configured = crate::openai_auth::credentials_path(&instance_dir).exists();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/providers.rs` around lines 350 - 352, Define the missing boolean variable anthropic_oauth_configured by checking Anthropic's OAuth credentials path similar to openai_oauth_configured: use crate::auth::credentials_path(&instance_dir).exists() and assign it to anthropic_oauth_configured so the later uses of anthropic_oauth_configured (where openai_oauth_configured is already defined) compile and correctly reflect Anthropic OAuth setup.
🧹 Nitpick comments (1)
interface/src/components/ModelSelect.tsx (1)
315-316: Consider caching flat indices in a Map for O(1) lookup.

`flatList.indexOf(model)` is O(n) and called for every model during render, making the overall complexity O(n²). With typical model counts this is likely fine, but if the list grows large, a precomputed Map would improve performance.

♻️ Optional optimization

```diff
  const flatList = useMemo(() => {
    const items: ModelInfo[] = [];
    for (const p of sortedProviders) {
      for (const m of grouped[p]) {
        items.push(m);
      }
    }
    return items;
  }, [sortedProviders, grouped]);
+ const flatIndexMap = useMemo(() => {
+   const map = new Map<string, number>();
+   flatList.forEach((m, i) => map.set(m.id, i));
+   return map;
+ }, [flatList]);
```

Then use `flatIndexMap.get(model.id) ?? -1` instead of `flatList.indexOf(model)`.
Verify each finding against the current code and only fix it if needed. In `@interface/src/components/ModelSelect.tsx` around lines 315 - 316, The render currently calls flatList.indexOf(model) inside the grouped[prov].map in ModelSelect.tsx which is O(n) per item; precompute a Map (e.g., flatIndexMap) keyed by model.id to index (built once before rendering/group iteration) and replace flatList.indexOf(model) with flatIndexMap.get(model.id) ?? -1; ensure the Map is updated whenever flatList changes (build it near where flatList is derived) and reference flatIndexMap in the mapping code to get O(1) lookups.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/api/providers.rs`:
- Around line 350-352: Define the missing boolean variable
anthropic_oauth_configured by checking Anthropic's OAuth credentials path
similar to openai_oauth_configured: use
crate::auth::credentials_path(&instance_dir).exists() and assign it to
anthropic_oauth_configured so the later uses of anthropic_oauth_configured
(where openai_oauth_configured is already defined) compile and correctly reflect
Anthropic OAuth setup.
---
Nitpick comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 315-316: The render currently calls flatList.indexOf(model) inside
the grouped[prov].map in ModelSelect.tsx which is O(n) per item; precompute a
Map (e.g., flatIndexMap) keyed by model.id to index (built once before
rendering/group iteration) and replace flatList.indexOf(model) with
flatIndexMap.get(model.id) ?? -1; ensure the Map is updated whenever flatList
changes (build it near where flatList is derived) and reference flatIndexMap in
the mapping code to get O(1) lookups.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e14fea03-e9e2-452a-a917-738ed9299ecc
📒 Files selected for processing (3)
interface/src/components/ModelSelect.tsx, src/api/models.rs, src/api/providers.rs
The previous commit used anthropic_oauth_configured but never defined it. Add the definition using crate::auth::credentials_path() to match the same check used in configured_providers() in models.rs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
vsumner left a comment
Thanks for addressing the earlier backend issues. The provider-ID alignment for nvidia/ollama and the Anthropic OAuth parity between /models and /providers both look much better now. I took another pass on the current head and found two follow-ups that still seem worth fixing before merge:
- ModelSelect now preserves draft search text correctly, but it also stops propagating free-form model IDs unless the user presses Enter first
- GitHub Copilot is now surfaced, but the endpoint is still backed by a small hardcoded subset even though the upstream Copilot catalog is available and larger
I did not rerun the UI locally for this pass; this is based on the current diff, the consuming forms, and a live check of the models.dev catalog.
```tsx
e.preventDefault();
if (highlightIndex >= 0 && highlightIndex < flatList.length) {
  handleSelect(flatList[highlightIndex].id);
} else if (filter.trim()) {
```
The draft-state change fixes the accidental overwrite path, but it also introduces a new behavior regression for manual model IDs. handleInputChange() no longer calls onChange(), so parent state is only updated on explicit selection or Enter. In both the provider modal and agent config, save/test still read the parent value, not the local draft. That means a user can type a custom model ID, click Test model or save without pressing Enter, and the request will still use the previous model. I’d either restore an explicit raw-ID commit path on blur/button actions, or thread the draft value up so the parent stays in sync for manual entry.
src/api/models.rs (outdated)
```rust
        reasoning: false,
        input_audio: false,
    },
    // GitHub Copilot - models accessed via Copilot token exchange, not on models.dev
```
This closes the empty-list bug for Copilot, but it still leaves /models?provider=github-copilot backed by a hardcoded five-model snapshot. The live models.dev catalog currently exposes a much larger github-copilot set, and direct_provider_mapping() still does not map that provider through the fetched catalog. Unless the intent is to deliberately curate a tested subset, this will drift quickly and still falls short of the PR goal of returning the provider's available models.
…, catalog-backed GitHub Copilot

- Add onBlur handler to ModelSelect so typed custom model IDs are committed when the input loses focus (e.g. clicking Save/Test without pressing Enter). Escape suppresses the commit via ref.
- Add github-copilot to direct_provider_mapping() so /models pulls from the live models.dev catalog (25 models) instead of the hardcoded 5-model snapshot that drifts over time.
- Remove hardcoded GitHub Copilot entries from extra_models() to avoid duplicates with catalog results.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 192-202: handleBlur currently commits the pre-selection filter
value on input blur causing duplicate onChange when handleSelect or the
chevron-close path programmatically blurs the input; mirror the Escape-key
pattern by setting suppressBlurCommitRef.current = true before calling
input.blur() in both the handleSelect() paths (where handleSelect() then
immediately blurs) and the chevron-close/menu-close handler so handleBlur
returns early and does not fire a second onChange(). Update references to
suppressBlurCommitRef, handleBlur, and handleSelect accordingly.
In `@src/api/models.rs`:
- Line 123: Mapping "ollama" and "ollama-cloud" to provider = "ollama" in
src/api/models.rs exposes providers before the runtime can route them; update
the routing logic in src/llm/routing.rs to recognize and handle the "ollama"
provider and model id prefix (e.g., "ollama/<model>"). Specifically, add a match
arm/branch in the provider matching logic (the function that routes on provider
strings and model id prefixes) to treat provider == "ollama" (and any
"ollama/..." ids) the same way as other supported providers so requests are
dispatched to the Ollama runtime handler.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bd08a631-ca02-4dee-a35a-f0f73a340007
📒 Files selected for processing (2)
interface/src/components/ModelSelect.tsx, src/api/models.rs
- Set suppressBlurCommitRef before blur in Enter and chevron-close handlers to prevent handleBlur from firing a second onChange() after handleSelect already committed the value.
- Add "ollama" arm to defaults_for_provider() and provider_to_prefix() in routing.rs so ollama models are fully routable.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…shot

- Add minimax-cn and moonshotai-cn to direct_provider_mapping() so their models come from the live models.dev catalog.
- Remove extra_models() entirely — all providers now use the catalog. No more hardcoded model lists that drift over time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fixes applied in latest commits

| Provider | Catalog mapping | Models |
|---|---|---|
| GitHub Copilot | "github-copilot" | 25 (was 5 hardcoded) |
| NVIDIA | "nvidia" | ~75 (was 0) |
| Ollama | "ollama" / "ollama-cloud" | live catalog (was 0) |
| MiniMax CN | "minimax-cn" | 6 (was 1 hardcoded) |
| Moonshot | "moonshotai" / "moonshotai-cn" | 6 Kimi models (was 1 deprecated) |

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
interface/src/components/ModelSelect.tsx (2)
17-37: ⚠️ Potential issue | 🟡 Minor
Add `nvidia` to the provider label/order tables.

This PR makes /models surface nvidia, but PROVIDER_LABELS and providerOrder still omit it. The dropdown will therefore render the raw nvidia slug in headings/badges and sort that provider after every known entry, so the new backend support feels only partially wired through the UI.

🧭 Suggested follow-up

```diff
  zhipu: "Z.ai (GLM)",
  ollama: "Ollama",
+ nvidia: "NVIDIA",
  "opencode-zen": "OpenCode Zen",
  "opencode-go": "OpenCode Go",
  minimax: "MiniMax",
  "minimax-cn": "MiniMax CN",
  "github-copilot": "GitHub Copilot",
@@
  "openai",
  "openai-chatgpt",
  "github-copilot",
  "ollama",
+ "nvidia",
  "deepseek",
  "xai",
```

Also applies to: 45-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/components/ModelSelect.tsx` around lines 17 - 37, PROVIDER_LABELS and providerOrder are missing an entry for the new "nvidia" provider so the UI shows the raw slug and sorts it last; add "nvidia": "NVIDIA" (or desired display text) to the PROVIDER_LABELS Record and insert "nvidia" into the providerOrder array in the same logical position you want it to appear in lists/dropdowns (match other provider naming conventions and ensure any components referencing PROVIDER_LABELS or providerOrder, e.g., ModelSelect, will now display the proper label and order).
249-289: ⚠️ Potential issue | 🟠 Major
Expose combobox/listbox semantics for the custom keyboard navigation.
The new ArrowUp/ArrowDown/highlight behavior is implemented visually, but the input and popup never advertise combobox/listbox state to assistive tech. Screen readers will not know that the menu opened or which option is currently highlighted, so the control remains effectively inaccessible despite the keyboard handling.
Also applies to: 308-392
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/components/ModelSelect.tsx` around lines 249 - 289, The input and popup lack ARIA combobox/listbox semantics so screen readers can't detect open state or the highlighted option; update the Input and the popup/list rendering to expose correct attributes: give the input role="combobox", aria-haspopup="listbox", aria-expanded={open}, aria-controls pointing to the listbox id, and set aria-activedescendant to the id of the currently highlighted option (derived from highlightIndex); ensure the popup container uses role="listbox" with a stable id and each option element uses role="option", a stable id, and aria-selected={highlightIndex === index}; keep updates in the same components/functions (Input usage, inputRef handlers, the popup render code that uses setOpen/setHighlightIndex/highlightIndex/filter) so keyboard navigation and assistive tech reflect the visual state.
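The ARIA wiring the comment describes can be sketched as plain attribute computations, decoupled from React. The attribute names follow the WAI-ARIA combobox pattern; the ids such as `model-listbox` and `model-option-*` are hypothetical, not identifiers from the component.

```typescript
// Props for the text input acting as the combobox.
function comboboxProps(open: boolean, highlightIndex: number) {
  return {
    role: "combobox",
    "aria-haspopup": "listbox",
    "aria-expanded": open,
    "aria-controls": "model-listbox",
    "aria-activedescendant":
      open && highlightIndex >= 0 ? `model-option-${highlightIndex}` : undefined,
  };
}

// Props for each option row inside the role="listbox" popup.
function optionProps(index: number, highlightIndex: number) {
  return {
    id: `model-option-${index}`,
    role: "option",
    "aria-selected": index === highlightIndex,
  };
}
```

Spreading these onto the input and option elements lets screen readers track the open state and the highlighted option as the arrow keys move it.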
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 192-202: The current blur/commit logic in handleBlur (and the
similar commit path around the Enter/tab handler) uses filter.trim() to persist
whatever the user typed even when it’s only a search query; update handleBlur
and the Enter/tab commit path to only call onChange(trimmed) when the trimmed
filter exactly matches a known model ID or when the component is in explicit
custom-ID mode, otherwise keep the existing value and just close/reset the menu;
use suppressBlurCommitRef as-is to prevent duplicate commits but ensure you
clear/close the popup state (reset filter/dropdown) without persisting when the
selection was only a search; refer to handleBlur, suppressBlurCommitRef, filter,
value, onChange and the Enter/tab commit handler to make these changes.
In `@src/llm/routing.rs`:
- Line 394: The mapping that hardcodes "ollama" =>
RoutingConfig::for_model("ollama/llama3".into()) should be removed: instead make
defaults_for_provider() return no default for Ollama (i.e., leave routing unset)
or derive the default dynamically from the Ollama /models catalog before
persisting; update the code that builds the provider defaults (the branch
producing RoutingConfig::for_model for "ollama") to either query the discovered
model list and pick a present model or return None/empty routing so users must
select an installed Ollama model; touch RoutingConfig::for_model usage and
defaults_for_provider() to ensure downstream calls in src/config/providers.rs
and src/config/onboarding.rs handle the unset case safely.
---
Outside diff comments:
In `@interface/src/components/ModelSelect.tsx`:
- Around line 17-37: PROVIDER_LABELS and providerOrder are missing an entry for
the new "nvidia" provider so the UI shows the raw slug and sorts it last; add
"nvidia": "NVIDIA" (or desired display text) to the PROVIDER_LABELS Record and
insert "nvidia" into the providerOrder array in the same logical position you
want it to appear in lists/dropdowns (match other provider naming conventions
and ensure any components referencing PROVIDER_LABELS or providerOrder, e.g.,
ModelSelect, will now display the proper label and order).
- Around line 249-289: The input and popup lack ARIA combobox/listbox semantics
so screen readers can't detect open state or the highlighted option; update the
Input and the popup/list rendering to expose correct attributes: give the input
role="combobox", aria-haspopup="listbox", aria-expanded={open}, aria-controls
pointing to the listbox id, and set aria-activedescendant to the id of the
currently highlighted option (derived from highlightIndex); ensure the popup
container uses role="listbox" with a stable id and each option element uses
role="option", a stable id, and aria-selected={highlightIndex === index}; keep
updates in the same components/functions (Input usage, inputRef handlers, the
popup render code that uses setOpen/setHighlightIndex/highlightIndex/filter) so
keyboard navigation and assistive tech reflect the visual state.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 274a5bb3-c02a-49de-8a9c-3bbd2bdc07d8
📒 Files selected for processing (2)
interface/src/components/ModelSelect.tsx, src/llm/routing.rs
```tsx
const handleBlur = () => {
  if (suppressBlurCommitRef.current) {
    suppressBlurCommitRef.current = false;
    return;
  }
  // Commit typed custom model ID when input loses focus (e.g. clicking Save/Test)
  const trimmed = filter.trim();
  if (trimmed && trimmed !== value) {
    onChange(trimmed);
  }
};
```
Don't persist partial search text as the model ID.
Lines 199-200 and 236-238 commit filter.trim() whenever nothing is highlighted. That means typing haiku or anthropic just to search, then pressing Enter or tabbing away, overwrites the saved value with that partial query even if the dropdown still contains real model matches. The normal blur path also leaves the popup state intact, so tabbing away can strand stale results on screen. Reserve the raw-ID path for an exact model ID or an explicit custom-ID mode; otherwise keep the previous value and just close/reset the menu.
Also applies to: 231-239
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/components/ModelSelect.tsx` around lines 192 - 202, The current
blur/commit logic in handleBlur (and the similar commit path around the
Enter/tab handler) uses filter.trim() to persist whatever the user typed even
when it’s only a search query; update handleBlur and the Enter/tab commit path
to only call onChange(trimmed) when the trimmed filter exactly matches a known
model ID or when the component is in explicit custom-ID mode, otherwise keep the
existing value and just close/reset the menu; use suppressBlurCommitRef as-is to
prevent duplicate commits but ensure you clear/close the popup state (reset
filter/dropdown) without persisting when the selection was only a search; refer
to handleBlur, suppressBlurCommitRef, filter, value, onChange and the Enter/tab
commit handler to make these changes.
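The commit rule the reviewer proposes can be sketched as a pure function (a hypothetical helper; the real component would call something like this from its blur and Enter handlers):

```typescript
// On blur, persist the typed text only when it is exactly a known model ID;
// otherwise treat it as leftover search text and keep the prior selection.
function commitOnBlur(
  typed: string,
  currentValue: string,
  knownIds: Set<string>,
): string {
  const trimmed = typed.trim();
  if (trimmed && trimmed !== currentValue && knownIds.has(trimmed)) {
    return trimmed;
  }
  return currentValue;
}
```

Note that this sketch deliberately drops the free-form custom-ID path; supporting custom IDs would need the explicit custom-ID mode the comment mentions.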
```rust
        ..RoutingConfig::default()
    }
}
"ollama" => RoutingConfig::for_model("ollama/llama3".into()),
```
Don't hardcode a local Ollama model as the default route.
Line 394 assumes every Ollama instance has ollama/llama3, but Ollama model availability is host-specific. defaults_for_provider() is consumed directly by src/config/providers.rs:280-295 and src/config/onboarding.rs:266-271, so this value can be persisted into config and then fail every request for users who have Ollama enabled but have not pulled that exact model. Prefer deriving the default from the discovered /models catalog, or leave Ollama routing unset until the user picks an installed model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/routing.rs` at line 394, The mapping that hardcodes "ollama" =>
RoutingConfig::for_model("ollama/llama3".into()) should be removed: instead make
defaults_for_provider() return no default for Ollama (i.e., leave routing unset)
or derive the default dynamically from the Ollama /models catalog before
persisting; update the code that builds the provider defaults (the branch
producing RoutingConfig::for_model for "ollama") to either query the discovered
model list and pick a present model or return None/empty routing so users must
select an installed Ollama model; touch RoutingConfig::for_model usage and
defaults_for_provider() to ensure downstream calls in src/config/providers.rs
and src/config/onboarding.rs handle the unset case safely.
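The catalog-derived default the comment suggests could look like the following sketch (written in TypeScript for illustration; the actual fix would live in Rust inside defaults_for_provider(), and the `ollama/` prefix assumption simply mirrors the hardcoded example being replaced):

```typescript
// Pick the first discovered Ollama model as the default route, or no default
// at all when the host has nothing pulled, instead of assuming ollama/llama3.
function defaultOllamaModel(discoveredIds: string[]): string | null {
  const ollamaModels = discoveredIds.filter((id) => id.startsWith("ollama/"));
  return ollamaModels.length > 0 ? ollamaModels[0] : null;
}
```

Returning `null` forces the downstream config paths to handle the unset case, which is the safe behavior the review asks for.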
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary

- Upgrade the ModelSelect component to a proper searchable dropdown with chevron indicator, keyboard navigation (Arrow Up/Down, Enter, Escape), selected model info badge below the input, loading spinner, and error/empty states
- Fix configured_providers() in models.rs — the /models endpoint was returning empty results for users connected via Anthropic OAuth, Ollama, Nvidia, or GitHub Copilot because those providers were missing from the detection logic

Details

Frontend (interface/src/components/ModelSelect.tsx)

Backend (src/api/models.rs)

- Detect Anthropic OAuth via crate::auth::credentials_path() — users signed in via OAuth were getting no models because only anthropic_key in config.toml was checked
- Detect Ollama (ollama_base_url/OLLAMA_BASE_URL, ollama_key/OLLAMA_API_KEY)
- Detect Nvidia (nvidia_key/NVIDIA_API_KEY)
- Detect GitHub Copilot (github_copilot_key/GITHUB_COPILOT_API_KEY)
- Align configured_providers() in models.rs with get_providers() in providers.rs
🤖 Generated with Claude Code