Making llm models resilient to backend discovery failures#1344

Open
saksham-jain177 wants to merge 1 commit into simonw:main from saksham-jain177:fix-ollama-missing-model-info
Conversation

@saksham-jain177
Summary

Before this patch, `llm models` would crash with a `pydantic.ValidationError` if any single backend failed during model discovery. For example, certain Ollama models with missing metadata caused the entire command to terminate, preventing all other models from being listed.

After this change, backend failures are isolated in the CLI layer: errors from individual backends are downgraded to warnings, and model discovery continues as long as at least one backend succeeds. The command now completes successfully, lists all healthy models, and reports backend failures without aborting.

Fixes #1330
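The isolation pattern described above can be sketched roughly as follows. This is a minimal illustration, not the actual patch: the function and backend names (`discover_models`, `broken_ollama`) are hypothetical stand-ins for the real discovery code in the CLI layer.

```python
import warnings


class BackendError(Exception):
    """Raised only when every backend fails during discovery."""


def discover_models(backends):
    """Collect models from each backend, isolating per-backend failures.

    `backends` maps a backend name to a zero-argument callable returning
    a list of model names. A failure in one backend is downgraded to a
    warning so the remaining backends can still be listed.
    """
    models, errors = [], []
    for name, loader in backends.items():
        try:
            models.extend(loader())
        except Exception as exc:  # e.g. pydantic.ValidationError
            errors.append((name, exc))
            warnings.warn(f"Skipping backend {name!r}: {exc}")
    if not models and errors:
        # Abort only if no backend produced any models at all.
        raise BackendError(f"All backends failed: {errors}")
    return models


def broken_ollama():
    # Simulates an Ollama backend whose metadata fails validation.
    raise ValueError("missing model_info")


if __name__ == "__main__":
    backends = {
        "openai": lambda: ["gpt-4o-mini"],
        "ollama": broken_ollama,
    }
    # Lists the healthy backend's models; the failure becomes a warning.
    print(discover_models(backends))
```

The key design point is that a single backend's exception never propagates out of the loop; the command only fails outright when there is nothing left to list.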


Development

Successfully merging this pull request may close these issues.

`llm models` throwing `ValidationError` for certain Ollama models.