
Pass --model CLI override to create_llm_client log line#69

Merged

daniel-thom merged 1 commit into main from fix/cli-model-override-logging on May 10, 2026

Conversation

@daniel-thom
Contributor

Summary

The ask, generate, and verify commands all accept a --model flag, which resolve_settings resolves and returns as resolved_model. Each command then calls create_llm_client with settings.llm.model (the .env value) instead of resolved_model, so the "Using {provider} LLM backend (model=…)" log line at llm.py:952 shows the .env model rather than the override the user actually requested.

This was logging-only — the actual API calls in run_agent_loop already use resolved_model correctly (cli.py:231, generate.py:411, verify.py:118/125) — but the misleading log line made it look like the override wasn't being applied.
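The fix can be sketched in miniature. This is a hedged illustration, not the project's actual code: the real signatures of `resolve_settings` and `create_llm_client` are not shown in this PR, so the dataclasses and return types below are assumptions made only to demonstrate the before/after behavior of the log line.

```python
# Minimal sketch of the bug and fix described above.
# Signatures and types are assumptions; only the names come from the PR text.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMSettings:
    model: str  # value loaded from .env

@dataclass
class Settings:
    llm: LLMSettings

def create_llm_client(model: str) -> str:
    # The real function constructs a client; here we just return the log line.
    return f"Using LLM backend (model={model})"

def resolve_settings(settings: Settings, cli_model: Optional[str]) -> str:
    # The --model CLI override wins over the .env value.
    return cli_model if cli_model is not None else settings.llm.model

settings = Settings(llm=LLMSettings(model="env-model"))
resolved_model = resolve_settings(settings, cli_model="cli-model")

# Before the fix: the log line used the .env value, ignoring the override.
before = create_llm_client(settings.llm.model)   # model=env-model (misleading)

# After the fix: the resolved override is passed through.
after = create_llm_client(resolved_model)        # model=cli-model

print(before)  # Using LLM backend (model=env-model)
print(after)   # Using LLM backend (model=cli-model)
```

The API calls themselves were already using `resolved_model`; only the argument to `create_llm_client` needed to change so the startup log matches what the agent loop actually uses.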

Files changed

  • src/datasight/cli.py:133 (run_ask_pipeline)
  • src/datasight/cli_commands/generate.py:312
  • src/datasight/cli_commands/verify.py:100

Test plan

  • prek run --all-files passes
  • pytest -m "not integration" passes (1481 tests)
  • Manual verification: datasight ask --model=X --file q.txt shows model=X in the startup log

🤖 Generated with Claude Code

@codecov

codecov Bot commented May 10, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 86.90%. Comparing base (3282c3b) to head (dffe174).

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #69   +/-   ##
=======================================
  Coverage   86.90%   86.90%           
=======================================
  Files          61       61           
  Lines       11462    11462           
=======================================
  Hits         9961     9961           
  Misses       1501     1501           


@daniel-thom daniel-thom merged commit b635119 into main May 10, 2026
8 checks passed
github-actions Bot pushed a commit that referenced this pull request May 10, 2026
Pass --model CLI override to create_llm_client log line
