
test: add coverage for multi-provider config and LLM client #5

Open
vitali87 wants to merge 1 commit into main from test/provider-coverage

Conversation

@vitali87 (Owner) commented Mar 1, 2026

Summary

  • Add tests/test_config.py (10 tests): Settings defaults per provider, custom model override, API key routing, context token routing, env var loading
  • Add tests/test_client.py (23 tests): raw output extraction, group parsing, chunk merging, token ratio computation, tiktoken counting, provider dispatch for token counting and LLM calls, OpenAI call success/error paths, retry logic
  • All tests isolate from real env vars via autouse fixture that clears ANTHROPIC_API_KEY, OPENAI_API_KEY, PR_SPLIT_PROVIDER, and PR_SPLIT_MODEL

Test plan

  • All 79 tests pass (46 existing + 33 new)
  • ruff check and format clean

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the test suite by introducing new unit tests for the core configuration and LLM client components. The added tests validate multi-provider settings, API key management, model selection, tokenization, and the reliability of LLM interactions, including error handling and retry mechanisms. This enhancement improves the robustness and maintainability of the pr_split project by ensuring critical functionalities behave as expected under various conditions.

Highlights

  • New Test Files: Two new test files, tests/test_config.py and tests/test_client.py, have been added to enhance test coverage for configuration and LLM client functionalities.
  • Configuration Testing: Comprehensive tests for pr_split.config ensure correct handling of provider-specific defaults, custom model overrides, API key routing, context token limits, and environment variable loading.
  • LLM Client Testing: Extensive tests for pr_split.planner.client cover raw output extraction, group parsing and merging, token ratio computation, token counting for different providers, OpenAI API call success/error paths, and robust retry logic.
  • Isolated Testing Environment: An autouse pytest fixture has been implemented in both new test files to clear relevant environment variables, ensuring tests run in an isolated and predictable state.
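
The provider dispatch for token counting that these tests exercise could look something like the sketch below. All function names and the counting heuristics are illustrative stand-ins, not the project's actual code (the real OpenAI path would use tiktoken, per the PR summary):

```python
# Illustrative sketch of per-provider token-counting dispatch.
# Names and heuristics are assumptions, not pr_split's real implementation.
def _count_tokens_openai(text: str) -> int:
    # Real code would use tiktoken; a whitespace split stands in here.
    return len(text.split())


def _count_tokens_anthropic(text: str) -> int:
    # Stand-in heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)


_COUNTERS = {
    "openai": _count_tokens_openai,
    "anthropic": _count_tokens_anthropic,
}


def count_tokens(text: str, provider: str) -> int:
    """Dispatch to the counter registered for the given provider."""
    try:
        return _COUNTERS[provider](text)
    except KeyError:
        raise ValueError(f"unknown provider: {provider}")
```

A dispatch table like this is easy to test per provider, and the unknown-provider error path gives the test suite a natural failure case to cover.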
Changelog
  • tests/test_client.py
    • Added new test file for LLM client functionalities.
    • Implemented tests for raw output extraction, group parsing, and chunk merging.
    • Included tests for token ratio computation and provider-specific token counting.
    • Added tests for OpenAI API call success, error handling, and retry logic.
  • tests/test_config.py
    • Added new test file for configuration settings.
    • Implemented tests for default model selection based on provider.
    • Included tests for custom model overrides and API key retrieval.
    • Added tests for context token limits and environment variable loading.
Activity
  • 33 new tests were added, bringing the total to 79 passing tests.
  • The codebase passes ruff check and ruff format cleanly.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP 8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request adds comprehensive test coverage for the multi-provider configuration and LLM client functionalities. The tests are well-structured and cover a good range of scenarios, including default settings, API key routing, error paths, and retry logic. I've identified a couple of minor areas for improvement in tests/test_client.py related to code style and maintainability, for which I've left specific comments. Overall, this is a great addition that significantly improves the project's test suite and robustness.

mock_count.assert_called_once()


class TestCallOpenai:


Severity: medium

Imports should be at the top of the file as per PEP 8 guidelines. In the methods of this test class (test_success, test_no_tool_calls, test_invalid_json), json and _call_openai are imported locally. Please move these imports to the top of the file for better readability and to avoid potential issues.

import json should be with other standard library imports, and _call_openai can be added to the existing import from pr_split.planner.client.
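
Concretely, the suggested layout might look like the following sketch. The local import line is shown as a comment because pr_split is project-specific, and the test body is a simplified stand-in:

```python
# PEP 8 grouping: standard library first, then third-party, then local.
import json  # moved here from inside the test methods

# Local imports join the existing pr_split import line, e.g.:
# from pr_split.planner.client import _call_openai


def test_invalid_json_sketch():
    # json is now available module-wide, no per-method import needed.
    try:
        json.loads("{not valid")
    except json.JSONDecodeError:
        pass
```

Top-level imports also make dependencies of the test module visible at a glance and fail fast if a symbol is renamed.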

_call_chunk_with_retry(
"sys", "usr", settings=_settings(), chunk_index=1, total_chunks=1
)
assert mock_llm.call_count == 2


Severity: medium

The number of calls is hardcoded to 2. This test will break if CHUNK_RETRY_LIMIT from pr_split.constants is changed. To make the test more robust, consider importing CHUNK_RETRY_LIMIT from pr_split.constants and using it in the assertion.

Suggested change:
- assert mock_llm.call_count == 2
+ assert mock_llm.call_count == CHUNK_RETRY_LIMIT
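
To see why the constant-based assertion is more robust, here is a self-contained toy version of the retry pattern. The retrying_call helper and the stubbed CHUNK_RETRY_LIMIT value of 2 are illustrative assumptions, not the project's actual code:

```python
# Toy retry loop demonstrating the constant-based assertion.
# CHUNK_RETRY_LIMIT stands in for pr_split.constants.CHUNK_RETRY_LIMIT.
from unittest import mock

CHUNK_RETRY_LIMIT = 2  # stand-in value; the real limit lives in pr_split.constants


def retrying_call(llm, retries=CHUNK_RETRY_LIMIT):
    """Try the LLM call up to the configured limit, re-raising on final failure."""
    for attempt in range(retries):
        try:
            return llm()
        except ValueError:
            if attempt == retries - 1:
                raise


mock_llm = mock.Mock(side_effect=ValueError("bad output"))
try:
    retrying_call(mock_llm)
except ValueError:
    pass

# The assertion tracks the constant, so changing the limit can't break the test.
assert mock_llm.call_count == CHUNK_RETRY_LIMIT
```

If the limit later changes to 3, the hardcoded `== 2` assertion fails for the wrong reason, while the constant-based one keeps testing the actual invariant: one call per allowed attempt.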
