test: add coverage for multi-provider config and LLM client #5
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly expands the test suite by introducing new unit tests for the core configuration and LLM client components. The added tests validate multi-provider settings, API key management, model selection, tokenization, and the reliability of LLM interactions, including error handling and retry mechanisms. This enhancement improves the robustness and maintainability of the project.

Highlights
Code Review
This pull request adds comprehensive test coverage for the multi-provider configuration and LLM client functionalities. The tests are well-structured and cover a good range of scenarios, including default settings, API key routing, error paths, and retry logic. I've identified a couple of minor areas for improvement in tests/test_client.py related to code style and maintainability, for which I've left specific comments. Overall, this is a great addition that significantly improves the project's test suite and robustness.
```python
    mock_count.assert_called_once()


class TestCallOpenai:
```
Imports should be at the top of the file, as per PEP 8 guidelines. In the methods of this test class (`test_success`, `test_no_tool_calls`, `test_invalid_json`), `json` and `_call_openai` are imported locally. Please move these imports to the top of the file for better readability and to avoid potential issues.

`import json` should be with other standard library imports, and `_call_openai` can be added to the existing import from `pr_split.planner.client`.
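For illustration, the kind of parsing these test methods exercise can be sketched with top-level imports only. The response shape and the `parse_tool_arguments` helper below are hypothetical stand-ins, not the actual `pr_split` code:

```python
import json
from unittest.mock import MagicMock


def parse_tool_arguments(response):
    """Decode the JSON arguments of the first tool call (hypothetical helper)."""
    call = response.choices[0].message.tool_calls[0]
    return json.loads(call.function.arguments)


# A mocked OpenAI-style response, as a test method might build it.
resp = MagicMock()
resp.choices[0].message.tool_calls[0].function.arguments = '{"groups": []}'
assert parse_tool_arguments(resp) == {"groups": []}
```

With `json` imported once at module level, each test method can call the helper directly instead of repeating a local import.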
```python
    _call_chunk_with_retry(
        "sys", "usr", settings=_settings(), chunk_index=1, total_chunks=1
    )
    assert mock_llm.call_count == 2
```
The number of calls is hardcoded to 2. This test will break if CHUNK_RETRY_LIMIT from pr_split.constants is changed. To make the test more robust, consider importing CHUNK_RETRY_LIMIT from pr_split.constants and using it in the assertion.
```diff
- assert mock_llm.call_count == 2
+ assert mock_llm.call_count == CHUNK_RETRY_LIMIT
```
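The pattern the reviewer suggests can be sketched in isolation. Here `RETRY_LIMIT` stands in for `pr_split.constants.CHUNK_RETRY_LIMIT`, and `call_with_retry` is a hypothetical stand-in for `_call_chunk_with_retry`:

```python
from unittest.mock import MagicMock

RETRY_LIMIT = 2  # stand-in for pr_split.constants.CHUNK_RETRY_LIMIT


def call_with_retry(llm, prompt, limit=RETRY_LIMIT):
    """Call the LLM, retrying up to `limit` times before re-raising."""
    last_err = None
    for _ in range(limit):
        try:
            return llm(prompt)
        except ValueError as err:
            last_err = err
    raise last_err


mock_llm = MagicMock(side_effect=ValueError("unparseable output"))
try:
    call_with_retry(mock_llm, "prompt")
except ValueError:
    pass

# The assertion tracks the constant instead of a hardcoded 2, so the
# test keeps passing if the retry limit is ever changed.
assert mock_llm.call_count == RETRY_LIMIT
```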
Summary
- `tests/test_config.py` (10 tests): Settings defaults per provider, custom model override, API key routing, context token routing, env var loading
- `tests/test_client.py` (23 tests): raw output extraction, group parsing, chunk merging, token ratio computation, tiktoken counting, provider dispatch for token counting and LLM calls, OpenAI call success/error paths, retry logic
- `autouse` fixture that clears `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `PR_SPLIT_PROVIDER`, and `PR_SPLIT_MODEL`

Test plan
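The env-clearing behaviour mentioned in the summary can be sketched as a plain helper. In the actual suite this would typically run via a pytest `autouse` fixture using `monkeypatch.delenv`; the helper name below is hypothetical:

```python
import os

PROVIDER_ENV_VARS = (
    "ANTHROPIC_API_KEY",
    "OPENAI_API_KEY",
    "PR_SPLIT_PROVIDER",
    "PR_SPLIT_MODEL",
)


def clear_provider_env(environ=os.environ):
    """Remove provider-related variables so tests start from default settings."""
    for name in PROVIDER_ENV_VARS:
        environ.pop(name, None)


# Example against a dict standing in for os.environ:
env = {"OPENAI_API_KEY": "sk-test", "HOME": "/home/u"}
clear_provider_env(env)
assert env == {"HOME": "/home/u"}
```

Clearing these variables before every test keeps the provider-routing tests deterministic regardless of the developer's local environment.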