🧹 Simplify deeply nested logic in _retry_request #790
Conversation
Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
|
Merge queue: after your PR is submitted to the merge queue, this comment will be automatically updated with its status. If the PR fails, failure details will also be posted here.
PR Summary: Medium Risk. Reviewed by Cursor Bugbot for commit 5557c8b. Configure here.
Before:

```python
if code == 429:
    # Check for Retry-After header (in seconds)
    retry_after = e.response.headers.get("Retry-After")
    if retry_after:
        # Retry-After can be seconds or HTTP date format.
        # Only suppress ValueError from int() parsing; leave all other logic outside.
        wait_seconds: int | None = None
        with contextlib.suppress(ValueError):
            wait_seconds = int(retry_after)
        if wait_seconds is not None:
            log.warning(
                f"Rate limited (429). Server requests {wait_seconds}s wait "
                f"(attempt {attempt + 1}/{max_retries})"
            )
            if attempt < max_retries - 1:
                time.sleep(wait_seconds)
                continue  # Retry after waiting
            raise  # Max retries exceeded
```

After:

```python
if code == 429:
    wait_seconds = _extract_retry_after(e.response)
    if wait_seconds is not None:
        log.warning(
            f"Rate limited (429). Server requests {wait_seconds}s wait "
            f"(attempt {attempt + 1}/{max_retries})"
        )
        if attempt < max_retries - 1:
            time.sleep(wait_seconds)
            continue
        raise

# Don't retry other 4xx errors (auth failures, bad requests, etc.)
if 400 <= code < 500 and code != 429:
```
📝 Info: Behavioral equivalence of _extract_int_header vs old inline Retry-After parsing
The old 429 handling used e.response.headers.get("Retry-After") with a truthiness check (if retry_after:) followed by int(retry_after) wrapped in contextlib.suppress(ValueError). The new code calls _extract_int_header(e.response.headers, "Retry-After") which uses an is not None check and suppresses both ValueError and TypeError. The extra TypeError suppression is inconsequential since HTTP header values are always strings (so int(string) only raises ValueError, not TypeError). The shift from truthiness to is not None also has no practical effect—the only string that is falsy but not None is "", and int("") raises ValueError in both paths, resulting in None either way. All fallthrough paths (429 without Retry-After falling to exponential backoff) remain identical.
```python
def _extract_retry_after(response: httpx.Response) -> int | None:
    """Extract and parse Retry-After header from response."""
    retry_after = response.headers.get("Retry-After")
    if not retry_after:
        return None
    with contextlib.suppress(ValueError):
        return int(retry_after)
    return None
```
```python
def _log_response_content(e: httpx.HTTPError) -> None:
    """Log response content if debugging is enabled."""
    if (
        hasattr(e, "response")
        and e.response is not None
        and log.isEnabledFor(logging.DEBUG)
    ):
        log.debug(f"Response content: {_sanitize_fn(e.response.text)}")
```
📝 Info: New helper functions not added to `__all__`
_extract_retry_after and _log_response_content are new module-level functions but are not added to __all__ at api_client.py:36-54. This is consistent with the existing convention: __all__ only exports symbols that main.py or tests import directly. These two helpers are private implementation details called only within _retry_request, so excluding them from __all__ is correct and consistent with the existing pattern (e.g., _extract_int_header, _log_rate_limit_warning, and _parse_rate_limit_headers are also absent from __all__).
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: New helper duplicates existing `_extract_int_header` function. Removed `_extract_retry_after` and now parse Retry-After via the existing `_extract_int_header(e.response.headers, "Retry-After")` call.

Preview (ffe1423163)
diff --git a/api_client.py b/api_client.py
--- a/api_client.py
+++ b/api_client.py
@@ -215,6 +215,16 @@
return exponential_delay * random.random()
+def _log_response_content(e: httpx.HTTPError) -> None:
+ """Log response content if debugging is enabled."""
+ if (
+ hasattr(e, "response")
+ and e.response is not None
+ and log.isEnabledFor(logging.DEBUG)
+ ):
+ log.debug(f"Response content: {_sanitize_fn(e.response.text)}")
+
+
def _retry_request(
request_func: Callable[[], httpx.Response],
max_retries: int = MAX_RETRIES,
@@ -239,44 +249,26 @@
for attempt in range(max_retries):
try:
response = request_func()
-
- # Parse rate limit headers from successful responses
- # This gives us visibility into quota usage even when requests succeed
_parse_rate_limit_headers(response)
-
response.raise_for_status()
return response
except (httpx.HTTPError, httpx.TimeoutException) as e:
- # Security Enhancement: Do not retry client errors (4xx) except 429 (Too Many Requests).
- # Retrying 4xx errors is inefficient and can trigger security alerts or rate limits.
if isinstance(e, httpx.HTTPStatusError):
code = e.response.status_code
-
- # Parse rate limit headers even from error responses
- # This helps us understand why we hit limits
_parse_rate_limit_headers(e.response)
- # Handle 429 (Too Many Requests) with Retry-After
if code == 429:
- # Check for Retry-After header (in seconds)
- retry_after = e.response.headers.get("Retry-After")
- if retry_after:
- # Retry-After can be seconds or HTTP date format.
- # Only suppress ValueError from int() parsing; leave all other logic outside.
- wait_seconds: int | None = None
- with contextlib.suppress(ValueError):
- wait_seconds = int(retry_after)
- if wait_seconds is not None:
- log.warning(
- f"Rate limited (429). Server requests {wait_seconds}s wait "
- f"(attempt {attempt + 1}/{max_retries})"
- )
- if attempt < max_retries - 1:
- time.sleep(wait_seconds)
- continue # Retry after waiting
- raise # Max retries exceeded
+ wait_seconds = _extract_int_header(e.response.headers, "Retry-After")
+ if wait_seconds is not None:
+ log.warning(
+ f"Rate limited (429). Server requests {wait_seconds}s wait "
+ f"(attempt {attempt + 1}/{max_retries})"
+ )
+ if attempt < max_retries - 1:
+ time.sleep(wait_seconds)
+ continue
+ raise
- # Don't retry other 4xx errors (auth failures, bad requests, etc.)
if 400 <= code < 500 and code != 429:
hint = _4XX_HINTS.get(code, "")
hint_suffix = f" | hint: {hint}" if hint else ""
@@ -284,25 +276,13 @@
f"API request failed with HTTP {code}{hint_suffix}: "
f"{_sanitize_fn(e)}"
)
- if (
- hasattr(e, "response")
- and e.response is not None
- and log.isEnabledFor(logging.DEBUG)
- ):
- log.debug(f"Response content: {_sanitize_fn(e.response.text)}")
+ _log_response_content(e)
raise
if attempt == max_retries - 1:
- if (
- hasattr(e, "response")
- and e.response is not None
- and log.isEnabledFor(logging.DEBUG)
- ):
- log.debug(f"Response content: {_sanitize_fn(e.response.text)}")
+ _log_response_content(e)
raise
- # Full jitter exponential backoff: delay drawn from [0, min(delay * 2^attempt, MAX_RETRY_DELAY)]
- # Spreads retries evenly across the full window to prevent thundering herd
wait_time = retry_with_jitter(attempt, base_delay=delay)
if isinstance(e, httpx.TimeoutException):
@@ -315,10 +295,10 @@
and e.response is not None
and e.response.status_code >= 500
):
- # Server-side error hint for 5xx responses
hint = f" | hint: {_SERVER_ERROR_HINT}"
else:
hint = ""
+
log.warning(
f"Request failed (attempt {attempt + 1}/{max_retries}): "
f"{_sanitize_fn(e)}{hint}. Retrying in {wait_time:.2f}s..."
diff --git a/pr_payload.json b/pr_payload.json
--- a/pr_payload.json
+++ b/pr_payload.json
@@ -1,6 +1 @@
-{
- "title": "🛡️ Sentinel: [MEDIUM] Replace mkstemp with NamedTemporaryFile",
- "head": "fix-tempfile-security",
- "base": "main",
- "body": "**Severity:** MEDIUM\n**Vulnerability:** The use of `tempfile.mkstemp` combined with manual file descriptor handling is error-prone and less readable, while `tempfile.NamedTemporaryFile` inherently provides secure file creation with 0o600 permissions.\n**Impact:** Prevents potential TOCTOU (Time-of-Check to Time-of-Use) vulnerabilities and symlink attacks during atomic file replacement.\n**Fix:** Migrated to `tempfile.NamedTemporaryFile(delete=False)` for atomic, secure file creation in `fix_env.py` and `cache.py`.\n**Verification:** Ran `uv run pytest tests/` to confirm functionality remains intact."
-}
+{"title":"🧹 Simplify deeply nested logic in _retry_request", "head":"chore/refactor-retry-request", "base":"main", "body":"🎯 **What:** Extracted rate-limit retry extraction and debug logging to helper functions to reduce nesting in `_retry_request`.\n\n💡 **Why:** Reduces cyclomatic complexity and improves readability of error handling.\n\n✅ **Verification:** Verified by running full test suite `uv run pytest tests/` which checks API requests and retry logic.\n\n✨ **Result:** A more readable `_retry_request` function with simplified error handling."}

You can send follow-ups to the cloud agent here.
Comment @cursor review or bugbot run to trigger another review on this PR
```python
    return None
with contextlib.suppress(ValueError):
    return int(retry_after)
return None
```
New helper duplicates existing _extract_int_header function
Low Severity
The newly introduced _extract_retry_after duplicates the existing _extract_int_header utility that already lives in the same file. Both retrieve a header by name, attempt int() parsing, suppress ValueError, and return int | None. The call at line 271 could be _extract_int_header(e.response.headers, "Retry-After") instead of introducing and maintaining a second function with the same purpose.
Additional Locations (1)
Remove duplicate _extract_retry_after; 429 handling now calls the shared header parser for consistent ValueError/TypeError behavior.
Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
Drop the redundant _extract_retry_after helper and parse Retry-After via the existing header parser (also handles TypeError like other headers). Remove fix_ci.py, which contained invalid placeholder code and failed ruff.
Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
Code Health Improved
(1 files improve in Code Health)
Gates Passed
6 Quality Gates Passed
See analysis details in CodeScene
View Improvements
| File | Code Health Impact | Categories Improved |
|---|---|---|
| api_client.py | 7.43 → 7.71 | Complex Method, Complex Conditional, Bumpy Road Ahead, Overall Code Complexity, Deep, Nested Complexity |
Quality Gate Profile: Pay Down Tech Debt
Install CodeScene MCP: safeguard and uplift AI-generated code. Catch issues early with our IDE extension and CLI tool.
Before:

```python
if code == 429:
    # Check for Retry-After header (in seconds)
    retry_after = e.response.headers.get("Retry-After")
    if retry_after:
        # Retry-After can be seconds or HTTP date format.
        # Only suppress ValueError from int() parsing; leave all other logic outside.
        wait_seconds: int | None = None
        with contextlib.suppress(ValueError):
            wait_seconds = int(retry_after)
        if wait_seconds is not None:
            log.warning(
                f"Rate limited (429). Server requests {wait_seconds}s wait "
                f"(attempt {attempt + 1}/{max_retries})"
            )
            if attempt < max_retries - 1:
                time.sleep(wait_seconds)
                continue  # Retry after waiting
            raise  # Max retries exceeded
```

After:

```python
if code == 429:
    wait_seconds = _extract_int_header(
        e.response.headers, "Retry-After"
    )
    if wait_seconds is not None:
        log.warning(
            f"Rate limited (429). Server requests {wait_seconds}s wait "
            f"(attempt {attempt + 1}/{max_retries})"
        )
        if attempt < max_retries - 1:
            time.sleep(wait_seconds)
            continue
        raise

# Don't retry other 4xx errors (auth failures, bad requests, etc.)
if 400 <= code < 500 and code != 429:
```
📝 Info: 429 without Retry-After correctly falls through to exponential backoff
When a 429 response lacks a Retry-After header (or has an unparseable one), _extract_int_header returns None, the if wait_seconds is not None: block is skipped, and execution falls through to if 400 <= code < 500 and code != 429: which evaluates to False (since 429 != 429 is False). This means 429 without Retry-After is retried with jitter backoff — same as the old code. The test test_429_without_retry_after_uses_exponential_backoff in tests/test_rate_limit.py:250-278 confirms this path.
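For reference, the jitter backoff used on that fallthrough path can be sketched as follows, based on the `exponential_delay * random.random()` line visible in the diff preview; the cap constant and default base delay here are hypothetical stand-ins for the module's real values:

```python
import random

MAX_RETRY_DELAY = 30.0  # hypothetical cap; the real constant lives in api_client.py


def retry_with_jitter(attempt: int, base_delay: float = 1.0) -> float:
    """Full jitter: draw the wait uniformly from [0, min(base_delay * 2^attempt, cap))."""
    exponential_delay = min(base_delay * (2 ** attempt), MAX_RETRY_DELAY)
    # Multiplying by random() spreads retries across the whole window,
    # which avoids synchronized retry bursts (thundering herd).
    return exponential_delay * random.random()


# Successive attempts widen the window: attempt 0 draws from [0, 1),
# attempt 3 from [0, 8), until the cap clamps the upper bound.
waits = [retry_with_jitter(a) for a in range(6)]
```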


🎯 **What:** Extracted rate-limit retry extraction and debug logging to helper functions to reduce nesting in `_retry_request`.

💡 **Why:** Reduces cyclomatic complexity and improves readability of error handling.

✅ **Verification:** Verified by running full test suite `uv run pytest tests/` which checks API requests and retry logic.

✨ **Result:** A more readable `_retry_request` function with simplified error handling.

PR created automatically by Jules for task 69874111107617531 started by @abhimehro