🧹 [code health improvement] Refactor push_rules to improve maintainability #804

abhimehro wants to merge 1 commit into
Conversation
Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Merging to

After your PR is submitted to the merge queue, this comment will be automatically updated with its status. If the PR fails, failure details will also be posted here.
Code Health Improved
(1 file improved in Code Health)
Gates Passed
6 Quality Gates Passed
See analysis details in CodeScene
View Improvements
| File | Code Health Impact | Categories Improved |
|---|---|---|
| main.py | 1.66 → 1.70 | Complex Method, Complex Conditional, Bumpy Road Ahead, Overall Code Complexity |
Quality Gate Profile: Pay Down Tech Debt
Install CodeScene MCP: safeguard and uplift AI-generated code. Catch issues early with our IDE extension and CLI tool.
📝 Info: Thread safety of ctx.existing_rules.update() is unchanged
In _push_rule_batches (line 2341), ctx.existing_rules.update(result) is called from the main thread inside the as_completed loop, not from worker threads. This is the same pattern as the old code. The _push_single_batch function no longer has access to ctx.existing_rules at all, which is actually slightly safer than the old closure approach. However, note that set.update() in CPython is atomic due to the GIL, so even concurrent access would not corrupt the set — but the sequential update pattern is cleaner.
(Refers to lines 2337-2347)
```python
def _push_single_batch(
    client: httpx.Client,
    profile_id: str,
    sanitized_folder_name: str,
    str_do: str,
    str_status: str,
    str_group: str,
    batch_idx: int,
    batch_data: list[str],
) -> list[str] | None:
    """Processes a single batch of rules by sending API request."""
    data = {
        "do": str_do,
        "status": str_status,
        "group": str_group,
    }
    # Optimization: Use pre-calculated keys and zip for faster dict update
    # strict=False is intentional: batch_data may be shorter than BATCH_KEYS for final batch
    data.update(zip(BATCH_KEYS, batch_data, strict=False))

    try:
        _api_post_form(client, f"{API_BASE}/{profile_id}/rules", data=data)
        if not USE_COLORS:
            log.info(
                "Folder %s – batch %d: added %d %s",
                sanitized_folder_name,
                batch_idx,
                len(batch_data),
                pluralize(len(batch_data), "rule"),
            )
        return batch_data
    except httpx.HTTPError as e:
        if USE_COLORS:
            sys.stderr.write("\r\033[K")
            sys.stderr.flush()
        hint = ""
        if isinstance(e, httpx.HTTPStatusError):
            # Use a more specific name to avoid confusion with the rule "status" payload
            status_code = e.response.status_code
            hint = f" ({_STATUS_HINTS.get(status_code, f'HTTP {status_code}')})"
        log.error(
            f"Failed to push batch {batch_idx} for folder {sanitized_folder_name}{hint}: {sanitize_for_log(e)}"
        )
        if (
            hasattr(e, "response")
            and e.response is not None
            and log.isEnabledFor(logging.DEBUG)
        ):
            log.debug(f"Response content: {sanitize_for_log(e.response.text)}")
        return None
```
📝 Info: Closure-to-module-function extraction preserves test compatibility
The old process_batch was a nested closure inside push_rules that captured ctx, sanitized_folder_name, str_do, str_status, and str_group from the enclosing scope. The new _push_single_batch takes all of these as explicit parameters. Multiple tests (e.g., tests/test_status_hints.py:155, tests/test_security.py:21, tests/test_security_hardening.py:70) patch main._api_post_form or main.log at the module level. Since _push_single_batch is now a module-level function that resolves _api_post_form and log as globals at call time, these patches continue to work correctly. No test breakage expected.
🎯 What:
The `push_rules` function in `main.py` was too large and complex, handling everything from deduplication and string filtering against a restricted charset to batch splitting and multi-threaded execution. It was broken down into `_filter_rules_for_folder`, `_push_single_batch`, and `_push_rule_batches`.

💡 Why:
Decomposing complex functions with multiple responsibilities ("Brain Methods") into smaller, highly cohesive helper functions reduces cyclomatic complexity, makes the core orchestrating logic easier to read at a glance, and improves modular testability.

✅ Verification:
- Refactored `push_rules` locally by replacing logic blocks with calls to newly defined private helper functions
- `uv tool run ruff format . && uv tool run ruff check .`
- `uv run mypy .`
- Ran the full test suite (`uv run pytest`) and the performance tests in `tests/test_push_rules_perf.py`. All tests passed successfully.

✨ Result:
The `push_rules` function is now cleaner and delegates responsibilities. Existing performance optimizations, like the dictionary comprehension for `_filter_rules_for_folder` and the `ThreadPoolExecutor` for API batches, have been successfully preserved and isolated.

PR created automatically by Jules for task 4638151365976133235 started by @abhimehro
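The decomposed shape can be sketched minimally. The helper names `_filter_rules_for_folder` and `push_rules` are from the PR, but the bodies below are placeholder assumptions; only the delegation structure mirrors the refactor.

```python
def _filter_rules_for_folder(rules: list[str], existing: set[str]) -> list[str]:
    # Placeholder for the PR's dict-comprehension-based filtering:
    # drop rules that already exist on the profile.
    return [r for r in rules if r not in existing]


def _split_batches(rules: list[str], size: int) -> list[list[str]]:
    # Hypothetical helper: slice the filtered rules into fixed-size batches.
    return [rules[i:i + size] for i in range(0, len(rules), size)]


def push_rules(rules: list[str], existing: set[str], batch_size: int = 2) -> list[list[str]]:
    """Orchestrator: delegates filtering and batching to helpers, as the
    refactored push_rules delegates to its private helper functions."""
    filtered = _filter_rules_for_folder(rules, existing)
    return _split_batches(filtered, batch_size)
```

Each helper is now independently testable, which is the modular-testability gain the description claims.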