Fix concurrency issues and simplify API in ActionQueue implementation #1876
Conversation
…d enhance documentation Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>
…amples Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>
Just as a heads up, I was blocked by some firewall rules while working on your feedback. I tried to connect to one or more addresses but was blocked by firewall rules. If you need me to access, download, or install something from one of those locations, let me know.
@copilot can you work around the different signatures and the need to await twice?
…str | QueuedExecution Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>
Fixed in commit 2dfbea4.

Before:

```python
# Queue enabled - awkward double await
queued = await client.execute_action_group([action])
exec_id = await queued
```

After:

```python
# Works the same with or without queue - single await
exec_id = await client.execute_action_group([action])
```

The batching still happens transparently in the background when the queue is enabled. The method internally awaits the QueuedExecution before returning the exec_id.
@copilot can you fix the linters? with pre-commit
Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>
Fixed in commit fb478b6. Ran pre-commit; ✅ ruff (linter) passed.
@copilot please fix the tests as well!
Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>
Fixed in commit 18baf42. Updated all integration tests to work with the simplified API.
Pull request overview
This PR fixes concurrency issues in the ActionQueue implementation and simplifies the public API by making execute_action_group() consistently return an exec_id string instead of a QueuedExecution object when queueing is enabled.
Key changes:
- Refactored flush logic into `_prepare_flush()` and `_execute_batch()` to execute batches outside the lock, preventing deadlocks
- Simplified API: `execute_action_group()` now always returns `str` (awaits the QueuedExecution internally)
- Added input validation for queue parameters in the client constructor
- Updated all tests to use `asyncio.create_task()` for concurrent execution patterns
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| pyoverkiz/action_queue.py | Refactored flush logic to execute outside lock; added is_done() method; changed flush() return type to None; improved exception handling |
| pyoverkiz/client.py | Simplified return type of execute_action_group() to always return str; added parameter validation for queue settings; updated documentation |
| tests/test_action_queue.py | Updated tests to use is_done() instead of _future.done(); added proper task cleanup in awaitable test |
| tests/test_client_queue_integration.py | Updated tests to use asyncio.create_task() for concurrent execution; adapted to new API returning str directly |
| test_queue_example.py | Fixed spelling error; updated examples to match new API; example code needs fixes to properly demonstrate batching behavior |
Comments suppressed due to low confidence (1)
pyoverkiz/action_queue.py:182
- The flush task is cancelled here but the code does not wait for the cancellation to complete. This differs from the shutdown() method (lines 254-258) which properly waits for the cancelled task to finish. Not waiting for cancellation can lead to race conditions where the cancelled task might still be executing and holding references to waiters that have already been cleared from state.
Consider adding similar cancellation handling as in shutdown():
- Store the task reference
- Cancel it
- Wait for the cancellation to complete using contextlib.suppress(asyncio.CancelledError)
```python
if self._flush_task and not self._flush_task.done():
    self._flush_task.cancel()
    self._flush_task = None
```
```diff
 print("Queueing action 1...")
-queued1 = await client.execute_action_group([action1])
-print(f"Got QueuedExecution object: {queued1}")
+exec_id1 = await client.execute_action_group([action1])
+print(f"Got exec_id: {exec_id1}")

 print("Queueing action 2...")
-_queued2 = await client.execute_action_group([action2])
+exec_id2 = await client.execute_action_group([action2])

 print("Queueing action 3...")
-_queued3 = await client.execute_action_group([action3])
+exec_id3 = await client.execute_action_group([action3])
```
Copilot AI · Jan 3, 2026
This example demonstrates sequential execution rather than batching. With the simplified API where execute_action_group awaits the result, each call blocks until the batch executes. Since the calls are sequential (awaited one at a time), each action will execute separately after the 0.5s delay, rather than batching together.
To properly demonstrate batching, the calls should be started concurrently using asyncio.create_task(), similar to the manual flush example (lines 114-128) and the integration tests. Otherwise, the actions will execute one at a time and won't be batched together, making the example misleading about the queue's batching capability.
```diff
-_queued3 = await client.execute_action_group([action3])
+exec_id3 = await client.execute_action_group([action3])

 print(f"Pending actions in queue: {client.get_pending_actions_count()}")
```
Copilot AI · Jan 3, 2026
This assertion will likely show 0 pending actions, not 3, because all three execute_action_group calls above have already been awaited sequentially. Each action was executed (after the 0.5s delay) before the next one was queued. The pending count check should be moved to before awaiting the results, similar to how it's done in the integration tests and manual flush example.
```python
print(f"\nExec ID 1: {exec_id1}")
print(f"Exec ID 2: {exec_id2}")
print(f"Exec ID 3: {exec_id3}")
print(f"All same? {exec_id1 == exec_id2 == exec_id3}")
```
Copilot AI · Jan 3, 2026
This comparison will likely evaluate to False because the three actions were executed sequentially (not batched together), so they would have different exec_ids from separate API calls. The actions only batch if they are queued concurrently within the delay window. Since each execute_action_group call was awaited before starting the next one, they executed as separate batches.
To make this comparison work as intended, the three execute_action_group calls should be started concurrently using asyncio.create_task() and then awaited together.
…#1876)

- [x] Update integration tests to match the new simplified API
  - Updated test_client_with_queue_batches_actions to use asyncio.create_task for concurrent execution
  - Updated test_client_manual_flush to start execution as a task before checking pending count
  - Updated test_client_close_flushes_queue to work with str return type instead of QueuedExecution
  - Updated test_client_queue_respects_max_actions to handle direct exec_id returns
  - All 15 queue-related tests now passing (10 unit tests + 5 integration tests)

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>