
Conversation

@iMicknl
Owner

@iMicknl iMicknl commented Dec 29, 2025

Fixes #1865

@iMicknl iMicknl added this to the 2.0 milestone Dec 29, 2025
@iMicknl iMicknl added the v2 label Dec 29, 2025
@iMicknl iMicknl force-pushed the v2/improve_action_execution branch from 9b2bc1e to 7ee723f Compare December 29, 2025 15:46
@iMicknl iMicknl force-pushed the v2/improve_action_execution branch from 78fedac to d205616 Compare December 29, 2025 20:48
Base automatically changed from v2/improve_action_execution to v2/main December 29, 2025 20:55
@iMicknl iMicknl marked this pull request as ready for review January 3, 2026 18:32
@iMicknl iMicknl requested a review from tetienne as a code owner January 3, 2026 18:32
Copilot AI review requested due to automatic review settings January 3, 2026 18:32
Contributor

Copilot AI left a comment


Pull request overview

This PR implements an optional ActionQueue feature for the OverkizClient to address Overkiz API concurrency limits by batching multiple action executions into single API calls within a configurable time window.

Key Changes:

  • Introduces ActionQueue and QueuedExecution classes for batching action executions
  • Adds optional queue configuration to OverkizClient (disabled by default for backward compatibility)
  • Modifies execute_action_group to support both immediate and queued execution modes
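
The batching pattern described above can be sketched in a few lines of plain asyncio. Everything here (SimpleActionQueue, its method names, the delays) is illustrative and not the PR's actual API: actions accumulate until either the time window elapses or the batch size cap is hit, and each full or timed-out batch is flushed as one unit.

```python
import asyncio


class SimpleActionQueue:
    """Collect actions for a short window, then flush them as one batch."""

    def __init__(self, delay: float = 0.05, max_actions: int = 20):
        self._delay = delay
        self._max_actions = max_actions
        self._pending: list[str] = []
        self._flush_task: asyncio.Task | None = None
        self.batches: list[list[str]] = []  # executed batches, kept for inspection

    async def add(self, action: str) -> None:
        self._pending.append(action)
        if len(self._pending) >= self._max_actions:
            # Cap reached: flush immediately instead of waiting for the timer.
            await self._flush()
        elif self._flush_task is None:
            # First queued action starts the delay timer.
            self._flush_task = asyncio.create_task(self._delayed_flush())

    async def _delayed_flush(self) -> None:
        await asyncio.sleep(self._delay)
        self._flush_task = None
        await self._flush()

    async def _flush(self) -> None:
        if self._flush_task is not None:
            self._flush_task.cancel()
            self._flush_task = None
        if self._pending:
            # In the real client this would be one API call for the whole batch.
            self.batches.append(self._pending)
            self._pending = []


async def main() -> list[list[str]]:
    queue = SimpleActionQueue(delay=0.05, max_actions=3)
    for action in ["open", "close", "stop", "open"]:
        await queue.add(action)
    await asyncio.sleep(0.1)  # let the delayed flush fire for the remainder
    return queue.batches


print(asyncio.run(main()))
```

With a cap of 3, the first three actions flush as one batch and the fourth flushes alone when the timer fires, so four calls collapse into two.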

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 19 comments.

File summary:

  • pyoverkiz/action_queue.py: New module implementing ActionQueue for batching actions and QueuedExecution for awaitable results
  • pyoverkiz/client.py: Integrates ActionQueue with new constructor parameters, a modified execute_action_group method, and helper methods for queue management
  • pyoverkiz/models.py: Expands the Command.name type to accept str or OverkizCommand
  • tests/test_action_queue.py: Comprehensive unit tests for ActionQueue functionality, including batching, flushing, and error handling
  • tests/test_client_queue_integration.py: Integration tests validating queue behavior with OverkizClient
  • test_queue_example.py: Demonstration script showing queue feature usage patterns
Comments suppressed due to low confidence (1)

pyoverkiz/models.py:479

  • The Command.name field type has been changed from OverkizCommand to str | OverkizCommand, but the __init__ method still expects only OverkizCommand. This inconsistency could lead to type-checking issues. Either update the __init__ signature to accept str | OverkizCommand to match the field type, or revert the field type if string support is not needed.
    name: str | OverkizCommand
    parameters: list[str | int | float | OverkizCommandParam] | None

    def __init__(
        self,
        name: OverkizCommand,
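
A minimal sketch of the first option, aligning the __init__ signature with the field annotation. The stand-in OverkizCommand enum and the trimmed parameter list are illustrative, not the real models.py:

```python
from __future__ import annotations

from enum import Enum


class OverkizCommand(str, Enum):
    """Stand-in for the real enum; the value is illustrative."""

    CLOSE = "close"


class Command:
    # Field annotation and __init__ signature now agree:
    # both accept str | OverkizCommand.
    name: str | OverkizCommand

    def __init__(
        self,
        name: str | OverkizCommand,
        parameters: list[str | int | float] | None = None,
    ):
        self.name = name
        self.parameters = parameters
```

With this change, both Command("myCustomCommand") and Command(OverkizCommand.CLOSE) type-check consistently.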


@iMicknl
Owner Author

iMicknl commented Jan 3, 2026

@copilot please look extensively at all review comments and address them


Copilot AI commented Jan 3, 2026

@iMicknl I've opened a new pull request, #1876, to work on those changes. Once the pull request is ready, I'll request review from you.

iMicknl and others added 4 commits January 3, 2026 21:38
…er and enhance example script with type hints
…#1876)

- [x] Update integration tests to match the new simplified API
- Updated test_client_with_queue_batches_actions to use
asyncio.create_task for concurrent execution
- Updated test_client_manual_flush to start execution as a task before
checking pending count
- Updated test_client_close_flushes_queue to work with str return type
instead of QueuedExecution
- Updated test_client_queue_respects_max_actions to handle direct
exec_id returns
- All 15 queue-related tests now passing (10 unit tests + 5 integration
tests)


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: iMicknl <1424596+iMicknl@users.noreply.github.com>
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 14 comments.



Comment on lines +214 to +218
except Exception as exc:
    # Propagate exception to all waiters
    for waiter in waiters:
        waiter.set_exception(exc)
    raise

Copilot AI Jan 3, 2026


After propagating the exception to all waiters, the exception is re-raised. However, this might cause issues because if flush() or shutdown() calls this method, the exception will bubble up and potentially interrupt the cleanup process. Consider whether re-raising is necessary here, or if it should be handled differently when called from shutdown/flush contexts to ensure graceful cleanup.
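
One possible shape for this, sketched with hypothetical names: a reraise flag lets callers on the normal path see the failure, while cleanup paths (flush/shutdown) swallow it after it has been delivered to every waiter.

```python
import asyncio


async def execute_batch(waiters, executor, *, reraise: bool = True):
    """Run the batched call; on failure, fail every waiter's future.

    When called from shutdown()/flush() cleanup, pass reraise=False so the
    cleanup loop keeps going even if one batch fails.
    """
    try:
        exec_id = await executor()
    except Exception as exc:
        for waiter in waiters:
            if not waiter.done():
                waiter.set_exception(exc)
        if reraise:
            raise
    else:
        for waiter in waiters:
            if not waiter.done():
                waiter.set_result(exec_id)


async def main():
    loop = asyncio.get_running_loop()
    waiter = loop.create_future()

    async def failing_executor():
        raise RuntimeError("API rejected the batch")

    # Cleanup path: the error reaches the waiter but does not bubble up.
    await execute_batch([waiter], failing_executor, reraise=False)
    try:
        await waiter
    except RuntimeError as exc:
        return str(exc)


print(asyncio.run(main()))
```

The waiter still observes the original exception, so no result is silently lost; only the double reporting in the cleanup path is avoided.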

if action_queue_delay <= 0:
    raise ValueError("action_queue_delay must be positive")
if action_queue_max_actions < 1:
    raise ValueError("action_queue_max_actions must be at least 1")

Copilot AI Jan 3, 2026


The error message should include the actual value that was provided to help with debugging. Consider changing to "action_queue_max_actions must be at least 1, got {action_queue_max_actions}".

Suggested change
-    raise ValueError("action_queue_max_actions must be at least 1")
+    raise ValueError(
+        f"action_queue_max_actions must be at least 1, got {action_queue_max_actions}"
+    )

Comment on lines 158 to 165
"""Constructor.

:param server: ServerConfig
:param session: optional ClientSession
:param action_queue_enabled: enable action batching queue (default False)
:param action_queue_delay: seconds to wait before flushing queue (default 0.5)
:param action_queue_max_actions: maximum actions per batch before auto-flush (default 20)
"""

Copilot AI Jan 3, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The docstring is missing documentation for the credentials and verify_ssl parameters. These parameters should be documented to provide complete information about all constructor parameters.

# Initialize action queue if enabled
if action_queue_enabled:
    if action_queue_delay <= 0:
        raise ValueError("action_queue_delay must be positive")

Copilot AI Jan 3, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The error message should include the actual value that was provided to help with debugging. Consider changing to "action_queue_delay must be positive, got {action_queue_delay}".

Suggested change
-    raise ValueError("action_queue_delay must be positive")
+    raise ValueError(f"action_queue_delay must be positive, got {action_queue_delay}")

Comment on lines +119 to +122
# Execute the first batch if it exists, then the second
if batch_to_execute:
await self._execute_batch(*batch_to_execute)
batch_to_execute = new_batch

Copilot AI Jan 3, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Executing a batch while holding the lock at line 121 could cause long lock hold times and potential deadlocks. The batch execution involves an async network call via _executor, which could take significant time. This blocks all other queue operations during execution.

Consider refactoring to execute both batches outside the lock. You could collect both batches to execute and then execute them sequentially after releasing the lock.
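
A sketch of that refactor, with illustrative names (BatchQueue, flush, a fixed batch size of 20): the lock guards only the snapshot of pending work, and the slow call runs after release so other coroutines can keep enqueueing.

```python
import asyncio


class BatchQueue:
    """Sketch: gather batches while holding the lock, execute after release."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._pending: list[str] = []
        self.executed: list[list[str]] = []

    async def _execute_batch(self, batch: list[str]) -> None:
        await asyncio.sleep(0)  # stands in for the slow network call
        self.executed.append(batch)

    async def flush(self) -> None:
        # Take a snapshot of all pending batches under the lock.
        async with self._lock:
            batches: list[list[str]] = []
            while self._pending:
                batches.append(self._pending[:20])
                del self._pending[:20]
        # Lock is released here: other coroutines can enqueue during execution.
        for batch in batches:
            await self._execute_batch(batch)


async def main():
    q = BatchQueue()
    q._pending = [f"a{i}" for i in range(25)]
    await q.flush()
    return [len(b) for b in q.executed]


print(asyncio.run(main()))
```

With 25 queued actions and a cap of 20, this executes two batches of sizes 20 and 5, with the lock held only while the list is split.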

…ernamePasswordCredentials for improved clarity and consistency
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.



Comment on lines +187 to +189
raise ValueError("action_queue_delay must be positive")
if action_queue_max_actions < 1:
raise ValueError("action_queue_max_actions must be at least 1")

Copilot AI Jan 3, 2026


The error messages should specify the actual invalid values that were provided to help users debug configuration issues. For example, "action_queue_delay must be positive, got {action_queue_delay}" would be more helpful.

Suggested change
-        raise ValueError("action_queue_delay must be positive")
+        raise ValueError(
+            f"action_queue_delay must be positive, got {action_queue_delay!r}"
+        )
     if action_queue_max_actions < 1:
-        raise ValueError("action_queue_max_actions must be at least 1")
+        raise ValueError(
+            "action_queue_max_actions must be at least 1, "
+            f"got {action_queue_max_actions!r}"
+        )

Comment on lines +20 to +38
self._future: asyncio.Future[str] = asyncio.Future()

def set_result(self, exec_id: str) -> None:
"""Set the execution ID result."""
if not self._future.done():
self._future.set_result(exec_id)

def set_exception(self, exception: BaseException) -> None:
"""Set an exception if the batch execution failed."""
if not self._future.done():
self._future.set_exception(exception)

def is_done(self) -> bool:
"""Check if the execution has completed (either with result or exception)."""
return self._future.done()

def __await__(self):
"""Make this awaitable."""
return self._future.__await__()

Copilot AI Jan 3, 2026


Creating asyncio.Future() in __init__ (a synchronous method) can cause issues: the future binds to whichever event loop is current at construction time, which may not be the loop that later awaits it. The Future should be created lazily when first needed in an async context. Consider creating the future in the add() method the first time it is called, or switching to asyncio.Event and managing the result separately.

Suggested change
self._future: asyncio.Future[str] = asyncio.Future()
def set_result(self, exec_id: str) -> None:
"""Set the execution ID result."""
if not self._future.done():
self._future.set_result(exec_id)
def set_exception(self, exception: BaseException) -> None:
"""Set an exception if the batch execution failed."""
if not self._future.done():
self._future.set_exception(exception)
def is_done(self) -> bool:
"""Check if the execution has completed (either with result or exception)."""
return self._future.done()
def __await__(self):
"""Make this awaitable."""
return self._future.__await__()
self._future: asyncio.Future[str] | None = None
def _ensure_future(self) -> asyncio.Future[str]:
"""Create the underlying future lazily, bound to the running event loop."""
if self._future is None:
loop = asyncio.get_running_loop()
self._future = loop.create_future()
return self._future
def set_result(self, exec_id: str) -> None:
"""Set the execution ID result."""
future = self._ensure_future()
if not future.done():
future.set_result(exec_id)
def set_exception(self, exception: BaseException) -> None:
"""Set an exception if the batch execution failed."""
future = self._ensure_future()
if not future.done():
future.set_exception(exception)
def is_done(self) -> bool:
"""Check if the execution has completed (either with result or exception)."""
return self._future.done() if self._future is not None else False
def __await__(self):
"""Make this awaitable."""
return self._ensure_future().__await__()

Comment on lines +239 to +246
def get_pending_count(self) -> int:
"""Get the (approximate) number of actions currently waiting in the queue.
This method does not acquire the internal lock and therefore returns a
best-effort snapshot that may be slightly out of date if the queue is
being modified concurrently by other coroutines.
"""
return len(self._pending_actions)

Copilot AI Jan 3, 2026


While the documentation acknowledges this is a best-effort read, accessing self._pending_actions without the lock in a concurrent environment can lead to subtle bugs in Python due to list operations not being atomic. Consider acquiring the lock briefly to get an accurate count, or document more clearly that this method may return stale data and should not be relied upon for critical logic.
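
If an exact count is wanted, one option is to make the accessor async and take the lock briefly. A hedged sketch with illustrative names (CountedQueue; the real class and attribute layout may differ):

```python
import asyncio


class CountedQueue:
    """Sketch: a locked, consistent read of the pending-action count."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._pending_actions: list[str] = []

    async def get_pending_count(self) -> int:
        """Acquire the lock briefly so the count is consistent with the queue."""
        async with self._lock:
            return len(self._pending_actions)


async def main():
    q = CountedQueue()
    q._pending_actions.extend(["open", "close"])
    return await q.get_pending_count()


print(asyncio.run(main()))
```

The trade-off is an API change (the method becomes awaitable), so keeping the lock-free version with a clear "best effort only" warning is also a defensible choice.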

Copilot uses AI. Check for mistakes.


Development

Successfully merging this pull request may close these issues.

Add optional action queue to group executions and avoid Overkiz concurrency limits
