new types, updated rate limit service #102
📝 Walkthrough
Consolidates typing and domain models (ResourceUsageDomain moved; new ExecutionResultSummary, ReplayError, EndpointUsageStats), refactors rate-limit metrics and service, replaces DLQ response assembly with pydantic model_validate using DLQRetryResult, and modernizes annotations/typing across many modules.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
No issues found across 32 files
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/app/services/sse/redis_bus.py (1)
37-41: Use `aclose()` instead of deprecated `close()` for PubSub cleanup.
In redis-py 7.1.0, `PubSub.close()` is deprecated and marked with "Use `aclose()` instead". Although `close()` internally calls `aclose()` (so it still works), the code should use `await self._pubsub.aclose()` to follow current best practices and align with how the asyncio Redis client handles cleanup.
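A minimal sketch of the suggested cleanup, assuming the `_pubsub` attribute comes from `redis.asyncio` (class shape hypothetical, not the project's actual code):

```python
# Sketch only; class shape and attribute names are assumed.
import redis.asyncio as redis


class RedisBus:
    def __init__(self, client: redis.Redis) -> None:
        self._pubsub = client.pubsub()

    async def close(self) -> None:
        # aclose() is the non-deprecated cleanup API in redis-py >= 5.0.1.
        await self._pubsub.aclose()
```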
backend/app/domain/execution/models.py (1)
35-43: Metadata field is not persisted to the database and will be silently lost.
Line 42 changes `ExecutionResultDomain.metadata` to `EventMetadata | None`, but the `write_terminal_result` method (backend/app/db/repositories/execution_repository.py, line 48) does not persist the metadata field. While `resource_usage` is correctly serialized with `.model_dump()` and `error_type` is set, `metadata` is completely omitted from the database update. This causes metadata to be silently discarded whenever an `ExecutionResultDomain` is saved. Either add metadata persistence to `write_terminal_result` (which requires adding a `metadata` field to `ExecutionDocument`) or remove the field from `ExecutionResultDomain` if it's not intended for storage.
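A hypothetical sketch of the first option — serializing the field in `write_terminal_result` — with the update-document shape assumed, since the repository code isn't shown here:

```python
# Hypothetical sketch; the actual update shape in execution_repository.py is assumed.
update = {
    "resource_usage": result.resource_usage.model_dump() if result.resource_usage else None,
    "error_type": result.error_type,
    # The missing piece: persist metadata too (requires a matching
    # field on ExecutionDocument).
    "metadata": result.metadata.model_dump() if result.metadata else None,
}
```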
🤖 Fix all issues with AI agents
In `@backend/app/core/providers.py`:
- Around line 118-121: The async Redis client cleanup uses the deprecated
close() alias; update the finally block that currently does "await
client.close()" to call "await client.aclose()" instead so the
redis.asyncio.Redis client is closed with the modern API (locate the provider
that yields client and replace the await client.close() call with await
client.aclose()).
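A minimal sketch of the fixed provider, with the function name and construction details assumed from the instruction above:

```python
# Sketch only; provider name and client construction are assumed.
from collections.abc import AsyncIterator

import redis.asyncio as redis


async def provide_redis(url: str) -> AsyncIterator[redis.Redis]:
    client = redis.Redis.from_url(url, decode_responses=True)
    try:
        yield client
    finally:
        # aclose() replaces the deprecated close() alias.
        await client.aclose()
```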
🧹 Nitpick comments (3)
backend/app/domain/replay/models.py (1)
11-19: Consider using `datetime` type for `timestamp` field.
The `timestamp` field is typed as `str` but stores ISO-formatted datetime strings. Other models in this file (e.g., `ReplaySessionState.created_at`, `started_at`, `completed_at`) use the `datetime` type directly. Using `str` here creates inconsistency and loses type safety benefits.
♻️ Suggested refactor

```diff
 class ReplayError(BaseModel):
     """Error details for replay operations."""

     model_config = ConfigDict(from_attributes=True)

-    timestamp: str
+    timestamp: datetime
     error: str
     type: str | None = None  # Present for session-level errors
     event_id: str | None = None  # Present for event-level errors
```

This would also require updating the call sites in replay_service.py to pass `datetime` objects directly instead of calling `.isoformat()`.
backend/app/services/event_replay/replay_service.py (1)
174-177: Consider including `type` for event-level errors as well.
For consistency with `_handle_session_error` (line 149) and improved debugging, consider populating the `type` field with the exception class name.
♻️ Suggested refactor

```diff
 err = ReplayError(
-    timestamp=datetime.now(timezone.utc).isoformat(), event_id=str(event.event_id), error=str(error)
+    timestamp=datetime.now(timezone.utc).isoformat(),
+    event_id=str(event.event_id),
+    error=str(error),
+    type=type(error).__name__,
 )
```

backend/app/settings.py (1)
124-124: Document the requirement or hardcode the Redis decode setting.
Using `Literal[True]` is supported in pydantic-settings 2.5 and enforces the value at both type-check and runtime. However, this introduces a validation-time breaking change: any existing .env with REDIS_DECODE_RESPONSES=false will fail on startup.
Since `decode_responses=True` appears required by the Redis client instantiation in backend/core/providers.py, either:
- Add a comment explaining why it must be `True` (e.g., downstream code assumes decoded strings).
- Remove it as a configurable setting and hardcode `decode_responses=True` directly in the Redis client setup.
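A sketch of the first option, assuming the setting lives on a pydantic-settings class (class and field names assumed):

```python
# Sketch; class and field names are assumed, not the project's actual settings.
from typing import Literal

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Must stay True: downstream code assumes the Redis client returns
    # decoded str values, not bytes. Literal[True] rejects any .env override.
    redis_decode_responses: Literal[True] = True
```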
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/infrastructure/kafka/mappings.py (1)
38-45: Add missing `NOTIFICATION_ALL_READ` event type to mapping.
`EventType.NOTIFICATION_ALL_READ` is defined in the enum (line 44 of events.py) but not included in `EVENT_TYPE_TO_TOPIC`. This causes it to fall back to `SYSTEM_EVENTS` (line 99), inconsistent with all other `NOTIFICATION_*` events that route to `NOTIFICATION_EVENTS`.
Proposed fix

```diff
     EventType.NOTIFICATION_READ: KafkaTopic.NOTIFICATION_EVENTS,
+    EventType.NOTIFICATION_ALL_READ: KafkaTopic.NOTIFICATION_EVENTS,
     EventType.NOTIFICATION_CLICKED: KafkaTopic.NOTIFICATION_EVENTS,
```
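A guard test could keep this class of drift from recurring — a sketch assuming the names used in the comment above (`EventType`, `KafkaTopic`, `EVENT_TYPE_TO_TOPIC`) and their import paths:

```python
# Sketch; import paths and names are assumed.
from app.infrastructure.kafka.events import EventType
from app.infrastructure.kafka.mappings import EVENT_TYPE_TO_TOPIC, KafkaTopic


def test_notification_events_route_to_notification_topic() -> None:
    for event_type in EventType:
        if event_type.name.startswith("NOTIFICATION_"):
            # A missing entry raises KeyError, which also fails the test.
            assert EVENT_TYPE_TO_TOPIC[event_type] is KafkaTopic.NOTIFICATION_EVENTS
```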
🤖 Fix all issues with AI agents
In `@backend/app/db/repositories/admin/admin_events_repository.py`:
- Line 146: The list comprehension building events_by_type can crash on unknown
DB values because EventType(t["_id"]) raises for invalid enums; update the logic
in the events_by_type construction (where EventTypeCount and EventType are used
with top_types) to safely convert or skip unknowns — e.g., check
EventType._value2member_map_.get(t["_id"]) (or use a try/except around
EventType(...)) and only create EventTypeCount for valid enum members, skipping
or logging legacy/unknown types so the stats endpoint doesn't fail.
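A sketch of the tolerant construction, with the `top_types` document shape (`{"_id": <event type str>, "count": <int>}`) assumed from the instruction above:

```python
# Sketch; aggregation document shape is assumed.
events_by_type = [
    EventTypeCount(event_type=member, count=t["count"])
    for t in top_types
    if (member := EventType._value2member_map_.get(t["_id"])) is not None
]
```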
In `@backend/app/db/repositories/event_repository.py`:
- Around lines 247-254: The current conversion of aggregation keys to EventType
instances risks using an incorrect filter; replace any suggested membership
check against EventType.__members__.values() with a check against the enum's
string values (e.g., test k in {e.value for e in EventType}) before calling
EventType(k), or simply rely on the existing Pydantic validation
(EventDocument(**data)) and leave the current EventType(k) conversion in the
events_by_type mapping as-is; update the logic around events_by_type (and any
related EventTypeCount construction) to use the value-based membership check
so invalid strings are filtered correctly.
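A sketch of the value-based membership check, with a hypothetical `counts` dict standing in for the aggregation result:

```python
# Sketch; `counts` maps raw event-type strings to totals (hypothetical variable).
valid_values = {e.value for e in EventType}
events_by_type = {
    EventType(k): v for k, v in counts.items() if k in valid_values
}
```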
In `@backend/app/services/user_settings_service.py`:
- Around line 72-76: Replace the wildcard subscription pattern with the exact
event type: update the call to bus.subscribe currently using
f"{EventType.USER_SETTINGS_UPDATED}*" to subscribe to
EventType.USER_SETTINGS_UPDATED directly; keep the existing _handle function and
its isinstance(evt, UserSettingsUpdatedEvent) check and ensure the returned
subscription id is still assigned to self._subscription_id and that _handle
continues to call self.invalidate_cache(evt.user_id) as before.
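A sketch of the exact-match subscription, with the bus API and surrounding method shape assumed from the instruction above:

```python
# Sketch; bus.subscribe signature and class shape are assumed.
async def start(self) -> None:
    async def _handle(evt: object) -> None:
        if isinstance(evt, UserSettingsUpdatedEvent):
            self.invalidate_cache(evt.user_id)

    # Exact event type instead of the f"{...}*" wildcard pattern.
    self._subscription_id = await self.bus.subscribe(
        EventType.USER_SETTINGS_UPDATED, _handle
    )
```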
In `@docs/reference/openapi.json`:
- Around line 8494-8505: The examples for EventStatistics and related schemas
use uppercase EventType strings that don't match the EventType enum (which uses
lowercase); update all example values in the EventStatistics examples and any
EventTypeCountSchema/ServiceEventCountSchema example objects to use the
lowercase enum values (e.g., "request", "response", etc.) so generated SDKs and
clients get correct casing; search for examples referencing EventType in
EventStatistics, EventTypeCountSchema and ServiceEventCountSchema and replace
uppercase strings with the corresponding lowercase enum values consistently
across the file.
In `@frontend/src/lib/api/types.gen.ts`:
- Around line 3676-3680: The type for the field container_statuses has changed
from string to Array<ContainerStatusInfo>, so update any code that reads or
assigns this property to handle an array of ContainerStatusInfo objects rather
than a string; locate usages of container_statuses and adjust parsing/iteration
logic (e.g., map/filter loops, type annotations, and any JSON parsing) and
update any variable/type declarations or function signatures that consume it to
expect ContainerStatusInfo[] instead of string so callers of the affected types
(including functions referencing ContainerStatusInfo or container_statuses)
compile and handle the new structure correctly.
♻️ Duplicate comments (1)
backend/app/core/providers.py (1)
118-121: Use `aclose()` instead of `close()` for async Redis client cleanup.
This was previously flagged: `close()` is deprecated since redis-py 5.0.1 and is an alias for `aclose()`. The code should use the modern API.
🔧 Suggested fix

```diff
     try:
         yield client
     finally:
-        await client.close()
+        await client.aclose()
```
🧹 Nitpick comments (4)
backend/tests/e2e/services/rate_limit/test_rate_limit_service.py (1)
141-142: Consider testing via public API rather than private attributes.
Asserting on `_requests` (a private attribute by Python convention) couples this test to internal implementation details. If the metrics internals are refactored, this test could break even though the public behavior remains correct.
Since the metrics refactor introduced public recording methods (`record_request`, etc.), consider verifying wiring by invoking the public API or simply removing this assertion if the other tests already exercise the metrics path.
♻️ Alternative approach

```diff
-    # Verify metrics object has _requests counter for checks
-    assert hasattr(svc.metrics, '_requests')
+    # Verify metrics object exposes public recording API
+    assert callable(getattr(svc.metrics, 'record_request', None))
```

Or remove entirely if metric recording is already validated by other tests.
backend/app/schemas_pydantic/user_settings.py (1)
20-20: Avoid mutable default list for `channels`.
A list literal can be shared/mutated across instances; prefer a `default_factory` for isolation.
♻️ Suggested change

```diff
-    channels: list[NotificationChannel] = [NotificationChannel.IN_APP]
+    channels: list[NotificationChannel] = Field(default_factory=lambda: [NotificationChannel.IN_APP])
```
28-28: Prefer explicit enum values for the pattern/key.
Avoids reliance on Enum `__str__`/`encode` behavior and keeps the intent unambiguous.
♻️ Suggested change

```diff
-    await bus.subscribe(f"{EventType.USER_SETTINGS_UPDATED}*", handler)
+    await bus.subscribe(f"{EventType.USER_SETTINGS_UPDATED.value}*", handler)
 ...
-        key=EventType.USER_SETTINGS_UPDATED.encode("utf-8"),
+        key=EventType.USER_SETTINGS_UPDATED.value.encode("utf-8"),
```

Also applies to: 45-45
backend/app/services/rate_limit_service.py (1)
310-344: Consider handling missing rules in token bucket stats.
For sliding window stats (lines 332-337), when no matching rule is found, `limit` is set to 0, resulting in `remaining = max(0, 0 - count) = 0`. However, for token bucket stats (lines 338-344), the bucket's stored tokens are used directly without considering whether a rule exists.
This inconsistency could be intentional (the token bucket stores actual remaining tokens), but consider adding a comment explaining the difference, or returning `None` for endpoints without matching rules if that's more appropriate.
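A sketch of the asymmetry being described, with names assumed rather than taken from the service's actual code:

```python
# Sketch; rule/bucket/count names are assumed.
if algorithm is RateLimitAlgorithm.SLIDING_WINDOW:
    limit = rule.limit if rule is not None else 0
    remaining = max(0, limit - count)   # no rule -> remaining is always 0
else:  # token bucket
    remaining = int(bucket.tokens)      # used as-is, even with no matching rule
```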
Summary by cubic
Typed core models and streamlined the rate limit service and metrics to reduce complexity and make admin usage stats consistent across algorithms. Updated Grafana, OpenAPI, and docs to match the new metrics and types.
Refactors
Migration
Written for commit b8dda69. Summary will update on new commits.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation