Conversation
- System Console: optional review target (days) and overdue alert channel ID
- /github todo and daily reminder: per-review due/overdue from PR open date
- Left sidebar: color review counter when SLA enabled (clear vs overdue)
- Post to configured channel once per user/day on daily reminder when overdue
- Handle KV read errors for overdue dedupe; unit tests for SLA math
…annel post to a single daily summary
No actionable comments were generated in the recent review. 🎉
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough
Adds review-SLA tracking, enrichment, and a daily overdue digest: new plugin settings, KV storage of per-review-request start times, enrichment of PR data with SLA start timestamps, a background scheduler that aggregates overdue reviews and posts a daily Mattermost digest, plus UI and type updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler as Scheduler (plugin background)
    participant Plugin as Plugin
    participant KV as KV Store
    participant GH as GitHub API
    participant MM as Mattermost Server
    rect rgba(200,200,255,0.5)
        Scheduler->>Plugin: Trigger maybePostDailyOverdueSLADigest(ctx)
    end
    Plugin->>KV: Acquire distributed mutex (github_sla_digest_mutex)
    Plugin->>KV: Check/Write day marker (github_sla_digest_local_day)
    Plugin->>Plugin: collectAllOverdueSLAItems(ctx)
    Plugin->>KV: List user token keys
    loop per user
        Plugin->>KV: Read token for user
        Plugin->>GH: Search PRs needing review (paginated)
        GH-->>Plugin: Return issues/PRs
        Plugin->>KV: Read per-review SLA start keys
    end
    Plugin->>Plugin: Build grouped markdown message
    alt entries found
        Plugin->>MM: Create post in configured channel (author=bot)
        MM-->>Plugin: Post created
        Plugin->>KV: Write day marker (to avoid repost)
    else no entries
        Plugin->>KV: Write day marker (no-post)
    end
    Plugin->>KV: Release mutex
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Note, I've not been able to test this.
@jgheithcock I don't know how easy this is, but could the SLA be for acknowledging the PR (aka commenting on it) and not necessarily for having a review in? The goal isn't to get people to finish reviews within X days but to at least have them acknowledge the requests within X days.
@avasconcelos114 can probably help you with steps to set up a test env if needed
@jgheithcock Yep! I would love to help test and review this once ready :)
Thanks @avasconcelos114 - I'm setting this to ready for review. As @jwilander is on board with the target being "reviewed" vs "acknowledged", we can always change the #days or revisit this if we change our minds later! Much obliged!
(@jwilander) For posterity, we have agreed to go with the current implementation - to measure against actually reviewed. We can revisit later if need be.
Actionable comments posted: 1
🧹 Nitpick comments (5)
server/plugin/utils.go (1)
372-379: Consider handling empty owner/repo from malformed URLs.
If `htmlURL` is malformed, `parseOwnerAndRepo` may return empty strings, resulting in a digest line like `"- user - / : title"`. This is a minor edge case but could affect readability.
🔧 Optional: Add fallback for malformed URLs

```diff
 func formatChannelOverdueReviewLine(githubLogin, title, htmlURL, baseURL string) string {
 	owner, repo := parseOwnerAndRepo(htmlURL, baseURL)
+	repoDisplay := fmt.Sprintf("%s/%s", owner, repo)
+	if owner == "" || repo == "" {
+		repoDisplay = htmlURL // fallback to raw URL
+	}
 	if len(title) > 200 {
 		title = strings.TrimSpace(title[:200]) + "..."
 	}
-	return fmt.Sprintf("- %s - %s/%s : %s", githubLogin, owner, repo, title)
+	return fmt.Sprintf("- %s - %s : %s", githubLogin, repoDisplay, title)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/utils.go` around lines 372 - 379, formatChannelOverdueReviewLine can produce empty owner/repo when parseOwnerAndRepo returns empty strings for a malformed htmlURL; update formatChannelOverdueReviewLine to detect empty owner or repo after calling parseOwnerAndRepo and substitute a safe fallback (e.g., "unknown" for each missing part or derive a readable value from htmlURL/baseURL) before building the formatted string so you never emit lines like " / " — reference the function formatChannelOverdueReviewLine and the call to parseOwnerAndRepo when making the change.

server/plugin/sla_digest_scheduler.go (1)
37-42: Server local timezone assumption may cause unexpected behavior in containers.
The scheduler uses `time.Local`, which depends on the server's `TZ` environment variable. In containerized deployments, this may default to UTC unless explicitly configured, potentially posting the digest at unexpected times. Consider documenting this behavior in the admin settings or logging the detected timezone on startup.
📝 Add timezone logging for visibility
```diff
 func (p *Plugin) runSLADigestScheduler() {
+	p.client.Log.Info("SLA digest scheduler started", "timezone", time.Local.String())
 	for {
 		if !p.slaDigestSchedulingEnabled() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/sla_digest_scheduler.go` around lines 37 - 42, The scheduler currently uses time.Local in durationUntilNextLocalMidnight(), which can vary by the container's TZ and cause unexpected post times; update the startup/scheduler initialization to (1) read and log the effective timezone (e.g., time.Local.String() or os.Getenv("TZ")) so admins can see what timezone the service is using, and (2) make the scheduler timezone explicit/configurable (use a configured location instead of time.Local when computing the next midnight in durationUntilNextLocalMidnight()) so deployments can override the default; add a clear startup log message referencing the scheduler start and effective timezone.

webapp/src/components/sidebar_buttons/sidebar_buttons.jsx (1)
214-242: Date calculation works but could be simplified.
The overdue detection logic is correct, but the calculation is more complex than necessary. You're creating a UTC date from the start time, then recreating UTC dates for comparison.
♻️ Simplified alternative
```diff
 function reviewHasOverdue(reviews, targetDays) {
     if (!targetDays || !reviews || reviews.length === 0) {
         return false;
     }
+    const msPerDay = 24 * 60 * 60 * 1000;
+    const todayStart = Date.UTC(
+        new Date().getUTCFullYear(),
+        new Date().getUTCMonth(),
+        new Date().getUTCDate(),
+    );
     for (const pr of reviews) {
         const startIso = pr.review_sla_start || pr.created_at;
         if (!startIso) {
             continue;
         }
         const created = new Date(startIso);
         if (Number.isNaN(created.getTime())) {
             continue;
         }
-        const due = new Date(Date.UTC(
-            created.getUTCFullYear(),
-            created.getUTCMonth(),
-            created.getUTCDate(),
-        ));
-        due.setUTCDate(due.getUTCDate() + targetDays);
-        const today = new Date();
-        const todayUTC = Date.UTC(today.getUTCFullYear(), today.getUTCMonth(), today.getUTCDate());
-        const dueUTC = Date.UTC(due.getUTCFullYear(), due.getUTCMonth(), due.getUTCDate());
-        const diffDays = Math.round((dueUTC - todayUTC) / (24 * 60 * 60 * 1000));
-        if (diffDays < 0) {
+        const startDay = Date.UTC(created.getUTCFullYear(), created.getUTCMonth(), created.getUTCDate());
+        const dueDay = startDay + (targetDays * msPerDay);
+        if (todayStart > dueDay) {
             return true;
         }
     }
     return false;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@webapp/src/components/sidebar_buttons/sidebar_buttons.jsx` around lines 214 - 242, The reviewHasOverdue function uses an unnecessarily roundabout UTC due-date construction; replace it by normalizing the start date to UTC midnight once (compute startUTC = Date.UTC(created.getUTCFullYear(), created.getUTCMonth(), created.getUTCDate())), compute todayUTC the same way, then compute daysElapsed = Math.floor((todayUTC - startUTC) / (24*60*60*1000)) and return true when daysElapsed > targetDays; update variable names (startIso, created, startUTC, todayUTC, daysElapsed, targetDays) accordingly and remove the extra due/date recreation logic to simplify and clarify the calculation.

server/plugin/sla_digest.go (1)
90-151: Consider checking context cancellation during iteration.
The function receives a context but doesn't check `ctx.Done()` during the potentially long iteration over all users. If the 45-minute timeout fires mid-iteration, the function will continue processing instead of returning early.
♻️ Add context cancellation check
```diff
 for page := 0; ; page++ {
+	select {
+	case <-ctx.Done():
+		return out
+	default:
+	}
 	keys, err := p.store.ListKeys(page, keysPerPage, pluginapi.WithChecker(checker))
```

And inside the inner loop:

```diff
 for _, key := range keys {
+	if ctx.Err() != nil {
+		return out
+	}
 	userID := strings.TrimSuffix(key, githubTokenKey)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/sla_digest.go` around lines 90 - 151, collectAllOverdueSLAItems ignores ctx cancellation and may continue long-running work; add periodic checks of ctx.Done() to abort early: at the start of the outer loop over pages and inside the inner loops (before processing each key, after getGitHubUserInfo, before/after useGitHubClient and inside the PR loop) return immediately when ctx is canceled (or propagate context error) so functions like getGitHubUserInfo, useGitHubClient, githubClient.Search and sleep (delayBetweenUsers) don't keep running after cancellation; reference symbols: collectAllOverdueSLAItems, ctx, keysPerPage, delayBetweenUsers, getGitHubUserInfo, useGitHubClient, githubClient.Search, effectiveReviewSLAStart.

server/plugin/review_sla.go (1)
28-46: Consider cleanup strategy for stale SLA keys.
SLA start times are stored indefinitely but never cleaned up when PRs are merged or closed. Over time, this could lead to KV store growth. This isn't urgent but worth tracking for a future enhancement.
Would you like me to open an issue to track implementing cleanup of stale SLA keys when PRs are closed/merged?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/review_sla.go` around lines 28 - 46, The code in recordReviewRequestSLAStart persists SLA start times indefinitely; add a cleanup path to remove stale keys when PRs are closed/merged. Add logic in your PR event handler (the function that processes PullRequestEvent close/merge states) to construct the same key prefix for that (owner, repo, number) using reviewSLAStartKey or the underlying prefix and iterate/delete matching keys from p.store (or call p.store.Delete for each reviewer if you track reviewer names) so entries for that PR are removed on closed/merged; alternatively implement a background janitor goroutine that scans the reviewSLAStartKey prefix and deletes entries older than a threshold. Use the same identifying symbols (recordReviewRequestSLAStart, reviewSLAStartKey, p.store.Set/Delete and the PullRequestEvent handler) to locate where to add the deletion.
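The prefixable key scheme the prompt mentions can be sketched in a few lines of Go. This is a minimal illustration under stated assumptions — the key helpers and the map standing in for the plugin's KV store are hypothetical, not the plugin's actual `reviewSLAStartKey` or store API:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical key helpers mirroring the idea behind reviewSLAStartKey.
// Sharing a per-PR prefix makes every reviewer entry enumerable later.
func reviewSLAStartKey(owner, repo string, number int, reviewer string) string {
	return fmt.Sprintf("sla_start/%s/%s/%d/%s", owner, repo, number, reviewer)
}

func reviewSLAPRPrefix(owner, repo string, number int) string {
	return fmt.Sprintf("sla_start/%s/%s/%d/", owner, repo, number)
}

// cleanupReviewSLAKeys deletes every SLA entry for a PR by prefix scan,
// independent of who is still listed in RequestedReviewers at close time.
func cleanupReviewSLAKeys(store map[string]string, owner, repo string, number int) int {
	prefix := reviewSLAPRPrefix(owner, repo, number)
	deleted := 0
	for key := range store {
		if strings.HasPrefix(key, prefix) {
			delete(store, key) // deleting during range is safe in Go
			deleted++
		}
	}
	return deleted
}

func main() {
	store := map[string]string{}
	store[reviewSLAStartKey("mattermost", "mattermost-plugin-github", 983, "alice")] = "2024-01-02T00:00:00Z"
	store[reviewSLAStartKey("mattermost", "mattermost-plugin-github", 983, "bob")] = "2024-01-03T00:00:00Z"
	store[reviewSLAStartKey("mattermost", "other-repo", 7, "carol")] = "2024-01-04T00:00:00Z"

	n := cleanupReviewSLAKeys(store, "mattermost", "mattermost-plugin-github", 983)
	fmt.Println("deleted:", n, "remaining:", len(store)) // deleted: 2 remaining: 1
}
```

The same prefix scan would also work from a background janitor, since it needs no per-PR reviewer bookkeeping beyond the key layout itself.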
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/plugin/plugin.go`:
- Around line 300-302: The SLA scheduler goroutine started by go
p.runSLADigestScheduler() lacks a cancellation mechanism; add a context with
cancellation to the Plugin struct (e.g., ctx context.Context and cancel
context.CancelFunc), initialize it in OnActivate before launching
runSLADigestScheduler, and call the cancel function in OnDeactivate alongside
closing brokers; update runSLADigestScheduler to accept/use the plugin context
and exit its loop by selecting on ctx.Done() so the goroutine stops when the
plugin is deactivated.
---
Nitpick comments:
In `@server/plugin/review_sla.go`:
- Around line 28-46: The code in recordReviewRequestSLAStart persists SLA start
times indefinitely; add a cleanup path to remove stale keys when PRs are
closed/merged. Add logic in your PR event handler (the function that processes
PullRequestEvent close/merge states) to construct the same key prefix for that
(owner, repo, number) using reviewSLAStartKey or the underlying prefix and
iterate/delete matching keys from p.store (or call p.store.Delete for each
reviewer if you track reviewer names) so entries for that PR are removed on
closed/merged; alternatively implement a background janitor goroutine that scans
the reviewSLAStartKey prefix and deletes entries older than a threshold. Use the
same identifying symbols (recordReviewRequestSLAStart, reviewSLAStartKey,
p.store.Set/Delete and the PullRequestEvent handler) to locate where to add the
deletion.
In `@server/plugin/sla_digest_scheduler.go`:
- Around line 37-42: The scheduler currently uses time.Local in
durationUntilNextLocalMidnight(), which can vary by the container's TZ and cause
unexpected post times; update the startup/scheduler initialization to (1) read
and log the effective timezone (e.g., time.Local.String() or os.Getenv("TZ")) so
admins can see what timezone the service is using, and (2) make the scheduler
timezone explicit/configurable (use a configured location instead of time.Local
when computing the next midnight in durationUntilNextLocalMidnight()) so
deployments can override the default; add a clear startup log message
referencing the scheduler start and effective timezone.
In `@server/plugin/sla_digest.go`:
- Around line 90-151: collectAllOverdueSLAItems ignores ctx cancellation and may
continue long-running work; add periodic checks of ctx.Done() to abort early: at
the start of the outer loop over pages and inside the inner loops (before
processing each key, after getGitHubUserInfo, before/after useGitHubClient and
inside the PR loop) return immediately when ctx is canceled (or propagate
context error) so functions like getGitHubUserInfo, useGitHubClient,
githubClient.Search and sleep (delayBetweenUsers) don't keep running after
cancellation; reference symbols: collectAllOverdueSLAItems, ctx, keysPerPage,
delayBetweenUsers, getGitHubUserInfo, useGitHubClient, githubClient.Search,
effectiveReviewSLAStart.
In `@server/plugin/utils.go`:
- Around line 372-379: formatChannelOverdueReviewLine can produce empty
owner/repo when parseOwnerAndRepo returns empty strings for a malformed htmlURL;
update formatChannelOverdueReviewLine to detect empty owner or repo after
calling parseOwnerAndRepo and substitute a safe fallback (e.g., "unknown" for
each missing part or derive a readable value from htmlURL/baseURL) before
building the formatted string so you never emit lines like " / " — reference the
function formatChannelOverdueReviewLine and the call to parseOwnerAndRepo when
making the change.
In `@webapp/src/components/sidebar_buttons/sidebar_buttons.jsx`:
- Around line 214-242: The reviewHasOverdue function uses an unnecessarily
roundabout UTC due-date construction; replace it by normalizing the start date
to UTC midnight once (compute startUTC = Date.UTC(created.getUTCFullYear(),
created.getUTCMonth(), created.getUTCDate())), compute todayUTC the same way,
then compute daysElapsed = Math.floor((todayUTC - startUTC) / (24*60*60*1000))
and return true when daysElapsed > targetDays; update variable names (startIso,
created, startUTC, todayUTC, daysElapsed, targetDays) accordingly and remove the
extra due/date recreation logic to simplify and clarify the calculation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 84069505-cb49-43df-a3ac-ec30b421e213
📒 Files selected for processing (16)
- plugin.json
- server/plugin/api.go
- server/plugin/configuration.go
- server/plugin/graphql/lhs_request.go
- server/plugin/plugin.go
- server/plugin/review_sla.go
- server/plugin/review_sla_test.go
- server/plugin/sla_digest.go
- server/plugin/sla_digest_scheduler.go
- server/plugin/utils.go
- server/plugin/utils_test.go
- server/plugin/webhook.go
- webapp/src/components/sidebar_buttons/index.js
- webapp/src/components/sidebar_buttons/sidebar_buttons.jsx
- webapp/src/reducers/index.ts
- webapp/src/types/github_types.ts
avasconcelos114
left a comment
Thank you for this awesome new feature! This will be a valuable addition to making sure we can stay on top of PR reviews :)
I had a few comments in addition to the existing one by coderabbit. I'll get a server set up so we can have a shared test environment; so far I've only tested it locally.
```go
// reviewSLAMarkdown returns HTML-styled SLA suffix for Mattermost markdown and whether the review is overdue.
func reviewSLAMarkdown(createdAt github.Timestamp, targetDays int, now time.Time) (suffix string, overdue bool) {
```
This function seems to be unused (only tests seem to reference it), I think it can be safely removed
```jsx
if (reviewHasOverdue(list, targetDays)) {
    return {...base, color: '#c62828'};
}
return {...base, color: '#2e7d32'};
```
I think that we'll want to apply theme color variables to these so that they can remain theme-aware. I think var(--dnd-indicator) for overdue items and var(--online-indicator) for the ones that still have time left would be appropriate colors given their underlying positive/negative connotations
I've also installed this in a private test server to do some preliminary testing, and I was wondering if we'd also want to highlight PRs of interest in the list of PRs awaiting reviews as well. I fear that the color of the sidebar button alone may not offer enough visibility. Maybe applying an "Overdue" tag next to the name of the PR would allow users to quickly view and prioritize
Great (although I see some more issues with color below) - this was resolved in af5e7e6
```json
      "help_text": "Optional. Number of calendar days until a review counts as due. The start time is when you were requested as a reviewer (recorded from GitHub pull_request review_requested webhooks to this server); if unknown, the PR open date is used. When greater than zero, /github todo shows due/overdue text and the sidebar review counter can be color-coded. Set to 0 to disable."
    },
    {
      "key": "OverdueReviewsChannelID",
```
This is probably fine for a first iteration of this feature, but I'm wondering if we'd want to introduce a slash command to make this a bit more configurable.
I feel like in real use-cases, organizations might want to be able to configure these digests into each channel with a list of repositories that might be most relevant for the members of that channel
Plugin Spinwick PR #983 🎉 Test server created! Access here: https://github-pr-983-ab4s3.test.mattermost.cloud
The test server was created successfully, but there was an issue installing or enabling the plugin automatically:
You can manually install the plugin:
Future commits will still attempt to automatically update the plugin.
Installation ID:
Credentials: Posted securely in this Mattermost channel - Look for PR #983
Installed and completed the setup needed to test the plugin in the linked spinwick server
…anup
- Add cancellable context to SLA digest scheduler so the goroutine stops on plugin deactivation
- Check ctx.Done() in collectAllOverdueSLAItems to abort on cancellation
- Log detected timezone on scheduler startup for container debugging
- Use var(--dnd-indicator) and var(--online-indicator) instead of hardcoded hex colors so the sidebar button respects the active theme
- Delete SLA start-time KV entries when a PR is closed or merged
- Fall back to raw HTML URL in digest lines when owner/repo parsing fails
@avasconcelos114 - In addition to the changes made, for
Actionable comments posted: 3
♻️ Duplicate comments (1)
server/plugin/review_sla.go (1)
48-69: ⚠️ Potential issue | 🟡 Minor
This close-path cleanup still leaves orphaned SLA keys.
`RequestedReviewers` on the close webhook only contains reviewers still pending at close time. Anyone who reviewed earlier or was removed from the request list keeps their KV entry forever, and `reviewSLAStartKey` is one-way, so those orphaned keys cannot be recovered later for deletion. Please keep a per-PR reviewer index (or switch to a prefixable key scheme) so close/removal/review events can clean up every entry deterministically.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/review_sla.go` around lines 48 - 69, cleanupReviewSLAKeys only iterates event.GetPullRequest().RequestedReviewers so SLA KV entries for reviewers who already reviewed or were removed remain orphaned; update the design so every per-PR reviewer key can be deterministically enumerated and deleted (either maintain a per-PR reviewer index stored under a known key like reviewSLAIndex(owner,repo,number) that lists all reviewer logins to iterate and delete, or change reviewSLAStartKey to use a predictable per-PR prefix so you can call store.List/iterate and delete all keys for that PR); modify places that create reviewSLAStartKey entries to also push the reviewer into the per-PR index (or use the new prefix scheme) and change cleanupReviewSLAKeys to enumerate that index/prefix and call p.store.Delete for each key instead of relying on RequestedReviewers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/plugin/sla_digest_scheduler.go`:
- Around line 23-37: The scheduler loop currently waits until the next midnight
when scheduling becomes enabled, which skips the current day's digest; modify
the loop so that immediately after detecting p.slaDigestSchedulingEnabled()
returns true you invoke the digest runner (call maybePostDailyOverdueSLADigest
or the function used to post the daily digest) once before computing
durationUntilNextLocalMidnight() and sleeping; this ensures a same-day catch-up
run on enable/restart while still relying on the existing day-marker
deduplication to prevent duplicates.
In `@server/plugin/sla_digest.go`:
- Around line 66-80: The digest currently uses p.collectAllOverdueSLAItems and
buildSLADigestMessage to post PR titles/URLs into the single configured
OverdueReviewsChannelID, which can leak unrelated/private repos; update the
logic to scope entries to only repositories subscribed for that channel (reuse
the same subscription resolution used in GetSubscribedChannelsForRepository)
before calling buildSLADigestMessage, or alternatively change
buildSLADigestMessage to produce counts-only summaries for the channel; ensure
you filter the entries slice by repository subscription for
cfg.OverdueReviewsChannelID (or switch to counts-only) prior to creating the
model.Post so the channel only receives allowed repo data.
- Around line 127-131: The current call to githubClient.Search.Issues inside
p.useGitHubClient stores only the first page into issueResults and ignores the
*github.Response pagination info; change the logic in the block that calls
githubClient.Search.Issues (the closure passed to p.useGitHubClient) to capture
the response (resp), loop while resp.NextPage != 0, and append results into
issueResults across pages by re-calling githubClient.Search.Issues with updated
github.SearchOptions{ListOptions: github.ListOptions{Page: resp.NextPage}} (use
getReviewSearchQuery for the same query) until no next page remains so all
matching issues are collected.
---
Duplicate comments:
In `@server/plugin/review_sla.go`:
- Around line 48-69: cleanupReviewSLAKeys only iterates
event.GetPullRequest().RequestedReviewers so SLA KV entries for reviewers who
already reviewed or were removed remain orphaned; update the design so every
per-PR reviewer key can be deterministically enumerated and deleted (either
maintain a per-PR reviewer index stored under a known key like
reviewSLAIndex(owner,repo,number) that lists all reviewer logins to iterate and
delete, or change reviewSLAStartKey to use a predictable per-PR prefix so you
can call store.List/iterate and delete all keys for that PR); modify places that
create reviewSLAStartKey entries to also push the reviewer into the per-PR index
(or use the new prefix scheme) and change cleanupReviewSLAKeys to enumerate that
index/prefix and call p.store.Delete for each key instead of relying on
RequestedReviewers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 267907b7-060d-4ef3-91fd-8812282a4b16
📒 Files selected for processing (7)
- server/plugin/plugin.go
- server/plugin/review_sla.go
- server/plugin/sla_digest.go
- server/plugin/sla_digest_scheduler.go
- server/plugin/utils.go
- server/plugin/webhook.go
- webapp/src/components/sidebar_buttons/sidebar_buttons.jsx
🚧 Files skipped from review as they are similar to previous changes (1)
- server/plugin/utils.go
Plugin test server update attempted, but encountered an issue: The test server is still available. You can manually download and install the updated plugin using the artifact link below. Updated with git commit
Run the SLA digest immediately when scheduling becomes enabled so a restart or config change after midnight doesn't skip the current day. Paginate Search.Issues in collectAllOverdueSLAItems (100 per page) instead of silently truncating at the default 30 results.
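The pagination half of that change follows the usual go-github pattern of looping until `Response.NextPage` is zero. A minimal sketch of that loop, with a mock search function standing in for `githubClient.Search.Issues` — the types and names here are illustrative assumptions, not the plugin's code:

```go
package main

import "fmt"

// searchResult stands in for go-github's IssuesSearchResult plus the
// Response.NextPage pagination field (0 means no further pages).
type searchResult struct {
	Issues   []string
	NextPage int
}

// collectAllIssues keeps requesting pages until NextPage is zero, mirroring
// the fix that stops silently truncating at the first page of results.
func collectAllIssues(search func(page int) searchResult) []string {
	var all []string
	page := 1
	for {
		res := search(page)
		all = append(all, res.Issues...)
		if res.NextPage == 0 {
			break
		}
		page = res.NextPage
	}
	return all
}

func main() {
	// Mock three pages of search results.
	pages := map[int]searchResult{
		1: {Issues: []string{"PR-1", "PR-2"}, NextPage: 2},
		2: {Issues: []string{"PR-3"}, NextPage: 3},
		3: {Issues: []string{"PR-4"}, NextPage: 0},
	}
	all := collectAllIssues(func(page int) searchResult { return pages[page] })
	fmt.Println(len(all)) // 4
}
```

With the real client, each iteration would re-call `Search.Issues` with the same query and an updated `ListOptions.Page`.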
🧹 Nitpick comments (2)
server/plugin/sla_digest_scheduler.go (2)
11-13: Update the function comment to reflect the immediate catch-up behavior.
The docstring currently describes only "sleep-until-midnight then run," but the implementation also runs an immediate catch-up (Line 32). Keeping this comment aligned will avoid future maintenance confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/sla_digest_scheduler.go` around lines 11 - 13, Update the comment for runSLADigestScheduler to mention that it performs an immediate catch-up run when SLA channel and target days are configured (in addition to sleeping until next local midnight and then running daily), so the docstring accurately reflects the behavior implemented at the immediate-run point around the code referenced on line 32.
32-52: Consider removing the redundant post-midnight second digest attempt.
In steady state, the scheduler runs at midnight (Line 45), then loops and runs catch-up again a couple seconds later (Line 34). Day-marker dedupe prevents duplicate posting, but this still creates an avoidable extra execution path each day.
♻️ Proposed tweak (run catch-up once per enable cycle)
```diff
 func (p *Plugin) runSLADigestScheduler(ctx context.Context) {
 	p.client.Log.Info("SLA digest scheduler started", "timezone", time.Local.String())
+	ranCatchUp := false
 	for {
 		select {
 		case <-ctx.Done():
 			return
 		default:
 		}
 		if !p.slaDigestSchedulingEnabled() {
+			ranCatchUp = false
 			select {
 			case <-ctx.Done():
 				return
 			case <-time.After(5 * time.Minute):
 			}
 			continue
 		}
-		// Catch-up: run immediately for the current day (the day-marker deduplicates).
-		digestCtx, cancel := context.WithTimeout(ctx, 45*time.Minute)
-		p.maybePostDailyOverdueSLADigest(digestCtx)
-		cancel()
+		if !ranCatchUp {
+			// Catch-up: run immediately for the current day (the day-marker deduplicates).
+			digestCtx, cancel := context.WithTimeout(ctx, 45*time.Minute)
+			p.maybePostDailyOverdueSLADigest(digestCtx)
+			cancel()
+			ranCatchUp = true
+		}
 		d := durationUntilNextLocalMidnight()
 		select {
 		case <-ctx.Done():
 			return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/plugin/sla_digest_scheduler.go` around lines 32 - 52, The loop currently runs maybePostDailyOverdueSLADigest twice per cycle (once at start and again a few seconds after midnight) using the second short select/time.After(2 * time.Second) block; remove that redundant post-midnight re-run so the scheduler calls maybePostDailyOverdueSLADigest only once per enable cycle (keep the initial catch-up run, wait with durationUntilNextLocalMidnight(), run with a 45-minute context via context.WithTimeout and cancel, then loop back), eliminating the extra select { <-time.After(2 * time.Second) } block and any duplicated invocation logic around maybePostDailyOverdueSLADigest.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@server/plugin/sla_digest_scheduler.go`:
- Around line 11-13: Update the comment for runSLADigestScheduler to mention
that it performs an immediate catch-up run when SLA channel and target days are
configured (in addition to sleeping until next local midnight and then running
daily), so the docstring accurately reflects the behavior implemented at the
immediate-run point around the code referenced on line 32.
- Around line 32-52: The loop currently runs maybePostDailyOverdueSLADigest
twice per cycle (once at start and again a few seconds after midnight) using the
second short select/time.After(2 * time.Second) block; remove that redundant
post-midnight re-run so the scheduler calls maybePostDailyOverdueSLADigest only
once per enable cycle (keep the initial catch-up run, wait with
durationUntilNextLocalMidnight(), run with a 45-minute context via
context.WithTimeout and cancel, then loop back), eliminating the extra select {
<-time.After(2 * time.Second) } block and any duplicated invocation logic around
maybePostDailyOverdueSLADigest.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 623fb4cb-298f-41f4-bdbb-217ebe76dd04
📒 Files selected for processing (2)
- server/plugin/sla_digest.go
- server/plugin/sla_digest_scheduler.go
🚧 Files skipped from review as they are similar to previous changes (1)
- server/plugin/sla_digest.go
Plugin test server update attempted, but encountered an issue: The test server is still available. You can manually download and install the updated plugin using the artifact link below. Updated with git commit
avasconcelos114
left a comment
Ah, thank you for clarifying that reviewSLAMarkdown isn't actually dead code - somehow Ctrl + F hadn't shown me the invocation in plugin.go when I first searched for it.
I did encounter an issue when testing the feature that will need addressing, though.
server/plugin/utils.go
Outdated
```go
if diffDays == 1 {
	unit = "day"
}
return fmt.Sprintf(` <span style="color:#2e7d32;font-style:italic">(Due in %d %s)</span>`, diffDays, unit), false
```
The thing about using HTML tags with colors like this is that the markdown parser doesn't actually process them - they are treated as plaintext.
We may want to keep this as plain markdown with bolded text or something else that's supported, and make overdue markers visible through other means (like the PR list in the "Pull Requests Needing Review" view).
Great catch, and yes. Maybe just an icon, like:
📆 (Due in 1 day)
Yep, using icons sounds like the perfect avenue to grab people's attention while being perfectly supported across platforms!
```go
if isPrivate && !p.permissionToRepo(authorUserID, repoName) {
	authorUserID = ""
}
case actionReopened:
```
Question: If we have a scenario where a PR is closed and then re-opened, do we want to trigger a new recordReviewRequestSLAStart to reset the count? It feels like it could be a desired behavior.
It is a great feature request but I feel that is a v.2 feature - I'd like to get some mileage with the majority of the PRs first.
Good point! :) this is already a great QoL update to have, just got excited about the possibilities 😄
https://github.com/DryRunSecurity fp drs_d50e9434
Mattermost's rendering pipeline eliminates the XSS vector: Mattermost's Markdown renderer escapes raw HTML, the digest doesn't wrap titles in link or HTML syntax, and this matches the existing plugin-wide pattern.
@coderabbitai resolve
✅ Actions performed
Comments resolved and changes approved.
Plugin test server update attempted, but encountered an issue: The test server is still available. You can manually download and install the updated plugin using the artifact link below. Updated with git commit
avasconcelos114
left a comment
All changes LGTM! I think we're ready for QA Testing now :)
I'll update the spinwick server to the latest build in this branch
Test server destroyed
Summary
Per-user review due dates in /github todo and daily reminder DMs. When a review target is configured, each item under "Review Requests" shows how many days remain or how many days overdue the review is. The SLA clock starts when the reviewer was requested (from the existing pull_request webhook), falling back to PR open date for reviews requested before the plugin was installed.

Color-coded review counter in the left sidebar. When a review target is configured, the review button in the sidebar is color-coded:
When the review target is set to 0 (disabled), the button always uses its default styling.

Daily overdue digest posted to a channel. Once per day (shortly after midnight, server local time), the plugin posts a single aggregated message to a configured channel listing all overdue reviews across all connected users, grouped by days overdue (most overdue first):
The digest is cluster-safe (mutex + KV deduplication) and posts at most once per calendar day even with multiple app nodes.
Admin setup
Two new settings appear in System Console > Plugins > GitHub:
- Review target (days): defaults to 0 (disabled). Set to 0 to disable all SLA features.
- OverdueReviewsChannelID: optional channel for the daily overdue digest.

To enable, set the review target to a positive number of days (e.g. 3). No webhook or GitHub-side configuration changes are needed beyond the plugin's existing setup.
Change Impact: 🟡 Medium
Reasoning: The PR introduces an additive, user-facing SLA feature that spans configuration, plugin core, webhooks, background scheduling, KV persistence, and UI; defaults are safe (disabled) but the changes are cross-cutting and time-sensitive.
Regression Risk: Moderate — most changes are isolated, but new webhook handling, KV-based SLA keys, time/date computations, cluster-safe mutex + KV coordination, and the scheduler/digest paths introduce subtle risks (timezone/day-boundary logic, KV consistency, cross-node deduplication). Some unit tests were added (key stability, SLA markdown), but the digest scheduling and collection code paths appear to lack comprehensive automated coverage.
QA Recommendation: Perform targeted manual and integration QA: validate SLA start recording from review_requested webhooks and cleanup on PR close/merge; confirm /github todo and daily DM SLA labels and sidebar color states across timezones and day boundaries; verify daily digest posts at-most-once per server-local day in single-node and multi-node setups and check digest formatting; exercise runtime configuration changes. Add automated tests for sla_digest scheduling/collection and mutex/day-marker behavior before skipping manual QA.
Generated by CodeRabbitAI