
fix: harden error-logging pipeline for #463-#468 #470

Merged: bd73-com merged 2 commits into main from claude/fix-github-issues-oSAWx on Apr 24, 2026

Conversation

bd73-com (Owner) commented Apr 24, 2026

Summary

Closes six open bugs clustered around the admin error-logging subsystem (#463, #464, #465, #466, #467, #468). The changes redact secrets before they reach Replit logs, prevent attacker-controlled log-row forgery, enforce the level enum at the DB layer, fix the restore/finalize window-starvation bug, and cap per-user request volume on admin error-log endpoints.

Changes

Secret redaction (server/services/logger.ts, server/index.ts)

DB-level level enum (server/services/ensureTables.ts + test)

  • Adds an idempotent error_logs_level_chk CHECK constraint (level IN ('error','info','warning')) inside ensureErrorLogColumns so a direct SQL insert (INSERT … VALUES ('Error', …)) can no longer land a miscased/typoed level. Historical 'warning' rows remain valid. Fixes Bug: errorLogs.level has no CHECK constraint, allowing typos to land silently #466.
  • Constraint add is wrapped in DO … EXCEPTION WHEN check_violation so a database with pre-existing out-of-range rows logs a WARNING instead of crashing boot.
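The constraint-add described above might take roughly this shape (illustrative only — the real statement lives in ensureErrorLogColumns, and the pg_constraint existence probe is an assumption about how idempotency is achieved):

```typescript
// Sketch of an idempotent CHECK-constraint add for error_logs.level.
// The EXCEPTION branch mirrors the PR's behavior: a database with
// pre-existing out-of-range rows warns instead of crashing boot.
const ADD_LEVEL_CHECK_SQL = `
DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM pg_constraint
    WHERE conname = 'error_logs_level_chk'
      AND conrelid = 'error_logs'::regclass
  ) THEN
    ALTER TABLE error_logs
      ADD CONSTRAINT error_logs_level_chk
      CHECK (level IN ('error', 'info', 'warning'));
  END IF;
EXCEPTION
  WHEN check_violation THEN
    -- pre-existing out-of-range rows: warn instead of aborting startup
    RAISE WARNING 'error_logs_level_chk not added: existing rows violate it';
END $$;
`;
```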

Restore/finalize ordering (server/routes.ts)

Per-user rate limiter (server/middleware/rateLimiter.ts, server/routes.ts)

  • Adds adminErrorLogsRateLimiter (60 req/min across all tiers) and applies it to the five admin error-log endpoints (count, list, batch-delete, restore, finalize). Caps the amplification when a Power-tier account opens multiple dashboard tabs — the badge poller no longer fans out into N concurrent 500-row scans every 30 s. This is the short-term option called out in Bug: Admin error-log endpoints use JS filter for ownership check on 500-row scans #465.
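A minimal in-memory sketch of the fixed-window, per-user limiting described above (the real createTieredRateLimiter signature and storage are not shown in this PR, so this only models the idea):

```typescript
// Fixed-window counter keyed by user id. Models the 60 req/min cap that
// adminErrorLogsRateLimiter applies uniformly to free/pro/power tiers.
type Window = { start: number; count: number };

function makeRateLimiter(max: number, windowMs: number) {
  const windows = new Map<string, Window>();
  // Returns true if the request is allowed; false means "respond 429".
  return function allow(userId: string, now: number = Date.now()): boolean {
    const w = windows.get(userId);
    if (!w || now - w.start >= windowMs) {
      windows.set(userId, { start: now, count: 1 });
      return true;
    }
    if (w.count >= max) return false;
    w.count += 1;
    return true;
  };
}
```

With max = 60 and windowMs = 60_000, the 61st request from the same user inside one window is rejected, which is what caps the multi-tab badge-poller fan-out.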

Test updates

  • Added adminErrorLogsRateLimiter: (_req, _res, next) => next() to every test that mocks ./middleware/rateLimiter (11 files).
  • Bumped ensureErrorLogColumns execute-count assertions from 9 → 10 and 4 → 5 to account for the new DO block; inserted the extra mockResolvedValueOnce({ rows: [] }) in the fast-path + drift-detection tests and in routes.migration.test.ts fallback sequences.
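In isolation the pass-through stub added to those mocks behaves like this (types elided; the Express req/res objects are irrelevant to it):

```typescript
// Pass-through stub, the same shape added to each rateLimiter mock:
// it never throttles, it only advances the middleware chain.
const adminErrorLogsRateLimiter =
  (_req: unknown, _res: unknown, next: () => void) => next();

// Any handler behind the stub runs as if no limiter were installed.
function runBehindStub(handler: () => string): string {
  let out = "";
  adminErrorLogsRateLimiter({}, {}, () => { out = handler(); });
  return out;
}
```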

How to test

  • npm run check — type checks clean (tsc, no errors).
  • npm run test — all 2480 tests pass (101 files).
  • npm run build — production bundle succeeds (pre-existing import.meta warnings unchanged).

Manual:

  1. Trigger a Resend or Stripe SDK failure whose raw error.message contains Authorization: Bearer re_…. Confirm the Replit log line shows [REDACTED] and the error_logs row stores the same redacted string (Bug: ErrorLogger console output bypasses sanitization for Replit logs #467).
  2. Create a monitor named "Real\n[ERROR][scheduler] fake" and force a scheduler failure for it. Confirm the stored message contains a space where the newline was (dedup bucket is not forged) (Bug: Log-message forgery via unsanitized monitor.name interpolation #464).
  3. In psql, run INSERT INTO error_logs (level, source, message) VALUES ('Error', 'scraper', 'test'); — the CHECK constraint should reject it (Bug: errorLogs.level has no CHECK constraint, allowing typos to land silently #466).
  4. As admin A, batch-delete 600 rows; as admin B, batch-delete 200 rows; click Undo within 5 s as B. Confirm B's rows are restored (they are the most-recently-deleted, so ORDER BY deleted_at DESC finds them) (Bug: Error-log restore/finalize orderBy(asc(id)).limit(500) can silently skip newest soft-deleted rows #468).
  5. Open /admin-errors in 10 tabs simultaneously as a Power user. Confirm the 11th poll in a 60 s window returns 429 (Bug: Admin error-log endpoints use JS filter for ownership check on 500-row scans #465).
  6. Force an unhandled exception in any route (e.g. temporarily throw new Error("postgres://user:pw@host/db") from a handler). Confirm the Replit log line shows [REDACTED] rather than the DSN (Bug: Top-level error handler logs unsanitized error to Replit logs #463).
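A stand-in sketch of the redaction the steps above verify (the shipped patterns in server/services/logger.ts are not reproduced in this PR body, so these regexes are illustrative, not the real ones):

```typescript
// Redact DSNs with inline credentials and bearer tokens before a message
// reaches console output or the error_logs row.
function redactSecrets(msg: string): string {
  return msg
    // scheme://user:password@host/... style connection strings
    .replace(/\b[a-z][a-z0-9+.-]*:\/\/[^\s:@/]+:[^\s@/]+@\S+/gi, "[REDACTED]")
    // Authorization: Bearer <token>
    .replace(/Bearer\s+[A-Za-z0-9_\-.]+/g, "Bearer [REDACTED]");
}
```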

https://claude.ai/code/session_017VHC5MqCX5EtZnGC4Mqn5U


Generated by Claude Code

Summary by CodeRabbit

  • New Features

    • Rate limiting implemented on admin error-log management endpoints to prevent excessive usage
    • Soft-deleted error logs now display in reverse chronological order, showing most recent deletions first
  • Improvements

    • Error messages now sanitized for console output to eliminate control character artifacts
    • Database-level validation constraints applied to error log fields for improved data consistency

- #463: route unhandled Express errors through ErrorLogger so DSNs,
  bearer tokens and provider keys in stack traces are redacted before
  they hit Replit logs.
- #467: sanitize ErrorLogger's console output (message + error.message)
  with the same regex pass applied to the DB write, so Replit logs do
  not leak secrets embedded by SDK HTTP response bodies.
- #464: strip C0/C1 control characters in sanitizeString so
  attacker-controlled fields (e.g. `monitor.name`) cannot forge log
  rows or poison the `(level, source, message)` dedup bucket.
- #466: add an idempotent CHECK constraint on `error_logs.level`
  enforcing `error | info | warning` at the DB layer so direct SQL
  inserts cannot land miscased/typoed values.
- #468: order `restore` and `finalize` scans by `deleted_at DESC,
  id DESC` so the caller's just-deleted rows fall in the 500-row
  window instead of being starved by older soft-deletes.
- #465: add a per-user `adminErrorLogsRateLimiter` (60 req/min across
  all tiers) on the five admin error-log endpoints so badge polls from
  multiple tabs cannot amplify the 500-row scan.
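The #464 fix can be sketched as follows (assumed shape of sanitizeString's new pass; the exact regex may differ):

```typescript
// Tabs become single spaces; remaining C0/C1 controls (including \n and
// \r) become spaces so a crafted monitor.name cannot start a fake log
// line or poison the (level, source, message) dedup bucket.
function stripControlChars(input: string): string {
  return input
    .replace(/\t/g, " ")
    .replace(/[\u0000-\u0008\u000A-\u001F\u007F-\u009F]/g, " ");
}
```

On the forged name from the manual test above, "Real\n[ERROR][scheduler] fake" comes out as "Real [ERROR][scheduler] fake".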

https://claude.ai/code/session_017VHC5MqCX5EtZnGC4Mqn5U
@github-actions github-actions Bot added the fix label Apr 24, 2026
coderabbitai (Bot) commented Apr 24, 2026

Warning

Rate limit exceeded

@bd73-com has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 59 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: fae32700-89dd-4e45-84aa-a24839bf4270

📥 Commits

Reviewing files that changed from the base of the PR and between 601cecd and 7c53493.

📒 Files selected for processing (5)
  • server/index.ts
  • server/routes.ts
  • server/services/ensureTables.test.ts
  • server/services/ensureTables.ts
  • server/services/logger.ts
📝 Walkthrough

Walkthrough

The PR addresses multiple security vulnerabilities in error logging: the top-level error handler now routes unhandled exceptions through ErrorLogger for sanitization, console output sanitization is enhanced to strip control characters before logging, a database CHECK constraint is added to enforce valid error_logs.level values, admin error-log endpoints are protected with rate limiting, and restore/finalize queries are reordered to prioritize recently soft-deleted entries.

Changes

Cohort / File(s) Summary
Error Handler & Sanitization
server/index.ts, server/services/logger.ts
Error handler routes unhandled errors through dynamically-imported ErrorLogger instead of raw console.error. Logger sanitization pipeline now strips C0/C1 control characters and normalizes tabs before console output, aligning console logging with existing database sanitization semantics.
Database Constraints
server/services/ensureTables.ts, server/services/ensureTables.test.ts
error_logs.level column now has a database-level CHECK constraint restricting values to 'error', 'info', 'warning'. Constraint is idempotent and deployed via migration. Test expectations updated to account for additional DO-block constraint-check execution steps.
Admin Error-Log Rate Limiting
server/middleware/rateLimiter.ts, server/routes.ts
New adminErrorLogsRateLimiter middleware enforces 60 requests per 60 seconds (all tiers) on admin error-log endpoints (/count, /, /batch-delete, /restore, /finalize). Rate limiter is inserted into middleware chain after isAuthenticated.
Restore/Finalize Query Ordering
server/routes.ts
Restore and finalize endpoints now query soft-deleted logs ordered by most-recently deleted first (deletedAt DESC, id DESC) instead of oldest-first (id ASC), ensuring 500-row window captures user's recent batch-deletes before older system-wide soft-deletes.
Test Mock Updates
server/routes*.test.ts, server/services/ensureTables.test.ts
All route test files updated to export adminErrorLogsRateLimiter mock middleware (pass-through next() handler). Migration test expectations adjusted for new DO-block CHECK constraint execution step.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Handler as Route Handler
    participant ErrorMiddleware as Error Middleware
    participant ErrorLogger
    participant Console as Replit Logs
    participant DB as error_logs Table

    Client->>Handler: Request
    Handler->>Handler: Throws unhandled error
    Handler-->>ErrorMiddleware: Error propagates

    ErrorMiddleware->>ErrorMiddleware: Compute HTTP status
    ErrorMiddleware->>ErrorMiddleware: Normalize err to Error object
    
    ErrorMiddleware->>ErrorLogger: Import dynamically
    ErrorMiddleware->>ErrorLogger: error(source, message, err, context)
    
    ErrorLogger->>ErrorLogger: sanitizeString(message)<br/>strip control chars
    ErrorLogger->>ErrorLogger: sanitizeString(error.message)<br/>strip control chars
    ErrorLogger->>Console: console.error(sanitized)
    ErrorLogger->>DB: Write sanitized<br/>message/stack/context
    
    ErrorMiddleware->>Client: If !headersSent:<br/>res.status(status)<br/>.json({message})
    ErrorMiddleware->>ErrorMiddleware: Call next(err)<br/>if headersSent

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • #324: Modifies server/index.ts to replace direct console.error with dynamically-imported ErrorLogger for centralized error reporting
  • #454: Updates ErrorLogger persistence logic and ensureErrorLogColumns database migration behavior for error-log table schema
  • #56: Modifies server/services/logger.ts ErrorLogger.log sanitization behavior and related admin error-log route usage patterns

Suggested labels

security

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'fix: harden error-logging pipeline for #463-#468' directly and specifically describes the PR's main objective of addressing six interconnected error-logging bugs.
Linked Issues check ✅ Passed The PR implements all six coding requirements: sanitizes console output (#467), strips control characters (#464), routes top-level errors through ErrorLogger (#463), adds CHECK constraint (#466), and fixes restore/finalize ordering (#468). Rate limiting (#465) is also implemented.
Out of Scope Changes check ✅ Passed All changes are within scope of the six linked issues. Test file updates (mocks) and query adjustments (ensureErrorLogColumns) directly support the primary objectives without introducing unrelated modifications.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
server/routes.ts (2)

3063-3071: ⚠️ Potential issue | 🟡 Minor

Catch-all error handler now properly sanitizes — but response body lacks code.

Routing through ErrorLogger.error("api", ...) correctly addresses #463 — DSNs, bearer tokens, and provider keys in err.message/err.stack now get redacted before hitting Replit logs. Good.

Minor: the 500 response { message: "Internal server error" } omits the code field. Per the server/routes/** guideline, the { message, code } shape is standard. Consider adding code: "INTERNAL_ERROR" for client consistency, though this predates the PR.

As per coding guidelines: "API routes in Express should return JSON responses with { message, code } format for errors"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes.ts` around lines 3063 - 3071, The catch-all Express error
handler registered via app.use currently returns res.status(err.status ||
500).json({ message: "Internal server error" }) without the standardized code
field; update that response to include the code (e.g., code: "INTERNAL_ERROR")
while keeping the existing status selection and CORS branch intact, so modify
the error handler that calls ErrorLogger.error("api", ...) to return
res.status(err.status || 500).json({ message: "Internal server error", code:
"INTERNAL_ERROR" }).

1689-1689: ⚠️ Potential issue | 🟡 Minor

Asymmetric ordering vs. restore/finalize — worth a second look.

The filters branch of batch-delete still uses orderBy(asc(errorLogs.id)) while restore and finalize were flipped to desc(deletedAt), desc(id) in this PR. For a concurrent-admins scenario with a large backlog, this means:

  • Admin A sends a bulk batch-delete by filter → hits the oldest 500 matching rows first (fine for "drain the queue" semantics).
  • Admin B then hits Restore → gets the 500 most-recently-deleted (includes A's deletions via desc ordering). ✔
  • But if Admin A wants to delete their newest matching errors first (which aligns with the /api/admin/error-logs list view, which is desc(timestamp)), the asc-id scan starves the UI-visible rows.

Not strictly a bug since #468 scopes only restore/finalize, but the inconsistency with the list endpoint's desc(timestamp) display order is a foot-gun waiting to happen. Consider either flipping to desc(errorLogs.timestamp) for parity with the list view, or at minimum add a comment documenting that this branch intentionally drains oldest-first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes.ts` at line 1689, The batch-delete "filters" branch is ordering
by asc(errorLogs.id) which is inconsistent with restore/finalize
(desc(deletedAt), desc(id)) and the UI's desc(timestamp) view; update the query
that assigns entries (the
db.select().from(errorLogs).where(and(...conditions)).orderBy(...).limit(500)
statement) to use newest-first ordering — e.g., replace the asc(...) ordering
with orderBy(desc(errorLogs.timestamp), desc(errorLogs.id)) to match the list
view and restore/finalize behavior while keeping the same limit, so concurrent
admins see the most-recent matching rows first.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/index.ts`:
- Around line 378-379: The error response returned in the global error handler
uses res.status(status).json({ message: "Internal Server Error" }) which breaks
the expected `{ message, code }` error shape; update the handler so the JSON
includes a consistent error code (e.g., `code: "INTERNAL_ERROR"`) alongside the
message and keep the existing header-sent check (res.headersSent) and next(err)
behavior intact so non-API/static requests still follow existing flow.
- Line 369: Duplicate import: remove the second dynamic import that destructures
ErrorLogger as TopLevelErrorLogger and instead reuse the existing ErrorLogger
binding already imported earlier; replace any uses of TopLevelErrorLogger with
ErrorLogger and delete the redundant `const { ErrorLogger: TopLevelErrorLogger }
= await import("./services/logger");` statement so there is a single ErrorLogger
symbol in scope.
- Around line 363-380: The error middleware that calls TopLevelErrorLogger.error
should be moved so it is registered after the Vite/serveStatic middleware (e.g.
after setupVite() and any serveStatic() registration) and after
registerRoutes(), ensuring it will catch errors propagated via next(e) from the
Vite SSR middleware and static file handlers; locate the existing app.use((err,
req, res, next) => ...) block and relocate it to run after
setupVite()/serveStatic() but before the server starts listening
(httpServer.listen()), keeping the same error-normalization and .catch()
behavior on ErrorLogger.

In `@server/middleware/rateLimiter.ts`:
- Around line 127-132: The adminErrorLogsRateLimiter currently inlines the
per-tier max (60) instead of referencing centralized constants; update
adminErrorLogsRateLimiter (created via createTieredRateLimiter) to pull per-tier
limits from a shared config (e.g., add endpoint-specific entries to
shared/models/auth.ts such as API_RATE_LIMITS.adminErrorLogs with free/pro/power
perMinute values or create a new ENDPOINT_RATE_LIMITS constant), then replace
the hardcoded 60 values with those constants so each tier uses the centralized
settings (ensure keys match the tier lookup used by createTieredRateLimiter).

In `@server/routes.deleteErrorLog.test.ts`:
- Line 128: Add assertions to the tests covering the /restore and /finalize
endpoints to verify ordering and middleware wiring: after invoking the route,
assert the mockOrderByFn was called with the expected descending order columns
(check mockOrderByFn.mock.calls[0] includes ['deleted_at', 'id'] with DESC) and
assert the adminErrorLogsRateLimiter mock was invoked or present in the
middleware chain (e.g., verify adminErrorLogsRateLimiter was called before route
handlers or check a call on its stub). Update tests around mockSelectWhereFn →
mockOrderByFn → mockLimitFn to explicitly validate the order-by args and that
adminErrorLogsRateLimiter is applied.

In `@server/routes.ts`:
- Around line 1728-1734: Add a focused test to prevent regression of the
restore/finalize ordering: in server/routes.deleteErrorLog.test.ts augment or
add a test that calls the code path which computes the softDeleted query (the
db.select().from(errorLogs)...orderBy(...) call that assigns to softDeleted) and
assert the orderBy arguments (or the generated SQL) include
desc(errorLogs.deletedAt) followed by desc(errorLogs.id); if your current mocks
use mockLimitFn/mockOrderByFn, update them to capture and assert the actual sort
direction and tiebreaker so flipping to asc would fail the test.

In `@server/services/ensureTables.test.ts`:
- Around line 83-92: The current test in ensureTables.test.ts only checks that
"error_logs_level_chk" appears in emitted SQL but doesn't assert the exact
allowed-values tuple; update the test that builds stmts (using mockExecute and
stmts from mockExecute.mock.calls) to assert that the SQL contains the exact
allowed-levels tuple (e.g. the parenthesized list used in the CHECK) in addition
to the name, and add a new test case that simulates the check_violation path
(exercise the DO block that catches violations) by making mockExecute return
rows indicating out-of-range levels and assert the code logs a warning instead
of throwing to cover the idempotent migration behavior.

In `@server/services/ensureTables.ts`:
- Around line 44-61: The DO block adding the constraint error_logs_level_chk
only rescues check_violation (SQLSTATE 23514) but not duplicate_object (SQLSTATE
42710), so in a rolling-deploy race the second ALTER TABLE fails and aborts the
function; update the inner EXCEPTION in the DO $$...$$ (the block wrapping ALTER
TABLE error_logs ADD CONSTRAINT error_logs_level_chk) to also catch
duplicate_object (SQLSTATE 42710) and treat it as a no-op (same handling as
check_violation) so concurrent ADD CONSTRAINT races don't abort; additionally,
after the DO block add a simple probe (SELECT 1 FROM pg_constraint WHERE conname
= 'error_logs_level_chk' AND conrelid = 'error_logs'::regclass) and emit a
Node-level warning (e.g., via console.warn or processLogger) if the constraint
is still absent so operators get visibility.

In `@server/services/logger.ts`:
- Around line 82-88: The console.error call currently passes
sanitizeString(error?.message || "") as a second positional argument which
results in a trailing empty string when no error is supplied; update the
ErrorLogger.error implementation so that you only pass a second argument to
console.error when an actual error message exists (use a conditional to call
console.error(logMsg, sanitizedError) vs console.error(logMsg)), and add a brief
inline comment near the sanitized stack persistence explaining that
sanitizedStack is intentionally not emitted to the console to avoid leaking
stacks into Replit logs; refer to logMsg, sanitizeString, error?.message, and
sanitizedStack to locate the code to change.
- Around line 29-36: The stripControlChars function currently removes C0/C1
controls but misses Unicode separators and BiDi override characters; update the
regex in stripControlChars to also replace U+2028 and U+2029 (line/paragraph
separators) and the BiDi ranges U+202A–U+202E and U+2066–U+2069 with spaces
(keeping the existing separate \t normalization), so the new pattern includes
those code points alongside the existing ranges to neutralize Unicode line
breaks and bidi overrides.

---

Outside diff comments:
In `@server/routes.ts`:
- Around line 3063-3071: The catch-all Express error handler registered via
app.use currently returns res.status(err.status || 500).json({ message:
"Internal server error" }) without the standardized code field; update that
response to include the code (e.g., code: "INTERNAL_ERROR") while keeping the
existing status selection and CORS branch intact, so modify the error handler
that calls ErrorLogger.error("api", ...) to return res.status(err.status ||
500).json({ message: "Internal server error", code: "INTERNAL_ERROR" }).
- Line 1689: The batch-delete "filters" branch is ordering by asc(errorLogs.id)
which is inconsistent with restore/finalize (desc(deletedAt), desc(id)) and the
UI's desc(timestamp) view; update the query that assigns entries (the
db.select().from(errorLogs).where(and(...conditions)).orderBy(...).limit(500)
statement) to use newest-first ordering — e.g., replace the asc(...) ordering
with orderBy(desc(errorLogs.timestamp), desc(errorLogs.id)) to match the list
view and restore/finalize behavior while keeping the same limit, so concurrent
admins see the most-recent matching rows first.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 270ec3d0-484e-4a68-9aa4-b88c418ecde8

📥 Commits

Reviewing files that changed from the base of the PR and between 90dd3fd and 601cecd.

📒 Files selected for processing (18)
  • server/index.ts
  • server/middleware/rateLimiter.ts
  • server/routes.bugfixes.test.ts
  • server/routes.campaignDashboard.test.ts
  • server/routes.campaignRecover.test.ts
  • server/routes.campaignSendTest.test.ts
  • server/routes.conditions.test.ts
  • server/routes.deleteErrorLog.test.ts
  • server/routes.deleteFailedCampaign.test.ts
  • server/routes.errorLogCount.test.ts
  • server/routes.migration.test.ts
  • server/routes.notificationChannels.test.ts
  • server/routes.notificationPreferences.test.ts
  • server/routes.tags.test.ts
  • server/routes.ts
  • server/services/ensureTables.test.ts
  • server/services/ensureTables.ts
  • server/services/logger.ts

Three outdated comment threads on server/index.ts (collapsed)
Comment on lines +127 to +132
export const adminErrorLogsRateLimiter = createTieredRateLimiter("adminErrorLogs", {
free: { max: 60, windowMs: 60 * 1000 },
pro: { max: 60, windowMs: 60 * 1000 },
power: { max: 60, windowMs: 60 * 1000 },
message: "Too many admin error-log requests. Please slow down."
});


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether API_RATE_LIMITS already exists and what it covers
fd -t f 'auth.ts' -p 'shared/models'
rg -nP -A30 "export const API_RATE_LIMITS" --type=ts
# Check if other rate limiters in this file reference API_RATE_LIMITS
rg -nP "API_RATE_LIMITS" --type=ts

Repository: bd73-com/fetchthechange

Length of output: 3207


🏁 Script executed:

#!/bin/bash
# Check the structure of rateLimiter.ts to understand the pattern
head -n 50 server/middleware/rateLimiter.ts

# Look for any endpoint-specific rate limit constants
rg -n "ENDPOINT.*RATE|LIMIT.*CONFIG" --type=ts shared/models/

# Check if rateLimiter.ts uses any imports from shared/models
rg -n "from.*@shared" server/middleware/rateLimiter.ts

# Search for other files that might define per-endpoint rate limit configuration
fd -t f -name "*rate*" -o -name "*limit*" | head -20

Repository: bd73-com/fetchthechange

Length of output: 1858


🏁 Script executed:

#!/bin/bash
# Check full rateLimiter.ts to see all limiters and their patterns
cat -n server/middleware/rateLimiter.ts | head -200

# Check if there are any endpoint-rate-limit configs in shared/models
ls -la shared/models/ | grep -i rate
rg -i "endpoint.*rate|per.*endpoint.*limit" shared/ --type=ts

# Look at the bottom of rateLimiter.ts to see all exporters
tail -n 100 server/middleware/rateLimiter.ts

Repository: bd73-com/fetchthechange

Length of output: 10494


Per-endpoint rate limits should reference constants, not hardcoded literals.

The guideline requires all tier-based limits be read from centralized constants in shared/models/auth.ts. The hardcoded 60 (max requests/min) across all three tiers in adminErrorLogsRateLimiter violates this: per-endpoint rate limits that vary by tier should be defined in configuration rather than inlined.

However, API_RATE_LIMITS in shared/models/auth.ts currently only covers general API key rate limiting (perMinute: 300, maxKeysPerUser: 5), not per-endpoint limits. Every other limiter in this file (generalRateLimiter, createMonitorRateLimiter, checkMonitorRateLimiter, etc.) also uses literals, suggesting this is an established pattern. This PR perpetuates that convention drift — not a blocker, but worth fixing in a dedicated refactor if endpoint-rate-limit entries are added to API_RATE_LIMITS (or a new endpoint-specific limits constant is created).

The limiter logic itself is correct: per-user keying via resolveUserId, 429 response shape, and tier lookup all work properly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/middleware/rateLimiter.ts` around lines 127 - 132, The
adminErrorLogsRateLimiter currently inlines the per-tier max (60) instead of
referencing centralized constants; update adminErrorLogsRateLimiter (created via
createTieredRateLimiter) to pull per-tier limits from a shared config (e.g., add
endpoint-specific entries to shared/models/auth.ts such as
API_RATE_LIMITS.adminErrorLogs with free/pro/power perMinute values or create a
new ENDPOINT_RATE_LIMITS constant), then replace the hardcoded 60 values with
those constants so each tier uses the centralized settings (ensure keys match
the tier lookup used by createTieredRateLimiter).

emailUpdateRateLimiter: (_req: any, _res: any, next: any) => next(),
contactFormRateLimiter: (_req: any, _res: any, next: any) => next(),
unauthenticatedRateLimiter: (_req: any, _res: any, next: any) => next(),
adminErrorLogsRateLimiter: (_req: any, _res: any, next: any) => next(),


🧹 Nitpick | 🔵 Trivial

Stub looks correct, but flagging a test-coverage gap worth tracking.

The pass-through is fine and consistent with the rest of this mock block.

However, since this is the test file that covers the /restore and /finalize endpoints whose ORDER BY was changed in this PR (issue #468: ORDER BY deleted_at DESC, id DESC), none of the tests here actually assert that ordering — the chained mock mockSelectWhereFn → mockOrderByFn → mockLimitFn accepts any args and returns canned rows. If a future refactor silently reverts to ASC, these tests will still pass. Not blocking for this PR, but consider adding a targeted assertion (e.g., capture mockOrderByFn.mock.calls[0] and assert it contains the desc-ordered columns) either here or in a follow-up. Same applies to the new adminErrorLogsRateLimiter being applied — no test verifies it's actually wired into the middleware chain for these routes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes.deleteErrorLog.test.ts` at line 128, Add assertions to the
tests covering the /restore and /finalize endpoints to verify ordering and
middleware wiring: after invoking the route, assert the mockOrderByFn was called
with the expected descending order columns (check mockOrderByFn.mock.calls[0]
includes ['deleted_at', 'id'] with DESC) and assert the
adminErrorLogsRateLimiter mock was invoked or present in the middleware chain
(e.g., verify adminErrorLogsRateLimiter was called before route handlers or
check a call on its stub). Update tests around mockSelectWhereFn → mockOrderByFn
→ mockLimitFn to explicitly validate the order-by args and that
adminErrorLogsRateLimiter is applied.

Comment thread server/routes.ts
Comment on lines +1728 to +1734
// Order by most-recently-soft-deleted so the user's just-deleted rows
// (which they're trying to undo) fall inside the 500-row window even
// when the table has accumulated older soft-deletes from another admin
// between the 60s safety-net cleanup cycles. The old `asc(id)` scan
// picked oldest rows first and silently returned count=0 to the user
// when their own rows lived past the window. See #468.
const softDeleted = await db.select().from(errorLogs).where(isNotNull(errorLogs.deletedAt)).orderBy(desc(errorLogs.deletedAt), desc(errorLogs.id)).limit(500);


🧹 Nitpick | 🔵 Trivial

Restore/finalize ordering fix correctly addresses #468.

desc(errorLogs.deletedAt), desc(errorLogs.id) ensures the caller's most-recently-soft-deleted rows fall inside the 500-row window, fixing the starvation issue. The desc(id) tiebreaker is a nice touch for same-millisecond deletes from batch operations.

One caveat worth noting: the existing restore/finalize tests mock mockLimitFn/mockOrderByFn without asserting the actual sort direction, so they won't regress if someone flips this back to asc. Consider adding a focused test that captures the orderBy(...) arguments (or the raw SQL) to lock in ordering. Minor, non-blocking.

Also applies to: 1772-1776

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes.ts` around lines 1728 - 1734, Add a focused test to prevent
regression of the restore/finalize ordering: in
server/routes.deleteErrorLog.test.ts augment or add a test that calls the code
path which computes the softDeleted query (the
db.select().from(errorLogs)...orderBy(...) call that assigns to softDeleted) and
assert the orderBy arguments (or the generated SQL) include
desc(errorLogs.deletedAt) followed by desc(errorLogs.id); if your current mocks
use mockLimitFn/mockOrderByFn, update them to capture and assert the actual sort
direction and tiebreaker so flipping to asc would fail the test.

Comment on lines +83 to 92
expect(mockExecute).toHaveBeenCalledTimes(10);
const stmts = mockExecute.mock.calls.map(([arg]: any) => {
try { return JSON.stringify(arg); } catch { return String(arg); }
});
expect(stmts.some((s: string) => s.includes("first_occurrence"))).toBe(true);
expect(stmts.some((s: string) => s.includes("occurrence_count"))).toBe(true);
expect(stmts.some((s: string) => s.includes("error_logs_level_chk"))).toBe(true);
expect(stmts.some((s: string) => s.includes("pg_indexes"))).toBe(true);
expect(stmts.some((s: string) => s.includes("pg_advisory_xact_lock"))).toBe(true);
expect(stmts.some((s: string) => s.includes("CREATE UNIQUE INDEX") && s.includes("CONCURRENTLY") && s.includes("error_logs_unresolved_dedup_idx"))).toBe(true);
🧹 Nitpick | 🔵 Trivial

Test updates correctly track the new DO block — but the assertion is weaker than it could be.

Line 89 only asserts that the literal string error_logs_level_chk appears somewhere in the emitted statements. That passes even if someone later narrows the allowed set to ('error') or widens it to ('error','info','warning','debug','trace'). Since the whole point of #466 is enforcing the exact allowed-values contract, consider strengthening the assertion to also verify the allowed level tuple:

🛡️ Proposed strengthened assertion
     expect(stmts.some((s: string) => s.includes("error_logs_level_chk"))).toBe(true);
+    // Lock in the exact allowed-values contract so an accidental widening/
+    // narrowing of the level set is caught at CI rather than at runtime.
+    expect(stmts.some((s: string) =>
+      s.includes("error_logs_level_chk") &&
+      s.includes("'error'") && s.includes("'info'") && s.includes("'warning'")
+    )).toBe(true);

Also worth adding: a test for the check_violation catch path (the wrapping DO block that warns instead of crashing when out-of-range rows exist) — that's the whole idempotency story for this migration and there's no coverage of it.

As per coding guidelines: "Tests must cover edge cases and error paths - include assertions for edge cases (empty inputs, boundary values, null/undefined), error paths (invalid input, unauthorized access, not found)"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/services/ensureTables.test.ts` around lines 83 - 92, The current test
in ensureTables.test.ts only checks that "error_logs_level_chk" appears in
emitted SQL but doesn't assert the exact allowed-values tuple; update the test
that builds stmts (using mockExecute and stmts from mockExecute.mock.calls) to
assert that the SQL contains the exact allowed-levels tuple (e.g. the
parenthesized list used in the CHECK) in addition to the name, and add a new
test case that simulates the check_violation path (exercise the DO block that
catches violations) by making mockExecute return rows indicating out-of-range
levels and assert the code logs a warning instead of throwing to cover the
idempotent migration behavior.

- server/index.ts: move the top-level Express error handler AFTER
  setupVite/serveStatic. Express's 4-arg error middleware only catches
  errors from middleware registered earlier in the chain, so the prior
  placement left Vite SSR errors and static-file failures flowing to the
  default handler (raw stack → Replit logs), defeating #463. Also:
  - Drop the duplicate dynamic import — reuse the ErrorLogger binding
    already imported at line 313 instead of re-importing as
    TopLevelErrorLogger.
  - Add `code: "INTERNAL_ERROR"` to the 500 response so it matches the
    `{ message, code }` contract used by other error routes.
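  The re-ordered handler can be sketched as a plain 4-arg function. This
  is illustrative only: the types are minimal structural stand-ins for
  Express's, and the real server/index.ts also invokes ErrorLogger with
  the sanitized stack before responding.

  ```typescript
  // Minimal structural types so the sketch stands alone; in the real
  // server/index.ts these come from Express.
  type Req = unknown;
  type Res = { status(code: number): Res; json(body: object): Res };
  type Next = (err?: unknown) => void;

  // Registered LAST, after setupVite/serveStatic: Express only routes an
  // error to 4-arg middleware registered after the middleware that threw,
  // so earlier placement misses Vite SSR and static-file failures.
  export function topLevelErrorHandler(
    _err: unknown,
    _req: Req,
    res: Res,
    _next: Next,
  ): void {
    // Matches the { message, code } contract used by other error routes.
    res.status(500).json({ message: "Internal Server Error", code: "INTERNAL_ERROR" });
  }
  ```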

- server/services/ensureTables.ts: the DO block's inner EXCEPTION also
  catches `duplicate_object` (SQLSTATE 42710) now. In a rolling deploy,
  two instances can both pass the `IF NOT EXISTS` probe against
  pg_constraint concurrently; without this catch, the loser raises
  42710 out of the DO block and aborts the whole migration function
  (index build included).
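  The guarded constraint add described above could take roughly this
  shape (constraint name and level set are from this PR; the exact SQL
  emitted by ensureErrorLogColumns is illustrative):

  ```sql
  -- Illustrative shape only; the actual migration SQL may differ.
  DO $$
  BEGIN
    IF NOT EXISTS (
      SELECT 1 FROM pg_constraint WHERE conname = 'error_logs_level_chk'
    ) THEN
      ALTER TABLE error_logs
        ADD CONSTRAINT error_logs_level_chk
        CHECK (level IN ('error', 'info', 'warning'));
    END IF;
  EXCEPTION
    WHEN check_violation THEN
      -- Pre-existing out-of-range rows: warn instead of crashing boot.
      RAISE WARNING 'error_logs has out-of-range level rows; constraint not added';
    WHEN duplicate_object THEN
      -- Another instance won the race between the probe and the ALTER.
      NULL;
  END $$;
  ```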

- server/services/logger.ts:
  - stripControlChars now also replaces U+2028/U+2029 (Unicode line and
    paragraph separators, which Node's console does not treat as line
    terminators but which do split lines in terminals and downstream
    log aggregators) and U+202A-U+202E / U+2066-U+2069 (BiDi overrides
    and isolates, Trojan-Source mitigation).
  - console.error no longer receives a trailing empty-string second
    argument when the caller omits the optional `error` — branch to
    single-arg `console.error(logMsg)` in that case.
  - Add a comment noting that sanitizedStack is deliberately NOT
    emitted to the console (DB copy is enough for admin triage).
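  A sketch of the sanitizer described above. This is a hypothetical
  reimplementation, not the code in server/services/logger.ts: the real
  function's name matches but its exact character set and replacement
  choice (here a single space) may differ.

  ```typescript
  // C0/C1 controls (except \t, \n, \r), Unicode line/paragraph
  // separators, and the BiDi override/isolate range.
  // eslint-disable-next-line no-control-regex
  const UNSAFE_CHARS =
    /[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F-\u009F\u2028\u2029\u202A-\u202E\u2066-\u2069]/g;

  export function stripControlChars(input: string): string {
    // Replace rather than delete so token boundaries stay visible in logs.
    return input.replace(UNSAFE_CHARS, " ");
  }
  ```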

- server/routes.ts batch-delete filters branch: newest-first ordering
  (`desc(timestamp), desc(id)`) so it matches the list view and the
  restore/finalize endpoints. The old `asc(id)` drained the oldest 500
  rows, starving the rows the admin was actually looking at.

- server/services/ensureTables.test.ts: strengthen the
  error_logs_level_chk assertion to also verify the exact allowed-level
  tuple (`'error'`, `'info'`, `'warning'`) so silent widening/narrowing
  fails CI.

https://claude.ai/code/session_017VHC5MqCX5EtZnGC4Mqn5U
@bd73-com bd73-com merged commit 56abab8 into main Apr 24, 2026
1 check passed
@github-actions github-actions Bot deleted the claude/fix-github-issues-oSAWx branch April 24, 2026 12:15