Add Redis cache, origin validation, SSRF hardening, and code quality refactor #1
adiologydev merged 11 commits into main
Conversation
- Add "redis" cache mode using Bun's built-in RedisClient (no npm dep)
- Supports TTL via native Redis SETEX, key prefixing, auto-reconnect
- Graceful degradation: Redis failures return cache misses, not errors
- New config: REDIS_URL, REDIS_KEY_PREFIX, REDIS_CONNECTION_TIMEOUT, REDIS_MAX_RETRIES
- Update dependencies: elysia 1.4.28, @elysiajs/cors 1.4.1, satori 0.18.4, @types/bun 1.3.11
- Fix SSRF: reject hostnames with zero DNS records instead of allowing
- Fix self-reference: allow /image path in addition to /og
- Fix health check: check disk dir for hybrid mode (not just disk)
- Fix OG generator: prevent infinite font preload retries on failure
- Add graceful SIGTERM/SIGINT shutdown for Redis disconnect

https://claude.ai/code/session_01HBLMJXsTLSxNLuvGXEnUaZ
Drop the stale boolean flag and rely on Bun's autoReconnect + enableOfflineQueue to handle transient Redis failures. The client is always created at init (even if initial connect fails), so ops go through try/catch — cache misses during downtime, automatic recovery when Redis comes back. Zero per-request overhead from flag checks. https://claude.ai/code/session_01HBLMJXsTLSxNLuvGXEnUaZ
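The graceful-degradation pattern described here (operations wrapped in try/catch, failures reported as cache misses) can be sketched in isolation. The `MinimalRedis` interface and `getCached` name below are illustrative stand-ins for Bun's RedisClient and the project's actual helper, not its real API:

```typescript
// Minimal client interface standing in for Bun's RedisClient (assumption).
interface MinimalRedis {
  get(key: string): Promise<string | null>;
}

// Cache read that degrades gracefully: any Redis error becomes a miss.
async function getCached(
  client: MinimalRedis | null,
  key: string,
): Promise<string | null> {
  if (!client) return null; // cache disabled or never initialized
  try {
    return await client.get(`ps:${key}`); // illustrative key prefix
  } catch {
    // Redis down: report a cache miss instead of surfacing an error
    return null;
  }
}

// Stub client that always throws, simulating a Redis outage.
const downClient: MinimalRedis = {
  get: async () => {
    throw new Error("connection refused");
  },
};

getCached(downClient, "abc").then((result) => {
  console.log(result); // a miss (null), not an exception
});
```

Callers never need to distinguish "Redis is down" from "key not cached"; both paths fall through to regenerating the image.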
…ests The CORS middleware only sets response headers — it never rejects requests. This means any client (curl, <img> tags, server-to-server) could bypass origin restrictions entirely. Add an onRequest hook that checks Origin and Referer headers against allowedOrigins and returns 403 for unauthorized origins before any processing occurs. https://claude.ai/code/session_01HBLMJXsTLSxNLuvGXEnUaZ
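The decision logic of the onRequest hook described above can be sketched as a standalone function. Names and the deny-by-default behavior for missing/malformed headers are assumptions for illustration, not the PR's exact code:

```typescript
// Returns true when the request's source origin is in the allowlist.
// Prefers the Origin header; falls back to the Referer's origin.
function isOriginAllowed(
  origin: string | null,
  referer: string | null,
  allowedOrigins: string[],
): boolean {
  let source = origin;
  if (!source && referer) {
    try {
      source = new URL(referer).origin; // e.g. "https://evil.com/page" -> "https://evil.com"
    } catch {
      return false; // malformed Referer: treat as unauthorized
    }
  }
  if (!source) return false; // no identifiable origin
  return allowedOrigins.includes(source);
}

console.log(isOriginAllowed("https://example.com", null, ["https://example.com"])); // true
console.log(isOriginAllowed(null, "https://evil.com/page", ["https://example.com"])); // false
```

Running this check in an onRequest hook means curl, `<img>` tags, and server-to-server callers get a 403 before any image generation work happens.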
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Adds Redis cache mode and lifecycle; implements Redis-backed get/set with namespaced keys and TTL; enforces origin/referer allowlist middleware; extends health checks and URL validation (IP handling, parallel DNS); restructures image transforms into modular pipeline; swaps Biome for oxfmt/oxlint/tsdown and updates CI; adds origin tests and graceful shutdown.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant App as Application
    participant Cache as CacheService
    participant Redis as RedisServer
    participant Signal as ProcessSignal

    Note over App,Cache: Startup
    App->>Cache: initRedisCache()
    Cache->>Redis: CONNECT (REDIS_URL)
    Redis-->>Cache: OK
    Cache-->>App: initialized

    Client->>App: GET /og?url=...
    App->>App: check origin/referer allowlist
    alt origin allowed
        App->>Cache: getCached(key)
        Cache->>Redis: GET ps:{key}
        alt cache hit
            Redis-->>Cache: data
            Cache-->>App: cached buffer
            App-->>Client: 200 + image
        else cache miss
            Redis-->>Cache: nil
            Cache-->>App: null
            App->>App: generate OG image
            App->>Cache: setCache(key, data)
            Cache->>Redis: SET ps:{key} EX ttl
            Redis-->>Cache: OK
            Cache-->>App: stored
            App-->>Client: 200 + image
        end
    else origin forbidden
        App-->>Client: 403 { "error": "FORBIDDEN", "message": "Origin not allowed" }
    end

    Signal->>App: SIGTERM/SIGINT
    App->>Cache: disconnectRedisCache()
    Cache->>Redis: CLOSE
    Redis-->>Cache: closed
    Cache-->>App: disconnected
    App->>App: process.exit(0)
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/utils/url-validator.ts (1)
74-75: ⚠️ Potential issue | 🟠 Major: Skip DNS lookup for IP literals and distinguish error codes.

The current code converts all DNS failures—transient errors, NXDOMAIN, and IP-literal rejection—into empty arrays via `.catch(() => [])`. This causes legitimate IP literal URLs (e.g., `http://8.8.8.8/image`) to be rejected as 403 when both `resolve4()` and `resolve6()` return ENOTFOUND. Additionally, IPv6 literals with brackets (e.g., `[::1]`) fail private-IP checks because the patterns expect bare addresses.

Detect IP literals using `node:net.isIP()` before DNS resolution, validate them directly against `isPrivateIP()` with bracket stripping, and use `Promise.allSettled()` to distinguish definitive failures (ENOTFOUND/ENODATA) from transient errors. Only throw when both families explicitly report no records.

Suggested direction
```diff
+import { isIP } from "node:net";
 import dns from "node:dns/promises";
@@
   // Block known dangerous hostnames
   const hostname = url.hostname.toLowerCase();
+  const normalizedHost = hostname.replace(/^\[(.*)\]$/, "$1");
@@
+  if (isIP(normalizedHost)) {
+    if (isPrivateIP(normalizedHost)) {
+      throw new ForbiddenError("Private IP addresses are not allowed");
+    }
+    return url;
+  }
+
   // DNS resolution check to prevent SSRF via DNS rebinding
   try {
-    // Try IPv4 first
-    const addresses = await dns.resolve4(hostname).catch(() => []);
+    const [ipv4Result, ipv6Result] = await Promise.allSettled([
+      dns.resolve4(hostname),
+      dns.resolve6(hostname),
+    ]);
+    const addresses = ipv4Result.status === "fulfilled" ? ipv4Result.value : [];
+    const ipv6Addresses = ipv6Result.status === "fulfilled" ? ipv6Result.value : [];
@@
-    // Also check IPv6 if available
-    const ipv6Addresses = await dns.resolve6(hostname).catch(() => []);
-
@@
-    if (addresses.length === 0 && ipv6Addresses.length === 0) {
+    const noDataCodes = new Set(["ENOTFOUND", "ENODATA"]);
+    const definitelyNoRecords =
+      addresses.length === 0 &&
+      ipv6Addresses.length === 0 &&
+      [ipv4Result, ipv6Result].every(
+        (result) =>
+          result.status === "rejected" &&
+          noDataCodes.has((result.reason as NodeJS.ErrnoException).code ?? ""),
+      );
+
+    if (definitelyNoRecords) {
       throw new ForbiddenError(
         `DNS resolution failed: no records found for ${hostname}`,
       );
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/url-validator.ts` around lines 74 - 75, The DNS resolution logic in validate URL is swallowing all errors and misclassifying IP literals; before calling dns.resolve4/resolve6, detect literals using node:net.isIP() (strip surrounding brackets for IPv6) and validate them directly with isPrivateIP(), returning appropriately instead of doing DNS lookups; replace the current await dns.resolve4(hostname).catch(() => []) pattern with Promise.allSettled([dns.resolve4(hostname), dns.resolve6(hostname)]) and inspect each result to treat ENOTFOUND/ENODATA from both families as definitive "no records" while surfacing other errors, and only treat the hostname as unresolved if both families explicitly have no records—use the settled outcomes to populate the addresses variable similarly to the previous logic.
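The classification step at the heart of this suggestion can be exercised on its own: only treat a hostname as having no records when both address families fail with a definitive "no data" code. The `Settled` type and function name below are illustrative, not the PR's code:

```typescript
// A settled DNS lookup result: resolved addresses, or a rejection with a code.
type Settled =
  | { status: "fulfilled"; value: string[] }
  | { status: "rejected"; reason: { code?: string } };

const NO_DATA_CODES = new Set(["ENOTFOUND", "ENODATA"]);

// True only when BOTH families explicitly report no records; a transient
// error (timeout, SERVFAIL) or a successful lookup disqualifies.
function definitelyNoRecords(ipv4: Settled, ipv6: Settled): boolean {
  return [ipv4, ipv6].every(
    (r) => r.status === "rejected" && NO_DATA_CODES.has(r.reason.code ?? ""),
  );
}

// NXDOMAIN on both families: definitive, safe to reject the URL.
const nx: Settled = { status: "rejected", reason: { code: "ENOTFOUND" } };
console.log(definitelyNoRecords(nx, nx)); // true

// A transient timeout must NOT be treated as "no records".
const timeout: Settled = { status: "rejected", reason: { code: "ETIMEOUT" } };
console.log(definitelyNoRecords(nx, timeout)); // false
```

This is what separates "the domain genuinely doesn't exist" (reject) from "our resolver hiccupped" (don't silently reject legitimate URLs).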
🧹 Nitpick comments (4)
src/services/og-generator.ts (1)
14-24: Good improvement for resilience.

The change to use `defaultFontsAttempted` with try/catch and retry-on-failure is a solid pattern for graceful degradation. Setting the flag before the await and resetting on failure correctly handles the retry case.

One optional consideration: if font loading is persistently failing (e.g., network issues), every incoming request will trigger a retry attempt and log a warning, potentially causing log noise under load. A rate-limited retry (e.g., using a timestamp to throttle attempts) could be beneficial for production stability.
💡 Optional: Rate-limit retry attempts
```diff
 let defaultFontsAttempted = false;
+let lastPreloadAttempt = 0;
+const PRELOAD_RETRY_DELAY_MS = 30000; // 30 seconds between retry attempts
+
 async function ensureDefaultFontsLoaded(): Promise<void> {
   if (defaultFontsAttempted) return;
+
+  const now = Date.now();
+  if (now - lastPreloadAttempt < PRELOAD_RETRY_DELAY_MS) return;
+  lastPreloadAttempt = now;
+
   defaultFontsAttempted = true;
   try {
     await preloadDefaultFonts();
   } catch (err) {
     console.warn("Default font preload failed (will retry per-request):", err);
     defaultFontsAttempted = false;
   }
 }
```

🤖 Prompt for AI Agents
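The cooldown pattern in this suggestion can be isolated and exercised deterministically with an injected clock. `makeThrottle` and the names below are illustrative, not the PR's code:

```typescript
// Returns a gate function: true means "attempt now", false means "still
// cooling down". The clock is injected so tests can control time.
function makeThrottle(delayMs: number, now: () => number) {
  let last = -Infinity; // no attempt yet
  return function shouldAttempt(): boolean {
    const t = now();
    if (t - last < delayMs) return false; // inside the cooldown window
    last = t;
    return true;
  };
}

let fakeTime = 0;
const shouldAttempt = makeThrottle(30_000, () => fakeTime);

console.log(shouldAttempt()); // true: first attempt always runs
fakeTime = 10_000;
console.log(shouldAttempt()); // false: inside the 30s cooldown
fakeTime = 40_000;
console.log(shouldAttempt()); // true: cooldown elapsed
```

Under persistent font-load failures, this caps warning logs at one per cooldown window instead of one per request.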
Verify each finding against the current code and only fix it if needed. In `@src/services/og-generator.ts` around lines 14 - 24, Add simple rate-limiting to the font preload retry to avoid log noise under persistent failures: keep a timestamp (e.g., lastDefaultFontAttempt: number) alongside the existing defaultFontsAttempted flag in ensureDefaultFontsLoaded and only attempt preloadDefaultFonts() if enough time has passed (e.g., now - lastDefaultFontAttempt >= RETRY_INTERVAL_MS), update lastDefaultFontAttempt before attempting, and reset defaultFontsAttempted on failure as before; reference ensureDefaultFontsLoaded, defaultFontsAttempted, preloadDefaultFonts and introduce RETRY_INTERVAL_MS/lastDefaultFontAttempt to throttle retries.

src/services/cache.ts (1)
347-358: Export `maskRedisUrl` for reuse.

This function is duplicated in src/index.ts:123. Export it to avoid code duplication and ensure consistent masking behavior.

Suggested change
```diff
 // Mask credentials in Redis URL for display
-function maskRedisUrl(url: string): string {
+export function maskRedisUrl(url: string): string {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/cache.ts` around lines 347 - 358, The masking helper maskRedisUrl is duplicated; export the existing maskRedisUrl function from this module (make it an exported function with the same signature) and replace the duplicate implementation in the other location by importing maskRedisUrl instead; ensure you keep the try/catch behavior and returned string format identical, update any call sites to import { maskRedisUrl } from this module, and remove the duplicate function definition to avoid inconsistent behavior.

.env.example (1)
27-31: Consider quoting the Redis URL value.

While the current value works fine, quoting the `REDIS_URL` value would be safer if users add credentials with special characters (e.g., `REDIS_URL="redis://user:p@ss@localhost:6379"`). The other values are simple and don't require quotes.

Suggested change
```diff
 # Redis cache settings (only used when CACHE_MODE=redis)
-REDIS_URL=redis://localhost:6379    # Redis connection URL (supports redis://, rediss://, redis+unix://)
+REDIS_URL="redis://localhost:6379"  # Redis connection URL (supports redis://, rediss://, redis+unix://)
 REDIS_KEY_PREFIX=ps:                # Key prefix to namespace cache entries
 REDIS_CONNECTION_TIMEOUT=5000       # Connection timeout in ms
 REDIS_MAX_RETRIES=10                # Max reconnection attempts
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.env.example around lines 27 - 31, Update the .env.example to quote the REDIS_URL value so credentials or special characters are handled safely: change the REDIS_URL entry to use quotes around the URL string (e.g., REDIS_URL="redis://user:pass@localhost:6379") and keep other keys (REDIS_KEY_PREFIX, REDIS_CONNECTION_TIMEOUT, REDIS_MAX_RETRIES) unchanged; ensure the variable name REDIS_URL is used exactly so dotenv consumers pick it up and consider updating the inline comment to mention that quoting is recommended when credentials contain special characters.

src/index.ts (1)
122-125: Duplicate URL masking logic.

This regex duplicates the `maskRedisUrl()` function in src/services/cache.ts:348-358. Consider importing and reusing that function to avoid divergence.

Suggested change
Export `maskRedisUrl` from cache.ts, then:

```diff
 import {
   disconnectRedisCache,
   initRedisCache,
+  maskRedisUrl,
   startCacheCleanup,
 } from "./services/cache";
```

```diff
     case "redis": {
-      const masked = config.redisUrl.replace(/:\/\/[^@]*@/, "://***@");
+      const masked = maskRedisUrl(config.redisUrl);
       return `redis (${masked})`;
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 122 - 125, Replace the duplicated regex in the "redis" case with a call to the shared masking helper: export the existing maskRedisUrl function from cache.ts (where it's implemented) and import it into the module containing the switch, then use maskRedisUrl(config.redisUrl) instead of duplicating the regex; update any imports/exports accordingly so the "redis" branch returns `redis (${maskRedisUrl(config.redisUrl)})`.
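A sketch of the masking helper this comment wants shared, matching the regex shown in the diff above (the project's actual implementation may differ, and per the review it wraps the body in try/catch):

```typescript
// Mask credentials in a Redis URL for display, e.g. in startup logs.
function maskRedisUrl(url: string): string {
  try {
    // "redis://user:secret@host" -> "redis://***@host"
    return url.replace(/:\/\/[^@]*@/, "://***@");
  } catch {
    // Defensive fallback; replace() on a string won't normally throw.
    return "redis://***";
  }
}

console.log(maskRedisUrl("redis://user:secret@localhost:6379"));
// → redis://***@localhost:6379
console.log(maskRedisUrl("redis://localhost:6379"));
// → redis://localhost:6379 (no credentials, unchanged)
```

Exporting one copy keeps the masking behavior identical wherever the URL is logged.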
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Line 15: The package.json currently pins satori to "0.18.4" which is behind
the latest 0.26.0; either upgrade the dependency to "satori": "0.26.0" (update
package.json, regenerate lockfile by running your package manager, then run
tests/build to catch breaking changes) or add a clear comment in the
repository/docs explaining why satori is intentionally pinned to 0.18.4 (e.g.,
known breaking changes, compatibility with your renderer, or CI test failures)
so reviewers know the pin is deliberate; locate the dependency entry "satori"
and the version string "0.18.4" to apply this change.
In `@src/routes/health.ts`:
- Around line 26-29: The Redis health check is broken because getCacheStats()
currently sets connected = redisClient !== null rather than the client's real
connectivity; update src/services/cache.ts so connected reflects actual
connection state by either (a) performing a real connectivity probe (e.g., await
redisClient.ping() or a PING command inside getCacheStats()) and returning false
on failure, or (b) maintaining a boolean (e.g., redisConnected) updated from the
RedisClient event handlers in initRedisCache() (listen for 'ready'/'connect' ->
true, and 'end'/'error' -> false) and have getCacheStats() return that flag;
modify getCacheStats() and/or initRedisCache() accordingly so the health route's
use of getCacheStats().connected accurately reports Redis availability.
In `@src/services/cache.ts`:
- Around line 110-119: The disconnectRedisCache function calls
redisClient.quit(), but Bun's RedisClient exposes close() instead; update
disconnectRedisCache to call redisClient.close() (and keep the existing
try/catch and setting redisClient = null behavior) so the shutdown path works
with Bun's client; ensure the check remains on the same redisClient symbol and
preserve Promise<void> return type and error swallowing semantics.
- Around line 379-384: The returned Redis status currently uses a truthy check
on the client object (redisClient !== null) which can be true even if the client
disconnected; update the "redis" case to use the client's actual connection
state (redisClient?.connected ?? false) so connected reflects real connectivity,
and adjust the related comment that claims there is "no 'connected' flag" to
note Bun's RedisClient exposes .connected and connection events; locate usages
in the same file around initRedisCache, the redisClient variable, and the
maskRedisUrl return object to make the changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d3e106a5-f9bb-4c68-8093-567ea3380039
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
- .env.example
- package.json
- src/config.ts
- src/index.ts
- src/routes/health.ts
- src/services/cache.ts
- src/services/og-generator.ts
- src/utils/url-validator.ts
…alidation tests

- Fix TS2339: replace `redisClient.quit()` with `redisClient.close()` (Bun's RedisClient API has no `.quit()` method)
- Remove Biome, add oxfmt for formatting, oxlint for linting, tsdown for building
- Add `.oxfmtrc.json` (migrated from biome.json), `.oxlintrc.json`, `tsdown.config.ts`
- Add lint/format/build scripts to package.json
- Add origin validation unit tests (13 test cases covering allowed/blocked origins, referer fallback, health exemption, subdomain matching)

https://claude.ai/code/session_01HBLMJXsTLSxNLuvGXEnUaZ
🧹 Nitpick comments (3)
tests/unit/origin-validation.test.ts (2)
91-107: Add explicit tests for header precedence and malformed Referer fallback.

Please add cases where both headers exist (`Origin` should win) and where `Referer` is malformed (catch/fallback branch). These are high-value regression guards for the security path.

Suggested test additions
```diff
+  test("uses Origin precedence when both Origin and Referer are present", async () => {
+    const response = await app.handle(
+      new Request("http://localhost/", {
+        headers: {
+          Origin: "https://evil.com",
+          Referer: "https://example.com/page",
+        },
+      }),
+    );
+    expect(response.status).toBe(403);
+  });
+
+  test("handles malformed Referer via fallback path", async () => {
+    const response = await app.handle(
+      new Request("http://localhost/", {
+        headers: { Referer: "not a valid url" },
+      }),
+    );
+    expect(response.status).toBe(403);
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/origin-validation.test.ts` around lines 91 - 107, Add two explicit unit tests in tests/unit/origin-validation.test.ts using app.handle: one that sends both Origin and Referer headers (e.g., Origin: "https://example.com", Referer: "https://evil.com/page") and asserts the Origin value takes precedence (expect 200 or the allowed-origin behavior); and one that sends a malformed Referer header (e.g., "not-a-url") with no Origin and asserts the fallback path is handled as the code intends (parse error path — typically treated as absent and results in 403 or the same safe behavior as missing Origin). Locate and extend the existing tests that call app.handle in these blocks to add these assertions so we have regression coverage for header precedence and malformed Referer handling.
4-47: Avoid duplicating origin-validation logic inside the test helperLines 4–44 reimplement the same decision tree as runtime middleware. This can drift and produce false confidence if production logic changes independently. Prefer importing a shared validator (or extracting one) so tests exercise the real logic path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/origin-validation.test.ts` around lines 4 - 47, The test helper createAppWithOriginValidation duplicates production origin-checking logic; replace the inline decision tree by invoking the shared origin validator used in runtime (or extract that validator into a common function) inside the onRequest handler so tests exercise the real logic; specifically, remove the inline origin/referer parsing and isAllowed computation in createAppWithOriginValidation and call the exported validateOrigin (or middleware function) from the production module (used by Elysia middleware) to determine isAllowed and set status 403 when appropriate.

src/services/cache.ts (1)
84-86: Comment is inaccurate—Bun's RedisClient exposes a `.connected` property.

Per Bun's API reference, `RedisClient` provides a readonly `connected` boolean property and connection events (`onconnect`, `onclose`). Update the comment to reflect this:

Suggested comment update
```diff
-// Redis client — no "connected" flag. Bun's autoReconnect + enableOfflineQueue
-// handle transient failures; try/catch on each op provides graceful degradation.
+// Redis client — Bun's autoReconnect + enableOfflineQueue handle transient
+// failures; try/catch on each op provides graceful degradation.
 let redisClient: RedisClient | null = null;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/cache.ts` around lines 84 - 86, The comment above the Redis client declaration incorrectly states there is no "connected" flag; update the comment near the redisClient declaration (symbol: redisClient, type: RedisClient) to state that Bun's RedisClient exposes a readonly .connected boolean and connection events (onconnect, onclose) and that Bun still supports autoReconnect/enableOfflineQueue and try/catch per-op for graceful degradation—keep it brief and accurate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3d4e1694-0f34-41a8-b33e-5ec52eaaee81
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
- .oxfmtrc.json
- .oxlintrc.json
- CLAUDE.md
- README.md
- biome.json
- package.json
- src/cluster.ts
- src/services/cache.ts
- tests/unit/origin-validation.test.ts
- tsdown.config.ts
💤 Files with no reviewable changes (1)
- biome.json
✅ Files skipped from review due to trivial changes (6)
- src/cluster.ts
- .oxfmtrc.json
- .oxlintrc.json
- tsdown.config.ts
- README.md
- CLAUDE.md
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
…icate helpers

- Fix getCacheStats().connected to use redisClient?.connected instead of null check, so health route accurately reports Redis availability
- Fix SSRF bypass for IP literals by checking isIP() before DNS resolution and using Promise.allSettled() to distinguish real DNS errors from ENOTFOUND
- Export maskRedisUrl from cache.ts and reuse in index.ts instead of duplicating the regex
- Rate-limit font preload retries with 60s cooldown to reduce log noise
- Quote REDIS_URL in .env.example for safe handling of special chars
- Update satori 0.18.4 → 0.26.0 via bun update --latest
- Update CI workflow to use oxlint/oxfmt instead of Biome

https://claude.ai/code/session_01HBLMJXsTLSxNLuvGXEnUaZ
🧹 Nitpick comments (1)
src/index.ts (1)
174-180: Add graceful HTTP server shutdown before disconnecting Redis.

The shutdown handler disconnects Redis but doesn't stop the Elysia server first. This can result in in-flight requests failing if they attempt to access Redis during disconnection. Elysia 1.4.28 provides the `app.stop()` method for graceful server shutdown.

Note: As of Elysia 1.3.1, there's a known limitation where `app.stop()` may not always wait for all in-flight requests to complete. For production deployments requiring stricter guarantees, consider the `graceful-server-elysia` plugin.

Proposed improvement
```diff
 // Graceful shutdown
 const shutdown = async () => {
+  await app.stop();
   await disconnectRedisCache();
   process.exit(0);
 };

 process.on("SIGTERM", shutdown);
 process.on("SIGINT", shutdown);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 174 - 180, Update the shutdown flow to stop the Elysia server before disconnecting Redis: modify the existing shutdown function (shutdown) to call await app.stop() (the Elysia instance named app) prior to await disconnectRedisCache(), and ensure the same shutdown is used for process.on("SIGTERM", ...) and process.on("SIGINT", ...). If app.stop() might not be present in some environments, guard the call (e.g., check typeof app.stop === "function") so shutdown remains safe. This ensures the server stops accepting new requests before Redis is torn down.
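The ordering this comment asks for (stop the server, then tear down Redis, then exit) can be sketched with injected dependencies so the sequence is observable. The names mirror the diff above but are assumptions, not the PR's exact code:

```typescript
// Injected dependencies make the shutdown ordering testable without real
// signals, a real server, or a real Redis connection.
interface ShutdownDeps {
  stopServer: () => Promise<void>;     // stands in for app.stop()
  disconnectCache: () => Promise<void>; // stands in for disconnectRedisCache()
  exit: (code: number) => void;         // stands in for process.exit()
}

async function shutdown({ stopServer, disconnectCache, exit }: ShutdownDeps) {
  await stopServer();      // stop accepting new requests first
  await disconnectCache(); // then Redis can safely go away
  exit(0);
}

const calls: string[] = [];
shutdown({
  stopServer: async () => { calls.push("stop"); },
  disconnectCache: async () => { calls.push("disconnect"); },
  exit: (code) => { calls.push(`exit(${code})`); },
}).then(() => console.log(calls.join(" -> "))); // stop -> disconnect -> exit(0)
```

Because each step is awaited, no request can observe a torn-down Redis client: by the time `disconnectCache` runs, the server has stopped handing out new work.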
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 685ffa51-c292-4f30-8c65-a3a5bcbe712f
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
- .env.example
- .github/workflows/ci.yml
- package.json
- src/index.ts
- src/services/cache.ts
- src/services/og-generator.ts
- src/utils/url-validator.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/services/og-generator.ts
- package.json
Ensures the server stops accepting new requests before Redis is torn down, preventing in-flight requests from hitting a null Redis client. https://claude.ai/code/session_01HBLMJXsTLSxNLuvGXEnUaZ
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/index.ts`:
- Around line 55-65: The origin check in isAllowed mixes full-URL string
equality with hostname comparisons leading to mismatches when
config.allowedOrigins contains full URLs (e.g., "https://example.com");
normalize both sides to hostnames before comparing: for each allowed entry in
config.allowedOrigins, attempt to parse it as a URL and use its hostname (fall
back to the raw string if parsing fails), then compare the request's
parsed.hostname (from sourceOrigin) to that normalized allowed hostname and use
=== or endsWith(`.${allowedHost}`) for subdomain checks; update the isAllowed
logic to derive allowedHost for each allowed entry and use that for the hostname
comparisons while keeping the original exact-match fallback for completely
unparseable values.
- Line 43: The early return on missing headers ("if (!origin && !referer)
return;") lets requests without Origin/Referer bypass origin checks; update the
logic in the middleware handling origin/referer (the block referencing origin,
referer and allowedOrigins) to reject requests when both headers are absent if
allowedOrigins is configured (e.g., if allowedOrigins exists and is non-empty,
respond with a 403/forbidden), otherwise explicitly allow and document the
server-to-server allowance; replace the unconditional return with a clear branch
that either denies the request or documents/permits it based on allowedOrigins.
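The hostname normalization the first inline comment describes can be sketched as a small helper: parse each allowlist entry to a hostname (falling back to the raw string when it is not a URL), then compare exact host or dot-prefixed subdomain. Names here are illustrative, not the PR's code:

```typescript
// True if sourceHost matches the allowlist entry exactly or as a subdomain.
// The entry may be a full origin ("https://example.com") or a bare hostname.
function hostMatches(sourceHost: string, allowedEntry: string): boolean {
  let allowedHost = allowedEntry;
  try {
    allowedHost = new URL(allowedEntry).hostname; // "https://example.com" -> "example.com"
  } catch {
    // Not a parseable URL: treat the entry as a bare hostname as-is.
  }
  return sourceHost === allowedHost || sourceHost.endsWith(`.${allowedHost}`);
}

console.log(hostMatches("example.com", "https://example.com")); // true
console.log(hostMatches("img.example.com", "example.com"));     // true (subdomain)
console.log(hostMatches("evilexample.com", "example.com"));     // false
```

The dot in `.${allowedHost}` is what blocks the `evilexample.com` lookalike while still admitting genuine subdomains.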
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0febfd4e-3077-4fec-9e4f-fe28491f6dfa
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
src/index.ts
src/index.ts
Outdated
```ts
const origin = request.headers.get("Origin");
const referer = request.headers.get("Referer");

if (!origin && !referer) return;
```
Requests without Origin/Referer headers bypass validation entirely.
Returning early when both headers are missing allows direct API calls (curl, server-side requests, tools like Postman) to bypass origin restrictions completely. If the intent is to restrict access to specific origins, this creates a bypass path.
If this is intentional (e.g., allowing server-to-server calls), consider documenting it. Otherwise, consider denying requests without a recognizable origin when allowedOrigins is configured.
🛡️ Proposed fix to deny requests without origin headers

```diff
- if (!origin && !referer) return;
+ if (!origin && !referer) {
+   set.status = 403;
+   return { error: "FORBIDDEN", message: "Origin not allowed" };
+ }
```

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/index.ts` at line 43, The early return on missing headers ("if (!origin
&& !referer) return;") lets requests without Origin/Referer bypass origin
checks; update the logic in the middleware handling origin/referer (the block
referencing origin, referer and allowedOrigins) to reject requests when both
headers are absent if allowedOrigins is configured (e.g., if allowedOrigins
exists and is non-empty, respond with a 403/forbidden), otherwise explicitly
allow and document the server-to-server allowance; replace the unconditional
return with a clear branch that either denies the request or documents/permits
it based on allowedOrigins.
src/index.ts
Outdated
```ts
const isAllowed = config.allowedOrigins.some((allowed) => {
  if (sourceOrigin === allowed) return true;
  try {
    const parsed = new URL(sourceOrigin);
    return (
      parsed.hostname === allowed || parsed.hostname.endsWith(`.${allowed}`)
    );
  } catch {
    return sourceOrigin === allowed;
  }
});
```
🛠️ Refactor suggestion | 🟠 Major
Origin matching logic is inconsistent with URL vs hostname comparison.
The matching logic mixes URL and hostname comparisons:
- Line 56: Exact string match (works for full URLs)
- Line 60: `parsed.hostname === allowed` assumes `allowed` is a bare hostname
If ALLOWED_ORIGINS contains https://example.com, the hostname check ("example.com" === "https://example.com") will always fail. The subdomain check has the same issue.
Consider normalizing both sides to hostnames for consistent comparison:
♻️ Proposed fix for consistent hostname extraction
```diff
 const isAllowed = config.allowedOrigins.some((allowed) => {
   if (sourceOrigin === allowed) return true;
   try {
     const parsed = new URL(sourceOrigin);
+    // Normalize allowed to hostname (handle both "example.com" and "https://example.com")
+    let allowedHostname: string;
+    try {
+      allowedHostname = new URL(allowed).hostname;
+    } catch {
+      allowedHostname = allowed; // Treat as bare hostname
+    }
     return (
-      parsed.hostname === allowed || parsed.hostname.endsWith(`.${allowed}`)
+      parsed.hostname === allowedHostname || parsed.hostname.endsWith(`.${allowedHostname}`)
     );
   } catch {
     return sourceOrigin === allowed;
   }
 });
```

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/index.ts` around lines 55 - 65, The origin check in isAllowed mixes
full-URL string equality with hostname comparisons leading to mismatches when
config.allowedOrigins contains full URLs (e.g., "https://example.com");
normalize both sides to hostnames before comparing: for each allowed entry in
config.allowedOrigins, attempt to parse it as a URL and use its hostname (fall
back to the raw string if parsing fails), then compare the request's
parsed.hostname (from sourceOrigin) to that normalized allowed hostname and use
=== or endsWith(`.${allowedHost}`) for subdomain checks; update the isAllowed
logic to derive allowedHost for each allowed entry and use that for the hostname
comparisons while keeping the original exact-match fallback for completely
unparseable values.
…redirects

- Extract origin validation to src/middleware/origin-validator.ts and shared error handler to src/middleware/error-handler.ts, removing duplication across routes and tests
- Decompose 309-line processImage() into focused transform modules (crop, resize, adjustments, watermark, output) under src/services/transforms/
- Harden SSRF protection: switch to manual redirect handling with per-hop URL revalidation (max 5 redirects)
- Extract magic numbers to src/constants.ts
- Add 10s shutdown timeout to prevent hanging on graceful shutdown
- Remove unused _format param from setCache(), fix sort() → toSorted() lint warnings
- Update CLAUDE.md with architecture patterns and conventions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Switch CI typecheck job from `bunx tsc --noEmit` to `bun run build` (tsdown) to match the project's actual build toolchain
- Use structural type for origin guard handler to satisfy Elysia's PreHandler type instead of importing Context

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 8
🧹 Nitpick comments (1)
CLAUDE.md (1)
34-68: Consider adding a language specifier to the fenced code block.

The directory tree structure at line 34 lacks a language identifier, which triggers a markdownlint warning. Adding `text` or `plaintext` would satisfy the linter while preserving readability.

Proposed fix

````diff
-```
+```text
 src/
 ├── index.ts   # Entry point — server setup, CORS, cache init, shutdown
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 34 - 68, the fenced code block containing the project directory tree in CLAUDE.md should include a language specifier to satisfy markdownlint; change the opening triple backticks for that block from ``` to ```text (or ```plaintext) so the directory listing is treated as plain text and the linter warning is resolved.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/cache.ts`:
- Around line 126-135: The getRedisCached function currently converts
redisClient.getBuffer(...) output to a Node Buffer; update it to return the
native binary type instead: change the function signature from Promise<Buffer |
null> to Promise<Uint8Array | null> and return the data directly (i.e., return
data) without wrapping via Buffer.from(data); ensure callers of getRedisCached
and any types that expect Buffer are updated to accept Uint8Array or adapt at
the call site where a Buffer is specifically required.
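The Uint8Array change described above can be sketched as follows. `RedisLike` is a hypothetical stand-in for Bun's `RedisClient`, and the catch-to-miss behavior follows the PR's "graceful degradation" description rather than the repo's actual code:

```typescript
// Hypothetical minimal client interface; the real code uses Bun's RedisClient.
interface RedisLike {
  getBuffer(key: string): Promise<Uint8Array | null>;
}

// Sketch: return the native Uint8Array directly instead of copying it
// into a Node Buffer with Buffer.from(data).
async function getRedisCached(
  client: RedisLike,
  key: string,
): Promise<Uint8Array | null> {
  try {
    return await client.getBuffer(key);
  } catch {
    return null; // Redis failure degrades to a cache miss, not an error
  }
}

// Usage with a stub client standing in for Redis:
const stub: RedisLike = {
  getBuffer: async () => new Uint8Array([0x89, 0x50, 0x4e, 0x47]),
};
getRedisCached(stub, "og:example").then((hit) => {
  console.log(hit instanceof Uint8Array, hit?.length);
});
```

Callers that previously expected `Buffer` would accept `Uint8Array`, converting only at a call site that specifically needs `Buffer`.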
In `@src/services/transforms/crop.ts`:
- Around line 11-19: The current guard on params.crop (the parts array) only
checks for NaN/undefined but allows decimals and Infinity; update the validation
in src/services/transforms/crop.ts so that after parsing params.crop into parts
you ensure each value is a finite integer (use Number.isFinite and
Number.isInteger on the parsed numbers) and reject values that are non-finite or
non-integer before calling sharp.extract(); keep the existing checks for exactly
four parts and surface a ValidationError when the finite-integer check fails so
malformed coordinates never reach extract().
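The validation described above can be sketched as a pure helper, assuming a comma-separated `x,y,w,h` crop string; `parseCrop` and `ValidationError` are illustrative names, not the repo's actual identifiers:

```typescript
class ValidationError extends Error {}

// Parse and validate crop coordinates before they ever reach sharp.extract().
function parseCrop(crop: string): [number, number, number, number] {
  const parts = crop.split(",").map(Number);
  if (parts.length !== 4) {
    throw new ValidationError("crop must be exactly four values: x,y,w,h");
  }
  for (const n of parts) {
    // Number.isInteger alone rejects NaN, Infinity, and decimals,
    // but checking isFinite first makes the intent explicit.
    if (!Number.isFinite(n) || !Number.isInteger(n)) {
      throw new ValidationError(`crop values must be finite integers, got: ${n}`);
    }
  }
  return parts as [number, number, number, number];
}

console.log(parseCrop("10,20,300,200")); // [ 10, 20, 300, 200 ]
```

Inputs such as `"0,0,Infinity,5"` or `"0,0,1.5,5"` now fail fast with a `ValidationError` instead of reaching `extract()`.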
In `@src/services/transforms/output.ts`:
- Around line 14-15: The PNG branch currently passes quality to sharp via
pipeline.png({ quality }) which has no effect for full-color images; update the
PNG handling in the function that builds the pipeline (the pipeline.png call) to
either enable palette-based quantization explicitly (pipeline.png({ palette:
true, quality, compressionLevel })) if you intend indexed PNGs, or replace the
quality option with a zlib compression setting (pipeline.png({ compressionLevel
})) for typical full-color PNGs; ensure you use a validated compressionLevel
(0-9) and preserve any existing quality input only when palette: true is set.
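The option split can be sketched with a small helper. The `PngOptions` shape mirrors sharp's PNG options, but the quality-to-compressionLevel mapping below is an illustrative assumption, not sharp's own behavior:

```typescript
// Mirrors sharp's PngOptions shape (palette, quality, compressionLevel).
interface PngOptions {
  palette?: boolean;
  quality?: number;
  compressionLevel?: number;
}

function buildPngOptions(quality?: number, palette = false): PngOptions {
  if (palette) {
    // quality only has an effect for palette (indexed) PNGs
    return { palette: true, quality, compressionLevel: 9 };
  }
  // Full-color PNGs: quality is a no-op, so expose zlib compression instead.
  // The quality-derived mapping is arbitrary here; clamp to the valid 0-9 range.
  const compressionLevel = Math.min(
    9,
    Math.max(0, quality === undefined ? 6 : Math.round((100 - quality) / 12)),
  );
  return { compressionLevel };
}

console.log(buildPngOptions(100));      // { compressionLevel: 0 }
console.log(buildPngOptions(80, true)); // { palette: true, quality: 80, compressionLevel: 9 }
```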
In `@src/services/transforms/resize.ts`:
- Around line 21-26: This code reads dimensions from the raw imageBuffer
(metadata, originalWidth, originalHeight) which ignores earlier pipeline
transforms and causes percent-based resizing to be wrong and can produce 0
dimensions; change to get metadata from the current Sharp pipeline (call
metadata() on the existing Sharp instance/pipeline used for transforms rather
than sharp(imageBuffer).metadata()), compute the scale from params.size as
before, then compute width/height and clamp each rounded dimension to at least 1
(e.g., width = Math.max(1, Math.round(...))) to avoid passing 0 to Sharp; update
references to metadata, originalWidth, originalHeight, scale, width, height
accordingly.
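The dimension math with the min-1 clamp reduces to a pure function; feeding it the post-transform width/height (e.g. from `toBuffer({ resolveWithObject: true })` on the pipeline) is assumed per the comment above:

```typescript
// Percent-based resize against the *current* (post-transform) dimensions,
// clamped so sharp never receives a 0 width or height.
function scaledDimensions(
  currentWidth: number,
  currentHeight: number,
  sizePercent: number,
): { width: number; height: number } {
  const scale = sizePercent / 100;
  return {
    width: Math.max(1, Math.round(currentWidth * scale)),
    height: Math.max(1, Math.round(currentHeight * scale)),
  };
}

console.log(scaledDimensions(800, 600, 50)); // { width: 400, height: 300 }
console.log(scaledDimensions(10, 10, 1));    // { width: 1, height: 1 } (clamped, not 0)
```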
In `@src/services/transforms/watermark.ts`:
- Around line 91-94: The color construction for text watermarks currently always
prepends '#' and can produce '##' when params.wm_color already includes it;
update the logic that sets the variable color (currently using params.wm_color)
so it preserves an existing leading '#' (i.e., if params.wm_color startsWith '#'
use it as-is, otherwise prepend '#') and fall back to "#ffffff" when no wm_color
is provided; mirror the safer handling used in applyAdjustments() and update the
assignment where color is defined in watermark.ts.
- Around line 156-159: The current use of ensureAlpha(opacity) on
watermarkPipeline doesn't change existing alpha channels so wm_opacity is
ineffective for PNGs; instead ensure the watermark has an alpha channel via
watermarkPipeline.ensureAlpha() (without trying to set opacity there) and apply
the requested wm_opacity when compositing the watermark into the base
image—e.g., multiply/premultiply the watermark's alpha by params.wm_opacity or
pass params.wm_opacity as the overlay opacity in the composite call (use
watermarkPipeline and the composite(...) operation to apply the effective
opacity).
- Around line 32-34: The code uses pipeline.clone().metadata() (mainMetadata /
mainWidth / mainHeight) which returns original input dimensions, not
post-transform output dimensions, so update the logic to obtain the transformed
output size by calling pipeline.clone().toBuffer({ resolveWithObject: true })
and use the returned info.width and info.height (with
DEFAULT_FALLBACK_WIDTH/HEIGHT fallbacks) for all watermark geometry and wm_scale
calculations; replace uses of mainMetadata/mainWidth/mainHeight with these
resolved output dimensions and keep the existing pipeline.clone() usage pattern
to avoid mutating the original pipeline.
- Around line 96-97: The code calls loadFontsForSatori(fontFamily, [400, 700])
but keeps using the original fontFamily when building the Satori style, which
breaks font matching if loadFontsForSatori fell back to a different font (e.g.,
"Inter"); modify the code around where fonts are loaded (function
loadFontsForSatori) to capture the actual loaded font name (e.g., const
actualFontFamily = fonts[0]?.name || fontFamily) and then use actualFontFamily
in the Satori style.fontFamily instead of the original fontFamily; apply the
same change to the other occurrence with the same pattern in the OG generator.
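Two of the watermark fixes above (the '#' prefix handling and the font-name fallback) reduce to small pure helpers; a sketch under assumed names:

```typescript
// Preserve an existing '#' instead of blindly prepending one ("##ff0000").
function normalizeHexColor(input?: string, fallback = "#ffffff"): string {
  if (!input) return fallback;
  return input.startsWith("#") ? input : `#${input}`;
}

// After loading fonts, use the name that was actually loaded so Satori's
// style.fontFamily matches even when a fallback font (e.g. "Inter") was used.
interface LoadedFont {
  name: string;
}
function resolveFontFamily(fonts: LoadedFont[], requested: string): string {
  return fonts[0]?.name || requested;
}

console.log(normalizeHexColor("ff0000"));  // → "#ff0000"
console.log(normalizeHexColor("#00ff00")); // → "#00ff00" (no double prefix)
console.log(resolveFontFamily([{ name: "Inter" }], "MissingFont")); // → "Inter"
```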
---
Nitpick comments:
In `@CLAUDE.md`:
- Around line 34-68: The fenced code block containing the project directory tree
in CLAUDE.md should include a language specifier to satisfy markdownlint; change
the opening triple backticks for that block from ``` to ```text (or
```plaintext) so the directory listing is treated as plain text and the linter
warning is resolved.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 58a9b4c9-0af1-4aba-8e1f-5dabc0a9a30b
📒 Files selected for processing (17)
- CLAUDE.md
- src/constants.ts
- src/index.ts
- src/middleware/error-handler.ts
- src/middleware/origin-validator.ts
- src/routes/image.ts
- src/routes/og.ts
- src/services/cache.ts
- src/services/fonts.ts
- src/services/image-fetcher.ts
- src/services/image-processor.ts
- src/services/transforms/adjustments.ts
- src/services/transforms/crop.ts
- src/services/transforms/output.ts
- src/services/transforms/resize.ts
- src/services/transforms/watermark.ts
- tests/unit/origin-validation.test.ts
✅ Files skipped from review due to trivial changes (3)
- src/services/fonts.ts
- src/constants.ts
- tests/unit/origin-validation.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/index.ts
Switch builder stage from `bun x tsc --noEmit || true` to `bun run build` (tsdown) to match the project's build toolchain. Build failures now correctly fail the Docker build.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…watermark opacity/fonts, Redis types
- crop: reject non-finite and non-integer values (Infinity, decimals)
- output: PNG uses compressionLevel instead of no-op quality param
- resize: use toBuffer({ resolveWithObject: true }) for accurate post-transform dimensions, clamp to min 1
- watermark: fix color double-# prefix, use linear() for alpha channel opacity instead of no-op ensureAlpha(opacity), resolve actual post-transform dimensions, capture actual font name after fallback
- og-generator: same font fallback fix — use actualFontFamily from loaded fonts
- cache: remove redundant Buffer.from() on Redis data, unify getCached return to Uint8Array | null
- image-processor: sort imports
- CLAUDE.md: add language specifier to code fence
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Decomposed `processImage()` into focused transform modules, extracted shared middleware, eliminated duplication

Redis Cache

- New `"redis"` cache mode alongside `disk`, `memory`, `hybrid`, `none`
- Uses Bun's built-in `RedisClient` with auto-reconnect, offline queue, and auto-pipelining
- New config: `REDIS_URL`, `REDIS_KEY_PREFIX`, `REDIS_CONNECTION_TIMEOUT`, `REDIS_MAX_RETRIES`

Security

- `src/middleware/origin-validator.ts` — reusable `createOriginGuard()` validates Origin/Referer headers with subdomain matching, bypasses `/health`
- `image-fetcher.ts` now uses `redirect: "manual"` and re-validates each redirect target through `validateUrl()` to block redirect-to-private-IP attacks

Code Quality Refactor

- Decomposed `processImage()` into `src/services/transforms/` modules: `crop.ts`, `resize.ts`, `adjustments.ts`, `watermark.ts`, `output.ts` — orchestrator is now ~50 lines
- Shared error handler (`src/middleware/error-handler.ts`) — replaced duplicate `.onError()` blocks in image and OG routes
- Extracted constants (`src/constants.ts`) — named all magic numbers (blur bounds, watermark defaults, timeouts, redirect limits)
- Removed unused `_format` param from `setCache()` signature
- `Array#sort()` → `Array#toSorted()`
- Graceful shutdown via `app.stop()` or Redis disconnect

Other Changes

- Updated `@types/bun`, `@elysiajs/cors`, `elysia`, `satori` to latest versions
- Updated `CLAUDE.md` with architecture patterns and conventions

Test plan

- `bun test` — 184 tests passing
- `bun run lint` — 0 warnings, 0 errors
- `bun run format:check` — all files formatted
- `bun run build` — successful build

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements
Developer Experience