perf(node): optimize request/response hot path #195
productdevbook wants to merge 3 commits into h3js:main from
Conversation
- request.json(): read body buffer directly, parse in one step. Saves one Promise chain link vs text().then(JSON.parse).
- readBody(): single-chunk fast path. Avoids Buffer.concat() allocation for small payloads (most API requests arrive in a single chunk).
- sendNodeResponse(): eliminate async function overhead. Changed from async to sync function. For string/buffer bodies (the common case), uses nodeRes.end(body) instead of a separate write(body) + endNodeResponse() Promise.
- writeHead(): inline header flattening for small arrays (≤4 headers). Avoids Array.flat() allocation overhead for the common case.
- request.url: cache href string to avoid repeated FastURL getter chain.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
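The single-chunk fast path in the second bullet can be sketched as follows (an illustrative standalone helper, not the actual srvx source):

```javascript
// Sketch of the single-chunk fast path: collect chunks from the body
// stream; if exactly one chunk arrived (the common case for small JSON
// payloads), return it as-is and skip the Buffer.concat() allocation + copy.
async function readBody(stream) {
  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  if (chunks.length === 0) return Buffer.alloc(0);
  return chunks.length === 1 ? chunks[0] : Buffer.concat(chunks);
}
```

The real adapter also handles limits and error cases; this sketch only shows the allocation-saving branch.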
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

The PR optimizes Node adapter internals: it caches URL hrefs and reuses buffer helpers in request handling, refactors response sending to remove async overhead and extra allocations, inlines small header arrays, and adds a Node.js microbenchmark measuring per-request JSON latency.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
test/bench-node/bench-hotpath.mjs (1)
77-84: Consider padding values for consistent table alignment.

For values with varying digit counts, the table columns may misalign. Using padStart() would keep output neat across different runs.

💡 Optional improvement

```js
const pad = (v, w = 6) => String(v).padStart(w);
console.log(
  `| ${pad(r.avg)}µs | ${pad(r.p50)}µs | ${pad(r.p95)}µs | ${pad(r.p99)}µs | ${pad(r.rps.toLocaleString(), 7)}/s |`,
);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/bench-node/bench-hotpath.mjs` around lines 77-84, The printed benchmark table can misalign when numeric strings vary in length; update the formatting after calling benchSrvx() (the r object with properties avg, p50, p95, p99, rps) to pad numeric values to fixed widths (e.g., using String(...).padStart) before interpolating into the output line that currently logs `| ${r.avg}µs ... ${r.rps.toLocaleString()}/s |`, ensuring each column uses consistent widths (use a helper like pad(v, w) and apply it to r.avg, r.p50, r.p95, r.p99 and r.rps.toLocaleString()) so the ASCII table stays aligned across runs.

src/adapters/_node/request.ts (1)
137-144: Behavioral note: json() returns undefined for bodyless requests instead of throwing.

When #bodyStream is null (GET/HEAD requests), this returns Promise.resolve(undefined). The standard Response.json() throws a SyntaxError when parsing an empty body. While this may be intentional for ergonomics, it's a semantic deviation that could surprise callers expecting standard behavior. If intentional, consider adding a brief comment noting this deliberate difference.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adapters/_node/request.ts` around lines 137 - 144, Summary: The fast-path branch returns Promise.resolve(undefined) when this.#bodyStream is null, which deviates from Response.json() (which throws on empty body). Fix: either make behavior mirror the spec by throwing a SyntaxError when this.#bodyStream === null (e.g., replace Promise.resolve(undefined) with logic that yields the same parse-failure as new Response('').json()) or, if the undefined return is intentional, add a concise inline comment above the fast-path referencing this.#bodyStream, Response.json(), readBody, and bufToJSON that explicitly documents the deliberate semantic difference (that bodyless GET/HEAD returns undefined rather than throwing) so callers aren’t surprised.
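For reference, the spec behavior the comment compares against can be observed directly with the global Response (available in Node 18+); this is a quick standalone check, not srvx code:

```javascript
// Standard Response.json() rejects with a SyntaxError on an empty body;
// the fast path discussed above resolves to undefined instead.
async function specJson(body) {
  try {
    return { value: await new Response(body).json() };
  } catch (err) {
    return { threw: err.constructor.name };
  }
}
```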
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adapters/_node/send.ts`:
- Around line 28-31: TypeScript complains about nodeRes.end overloads because
res.body's type isn't recognized as a valid end() argument; update the code
where writeHead(nodeRes, res.status, res.statusText, res.headers);
nodeRes.end(res.body); is called to cast res.body to a compatible type (e.g.,
string | Buffer | Uint8Array | null) before passing it to nodeRes.end so the
overload matches; locate the call sites around writeHead and nodeRes.end in this
file (and in the _toNodeResponse flow if needed) and apply the narrow cast to
res.body to satisfy the end() signature.
- Around line 66-76: The code creates writeHeaders with new Array(len * 2) in
the send logic (rawHeaders/writeHeaders) which triggers unicorn/no-new-array;
replace the sparse-array allocation by creating a fully populated array (e.g.,
use Array.from({ length: len * 2 }) and assign by index, or build writeHeaders
by pushing pairs in the loop) so no sparse array is created, and keep the
existing loop that sets writeHeaders[i * 2] and writeHeaders[i * 2 + 1]; if you
decide the current new Array approach is a required micro-optimization, add an
inline eslint-disable comment for unicorn/no-new-array with a brief
justification next to the allocation site.
---
Nitpick comments:
In `@src/adapters/_node/request.ts`:
- Around line 137-144: Summary: The fast-path branch returns
Promise.resolve(undefined) when this.#bodyStream is null, which deviates from
Response.json() (which throws on empty body). Fix: either make behavior mirror
the spec by throwing a SyntaxError when this.#bodyStream === null (e.g., replace
Promise.resolve(undefined) with logic that yields the same parse-failure as new
Response('').json()) or, if the undefined return is intentional, add a concise
inline comment above the fast-path referencing this.#bodyStream,
Response.json(), readBody, and bufToJSON that explicitly documents the
deliberate semantic difference (that bodyless GET/HEAD returns undefined rather
than throwing) so callers aren’t surprised.
In `@test/bench-node/bench-hotpath.mjs`:
- Around line 77-84: The printed benchmark table can misalign when numeric
strings vary in length; update the formatting after calling benchSrvx() (the r
object with properties avg, p50, p95, p99, rps) to pad numeric values to fixed
widths (e.g., using String(...).padStart) before interpolating into the output
line that currently logs `| ${r.avg}µs ... ${r.rps.toLocaleString()}/s |`,
ensuring each column uses consistent widths (use a helper like pad(v,w) and
apply it to r.avg, r.p50, r.p95, r.p99 and r.rps.toLocaleString()) so the ASCII
table stays aligned across runs.
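As a self-contained illustration of the padding approach suggested above (column widths are assumed, not taken from the actual benchmark):

```javascript
// Right-align values to fixed widths so the ASCII table stays aligned
// across runs with different digit counts.
const pad = (v, w = 6) => String(v).padStart(w);

function formatRow(r) {
  return (
    `| ${pad(r.avg)}µs | ${pad(r.p50)}µs | ${pad(r.p95)}µs ` +
    `| ${pad(r.p99)}µs | ${pad(r.rps.toLocaleString(), 7)}/s |`
  );
}
```

Rows formatted this way have equal widths as long as each value fits its column.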
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 605a46fd-bfd1-4c27-8179-190092ab0143
📒 Files selected for processing (3)
src/adapters/_node/request.ts, src/adapters/_node/send.ts, test/bench-node/bench-hotpath.mjs
```ts
// Inline flatten for small arrays — avoids Array.flat() allocation
const len = rawHeaders.length;
if (len <= 4) {
  writeHeaders = new Array(len * 2);
  for (let i = 0; i < len; i++) {
    writeHeaders[i * 2] = rawHeaders[i][0];
    writeHeaders[i * 2 + 1] = rawHeaders[i][1];
  }
} else {
  writeHeaders = rawHeaders.flat();
}
// @ts-expect-error
nodeRes.writeHead(status, statusText, writeHeaders);
```
Fix ESLint unicorn/no-new-array warning.
The linter flags new Array(len * 2) because it creates a sparse array, which can be confusing. Since you're immediately populating all indices, consider using Array.from or simply building the array inline.
🔧 Proposed fix

```diff
 // Inline flatten for small arrays — avoids Array.flat() allocation
 const len = rawHeaders.length;
 if (len <= 4) {
-  writeHeaders = new Array(len * 2);
+  writeHeaders = Array.from({ length: len * 2 });
   for (let i = 0; i < len; i++) {
     writeHeaders[i * 2] = rawHeaders[i][0];
     writeHeaders[i * 2 + 1] = rawHeaders[i][1];
   }
```

Alternatively, disable the rule inline if benchmarks show new Array() is measurably faster for this hot path:

```diff
+// eslint-disable-next-line unicorn/no-new-array -- pre-sized for perf
 writeHeaders = new Array(len * 2);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
// Inline flatten for small arrays — avoids Array.flat() allocation
const len = rawHeaders.length;
if (len <= 4) {
  writeHeaders = Array.from({ length: len * 2 });
  for (let i = 0; i < len; i++) {
    writeHeaders[i * 2] = rawHeaders[i][0];
    writeHeaders[i * 2 + 1] = rawHeaders[i][1];
  }
} else {
  writeHeaders = rawHeaders.flat();
}
```
🧰 Tools
🪛 GitHub Check: autofix
[warning] 69-69: eslint-plugin-unicorn(no-new-array)
Do not use new Array(singleArgument).
🪛 GitHub Check: tests_common
[warning] 69-69: eslint-plugin-unicorn(no-new-array)
Do not use new Array(singleArgument).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/adapters/_node/send.ts` around lines 66 - 76, The code creates
writeHeaders with new Array(len * 2) in the send logic (rawHeaders/writeHeaders)
which triggers unicorn/no-new-array; replace the sparse-array allocation by
creating a fully populated array (e.g., use Array.from({ length: len * 2 }) and
assign by index, or build writeHeaders by pushing pairs in the loop) so no
sparse array is created, and keep the existing loop that sets writeHeaders[i *
2] and writeHeaders[i * 2 + 1]; if you decide the current new Array approach is
a required micro-optimization, add an inline eslint-disable comment for
unicorn/no-new-array with a brief justification next to the allocation site.
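The two flattening strategies under discussion can be compared in isolation; this sketch uses hypothetical helper names and is not the adapter's actual code:

```javascript
// Pre-sized variant (what the hot path uses for <= 4 header pairs);
// Array.from({ length: n }) avoids the sparse array that new Array(n) creates.
function flattenPreSized(rawHeaders) {
  const flat = Array.from({ length: rawHeaders.length * 2 });
  for (let i = 0; i < rawHeaders.length; i++) {
    flat[i * 2] = rawHeaders[i][0];
    flat[i * 2 + 1] = rawHeaders[i][1];
  }
  return flat;
}

// Generic variant used for larger header sets.
const flattenGeneric = (rawHeaders) => rawHeaders.flat();
```

Both produce the same flat [name, value, name, value, ...] array; only the allocation pattern differs.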
nodeRes.end(body) doesn't accept DataView/Readable types. Use write(body) + end() pattern (same as original) but without the endNodeResponse() Promise wrapper. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
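The write(body) + end() pattern described in this follow-up commit could look roughly like this (a simplified hypothetical sketch; the real sendNodeResponse also handles streams, headers, and status):

```javascript
// Sync send: no async wrapper, so no extra Promise allocated per request.
// end() accepts string/Buffer directly; DataView and other non-Buffer
// ArrayBuffer views go through write() with a zero-copy Buffer view.
function sendBody(nodeRes, body) {
  if (body == null) {
    nodeRes.end();
  } else if (typeof body === "string" || Buffer.isBuffer(body)) {
    nodeRes.end(body); // fast path for the common JSON response case
  } else if (ArrayBuffer.isView(body)) {
    nodeRes.write(Buffer.from(body.buffer, body.byteOffset, body.byteLength));
    nodeRes.end();
  } else {
    throw new TypeError("Unsupported body type");
  }
}
```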
Closing — benchmarks show srvx adapter overhead is already ~2µs/req. The optimizations here are correct but not measurable in practice. The real bottleneck is on our framework's handler side, not the adapter layer. |
Summary
Optimizes the Node.js adapter hot path with targeted micro-optimizations that reduce per-request overhead:
- request.json() direct path — reads body buffer directly and parses in one step, saving one Promise chain link vs text().then(JSON.parse) (previously: readBody → bufToString → JSON.parse = 3 links, now: readBody → bufToJSON = 2 links)
- readBody() single-chunk fast path — avoids Buffer.concat() allocation for small payloads. Most API requests arrive in a single chunk; chunks.length === 1 ? chunks[0] : Buffer.concat(chunks) saves an allocation + copy
- sendNodeResponse() sync elimination — changed from async function to a regular function. For string/buffer bodies (the common JSON response case), uses nodeRes.end(body) instead of a separate write(body) + endNodeResponse() Promise allocation. Eliminates one Promise per request
- writeHead() inline header flattening — for ≤4 headers (covers most API responses), uses a pre-sized array with manual assignment instead of Array.flat(), avoiding an allocation
- request.url href caching — caches the computed href string to avoid repeated FastURL getter chain traversal on multiple .url accesses

Benchmark
Sequential latency benchmark (5000 requests, 500 warmup) shows results within noise (~110µs avg), which is expected since this benchmark is dominated by network round-trip time (~90µs). The optimizations target server-side processing overhead (10-20µs range) and would be more visible in throughput benchmarks under load.
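A minimal percentile summary of the kind such a latency benchmark reports might be computed like this (an assumed shape; the actual bench-hotpath.mjs may differ):

```javascript
// Summarize latency samples (in µs) into avg and p50/p95/p99 percentiles.
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const pct = (p) =>
    sorted[Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length))];
  const avg = sorted.reduce((sum, v) => sum + v, 0) / sorted.length;
  return { avg: Math.round(avg), p50: pct(50), p95: pct(95), p99: pct(99) };
}
```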
A bench-hotpath.mjs file is included for easy reproducibility.

Test plan

- npx vitest run test/node.test.ts

🤖 Generated with Claude Code
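The request.url href-caching item from the summary can be illustrated with a minimal getter sketch (class and field names here are hypothetical, not srvx's):

```javascript
// Cache the computed href on first access so repeated request.url reads
// skip the URL/getter chain and return the memoized string.
class RequestSketch {
  #href;
  #raw;
  constructor(raw) {
    this.#raw = raw; // e.g. { host: "localhost:3000", path: "/api" }
  }
  get url() {
    return (this.#href ??= `http://${this.#raw.host}${this.#raw.path}`);
  }
}
```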
Summary by CodeRabbit
Refactor
Chores