Conversation
When a user sends a new message instead of resolving pending tool approvals, inject synthetic denial responses rather than silently dropping the orphaned tool calls. This preserves context in the message history and prevents the AI SDK from receiving incomplete tool-call/result pairs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…Id usage Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Auto-deny unresolved tool approvals when a new generation starts
- Fix unhandled rejection in DeltaStreamer on transaction teardown
- Remove orphaned streaming demo example files
- Fix pre-existing type errors in streaming integration tests
- Bump version to 0.6.0 and update changelog
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Bumps the package to 0.6.0 and documents a Tool Approval flow; implements auto-deny of unresolved approvals, merges approval responses when possible, updates client-side filtering to retain pending tool-calls, tightens DeltaStreamer abort/finish behavior, adds streaming integration tests, and adds docs for tool approval.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant UI as Client UI
    participant Server as Server/API
    participant Agent as Generator/Agent
    participant DB as DB/Message Store
    UI->>Server: sendMessage(user message)
    Server->>DB: save user message
    Server->>Agent: schedule generation
    Agent->>DB: append assistant message (tool-call + approval-request)
    Agent-->>Agent: pause generation awaiting approvals
    UI->>Server: submitApproval(approvalId, approve/deny)
    Server->>DB: save approval response (merge if existingResponseMessage)
    Server->>Agent: continueAfterApprovals
    Agent->>DB: append tool-result / continue assistant output
    Note over Server,DB: If a new generation starts before approvals resolve,<br/>Server runs autoDenyUnresolvedApprovals -> inject synthetic denial messages
```
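The auto-deny step in the note above can be sketched as a pure pass over message parts. This is an illustrative reconstruction under assumed part shapes, not the component's actual implementation; only the `autoDenyUnresolvedApprovals` name and the denial-reason string follow the PR discussion.

```typescript
// Simplified part shapes; the real message schema in this repo has
// more fields. This mirrors the auto-deny idea, not the actual code.
type Part =
  | { type: "tool-approval-request"; approvalId: string }
  | {
      type: "tool-approval-response";
      approvalId: string;
      approved: boolean;
      reason?: string;
    };

// Before a new generation starts, synthesize a denial for every
// approval request that has no matching response, instead of
// silently dropping the orphaned tool call.
function autoDenyUnresolvedApprovals(parts: Part[]): Part[] {
  const responded = new Set(
    parts
      .filter((p) => p.type === "tool-approval-response")
      .map((p) => p.approvalId),
  );
  const denials = parts
    .filter(
      (p) => p.type === "tool-approval-request" && !responded.has(p.approvalId),
    )
    .map(
      (p): Part => ({
        type: "tool-approval-response",
        approvalId: p.approvalId,
        approved: false,
        reason: "auto-denied: new generation started",
      }),
    );
  return [...parts, ...denials];
}
```

Appending the synthetic responses (rather than deleting the requests) keeps the tool-call/result pairing intact for the AI SDK, which is the property the commit message above calls out.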
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/streaming.ts (1)
337-343: ⚠️ Potential issue | 🔴 Critical

Stream state will be marked as successfully completed even when delta writes fail.

The change silently returns from `sendDelta()` on error instead of propagating the exception. This causes a critical issue:

- When an error occurs, `onAsyncAbort(call.fail)` notifies Convex of the failure
- `abortController.abort()` prevents further writes
- However, `consumeStream()` continues its loop (unaware of the error)
- The loop completes normally and calls `finish()`, marking the stream as successfully completed in the database
- This is incorrect: the stream should be marked as failed/aborted via `fail()`, not finished

The original exception-based flow was correct: the exception propagated to the `streamText.ts` catch block (lines 168-171), which explicitly called `streamer?.fail()` to mark the stream with the proper failed state before rethrowing.

With this change, streams that fail during delta transmission will have incorrect state in the database, appearing as successfully completed when they actually failed.
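The suggested fix can be sketched as a standalone function. This is a hypothetical excerpt, assuming the shapes the review describes; in the real `DeltaStreamer` these are instance fields, and the exact signatures may differ.

```typescript
// Hypothetical standalone version of the catch-block fix. In the
// real DeltaStreamer these are instance fields; names are assumed.
async function sendDelta(
  write: () => Promise<void>,
  onAsyncAbort: (reason: string) => void,
  abortController: AbortController,
): Promise<void> {
  try {
    await write();
  } catch (e) {
    // Still notify Convex and stop further writes...
    onAsyncAbort(e instanceof Error ? e.message : String(e));
    abortController.abort();
    // ...but rethrow so the caller's catch path can mark the
    // stream as failed instead of letting finish() run.
    throw e;
  }
}
```

The key point is the final `throw e`: the cleanup side effects still happen, but the error keeps propagating so the stream is recorded as failed, not finished.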
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/streaming.ts` around lines 337 - 343, sendDelta currently swallows errors by calling onAsyncAbort and abortController.abort() then returning, which lets consumeStream() finish and call finish() instead of marking the stream failed; change sendDelta (the catch block in sendDelta) to rethrow the caught error after calling this.config.onAsyncAbort(...) and this.abortController.abort() so the exception propagates to the streamText.ts catch path (which calls streamer?.fail()), preserving the original error (throw e) rather than returning silently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.mcp.json:
- Around line 1-14: The committed .mcp.json contains machine- and user-specific
paths (the "command" entry referencing /Applications/Hive.app/... and the
MCP_SOCKET_PATH with /Users/magicseth/...), so remove this file from the repo
and replace it with a portable approach: add ".mcp.json" to .gitignore, create a
".mcp.json.example" with placeholder values for "command" and
"env.MCP_SOCKET_PATH" (e.g., "<PATH_TO_HIVE>" and "<MCP_SOCKET_PATH>") and
update any code that reads .mcp.json to fall back to environment variables
(e.g., process.env.MCP_COMMAND and process.env.MCP_SOCKET_PATH) when present so
local machine paths are not stored in source control.
In `@docs/http-streaming-requirements.md`:
- Around line 35-52: The fenced ASCII diagram block lacks a language specifier;
update the opening fence to include a plain text language (for example use
```text or ```plaintext) so the diagram containing create(), addDelta(),
finish(), abort()/timeout, and cleanup is lint-compliant and rendered as plain
text.
- Around line 120-123: The fenced code block containing the index query snippet
using .withIndex("streamId_start_end", (q) => q.eq("streamId",
cursor.streamId).gte("start", cursor.cursor)) should include a language
specifier (e.g., add ```text or ```js) or a short inline comment indicating the
format; update the block surrounding the symbols .withIndex,
"streamId_start_end", q.eq, gte, cursor.streamId, and cursor.cursor to start
with a language tag so syntax/formatting and reader intent are clear.
In `@src/client/streaming.integration.test.ts`:
- Line 1: Remove the unused import symbol `vi` from the test file's import
statement (the line importing beforeEach, describe, expect, test, vi) so the
file only imports the used symbols; update the import to exclude `vi` (e.g.,
import { beforeEach, describe, expect, test } from "vitest") to eliminate the
static analysis warning.
- Around line 7-12: Remove the unused symbol DEFAULT_STREAMING_OPTIONS from the
import list in src/client/streaming.integration.test.ts; update the import
statement that currently imports compressUIMessageChunks, DeltaStreamer,
mergeTransforms, DEFAULT_STREAMING_OPTIONS from "./streaming.js" to only import
compressUIMessageChunks, DeltaStreamer, and mergeTransforms so the test file no
longer contains an unused import.
---
Outside diff comments:
In `@src/client/streaming.ts`:
- Around line 337-343: sendDelta currently swallows errors by calling
onAsyncAbort and abortController.abort() then returning, which lets
consumeStream() finish and call finish() instead of marking the stream failed;
change sendDelta (the catch block in sendDelta) to rethrow the caught error
after calling this.config.onAsyncAbort(...) and this.abortController.abort() so
the exception propagates to the streamText.ts catch path (which calls
streamer?.fail()), preserving the original error (throw e) rather than returning
silently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: dbc1b56b-6e79-4c27-aeaa-289daa6aed40
📒 Files selected for processing (12)
.mcp.json
CHANGELOG.md
docs/http-streaming-requirements.md
docs/tool-approval.mdx
example/ui/main.tsx
package.json
src/client/search.test.ts
src/client/search.ts
src/client/streaming.integration.test.ts
src/client/streaming.ts
src/mapping.test.ts
src/mapping.ts
docs/http-streaming-requirements.md
Outdated
```
create()            addDelta() (with heartbeat)
    │                    │
    ▼                    ▼
┌──────────┐       ┌──────────┐
│ streaming │─────▶│ streaming │──── heartbeat every ~2.5 min
└──────────┘       └──────────┘
    │                    │
    │ finish()           │ abort() / timeout (10 min)
    ▼                    ▼
┌──────────┐       ┌─────────┐
│ finished │       │ aborted │
└──────────┘       └─────────┘
                         │
                         │ cleanup (5 min delay)
                         ▼
                    [deleted]
```
Add language specifier to fenced code block.
This ASCII diagram code block is missing a language specifier. Consider using text or plaintext to satisfy linting and improve clarity.
````diff
-```
+```text
 create()            addDelta() (with heartbeat)
````

🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 35-35: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/http-streaming-requirements.md` around lines 35 - 52, The fenced ASCII
diagram block lacks a language specifier; update the opening fence to include a
plain text language (for example use ```text or ```plaintext) so the diagram
containing create(), addDelta(), finish(), abort()/timeout, and cleanup is
lint-compliant and rendered as plain text.
docs/http-streaming-requirements.md
Outdated
```
.withIndex("streamId_start_end", (q) =>
  q.eq("streamId", cursor.streamId).gte("start", cursor.cursor))
```
Add language specifier to fenced code block.
This code block is missing a language specifier. Since it shows an index query pattern, consider using text or a comment indicating the format.
````diff
-```
+```text
 .withIndex("streamId_start_end", (q) =>
````

🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 120-120: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/http-streaming-requirements.md` around lines 120 - 123, The fenced code
block containing the index query snippet using .withIndex("streamId_start_end",
(q) => q.eq("streamId", cursor.streamId).gte("start", cursor.cursor)) should
include a language specifier (e.g., add ```text or ```js) or a short inline
comment indicating the format; update the block surrounding the symbols
.withIndex, "streamId_start_end", q.eq, gte, cursor.streamId, and cursor.cursor
to start with a language tag so syntax/formatting and reader intent are clear.
ianmacartney
left a comment
seems like there's some leftover changes in here.
overall seems good though
…nings
- Add .mcp.json to .gitignore and remove from tracking (machine-specific)
- Remove docs/http-streaming-requirements.md (would auto-publish to docs site)
- Remove unused imports (vi, DEFAULT_STREAMING_OPTIONS) in streaming tests
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/client/streaming.integration.test.ts (1)
46-47: Keep the new integration suite fully typed.

This file widens Convex IDs to `string` and falls back to `any` in several assertions. That weakens the strict typing these tests should be preserving; prefer the concrete `Id<...>` returned by `createThread` and narrow `parts`/`transforms` with typed predicates instead of `any`.

As per coding guidelines, `src/**/*.ts`: Use `Id<'tableName'>` for document IDs and maintain strict TypeScript type safety.

Also applies to: 232-233, 369-371, 442-443, 758-759, 963-964, 155-157, 625-635, 746-748, 946-949, 1023-1025, 1203-1203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/streaming.integration.test.ts` around lines 46 - 47, The tests currently widen Convex IDs to string and use any for assertions; change threadId from string to the concrete Id<'threads'> (or appropriate table name) and assign it from the typed return of createThread so it preserves the Convex Id type; replace loose any uses in assertions for parts/transforms with narrow type predicates or type guards (e.g., isTextPart/isTransformType) and assert against those narrowed types instead of casting to any; update occurrences around variables t, threadId, createThread, and the parts/transforms checks so all expectations use the proper Id<'...'> and narrowed types rather than string/any.
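The typed-predicate approach the nitpick suggests can be sketched as follows. The part shapes here are illustrative assumptions, not the real UIMessage schema; only the idea of replacing `any` casts with user-defined type guards comes from the review.

```typescript
// Illustrative part shapes; the real UIMessage parts are richer.
type TextPart = { type: "text"; text: string };
type ToolCallPart = { type: "tool-call"; toolName: string };
type MessagePart = TextPart | ToolCallPart;

// A user-defined type guard narrows without casting to `any`.
function isTextPart(p: MessagePart): p is TextPart {
  return p.type === "text";
}

// filter(isTextPart) yields TextPart[], so .text is safely typed.
function textOf(parts: MessagePart[]): string {
  return parts.filter(isTextPart).map((p) => p.text).join("");
}
```

Assertions written against `parts.filter(isTextPart)` stay fully typed, so a schema change breaks the test at compile time rather than silently passing through `any`.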
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/client/streaming.integration.test.ts`:
- Around line 21-29: The shared test configuration defaultTestOptions currently
swallows every async abort via onAsyncAbort, hiding real streaming write
failures; change onAsyncAbort in defaultTestOptions to fail fast (e.g., throw
the received _reason or call a test failure helper) so unexpected aborts
surface, and then update only the specific tests that expect teardown/abort
behavior to override onAsyncAbort with the current no-op handler; locate
defaultTestOptions and the onAsyncAbort property to implement the change and
adjust the few tests that intentionally exercise aborts to provide their own
override.
- Around line 171-178: The tests call streamText(...) and await
streamer.consumeStream(result.toUIMessageStream()) but fail to drain the AI SDK
result itself; for each occurrence (e.g., the variable result from streamText
and its toUIMessageStream()), also call and await result.consumeStream() after
awaiting streamer.consumeStream(...) so the streamText internal async work is
fully consumed; locate uses of streamText(...) and ensure both
streamer.consumeStream(result.toUIMessageStream()) and await
result.consumeStream() are present (matching the pattern used in the first test
case) to prevent leaked async tasks/unhandled rejections.
---
Nitpick comments:
In `@src/client/streaming.integration.test.ts`:
- Around line 46-47: The tests currently widen Convex IDs to string and use any
for assertions; change threadId from string to the concrete Id<'threads'> (or
appropriate table name) and assign it from the typed return of createThread so
it preserves the Convex Id type; replace loose any uses in assertions for
parts/transforms with narrow type predicates or type guards (e.g.,
isTextPart/isTransformType) and assert against those narrowed types instead of
casting to any; update occurrences around variables t, threadId, createThread,
and the parts/transforms checks so all expectations use the proper Id<'...'> and
narrowed types rather than string/any.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 174ddac9-ead6-4473-b4af-9cca3939b322
📒 Files selected for processing (2)
.gitignore
src/client/streaming.integration.test.ts
```ts
const defaultTestOptions = {
  throttleMs: 0,
  abortSignal: undefined,
  compress: null,
  onAsyncAbort: async (_reason: string) => {
    // In integration tests, async aborts can happen when the stream
    // finishes before a pending delta write completes. This is expected.
  },
};
```
Don't suppress every async abort in the shared test config.
This default handler makes real streaming write failures indistinguishable from the one teardown race you're documenting. I'd make the shared default fail fast and only override it in the handful of cases where an abort is actually expected.
🔧 Minimal fix
```diff
 const defaultTestOptions = {
   throttleMs: 0,
   abortSignal: undefined,
   compress: null,
-  onAsyncAbort: async (_reason: string) => {
-    // In integration tests, async aborts can happen when the stream
-    // finishes before a pending delta write completes. This is expected.
-  },
+  onAsyncAbort: async (reason: string) => {
+    throw new Error(`Unexpected async abort in test: ${reason}`);
+  },
 };
```

Then override `onAsyncAbort` only in tests that intentionally exercise teardown/abort behavior.
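That split could look like the following self-contained sketch; the `TestStreamOptions` shape and the `abortTolerantOptions` name are assumptions for illustration, not the suite's actual types.

```typescript
type TestStreamOptions = {
  throttleMs: number;
  onAsyncAbort: (reason: string) => Promise<void>;
};

// Fail-fast default: any unexpected async abort fails the test.
const defaultTestOptions: TestStreamOptions = {
  throttleMs: 0,
  onAsyncAbort: async (reason) => {
    throw new Error(`Unexpected async abort in test: ${reason}`);
  },
};

// Per-test override for suites that intentionally exercise
// teardown/abort behavior; everything else inherits the default.
const abortTolerantOptions: TestStreamOptions = {
  ...defaultTestOptions,
  onAsyncAbort: async (_reason) => {
    // expected: stream torn down while a delta write is in flight
  },
};
```

Spreading the default keeps the override local to the few teardown tests, so a real write failure anywhere else still surfaces loudly.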
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/client/streaming.integration.test.ts` around lines 21 - 29, The shared
test configuration defaultTestOptions currently swallows every async abort via
onAsyncAbort, hiding real streaming write failures; change onAsyncAbort in
defaultTestOptions to fail fast (e.g., throw the received _reason or call a test
failure helper) so unexpected aborts surface, and then update only the specific
tests that expect teardown/abort behavior to override onAsyncAbort with the
current no-op handler; locate defaultTestOptions and the onAsyncAbort property
to implement the change and adjust the few tests that intentionally exercise
aborts to provide their own override.
```ts
const result = streamText({
  model: mockModel({
    content: [{ type: "text", text: "Done" }],
  }),
  prompt: "Test",
});

await streamer.consumeStream(result.toUIMessageStream());
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, locate the test file and check if it exists
find . -name "streaming.integration.test.ts" -type f
```

Repository: get-convex/agent
Length of output: 102

🏁 Script executed:

```shell
# Get an overview of the file size and structure
wc -l src/client/streaming.integration.test.ts
git ls-files src/client/ | head -20
```

Length of output: 648

🏁 Script executed:

```shell
# Read the first test case mentioned (lines 141-143) and surrounding context
sed -n '135,150p' src/client/streaming.integration.test.ts
```

Length of output: 603

🏁 Script executed:

```shell
# Read the snippet from lines 171-178 (the main review target)
sed -n '165,185p' src/client/streaming.integration.test.ts
```

Length of output: 641

🏁 Script executed:

```shell
# Check a few of the flagged line ranges to see the pattern
sed -n '245,260p' src/client/streaming.integration.test.ts
sed -n '280,295p' src/client/streaming.integration.test.ts
```

Length of output: 1092

🏁 Script executed:

```shell
# Check all the flagged line ranges to confirm the incomplete pattern
echo "=== Lines 251-255 ===" && sed -n '245,260p' src/client/streaming.integration.test.ts
echo "=== Lines 287-291 ===" && sed -n '280,300p' src/client/streaming.integration.test.ts
echo "=== Lines 461-468 ===" && sed -n '455,475p' src/client/streaming.integration.test.ts
echo "=== Lines 507-511 ===" && sed -n '500,520p' src/client/streaming.integration.test.ts
```

Length of output: 2695

🏁 Script executed:

```shell
# Check remaining flagged ranges
echo "=== Lines 519-523 ===" && sed -n '513,530p' src/client/streaming.integration.test.ts
echo "=== Lines 557-563 ===" && sed -n '550,570p' src/client/streaming.integration.test.ts
echo "=== Lines 606-615 ===" && sed -n '600,620p' src/client/streaming.integration.test.ts
```

Length of output: 1953

🏁 Script executed:

```shell
# Check the last batch of flagged ranges
echo "=== Lines 983-993 ===" && sed -n '975,1000p' src/client/streaming.integration.test.ts
echo "=== Lines 1077-1083 ===" && sed -n '1070,1090p' src/client/streaming.integration.test.ts
echo "=== Lines 1122-1126 ===" && sed -n '1115,1135p' src/client/streaming.integration.test.ts
```

Length of output: 2242

🏁 Script executed:

```shell
# Count how many streamText() calls exist and how many have result.consumeStream() after toUIMessageStream
rg 'streamText\s*\(' src/client/streaming.integration.test.ts | wc -l
rg 'await result\.consumeStream\(\)' src/client/streaming.integration.test.ts | wc -l
```

Length of output: 64

🏁 Script executed:

```shell
# Look at streamText implementation to understand what result.consumeStream() does
cat -n src/client/streamText.ts | head -80
```

Length of output: 3269

🏁 Script executed:

```shell
# Search for any comments or documentation about result consumption
rg 'consumeStream' src/client/streamText.ts -B 3 -A 3
```

Length of output: 499

🏁 Script executed:

```shell
# Check if there are any issues or patterns related to teardown in the test file
rg 'teardown|unhandled|rejection|abort' src/client/streaming.integration.test.ts | head -20
```

Length of output: 1000

🏁 Script executed:

```shell
# Look at the context around the first test to understand the explicit comment
sed -n '135,155p' src/client/streaming.integration.test.ts
```

Length of output: 788

🏁 Script executed:

```shell
# Check what streamText is being imported in the integration test
head -50 src/client/streaming.integration.test.ts | grep -A 5 -B 5 import
```

Length of output: 848

🏁 Script executed:

```shell
# Check the full imports section
sed -n '1,30p' src/client/streaming.integration.test.ts
```

Length of output: 1045

🏁 Script executed:

```shell
# Let me check if this is using the Agent's streamText wrapper or the direct AI SDK one
rg 'from.*streamText' src/client/streaming.integration.test.ts
```

Length of output: 42

🏁 Script executed:

```shell
# Find where streamText is defined in the test
sed -n '80,130p' src/client/streaming.integration.test.ts
```

Length of output: 1559
Drain each streamText() result to prevent async work leaking past t.run().
The first test case (lines 141-147) explicitly does this with a comment "Ensure the AI SDK result is also fully consumed", calling both await streamer.consumeStream(result.toUIMessageStream()) and await result.consumeStream(). However, all 12 later instances only await the streamer and skip the result drain, leaving AI SDK internals unfinished when the test scope exits. This can surface as unhandled rejections during teardown.
🔧 Representative fix

```diff
 const result = streamText({
   model: mockModel({
     content: [{ type: "text", text: "Done" }],
   }),
   prompt: "Test",
 });
 await streamer.consumeStream(result.toUIMessageStream());
+await result.consumeStream();
```

Also applies to: 251-255, 287-291, 461-468, 507-511, 519-523, 557-563, 606-615, 983-993, 1077-1083, 1122-1126
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/client/streaming.integration.test.ts` around lines 171 - 178, The tests
call streamText(...) and await
streamer.consumeStream(result.toUIMessageStream()) but fail to drain the AI SDK
result itself; for each occurrence (e.g., the variable result from streamText
and its toUIMessageStream()), also call and await result.consumeStream() after
awaiting streamer.consumeStream(...) so the streamText internal async work is
fully consumed; locate uses of streamText(...) and ensure both
streamer.consumeStream(result.toUIMessageStream()) and await
result.consumeStream() are present (matching the pattern used in the first test
case) to prevent leaked async tasks/unhandled rejections.
done
ianmacartney
left a comment
I noticed mergeApprovalResponseMessages isn't applied at the same place where we drop hanging tool messages - so it isn't reflected in fetchRecentAndSearchMessages / fetchContextMessages. I can understand auto-rejecting only during the fetchContextWithPrompt
I'm also curious why we merge at read time instead of writing merged messages when approving / denying?
docs/tool-approval.mdx
Outdated
```ts
// 3. Submit an approval decision
export const submitApproval = mutation({
  args: {
    threadId: v.string(),
    approvalId: v.string(),
    approved: v.boolean(),
    reason: v.optional(v.string()),
  },
  returns: v.object({ messageId: v.string() }),
  handler: async (ctx, { threadId, approvalId, approved, reason }) => {
    const { messageId } = approved
      ? await approvalAgent.approveToolCall(ctx, {
          threadId, approvalId, reason,
        })
      : await approvalAgent.denyToolCall(ctx, {
          threadId, approvalId, reason,
        });
    return { messageId };
  },
});

// 4. Continue generation after all approvals resolved.
// Pass the last approval message ID as promptMessageId so the agent
// resumes generation from where the approval was issued.
export const continueAfterApprovals = internalAction({
  args: { threadId: v.string(), lastApprovalMessageId: v.string() },
  handler: async (ctx, { threadId, lastApprovalMessageId }) => {
    const result = await approvalAgent.streamText(
      ctx,
      { threadId },
      { promptMessageId: lastApprovalMessageId },
    );
    await result.consumeStream();
  },
});
```
I think it's worth calling out that these can be in the same function - you can approve tool calls from a mutation or an action
src/client/search.ts
Outdated
```ts
    ...inputPrompt,
    ...existingResponses,
  ]),
);
```
With this, if the user's contextHandler doesn't call autoDenyUnresolvedApprovals and mergeApprovalResponseMessages then they'll end up with invalid results. I think we should do this after the context handler as post-processing, or add them to search/recent/input/prompt/existing, though there the messages might span two lists.. I'd rather we merge at write time.
- respondToToolCallApproval now appends to an existing approval-response message for the same step, so the DB always has a single merged message
- Remove mergeApprovalResponseMessages from the read path in search.ts
- Move autoDenyUnresolvedApprovals to post-process after contextHandler so custom handlers don't need to worry about it
- Fix consumeStream to skip finish() when stream was aborted
- Note in tool-approval docs that approve/deny works from mutations or actions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Ok, can you take one more look!
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/client/search.ts (1)
651-674: ⚠️ Potential issue | 🟠 Major

Don't auto-deny approvals pulled in from search results.

`processedMessages` already includes `search` hits here. When `searchOtherThreads` surfaces an unresolved approval from another thread, `autoDenyUnresolvedApprovals` will synthesize `auto-denied: new generation started` for that unrelated conversation. That mutates retrieved history and can mislead the model. Please scope this to current-thread messages, or carry thread provenance through post-processing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/search.ts` around lines 651 - 674, The current code runs autoDenyUnresolvedApprovals(processedMessages) over messages that include search hits from other threads, which causes synthesized auto-denials to be injected into unrelated conversation history; restrict the mutation to only messages from the current thread by either (A) filtering processedMessages to only messages with the current threadId before calling autoDenyUnresolvedApprovals (use the threadId variable and the message.threadId/provenance field) and then merge back, or (B) change autoDenyUnresolvedApprovals to accept a threadId (or provenance) and have it only modify messages whose threadId matches the provided threadId; update any callers (here around processedMessages, allMessages, search, recent, inputMessages, inputPrompt, existingResponses and args.contextHandler) accordingly so search-sourced approvals are not mutated.

src/client/index.ts (1)
1127-1155: ⚠️ Potential issue | 🟠 Major

Scope merge candidates to the matched approval step.

`existingResponseMessage` is set to the first `tool-approval-response` seen anywhere later in the thread. If an older approval is handled after a newer step has already written its own approval response, this path will append into that unrelated message and return the wrong `messageId`. Please either reject stale approvals once newer history exists, or only reuse a response message that belongs to the same prompt/step as the matched request.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/index.ts` around lines 1127 - 1155, The scan currently sets existingResponseMessage to the first tool-approval-response anywhere in the thread, which can cause merging into a response for a different prompt/step; change the logic so you locate the tool-approval-request (matching typedPart.approvalId === args.approvalId) first (capture its message._id as promptMessageId) and only accept/reuse a tool-approval-response if that response explicitly references the same prompt/step (e.g. has a parent/replyTo/promptId field equal to promptMessageId or an explicit stepId that matches the request), otherwise treat a pre-existing response that is not tied to this request as a newer unrelated entry and reject the stale approval; update the loop around listMessages/listMessages(...) and the existingResponseMessage assignment to enforce this scoped check (references: existingResponseMessage, listMessages, typedPart, tool-approval-response, tool-approval-request, approvalId, promptMessageId).
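The step-scoping this finding asks for can be sketched as a small scan. The message shapes and the `findMergeCandidate` name are illustrative assumptions; the real thread schema and `findApprovalContext` logic differ.

```typescript
// Illustrative message shapes; the real thread schema differs.
type ThreadMsg =
  | { kind: "approval-request"; approvalId: string; id: string }
  | { kind: "approval-response"; id: string }
  | { kind: "assistant"; id: string };

// Only reuse a response message from the same step as the matched
// request: stop scanning when another step's assistant message
// intervenes, so later steps' responses are never merge candidates.
function findMergeCandidate(
  msgs: ThreadMsg[],
  approvalId: string,
): string | undefined {
  let seenRequest = false;
  let candidate: string | undefined;
  for (const m of msgs) {
    if (m.kind === "approval-request" && m.approvalId === approvalId) {
      seenRequest = true;
      continue;
    }
    if (!seenRequest) continue;
    if (m.kind === "assistant") break; // crossed into a different step
    if (m.kind === "approval-response") candidate = m.id;
  }
  return candidate;
}
```

A caller that gets `undefined` would write a fresh response message (or reject the stale approval) rather than appending into another step's message.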
🧹 Nitpick comments (1)
example/convex/approval.test.ts (1)
285-295: Assert the merged `messageId` explicitly.

This only feeds `lastMessageId` into the continuation; it never proves that the first and second approvals reused the same stored message or that only one tool-response message was written. A regression back to separate writes would still pass here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/approval.test.ts` around lines 285 - 295, Collect and assert the merged messageId explicitly instead of only using lastMessageId: when iterating approvalParts and calling ctx.runMutation with submitApprovalForTestMultiToolAgent, capture the first returned messageId (e.g., firstMessageId) and assert that every subsequent messageId equals firstMessageId (or assert firstMessageId === lastMessageId after the loop); optionally also assert the SDK/DB shows only one tool-response message for the step to guard against separate writes. Ensure you use the existing variables approvalParts, lastMessageId, and the ctx.runMutation(submitApprovalForTestMultiToolAgent) call sites to locate where to add the assertions.
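The assertion the nitpick asks for reduces to checking that every approval returned the same stored `messageId`. A sketch of that check (the helper name is hypothetical; the real test would call it on the IDs collected from each `ctx.runMutation`):

```typescript
// The merge guarantee reduces to: every submitApproval call for the
// same step returns the same stored messageId.
function assertSingleMergedMessage(messageIds: string[]): string {
  const [first, ...rest] = messageIds;
  if (first === undefined) throw new Error("no approvals were submitted");
  for (const id of rest) {
    if (id !== first) {
      throw new Error(`expected one merged message, saw ${first} and ${id}`);
    }
  }
  return first;
}
```

A regression back to separate writes would return distinct IDs and fail this check immediately, instead of slipping through a test that only consumes the last ID.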
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/client/index.ts`:
- Around line 1127-1155: The scan currently sets existingResponseMessage to the
first tool-approval-response anywhere in the thread, which can cause merging
into a response for a different prompt/step; change the logic so you locate the
tool-approval-request (matching typedPart.approvalId === args.approvalId) first
(capture its message._id as promptMessageId) and only accept/reuse a
tool-approval-response if that response explicitly references the same
prompt/step (e.g. has a parent/replyTo/promptId field equal to promptMessageId
or an explicit stepId that matches the request), otherwise treat a pre-existing
response that is not tied to this request as a newer unrelated entry and reject
the stale approval; update the loop around listMessages/listMessages(...) and
the existingResponseMessage assignment to enforce this scoped check (references:
existingResponseMessage, listMessages, typedPart, tool-approval-response,
tool-approval-request, approvalId, promptMessageId).
In `@src/client/search.ts`:
- Around line 651-674: The current code runs
autoDenyUnresolvedApprovals(processedMessages) over messages that include search
hits from other threads, which causes synthesized auto-denials to be injected
into unrelated conversation history; restrict the mutation to only messages from
the current thread by either (A) filtering processedMessages to only messages
with the current threadId before calling autoDenyUnresolvedApprovals (use the
threadId variable and the message.threadId/provenance field) and then merge
back, or (B) change autoDenyUnresolvedApprovals to accept a threadId (or
provenance) and have it only modify messages whose threadId matches the provided
threadId; update any callers (here around processedMessages, allMessages,
search, recent, inputMessages, inputPrompt, existingResponses and
args.contextHandler) accordingly so search-sourced approvals are not mutated.
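Option (B) above — having `autoDenyUnresolvedApprovals` accept a `threadId` — could be sketched like this; the `Msg`/`Part` shapes are assumed, not the real types:

```typescript
type Part =
  | { type: "tool-approval-request"; approvalId: string }
  | { type: "tool-approval-response"; approvalId: string; approved?: boolean };

type Msg = { threadId: string; parts: Part[] };

// Only messages from the current thread are mutated; search hits from other
// threads pass through untouched. For every approval request in the current
// thread with no matching response, a synthetic denial part is appended.
function autoDenyUnresolvedApprovals(messages: Msg[], threadId: string): Msg[] {
  const responded = new Set(
    messages
      .filter((m) => m.threadId === threadId)
      .flatMap((m) => m.parts)
      .filter((p) => p.type === "tool-approval-response")
      .map((p) => p.approvalId),
  );
  return messages.map((m) =>
    m.threadId !== threadId
      ? m
      : {
          ...m,
          parts: m.parts.flatMap((p): Part[] =>
            p.type === "tool-approval-request" && !responded.has(p.approvalId)
              ? [
                  p,
                  {
                    type: "tool-approval-response",
                    approvalId: p.approvalId,
                    approved: false,
                  },
                ]
              : [p],
          ),
        },
  );
}
```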
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b5be4c89-991e-432e-a970-19052b86791c
📒 Files selected for processing (5)
docs/tool-approval.mdx
example/convex/approval.test.ts
src/client/index.ts
src/client/search.ts
src/client/streaming.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/client/streaming.ts
- docs/tool-approval.mdx
- finish() now checks abortController.signal.aborted after awaiting ongoingWrite and sendDelta, preventing streams.finish from running on aborted streams - findApprovalContext resets existingResponseMessage when crossing a different step's assistant message, preventing approval responses from merging into a different step's tool message Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/streaming.ts (1)
338-344: ⚠️ Potential issue | 🟠 Major — Stream may be left in orphaned "streaming" state when delta write fails.
When `addDelta` throws, the error is caught and swallowed after calling `onAsyncAbort` (which is `call.fail`) and setting the local abort signal. However, `streams.abort` is never called to mark the stream as aborted in the database. The `call.fail` callback only finalizes the pending message; it does not update the stream state. Since the error doesn't propagate, the catch block in `streamText.ts` (lines 150-153) that would call `streamer?.fail()` and properly abort the stream never executes. This leaves the stream in "streaming" status in the database indefinitely, and callers receive no indication that streaming failed. Consider calling `streams.abort` here before returning, or ensure `onAsyncAbort` is responsible for marking the stream as aborted in the database.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/streaming.ts` around lines 338 - 344, When addDelta throws in src/client/streaming.ts the catch currently calls this.config.onAsyncAbort and aborts the local abortController but never marks the stream as aborted in the DB, leaving it stuck in "streaming"; update the catch block to also call the stream-abort path (e.g., invoke streams.abort with the current stream id/context) before returning, or ensure this.config.onAsyncAbort delegates to streams.abort — specifically modify the error handler that references addDelta, this.config.onAsyncAbort, and this.abortController.abort() to also call the stream abort method (streams.abort) so the stream state is cleared like streamer?.fail() does in streamText.ts.
🧹 Nitpick comments (1)
src/client/index.ts (1)
1157-1163: Clarify: tracking logic captures first encountered response regardless of approvalId.
The current logic tracks the first (newest) `tool-approval-response` message seen during iteration, regardless of which approvalId it contains. This is intentional for same-step merging, but the comment could clarify that responses for any approval in the same step are valid merge targets. Consider enhancing the comment:
- // Track the most recent tool-approval-response message for merging
+ // Track the most recent tool-approval-response message for merging.
+ // Any response in the same step is a valid merge target, regardless
+ // of which approvalId it contains.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/index.ts` around lines 1157 - 1163, The comment above the block that sets existingResponseMessage when typedPart.type === "tool-approval-response" should be clarified: update the comment to state that this intentionally captures the first encountered tool-approval-response message for the current step regardless of its approvalId (i.e., responses for any approval in the same step are valid merge targets), referencing the typedPart.type check and the existingResponseMessage assignment so readers understand the intended merging behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cdb914be-e64d-4d27-8c18-307c81aa85fd
📒 Files selected for processing (2)
src/client/index.ts
src/client/streaming.ts
ianmacartney
left a comment
There was a problem hiding this comment.
The full table scan + inconsistent write scenario need fixing
- Auto-deny unresolved tool approvals when a new generation starts
- Fix unhandled rejection in DeltaStreamer on transaction teardown

**Note on versioning:** This release jumps from 0.3.2 to 0.6.0, skipping versions 0.4.0 and 0.5.0. This is intentional and aligns the `@convex-dev/agent` package version with the AI SDK v6 major version for clearer compatibility signaling. Going forward, the minor version of this package will track the AI SDK major version to make it easier for developers to identify which version of the AI SDK is supported.
note: this does constrain us a bit in not making any breaking changes between major version changes of the AI SDK... though we can always break from this in the future.
let cursor: string | null = null;
let existingResponseMessage: MessageDoc | undefined;
do {
  const page = await this.listMessages(ctx, {
This is walking every message in the chat history - if we can pass in the order / promptMessageId / something, then we can highly constrain it, and we could likely enforce that you can only respond to tool calls in the latest N message order.
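The bounded lookup suggested here could look like the following sketch; `MAX_MESSAGES_TO_SCAN`, the message shape, and the newest-first ordering are assumptions, not the component's real API:

```typescript
const MAX_MESSAGES_TO_SCAN = 100;

// Rather than paging through the whole thread, scan only the most recent
// N messages (newest first) and give up if the approval request isn't
// found there — approvals should always be near the end of the thread.
function findRecentApprovalRequest<
  T extends { parts: { type: string; approvalId?: string }[] },
>(recentNewestFirst: T[], approvalId: string): T | undefined {
  return recentNewestFirst
    .slice(0, MAX_MESSAGES_TO_SCAN)
    .find((m) =>
      m.parts.some(
        (p) => p.type === "tool-approval-request" && p.approvalId === approvalId,
      ),
    );
}
```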
const { promptMessageId, existingResponseMessage } =
  await this.findApprovalContext(ctx, {
    threadId: args.threadId,
    approvalId: args.approvalId,
  });

const newPart = {
  type: "tool-approval-response" as const,
  approvalId: args.approvalId,
  approved: args.approved,
  reason: args.reason,
};

// Merge into an existing approval-response message for this step
// so the AI SDK sees a single tool message per step.
if (existingResponseMessage) {
  const existingContent = existingResponseMessage.message?.content;
  const mergedContent = Array.isArray(existingContent)
    ? [...(existingContent as any[]), newPart]
    : [newPart];
  await this.updateMessage(ctx, {
    messageId: existingResponseMessage._id,
    patch: {
      message: { role: "tool", content: mergedContent },
      status: "success",
    },
  });
note: if this is called from an action, then this could race and replace the content with last-write-wins, losing the previous approval... can we do this in saveMessage so anyone who calls saveMessage with a tool approval will have it auto-squashed into the appropriate message? It should generally be one of the last messages (maybe has to be the last message b/c of Gemini?).
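The auto-squash idea can be sketched as a pure function; the message shapes are hypothetical, and the real saveMessage would run this inside a Convex mutation, where the read-modify-write is transactional, so two concurrent approvals serialize instead of racing last-write-wins:

```typescript
type ApprovalResponsePart = {
  type: "tool-approval-response";
  approvalId: string;
  approved: boolean;
};

type ToolMessage = { role: "tool"; content: ApprovalResponsePart[] };

// If the latest message is already a tool approval-response message,
// append the new part to it instead of writing a separate message.
function squashApprovalResponse(
  history: ToolMessage[],
  part: ApprovalResponsePart,
): ToolMessage[] {
  const last = history[history.length - 1];
  if (last !== undefined) {
    return [
      ...history.slice(0, -1),
      { ...last, content: [...last.content, part] },
    ];
  }
  return [...history, { role: "tool", content: [part] }];
}
```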
…lict Write-time merge in findApprovalContext handles all cases, making the read-time mergeApprovalResponseMessages unnecessary. Also resolves merge conflict in toModelMessageContent tool-approval-response case. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Hey @sethconvex, thank you for the great work!! When do you expect this version to be released? Looking forward to it 🤞
- Remove stash conflict markers from UIMessages.ts (kept approval/execution-denied UI processing code) - Remove misplaced filterApprovalPartsForProvider call on MessageDoc[] in search.ts (only needed on ModelMessage[] in fetchContextWithPrompt) - Update approval test expectations to match write-time merge behavior (one tool message instead of two) - Remove duplicate JSDoc comment Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Prevents scanning entire thread history when looking for approval requests. Approvals should always be near the end of the thread. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
serializeResponseMessages takes explicit response messages (used by start.ts streaming loop). serializeNewMessagesInStep keeps the heuristic path for callers that don't track message counts. Shared logic extracted to serializeStepMessages. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
findApprovalContext was incorrectly resetting existingResponseMessage when the same assistant message had multiple approval requests (e.g., two tool calls needing approval in one step). Now only resets when the target approval is in a genuinely different step. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The filter was stripping tool-approval-request and tool-approval-response parts before the AI SDK could process them via collectToolApprovals. The SDK handles filtering internally before sending to providers. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Verifies that approving two tool calls from the same assistant message correctly merges approval responses into one tool message via findApprovalContext's step-boundary logic. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- compressTextStreamParts: file parts were double-pushed (sanitized copy + original with binary data). Fixed with else-if chain. - DeltaStreamer abort handler: abort the controller immediately and await in-flight stream creation before cleanup, preventing orphaned streams. - streamText: save deferred final step in catch block so generated messages aren't lost when consumeStream throws. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@codex review

To use Codex here, create a Codex account and connect to GitHub.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/streaming.ts (1)
246-262: ⚠️ Potential issue | 🟠 Major — Guard the abort event listener from unhandled Promise rejections.
Line 246 uses an `async` event listener; if Line 253 or Lines 257-260 reject, that rejection can go unhandled. Wrap the async work in a caught task (or `try/catch`) so this path cannot reintroduce unhandled rejections.
Proposed hardening
- config.abortSignal.addEventListener("abort", async () => {
+ config.abortSignal.addEventListener("abort", () => {
+   void (async () => {
      if (this.abortController.signal.aborted) {
        return;
      }
      this.abortController.abort();
      // Wait for in-flight stream creation before trying to abort it
      if (this.#creatingStreamIdPromise) {
        await this.#creatingStreamIdPromise;
      }
      if (this.streamId) {
        await this.#ongoingWrite;
        await this.ctx.runMutation(this.component.streams.abort, {
          streamId: this.streamId,
          reason: "abortSignal",
        });
      }
-  });
+   })().catch(async (e) => {
+     await this.config.onAsyncAbort(
+       e instanceof Error ? e.message : "abort listener failure",
+     );
+   });
+ });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/streaming.ts` around lines 246 - 262, The abortSignal listener added via config.abortSignal.addEventListener currently uses an async callback that can produce unhandled rejections from awaiting this.#creatingStreamIdPromise, this.#ongoingWrite, or this.ctx.runMutation(this.component.streams.abort); wrap the entire async body in a try/catch (or dispatch the work to a caught task) so any thrown error is caught and handled (e.g., log via processLogger or swallow if intentional) and ensure the listener still respects this.abortController.signal.aborted and performs the abort flow for this.abortController, this.streamId, and related awaits inside the guarded block.
🧹 Nitpick comments (1)
src/client/streaming.ts (1)
446-453: Avoid theundefined as unknown as Uint8Arraydouble-cast here.Line 451 bypasses the type system and weakens safety for downstream consumers of
file.uint8Array. Prefer introducing a compressed/persisted file-part type where bytes are optional/omitted, then encode that type directly instead of force-casting.As per coding guidelines
src/**/*.ts: maintain strict TypeScript type safety.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/streaming.ts` around lines 446 - 453, The code is force-casting file.uint8Array to Uint8Array with "undefined as unknown as Uint8Array" inside the part.type === "file" branch (the compressed.push block); replace this unsafe double-cast by introducing a dedicated compressed/persisted file-part shape where the file field omits or makes uint8Array optional (e.g., PersistedFilePart or CompressedFilePart) and construct that object explicitly in the compressed.push call (copy other file metadata but do not include the bytes), then adjust any encoder/consumer to accept the new type instead of relying on a coerced Uint8Array.
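One way to realize the suggested type, as a minimal sketch (the `FilePart`/`PersistedFilePart` names and fields are illustrative, not the SDK's real types):

```typescript
type FilePart = { type: "file"; mimeType: string; uint8Array: Uint8Array };

// Persisted shape: same metadata, but the raw bytes become optional so
// compression can drop them without an `undefined as unknown as
// Uint8Array` double-cast.
type PersistedFilePart = Omit<FilePart, "uint8Array"> & {
  uint8Array?: Uint8Array;
};

function compressFilePart(part: FilePart): PersistedFilePart {
  // Copy the metadata and intentionally omit the bytes; the type system
  // now records that they may be absent.
  const { uint8Array: _bytes, ...metadata } = part;
  return metadata;
}
```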
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/client/approval.test.ts`:
- Around line 314-326: The type error comes from the flatMap/filter pipeline
producing a broad union that isn't narrowed by the final assertion on
approvalParts; replace the inline filter with a proper type predicate function
so TypeScript can narrow each item to the shape with type ===
"tool-approval-request" and then map to { approvalId, toolCallId } — target the
pipeline that starts from result1.savedMessages and the filter call, implement a
predicate like function isApprovalPart(p): p is { type: "tool-approval-request";
approvalId: string; toolCallId: string } and use it in .filter(isApprovalPart)
(and then .map(...) if needed) to eliminate the type mismatch for approvalParts.
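A self-contained sketch of the predicate approach (the part shape is assumed from the comment, not taken from the real test file):

```typescript
type ApprovalRequestPart = {
  type: "tool-approval-request";
  approvalId: string;
  toolCallId: string;
};

// A type predicate lets .filter() narrow the broad content union itself,
// so no trailing `as Array<...>` cast is needed.
function isApprovalPart(p: unknown): p is ApprovalRequestPart {
  return (
    typeof p === "object" &&
    p !== null &&
    (p as { type?: string }).type === "tool-approval-request"
  );
}

const content: unknown[] = [
  { type: "text", text: "hi" },
  { type: "tool-approval-request", approvalId: "a1", toolCallId: "c1" },
];
// Narrowed to ApprovalRequestPart[] without a cast.
const approvalParts = content.filter(isApprovalPart);
```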
In `@src/client/streamText.ts`:
- Around line 172-174: When persisting the deferred step (pendingFinalStep) in
the recovery block, wrap the await call.save({ step: pendingFinalStep }, false)
in its own try/catch so any error from call.save does not overwrite the original
caught error variable (e); if save fails, record or log that save error (e.g.,
attach as a suppressed/inner error or processLogger.error) but rethrow the
original stream error (e) so the root failure is preserved. Ensure you reference
pendingFinalStep, call.save and the original error variable (e) in the fix.
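The suggested pattern, reduced to a synchronous sketch; `saveStep` and `logError` stand in for `call.save` and the real logger, and the actual code awaits the save inside an async catch block:

```typescript
// Persist the deferred step in its own try/catch so a failing save can
// never mask the original stream error.
function recoverPreservingOriginalError(
  original: Error,
  saveStep: () => void,
  logError: (inner: unknown) => void,
): never {
  try {
    saveStep();
  } catch (saveError) {
    // Record the secondary failure, but never let it replace `original`.
    logError(saveError);
  }
  throw original;
}
```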
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fb9e969e-f421-45b4-b497-03679df48a1f
📒 Files selected for processing (9)
docs/tool-approval.mdx
src/UIMessages.ts
src/client/approval.test.ts
src/client/index.ts
src/client/start.ts
src/client/streamText.ts
src/client/streaming.ts
src/mapping.test.ts
src/mapping.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/tool-approval.mdx
- Wrap deferred step save in catch block with its own try/catch so a save failure doesn't mask the original stream error. - Wrap abort signal listener body in try/catch to prevent unhandled promise rejections from crashing the process. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (1)
src/client/approval.test.ts (1)
314-320:⚠️ Potential issue | 🟡 MinorFix the TypeScript type error causing pipeline failure.
The
flatMapproduces a broad union type that TypeScript cannot narrow through the filter + final type assertion. Use a type predicate to properly narrow the type.🔧 Proposed fix
// Extract both approval IDs const approvalParts = result1.savedMessages ?.flatMap((m) => - Array.isArray(m.message?.content) ? m.message.content : [], + Array.isArray(m.message?.content) + ? (m.message.content as unknown[]) + : [], ) .filter( - (p) => (p as { type: string }).type === "tool-approval-request", - ) as Array<{ approvalId: string; toolCallId: string }>; + ( + p, + ): p is { + type: "tool-approval-request"; + approvalId: string; + toolCallId: string; + } => (p as { type?: string }).type === "tool-approval-request", + );🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/approval.test.ts` around lines 314 - 320, The current pipeline uses flatMap + filter with an unsafe type assertion for approvalParts causing a TypeScript error; replace the final type assertion by adding a type predicate function (e.g., isToolApprovalRequest) that checks p?.type === "tool-approval-request" and returns p is { approvalId: string; toolCallId: string }, then pass that predicate into the filter used after flatMap (on result1.savedMessages.flatMap(...)). Update approvalParts to be declared from that filtered result (no cast) so TypeScript can correctly narrow the union.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0fa66f39-c370-49a5-a45a-e264327bcb96
📒 Files selected for processing (9)
docs/tool-approval.mdx
src/UIMessages.ts
src/client/approval.test.ts
src/client/index.ts
src/client/start.ts
src/client/streamText.ts
src/client/streaming.ts
src/mapping.test.ts
src/mapping.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- src/client/streaming.ts
- src/client/start.ts
- src/client/streamText.ts
sethconvex
left a comment
ok, I think I've addressed the issues here. I've reviewed all the code by eyeball, I've tested locally, and had multiple review passes from ai agents. I think we're ready to release.
Use a type predicate in the filter instead of a cast to properly narrow the approval parts type. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/client/approval.test.ts`:
- Around line 337-344: Rename the intentionally unused destructured variable
msgId1 to have a leading underscore to satisfy the ESLint rule; update the first
await call that uses ctx.runMutation and
anyApi["approval.test"].submitApprovalForMultiToolAgent to destructure as
messageId: _msgId1 (leaving the second destructuring to messageId: msgId2
unchanged) so the variable is clearly marked as intentionally unused.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3134c1b1-dd8a-4d99-b912-99766227bef2
📒 Files selected for processing (1)
src/client/approval.test.ts
const { messageId: msgId1 } = await ctx.runMutation(
  anyApi["approval.test"].submitApprovalForMultiToolAgent,
  { threadId: thread.threadId, approvalId: approvalParts[0].approvalId },
);
const { messageId: msgId2 } = await ctx.runMutation(
  anyApi["approval.test"].submitApprovalForMultiToolAgent,
  { threadId: thread.threadId, approvalId: approvalParts[1].approvalId },
);
Prefix unused variable with underscore.
msgId1 is assigned but never used. Per the ESLint rule, unused variables should match /^_/u. Since both approvals are merged at write time and only the last promptMessageId is needed to continue generation, this is intentionally unused.
🔧 Proposed fix
// Approve both tool calls
- const { messageId: msgId1 } = await ctx.runMutation(
+ const { messageId: _msgId1 } = await ctx.runMutation(
anyApi["approval.test"].submitApprovalForMultiToolAgent,
{ threadId: thread.threadId, approvalId: approvalParts[0].approvalId },
);
🧰 Tools
🪛 GitHub Check: Test and lint
[warning] 337-337:
'msgId1' is assigned a value but never used. Allowed unused vars must match /^_/u
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
let existingResponseMessage: MessageDoc | undefined;
// Limit the search to the most recent messages. Approvals should always
// be near the end of the thread — scanning further is wasteful.
const MAX_MESSAGES_TO_SCAN = 200;
nit: let's make this 100 and just fetch a single page instead of the do/while
  });
}
} catch (e) {
  console.error("Error during stream abort cleanup:", e);
is it ok to swallow the error here?
  // Skip finish if it will be handled externally (atomically with message save)
- if (!this.#finishedExternally) {
+ // or if the stream was aborted (e.g., due to a failed delta write)
+ if (!this.#finishedExternally && !this.abortController.signal.aborted) {
to confirm - calling streams.abort means we don't need to call this.finish for aborted streams?
if (this.abortController.signal.aborted) {
  return;
}
nit: this isn't necessary, since sendDelta does this as the first line
Summary
auto-deny-unresolved-approvals branch)
Test plan
- npm run typecheck passes
- npm test — 264 tests pass, 0 type errors, 0 unhandled rejections
- npm run lint — 0 errors (2 pre-existing warnings)
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation
Chores