Conversation

📝 Walkthrough

Added an experimental tool-call reply mode via `experimentalToolCallReply`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant ChatPlugin as Chat Plugin<br/>(chat.ts)
    participant MessageQueue as Message Queue<br/>(PendingMessageQueue)
    participant Agent as Agent/LLM Chain<br/>(chain.ts)
    participant ToolExec as Tool Executor<br/>(character_reply)
    participant MessageCollector as Message Collector<br/>(message.ts)
    User->>ChatPlugin: Message arrives
    activate ChatPlugin
    ChatPlugin->>MessageQueue: Start pending messages
    Note over MessageQueue: Buffer incoming messages
    ChatPlugin->>Agent: Generate response with tools
    activate Agent
    loop Streaming
        Agent->>ToolExec: Execute character_reply tool
        ToolExec->>ToolExec: Parse status/message/next_reply
        ToolExec-->>Agent: Tool result
        Agent->>MessageQueue: Check pending messages
        alt Messages buffered
            MessageQueue-->>Agent: Latest trigger reason
            Agent->>Agent: Schedule next_reply/wake_up
        else No messages
            Agent->>Agent: Use existing reasons
        end
        Agent-->>ChatPlugin: Stream chunk with toolCalls
        ChatPlugin-->>User: Emit response segment
    end
    deactivate Agent
    ChatPlugin->>MessageCollector: Mark pending messages consumed
    MessageCollector->>MessageCollector: Release response lock
    activate MessageCollector
    MessageCollector->>MessageCollector: Skip already-consumed waiters
    MessageCollector-->>ChatPlugin: Proceed with next
    deactivate MessageCollector
    ChatPlugin-->>User: Response complete
    deactivate ChatPlugin
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
2311b38 to 6552f4e

Code Review
This pull request introduces an experimental tool-calling reply mechanism (`experimentalToolCallReply`), allowing models to manage state updates and message replies through structured tools rather than XML blocks. Key changes include a new tool-calling preset, a `PendingMessageQueue` to handle incoming messages during tool execution, and broader support for video and sticker elements across adapters. The review feedback highlights three critical issues: potential loss of context when multiple tool calls occur in a single turn, the accidental removal of retry logic in the model response stream, and a type-checking risk that could lose data in message chunks.
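The buffering behavior described above can be sketched with a minimal stand-in (a hypothetical class shape for illustration only; the plugin's real `PendingMessageQueue` is session-scoped and richer):

```typescript
// Hypothetical sketch: while a tool-call turn is in flight, incoming
// messages are buffered, and only the most recent trigger reason is
// surfaced when the agent checks ("latest-wins").
interface Pending {
    content: string
    triggerReason?: string
}

class PendingMessageQueue {
    private buffer: Pending[] = []
    private capturing = false

    start(): void {
        this.capturing = true
        this.buffer = []
    }

    // Returns true if the message was diverted into the buffer.
    append(content: string, triggerReason?: string): boolean {
        if (!this.capturing) return false
        this.buffer.push({ content, triggerReason })
        return true
    }

    // Latest-wins: only the newest trigger reason matters for scheduling.
    latestTriggerReason(): string | undefined {
        for (let i = this.buffer.length - 1; i >= 0; i--) {
            const reason = this.buffer[i].triggerReason
            if (reason) return reason
        }
        return undefined
    }

    // Stop capturing and drain whatever was buffered during the turn.
    stop(): Pending[] {
        this.capturing = false
        const drained = this.buffer
        this.buffer = []
        return drained
    }
}
```

Messages arriving outside a capture window fall through to normal handling, which mirrors the contract the review comments below rely on.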
```ts
lastResponseMessage =
    copyOfConfig.experimentalToolCallReply &&
    chunk.toolCalls?.length
        ? new AIMessage(chunk.responseContent)
        : chunk.responseMessage
await ctx.chatluna_character.broadcastOnBot(
```
The retry-capable version of the stream loop:

```ts
): AsyncGenerator<StreamedParsedResponseChunk> {
    for (let retryCount = 0; retryCount < 2; retryCount++) {
        if (signal?.aborted) return
        let emittedAny = false

        try {
            const lastMessage =
                completionMessages[completionMessages.length - 1]
            const historyMessages = completionMessages.slice(0, -1)

            const systemMessage =
                chain != null ? historyMessages.shift() : undefined

            if (chain) {
                for await (const responseChunk of streamAgentResponseContents(
                    ctx,
                    chain,
                    session,
                    model,
                    config,
                    presetName,
                    systemMessage,
                    historyMessages,
                    lastMessage,
                    signal,
                    messageQueue,
                    onAgentEvent
                )) {
                    emittedAny = true

                    yield await parseResponseContent(
                        ctx,
                        session,
                        config,
                        responseChunk
                    )
                }

                return
            }

            const responseMessage = await model.invoke(
                completionMessages,
                createStreamConfig(session, model, presetName, signal)
            )
            const responseContent = getMessageContent(responseMessage.content)
            if (responseContent.trim().length < 1) {
                return
            }

            logger.debug(`model response:\n${responseContent}`)
            emittedAny = true

            yield await parseResponseContent(ctx, session, config, {
                responseMessage,
                responseContent,
                isIntermediate: false
            })
            return
        } catch (e) {
            if (signal?.aborted) return
            logger.error('model requests failed', e)
            if (emittedAny || retryCount === 1) return
            await sleep(3000)
        }
    }
}
```

The contrasted variant drops the `retryCount` loop, the `emittedAny` tracking, the empty-content guard, and the `await sleep(3000)` backoff: it calls `model.invoke` once and logs failures without retrying.
```diff
  return new AIMessageChunk({
-     content
+     content: text.trim().length < 1 ? '' : content
```

The logic `text.trim().length < 1 ? '' : content` is risky: if `content` is an array containing tool calls rather than a plain string, `getMessageContent(content)` may return an empty string, causing the whole `content` to be replaced with `''` and losing the original tool-call information. This handling should apply only when `content` is known to be a string.

```diff
- content: text.trim().length < 1 ? '' : content
+ content: (typeof content === 'string' && text.trim().length < 1) ? '' : content
```
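The hazard can be demonstrated with a minimal stand-in for `getMessageContent` (simplified content union and hypothetical helper names; the real LangChain content types are richer):

```typescript
// Minimal stand-in types; the real message content union has more shapes.
type ToolCallPart = { type: 'tool_call'; name: string }
type Content = string | Array<string | ToolCallPart>

// Stand-in for getMessageContent: extracts only the textual parts.
function getMessageContent(content: Content): string {
    if (typeof content === 'string') return content
    return content.filter((p): p is string => typeof p === 'string').join('')
}

// Unsafe: blanks array content whose textual projection is empty,
// silently dropping the tool-call parts.
function normalizeUnsafe(content: Content): Content {
    const text = getMessageContent(content)
    return text.trim().length < 1 ? '' : content
}

// Safe: only collapse to '' when content is actually a string.
function normalizeSafe(content: Content): Content {
    const text = getMessageContent(content)
    return typeof content === 'string' && text.trim().length < 1 ? '' : content
}
```

The `typeof content === 'string'` guard is the whole fix: array content with an empty textual projection passes through untouched.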
```ts
const chunkQueue = createAsyncChunkQueue<ChatLunaChainStreamChunk>()
let buf = ''
const toolCalls: ChatLunaChainStreamChunk['toolCalls'] = []
```

Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/plugins/filter.ts (1)
237-256: ⚠️ Potential issue | 🟠 Major

`messageWait` bursts are still skewing the activity trigger path. Once lines 814-825 put the group into wait mode, later messages in that buffered burst still go through `updateIncomingMessageStats()` and the activity-score recompute. That means the delayed wait reply does not actually isolate the burst from the tuned activity model, so the same burst can prime an extra activity-triggered reply right after it. You'll want to decide wait mode before updating activity stats, and skip both accumulation and score recomputation while `info.messageWait` is active. Based on learnings, activity scoring algorithm constants in `src/plugins/filter.ts` are tuned and changes have large behavioral impact.

Also applies to: 749-825, 864-881

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/plugins/filter.ts` around lines 237-256: the activity-trigger path is still updated for messages that are part of a buffered "wait" burst. Before calling `updateIncomingMessageStats()` or mutating `info.messageTimestamps`/`currentActivityThreshold`, check `info.messageWait` and bail out if it is active: decide wait mode first, and when `info.messageWait` is truthy skip both accumulation (push/shift of `info.messageTimestamps`) and the activity-score recompute (the block that sets `info.currentActivityThreshold` and checks `enableActivityScoreTrigger`), ensuring `updateIncomingMessageStats()` is not run while `info.messageWait` is set so bursts do not prime additional activity-triggered replies.

src/utils/messages.ts (1)
309-321: ⚠️ Potential issue | 🟠 Major

Don't require `chatluna_file_url` for incoming audio/video context. `chatluna_file_url` is the synthetic attr added by the reply renderers, but adapter-provided `audio`/`video` elements usually arrive with native `src`/`url` attrs only. With the current guard those attachments serialize to nothing, so the model loses the media reference in history.

🐛 Suggested fix

```diff
- const url = element.attrs['chatluna_file_url']
+ const url =
+     element.attrs['chatluna_file_url'] ??
+     element.attrs['src'] ??
+     element.attrs['url']
  if (!url) {
      continue
  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/messages.ts` around lines 309-321: the media serialization only reads `element.attrs['chatluna_file_url']`, which drops adapter-provided audio/video that use native attributes. Update the block that handles `element.type === 'video' || element.type === 'audio'` (the code that computes `url` and `marker` and pushes `<voice>`/`<video>` markup onto `filteredBuffer`) to fall back to standard attributes (`element.attrs['src']` or `element.attrs['url']`) when `chatluna_file_url` is missing; still call `escapeXml(String(url))`, preserve the voice vs video branching, and skip the element only if all candidate attrs are absent.

src/utils/chain.ts (1)
208-217: ⚠️ Potential issue | 🟡 Minor

Reset `extraRef` on session-less calls. Line 215 returns before touching `extraRef`, so a later `invoke()`/`stream()` without `configurable.session` reuses the previous session's extra tools.

Suggested fix

```diff
  if (!session) {
+     extraRef.value = []
      return toolMask
  }
```

Also applies to: 236-237

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/chain.ts` around lines 208-217: in `updateToolsIfNeeded`, when `options?.configurable?.session` is missing the function currently returns early and leaves `extraRef` populated from a previous session. Change the early-return path to clear `extraRef` (and any session-specific tool state) before returning so subsequent `invoke()`/`stream()` calls without a session don't reuse previous session tools; apply the same reset in the sibling session-less exit point near the code handling `toolMask`/`extraRef`.
🧹 Nitpick comments (1)
src/utils/messages.ts (1)
145-154: Avoid mutating the caller-owned `messages` array here.

This `shift()` loop destructively removes the leading system prompts from the input array. If the same array is reused for retries, logging, or another formatter pass, those messages are gone on the second use.

♻️ Non-mutating version

```diff
- while (messages[0]?.getType() === 'system') {
-     const message = messages.shift()
+ let firstNonSystem = 0
+ while (messages[firstNonSystem]?.getType() === 'system') {
+     const message = messages[firstNonSystem]!
      systemMessages.push(message)
      currentTokens += await model.getNumTokens(
          getMessageContent(message.content)
      )
+     firstNonSystem++
  }
@@
- for (let index = messages.length - 1; index >= 0; index--) {
+ for (let index = messages.length - 1; index >= firstNonSystem; index--) {
      const message = messages[index]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/messages.ts` around lines 145-154: the loop currently mutates the caller-owned `messages` array by calling `messages.shift()`. Instead, collect leading system messages without modifying `messages` (e.g., iterate from index 0 until a non-system message is found, or use `messages.findIndex` and `slice`), push those messages into `systemMessages`, and accumulate `currentTokens` by calling `model.getNumTokens(getMessageContent(message.content))` for each, leaving the original `messages` array intact for retries/logging/other formatter passes.
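The non-mutating approach from the nitpick above can be sketched standalone (token accounting omitted, since the real `model.getNumTokens` is async and model-specific; `Msg` is a simplified stand-in for the message type):

```typescript
// Simplified message shape for illustration only.
interface Msg {
    type: 'system' | 'human' | 'ai'
    content: string
}

// Collect leading system messages without mutating the caller's array:
// scan for the first non-system index, then slice on either side.
function splitLeadingSystem(messages: readonly Msg[]): {
    systemMessages: Msg[]
    rest: Msg[]
} {
    let firstNonSystem = 0
    while (messages[firstNonSystem]?.type === 'system') {
        firstNonSystem++
    }
    return {
        systemMessages: messages.slice(0, firstNonSystem),
        rest: messages.slice(firstNonSystem)
    }
}
```

Because `slice` copies, the input stays intact for retries, logging, or a second formatter pass.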
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/plugins/chat.ts`:
- Around line 316-343: The schema for the message object allows mixed sibling
fields (e.g., text together with image/sticker/at/face or parts), but
buildXmlMessage() currently returns on the first non-text branch and drops text;
update buildXmlMessage (and any helpers that render message objects) to preserve
and render mixed payloads by: detecting when text exists alongside other media
and either (a) emit the text first then emit the media nodes (or wrap each
sibling as separate parts in the same message stream) or (b) convert the
combined fields into a parts array preserving order before rendering; ensure
handling of the existing parts array remains compatible and that the renderer no
longer short-circuits after encountering the first non-text field.
- Around line 2009-2011: Currently stopPendingMessages(session) is called in the
finally block before the response lock is handed off, which allows a newer
trigger to enqueue a waiter and take the lock, breaking the “latest-wins”
behavior; move the call to service.stopPendingMessages(session) so it executes
after the lock handoff/replay logic (i.e., after releaseResponseLock()/the
replay of buffered triggers) so pending capture is stopped only once the old
turn has fully handed off the lock. Update both occurrences around the block
that contains releaseResponseLock() and the replay (the 2059–2069 region) to
call stopPendingMessages(session) after the replay/lock release instead of in
the finally prior to handoff.
In `@src/service/message.ts`:
- Around line 64-70: The willConsume flag on _activePendingMessages is set by
setPendingMessagesWillConsume but never honored when new messages arrive: update
the incoming-message handler (the code that currently calls active.append(...)
and returns early) to first check active.willConsume — if true, append and
return; if false, do not divert the message to active.append but let it continue
through the normal trigger/waiter flow; also ensure
setPendingMessagesWillConsume resets willConsume (e.g., after a final
character_reply) and that any queued messages are flushed back into the normal
processing path instead of remaining diverted.
In `@src/utils/render.ts`:
- Around line 134-145: In renders.video.process, the code rebuilds a fresh video
element from child text and drops any existing attributes (match.extra or
el.attrs) and overwrites src; update process (the function defined in
src/utils/render.ts) to collect and preserve existing attributes: start from
el.attrs (falling back to match.extra if available), ensure src is set from
el.attrs or the extracted url only if missing, copy through all other attributes
into the created video node (video.attrs), and still set chatluna_file_url as
before; reference getElementText, renders.video.process, match.extra, el.attrs,
and h('video') when making the change.
In `@src/utils/send.ts`:
- Around line 122-129: The OneBot path drops a leading quote because the split
function returns a segment with start pointing at a preceding 'quote' but the
OneBot send code only uses the video element; update the OneBot send logic (the
block handling part.elements and sending via OneBot) to check part.elements[0]
for type 'quote' and, when present, prepend a reply segment constructed from
that quote (e.g., create a segment { type: 'reply', id: quote.id } or the
project's equivalent) before the video segment so the quoted reference is
preserved; keep the existing split behavior but ensure the OneBot message
assembly includes the extracted reply segment whenever part.elements[0].type ===
'quote'.
In `@src/utils/triggers.ts`:
- Around line 30-34: The parser in triggers.ts currently turns user_id="all"
into token "id_all", which then looks up messageTimestampsByUserId['all']
instead of treating "all" as a wildcard; change the logic inside the
message_from_user branch (the code that extracts userId and assigns token) so
that if userId.trim().toLowerCase() === 'all' you do not set token to "id_all"
(either leave token undefined or set a designated wildcard token) so downstream
checks against messageTimestampsByUserId treat it as "any user"; apply the same
adjustment to the other occurrences referenced (the similar user_id parsing at
the blocks around the earlier lines 147-149 and later 275-277) so "all" is
handled case-insensitively and treated as a wildcard rather than a literal user
id.
---
Outside diff comments:
In `@src/plugins/filter.ts`:
- Around line 237-256: The activity-trigger path is still updated for messages
that are part of a buffered "wait" burst; before calling
updateIncomingMessageStats() or mutating
info.messageTimestamps/currentActivityThreshold you must check info.messageWait
and bail out if it's active: modify the logic in the incoming message handling
to decide wait mode first, and when info.messageWait is truthy skip both
accumulation (pushing/shift of info.messageTimestamps) and the activity-score
recompute (the block that sets info.currentActivityThreshold and checks
enableActivityScoreTrigger), ensuring updateIncomingMessageStats() is not run
while info.messageWait is set so bursts do not prime additional
activity-triggered replies.
In `@src/utils/chain.ts`:
- Around line 208-217: In updateToolsIfNeeded, when
options?.configurable?.session is missing the function currently returns early
and leaves extraRef populated from a previous session; change the early-return
path to clear/reset the extraRef (and any session-specific tool state) before
returning so subsequent invoke()/stream() calls without a session don't reuse
previous session tools; apply the same reset logic in the sibling path
referenced near the tool selection/assignment (the code handling
toolMask/extraRef around the later branch) so both session-less exit points
clear extraRef.
In `@src/utils/messages.ts`:
- Around line 309-321: The current branch in the media serialization in
src/utils/messages.ts only reads element.attrs['chatluna_file_url'], which drops
adapter-provided audio/video that use native attributes; update the block that
handles element.type === 'video' || element.type === 'audio' (the code that
computes url, marker, and calls filteredBuffer.push(`<voice>...</voice>` /
`<video>...</video>` ) to fallback to standard attributes (e.g.
element.attrs['src'] or element.attrs['url']) when chatluna_file_url is missing,
ensure you still call escapeXml(String(url)) and preserve the voice vs video
branching; leave early only if all candidate attrs are absent.
---
Nitpick comments:
In `@src/utils/messages.ts`:
- Around line 145-154: The loop currently mutates the caller-owned messages
array by calling messages.shift(); instead, collect leading system messages
without modifying messages (e.g., iterate from index 0 until a non-'system'
message is found or use messages.findIndex and slice), push those messages into
systemMessages and accumulate currentTokens by calling
model.getNumTokens(getMessageContent(message.content)) for each, leaving the
original messages array intact for retries/logging/other formatters; ensure you
reference the same symbols (systemMessages, messages, currentTokens,
model.getNumTokens, getMessageContent) when implementing the non-mutating
approach.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8af5bb06-77f2-49af-9c95-11481da2934f
📒 Files selected for processing (15)
resources/presets/default-tool-call.yml
resources/presets/default.yml
src/config.ts
src/plugins/chat.ts
src/plugins/filter.ts
src/service/message.ts
src/service/trigger.ts
src/types.ts
src/utils/chain.ts
src/utils/elements.ts
src/utils/index.ts
src/utils/messages.ts
src/utils/render.ts
src/utils/send.ts
src/utils/triggers.ts
```ts
private _activePendingMessages: Record<
    string,
    {
        willConsume: boolean
        append: (message: Message, triggerReason?: string) => void
    }
> = {}
```
`willConsume` is dead state right now.

`setPendingMessagesWillConsume()` updates `active.willConsume`, but line 699 still appends every incoming message and returns early regardless. After a final `character_reply`, new messages are still diverted into the pending queue instead of falling back to the normal trigger/waiter flow.

Suggested fix

```diff
  const active = this._activePendingMessages[groupId]
  if (active) {
-     active.append(message, triggerReason)
-     return true
+     if (active.willConsume) {
+         active.append(message, triggerReason)
+         return true
+     }
  }
```

Also applies to: 146-154, 697-701
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/service/message.ts` around lines 64 - 70, The willConsume flag on
_activePendingMessages is set by setPendingMessagesWillConsume but never honored
when new messages arrive: update the incoming-message handler (the code that
currently calls active.append(...) and returns early) to first check
active.willConsume — if true, append and return; if false, do not divert the
message to active.append but let it continue through the normal trigger/waiter
flow; also ensure setPendingMessagesWillConsume resets willConsume (e.g., after
a final character_reply) and that any queued messages are flushed back into the
normal processing path instead of remaining diverted.
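The gating the fix calls for can be sketched with a hypothetical `routeMessage` helper (the real handler lives on the service and returns into the trigger/waiter flow rather than a string):

```typescript
// Simplified pending-queue entry; the real one stores an append callback.
interface PendingEntry {
    willConsume: boolean
    queued: string[]
}

// Divert a message into the pending queue only while the active turn still
// intends to consume it; otherwise fall through to normal handling.
function routeMessage(
    active: PendingEntry | undefined,
    message: string
): 'diverted' | 'normal' {
    if (active?.willConsume) {
        active.queued.push(message)
        return 'diverted'
    }
    return 'normal'
}
```

Once `willConsume` flips off after the final `character_reply`, new messages stop being diverted, which is exactly the behavior the dead-state comment asks for.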
```ts
renders.video = {
    parse: createMatch,
    render: (match) => [
        h('video', match.extra ?? {}, [h.text(match.content)])
    ],
    process: (el) => {
        const url = getElementText(el.children).trim()
        const video = h('video', { src: url })
        video.attrs['chatluna_file_url'] = url

        return [video]
    }
```
Preserve `src` and passthrough attrs when normalizing videos.

This `process()` step rebuilds a fresh `<video>` from child text only. Any attrs carried through `match.extra` are dropped, and an already-normalized video element with `src` on `el.attrs` gets rewritten as `src=""`.
🐛 Suggested fix
```diff
  renders.video = {
      parse: createMatch,
      render: (match) => [
          h('video', match.extra ?? {}, [h.text(match.content)])
      ],
      process: (el) => {
-         const url = getElementText(el.children).trim()
-         const video = h('video', { src: url })
+         const url = String(
+             getElementText(el.children).trim() ||
+                 el.attrs['src'] ||
+                 ''
+         )
+         const video = h('video', { ...el.attrs, src: url })
          video.attrs['chatluna_file_url'] = url
          return [video]
      }
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/render.ts` around lines 134 - 145, In renders.video.process, the
code rebuilds a fresh video element from child text and drops any existing
attributes (match.extra or el.attrs) and overwrites src; update process (the
function defined in src/utils/render.ts) to collect and preserve existing
attributes: start from el.attrs (falling back to match.extra if available),
ensure src is set from el.attrs or the extracted url only if missing, copy
through all other attributes into the created video node (video.attrs), and
still set chatluna_file_url as before; reference getElementText,
renders.video.process, match.extra, el.attrs, and h('video') when making the
change.
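The attribute-preserving behavior the fix aims for can be sketched without Koishi's `h()` factory, using plain attr maps (hypothetical `normalizeVideoAttrs` helper for illustration):

```typescript
type Attrs = Record<string, string>

// Rebuild a video element's attrs while preserving everything already
// present; fall back to the existing src when child text is empty.
function normalizeVideoAttrs(existing: Attrs, childText: string): Attrs {
    const url = childText.trim() || existing['src'] || ''
    return { ...existing, src: url, chatluna_file_url: url }
}
```

Spreading `existing` first means passthrough attrs like a poster survive, and `src` is only overwritten with a non-empty value.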
```ts
split: (elements, idx, start) => ({
    type: 'video',
    start:
        idx > start && elements[idx - 1]?.type === 'quote'
            ? idx - 1
            : idx,
    end: idx + 1
}),
```
OneBot video sends drop the leading quote element.
The split function includes a preceding quote element (lines 122–129), but the OneBot send path (lines 146–169) constructs a message with only the video segment. When a quoted video is sent on OneBot, the quote reference is lost.
Apply the suggested fix to extract and include a reply segment if part.elements[0] is a quote:
Suggested fix
```diff
  const bot = session.bot as OneBotBot<Context>
  const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+ const message: OneBotMessageSegment[] = []
+
+ if (part.elements[0]?.type === 'quote') {
+     message.push({
+         type: 'reply',
+         data: {
+             id: String(part.elements[0].attrs.id ?? '')
+         }
+     })
+ }
+
+ message.push({
+     type: 'video',
+     data: { file }
+ })
+
  const data = (await bot.internal._request(
      action,
      session.isDirect
          ? {
                user_id: Number(session.userId),
-               message: [
-                   {
-                       type: 'video',
-                       data: { file }
-                   }
-               ]
+               message
            }
          : {
                group_id: Number(session.guildId),
-               message: [
-                   {
-                       type: 'video',
-                       data: { file }
-                   }
-               ]
+               message
            }
  )) as OneBotSendMessageResponse
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/send.ts` around lines 122 - 129, The OneBot path drops a leading
quote because the split function returns a segment with start pointing at a
preceding 'quote' but the OneBot send code only uses the video element; update
the OneBot send logic (the block handling part.elements and sending via OneBot)
to check part.elements[0] for type 'quote' and, when present, prepend a reply
segment constructed from that quote (e.g., create a segment { type: 'reply', id:
quote.id } or the project's equivalent) before the video segment so the quoted
reference is preserved; keep the existing split behavior but ensure the OneBot
message assembly includes the extracted reply segment whenever
part.elements[0].type === 'quote'.
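The assembly logic of the suggested fix can be sketched standalone (simplified segment and element types; the real OneBot segment schema carries more fields):

```typescript
// Simplified OneBot-style segments for illustration.
type Segment =
    | { type: 'reply'; data: { id: string } }
    | { type: 'video'; data: { file: string } }

interface El {
    type: string
    attrs: Record<string, unknown>
}

// Build the segment list for a video send, prepending a reply segment
// when the split included a leading quote element.
function buildVideoMessage(elements: El[], file: string): Segment[] {
    const message: Segment[] = []
    const first = elements[0]
    if (first?.type === 'quote') {
        message.push({
            type: 'reply',
            data: { id: String(first.attrs.id ?? '') }
        })
    }
    message.push({ type: 'video', data: { file } })
    return message
}
```

The same `message` array is then usable for both the private and group send branches, which is what collapses the duplicated payloads in the diff.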
```ts
if (type === 'message_from_user') {
    const userId = attributes.match(/\buser_id\s*=\s*['"]([^'"]+)['"]/i)?.[1]
    if (userId?.trim()) {
        token = `id_${userId.trim()}`
    }
```
`message_from_user` with `user_id="all"` never fires.

The new tag parser turns this into `id_all`, but lines 275-277 treat `all` as a literal user ID and look up `messageTimestampsByUserId['all']`. The preset/tool docs now advertise `all` as "any user", so this path silently does nothing.
Suggested fix

```diff
  const lastMessageTimeByUserId =
      info.messageTimestampsByUserId?.[predicate.userId] ?? 0
- return lastMessageTimeByUserId >= sentAt
+ if (predicate.userId === 'all') {
+     return info.lastUserMessageTime >= sentAt
+ }
+ return lastMessageTimeByUserId >= sentAt
```

Also applies to: 147-149, 275-277
🧰 Tools
🪛 ESLint
[error] 31-31: Replace /\buser_id\s*=\s*['"]([^'"]+)['"]/i with ⏎················/\buser_id\s*=\s*['"]([^'"]+)['"]/i⏎············
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/triggers.ts` around lines 30 - 34, The parser in triggers.ts
currently turns user_id="all" into token "id_all", which then looks up
messageTimestampsByUserId['all'] instead of treating "all" as a wildcard; change
the logic inside the message_from_user branch (the code that extracts userId and
assigns token) so that if userId.trim().toLowerCase() === 'all' you do not set
token to "id_all" (either leave token undefined or set a designated wildcard
token) so downstream checks against messageTimestampsByUserId treat it as "any
user"; apply the same adjustment to the other occurrences referenced (the
similar user_id parsing at the blocks around the earlier lines 147-149 and later
275-277) so "all" is handled case-insensitively and treated as a wildcard rather
than a literal user id.
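The wildcard semantics the fix calls for can be sketched as a hypothetical `userMessagedSince` helper (field names borrowed from the suggested fix; the real evaluator works on parsed predicates):

```typescript
interface GroupInfo {
    lastUserMessageTime: number
    messageTimestampsByUserId: Record<string, number>
}

// Evaluate a message_from_user condition, treating 'all' (case-insensitive)
// as a wildcard for "any user" rather than a literal user id.
function userMessagedSince(
    info: GroupInfo,
    userId: string,
    sentAt: number
): boolean {
    if (userId.trim().toLowerCase() === 'all') {
        return info.lastUserMessageTime >= sentAt
    }
    return (info.messageTimestampsByUserId[userId] ?? 0) >= sentAt
}
```

Handling the wildcard at evaluation time sidesteps the `id_all` token ambiguity entirely: no user can collide with the sentinel because the literal lookup is never attempted for `all`.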
Actionable comments posted: 4
♻️ Duplicate comments (5)
src/utils/send.ts (1)

146-169: ⚠️ Potential issue | 🟡 Minor

OneBot video sends drop the leading quote element. The `split` function includes a preceding `quote` element (lines 125–127), but the OneBot send path constructs a message with only the `video` segment. When a quoted video is sent on OneBot, the quote reference is lost. The correct pattern is demonstrated in lines 235–243 of this same file (the OneBot image handling): check if `part.elements[0]?.type === 'quote'` and prepend a `reply` segment before the `video` segment.

🛡️ Proposed fix to preserve quote

```diff
  const bot = session.bot as OneBotBot<Context>
  const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+ const message: OneBotMessageSegment[] = []
+
+ if (part.elements[0]?.type === 'quote') {
+     message.push({
+         type: 'reply',
+         data: { id: String(part.elements[0].attrs.id ?? '') }
+     })
+ }
+
+ message.push({ type: 'video', data: { file } })
+
  const data = (await bot.internal._request(
      action,
      session.isDirect
          ? {
                user_id: Number(session.userId),
-               message: [{ type: 'video', data: { file } }]
+               message
            }
          : {
                group_id: Number(session.guildId),
-               message: [{ type: 'video', data: { file } }]
+               message
            }
  )) as OneBotSendMessageResponse
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/send.ts` around lines 146-169: the OneBot video send path (the block building `action`/`data` via `bot.internal._request` for `send_private_msg`/`send_group_msg`) omits a leading `quote` element, so quoted videos lose their reply reference. Modify the message payload construction to detect a leading quote (check `part.elements[0]?.type === 'quote'`) and prepend a reply segment before the video segment (the same pattern used in the OneBot image handling) so the resulting `message` array includes the quote then the video for both private and group sends.

src/plugins/chat.ts (2)
2009-2011: ⚠️ Potential issue | 🟠 Major

Stop pending capture after the lock handoff. Turning it off before `releaseResponseLock()` opens a window where a newer trigger can queue as a normal waiter, and the buffered-trigger replay below can then overtake it. That breaks latest-wins ordering. Based on learnings: the response lock in `service/message.ts` uses a "latest-wins" strategy; only the most recent waiter is resolved when multiple requests queue.

Also applies to: 2062-2069

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/plugins/chat.ts` around lines 2009-2011: the call to `service.stopPendingMessages(session)` must happen after releasing the response lock to avoid a race where a newer trigger becomes a normal waiter and gets overtaken. Move the `stopPendingMessages(session)` call to immediately after the `releaseResponseLock(...)` invocation in this module (and at the duplicate occurrence around the other noted block), ensuring both occurrences run `stopPendingMessages` only after `releaseResponseLock` completes.
636-687: ⚠️ Potential issue | 🟠 Major

Mixed `character_reply.messages[]` payloads still drop text. This renderer still returns on the first non-text field, so a valid tool payload like `{ text: '说明', image: 'https://...' }` becomes image-only. That also makes the new text+image example in `resources/presets/default.yml` impossible to reproduce through the tool path.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/plugins/chat.ts` around lines 636-687: the current renderer returns early on the first non-text field (e.g., checking `args.image`/`args.sticker`/etc.), which drops any accompanying `args.text`. Change the rendering to accumulate parts into a single message instead of returning immediately: inside the handler that inspects `args` (references: `args`, `quote`, `escape()`, `isHttpUrl()`, and the checks for `args.at`, `args.face`, `args.sticker`, `args.image`, `args.file`, `args.video`, `args.markdown`, `args.voice`), build an array of XML fragments (validating URLs with `isHttpUrl` where used and escaping values) and push text content (`escape(args.text)`) in addition to other tags, then return `<message${quote}>` + joined fragments + `</message>` so mixed payloads (e.g., text+image) are preserved.

src/service/message.ts (1)
697-701: ⚠️ Potential issue | 🟠 Major

Honor `willConsume` before diverting messages.

`setPendingMessagesWillConsume()` is dead as long as every incoming message is still appended and returned here. After a final `character_reply`, newer triggers never fall back to the normal waiter/cooldown path.

Suggested fix
```diff
   const active = this._activePendingMessages[groupId]
   if (active) {
-    active.append(message, triggerReason)
-    return true
+    if (active.willConsume) {
+      active.append(message, triggerReason)
+      return true
+    }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/service/message.ts` around lines 697 - 701, The current logic always diverts messages into this._activePendingMessages[groupId] via active.append, which prevents honoring the willConsume flag set by setPendingMessagesWillConsume; update the branch so you check the pending object's willConsume (or the flag set by setPendingMessagesWillConsume) before calling active.append — only call active.append(message, triggerReason) and return true when active exists AND active.willConsume is false; when willConsume is true, do not append and let the code fall through to the normal waiter/cooldown path (or return false as appropriate). Ensure the flag name used matches the setter/getter from setPendingMessagesWillConsume and that you reference the same object on this._activePendingMessages[groupId].

src/utils/triggers.ts (1)
30-34: ⚠️ Potential issue | 🟠 Major

`message_from_user user_id="all"` still encodes as a literal ID.

This emits `id_all`; `extractNextReplyReasonsFromTool()` emits the same token, and the evaluator treats it as the real user id `"all"`. The documented "any user" condition therefore never fires unless evaluation special-cases this token.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/triggers.ts` around lines 30 - 34, The current code encodes message_from_user user_id="all" as the literal token `id_all`, which prevents the documented "any user" condition from matching; change the mapping so that when attributes.match(...)[1].trim().toLowerCase() === 'all' you produce a canonical wildcard token (e.g. 'any_user' or a shared sentinel) instead of `id_all`, updating the assignment to token in this block and making the same canonicalization in extractNextReplyReasonsFromTool so both producers emit the identical wildcard token rather than the literal "all" id.
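The shared-sentinel fix the prompt describes can be sketched like this; `WILDCARD_USER`, `encodeUserToken`, and `matchesUserToken` are hypothetical names, not actual `src/utils/triggers.ts` exports:

```typescript
// One sentinel shared by every producer, so "all" never collides with a
// real user whose id happens to be the literal string "all".
const WILDCARD_USER = 'any_user'

function encodeUserToken(rawUserId: string): string {
  const id = rawUserId.trim().toLowerCase()
  // "all" means "any user": emit the sentinel, never the literal id_all.
  return id === 'all' ? WILDCARD_USER : `id_${id}`
}

function matchesUserToken(token: string, actualUserId: string): boolean {
  // The wildcard matches every sender; literal tokens must match exactly.
  return token === WILDCARD_USER || token === `id_${actualUserId.toLowerCase()}`
}

// user_id="all" now fires for any sender instead of only a user named "all".
console.log(encodeUserToken('All')) // any_user
console.log(matchesUserToken(encodeUserToken('all'), '12345')) // true
```

Both producers (`triggers.ts` and `extractNextReplyReasonsFromTool`) would call the same encoder, so the evaluator only ever has to special-case one token.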
🧹 Nitpick comments (1)
src/utils/send.ts (1)
7-9: Inline the 1-line helper per coding guidelines.

This helper is only called once (line 230). As per coding guidelines, "Do NOT create extra functions for short logic; if a function body would be 1-5 lines, inline it at the call site."
♻️ Inline the helper

```diff
-function isOneBotImageElement(el: h) {
-  return el.type === 'img'
-}
-
 export interface SendPart {
```

Then at line 230:

```diff
   if (
     session.platform === 'onebot' &&
     part.type === 'default' &&
-    part.elements.some(isOneBotImageElement)
+    part.elements.some((el) => el.type === 'img')
   ) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/send.ts` around lines 7 - 9, Remove the one-line helper isOneBotImageElement and replace its single call with the expression it returns; i.e., inline the predicate el.type === 'img' directly at the call site where isOneBotImageElement(...) is used, then delete the isOneBotImageElement function declaration. Ensure the inlined expression uses the same local variable name as the original call so types flow unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/plugins/chat.ts`:
- Around line 1538-1541: The catch block around the model request should not
swallow errors; instead, if the request fails and signal?.aborted is false, log
the error, sleep 3 seconds, retry the model request once, and if the retry also
fails propagate the error (rethrow) so the caller can handle it; update the
try/catch around the model request (the block using logger.error and
signal?.aborted) to perform this sleep-and-retry-once logic and only exit
silently when signal?.aborted is true.
- Around line 1755-1768: The current checks in the startup validation loop over
config.privateConfigs and config.configs reject any falsy toolCalling values (so
undefined/inherit is disallowed); change the checks to only reject explicit
false by testing cfg.toolCalling === false. Update the two locations that throw
errors (the loops referencing config.privateConfigs and config.configs) so they
allow undefined and only throw when cfg.toolCalling is strictly false; this
preserves inheritance handled later in getConfigAndPresetForGuild while
enforcing that experimentalToolCallReply requires toolCalling not be explicitly
set to false.
In `@src/utils/messages.ts`:
- Around line 263-267: The image/sticker branches insert imageUrl raw into
XML-like tags and must escape XML metacharacters; wrap imageUrl with the same
escaping utility used by the file/voice/video branches (e.g., escapeXml or the
project's URL/XML escape helper) before pushing into filteredBuffer, and apply
the same fix to the other occurrence noted (the branch around the second
occurrence).
In `@src/utils/send.ts`:
- Line 147: Prettier flagged formatting issues: convert the inline ternary that
sets action (const action = session.isDirect ? 'send_private_msg' :
'send_group_msg') and the other ternary at the later occurrence into multi-line
ternary expressions so each branch is on its own line, and collapse the chained
nullish-coalescing expressions spanning lines 277–280 into a single line (keep
the same operands/order) so the `??` chain is not broken across lines; locate
and update the occurrences by editing the assignment to action and the other
ternary expression, and the `??` chain expression in send.ts to match Prettier’s
expected multiline ternary and single-line nullish-coalescing formats.
---
Duplicate comments:
In `@src/plugins/chat.ts`:
- Around line 2009-2011: The call to service.stopPendingMessages(session) must
happen after releasing the response lock to avoid a race where a newer trigger
becomes a normal waiter and gets overtaken; move the
stopPendingMessages(session) call to immediately after the
releaseResponseLock(...) invocation in this module (and the duplicate occurrence
around the block at the other location noted), ensuring both occurrences are
adjusted so stopPendingMessages is executed only after releaseResponseLock
completes.
- Around line 636-687: Current renderer returns early on the first non-text
field (e.g., checking args.image/args.sticker/etc.) which drops any accompanying
args.text; change the rendering to accumulate parts into a single message
instead of returning immediately: inside the handler that inspects args
(references: args, quote, escape(), isHttpUrl(), and the checks for args.at,
args.face, args.sticker, args.image, args.file, args.video, args.markdown,
args.voice), build an array of XML fragments (validating URLs with isHttpUrl
where used and using escape for values) and push text content
(escape(args.text)) in addition to other tags, then return `<message${quote}>` +
joined fragments + `</message>` so mixed payloads (e.g., text+image) are
preserved.
In `@src/service/message.ts`:
- Around line 697-701: The current logic always diverts messages into
this._activePendingMessages[groupId] via active.append, which prevents honoring
the willConsume flag set by setPendingMessagesWillConsume; update the branch so
you check the pending object's willConsume (or the flag set by
setPendingMessagesWillConsume) before calling active.append — only call
active.append(message, triggerReason) and return true when active exists AND
active.willConsume is false; when willConsume is true, do not append and let the
code fall through to the normal waiter/cooldown path (or return false as
appropriate). Ensure the flag name used matches the setter/getter from
setPendingMessagesWillConsume and that you reference the same object on
this._activePendingMessages[groupId].
In `@src/utils/send.ts`:
- Around line 146-169: The OneBot video send path (in the block building
`action`/`data` using `bot.internal._request` for
`send_private_msg`/`send_group_msg`) omits a leading `quote` element so quoted
videos lose their reply reference; modify the construction of the message
payload to detect a leading quote (check `part.elements[0]?.type === 'quote'`)
and prepend a `reply`/quote segment before the `video` segment (same pattern
used in the OneBot image handling) so the resulting `message` array includes the
quote then the video for both private and group sends.
In `@src/utils/triggers.ts`:
- Around line 30-34: The current code encodes message_from_user user_id="all" as
the literal token `id_all`, which prevents the documented "any user" condition
from matching; change the mapping so that when
attributes.match(...)[1].trim().toLowerCase() === 'all' you produce a canonical
wildcard token (e.g. 'any_user' or a shared sentinel) instead of `id_all`,
updating the assignment to token in this block and making the same
canonicalization in extractNextReplyReasonsFromTool so both producers emit the
identical wildcard token rather than the literal "all" id.
---
Nitpick comments:
In `@src/utils/send.ts`:
- Around line 7-9: Remove the one-line helper isOneBotImageElement and replace
its single call with the expression it returns; i.e., inline the predicate
el.type === 'img' directly at the call site where isOneBotImageElement(...) is
used, then delete the isOneBotImageElement function declaration. Ensure the
inlined expression uses the same local variable name as the original call so
types flow unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8d961780-2e80-4638-9e3a-0406b2ca2df1
📒 Files selected for processing (13)
- resources/presets/default-tool-call.yml
- resources/presets/default.yml
- src/config.ts
- src/plugins/chat.ts
- src/service/message.ts
- src/types.ts
- src/utils/chain.ts
- src/utils/elements.ts
- src/utils/index.ts
- src/utils/messages.ts
- src/utils/render.ts
- src/utils/send.ts
- src/utils/triggers.ts
✅ Files skipped from review due to trivial changes (3)
- src/utils/elements.ts
- resources/presets/default-tool-call.yml
- src/utils/render.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- src/utils/index.ts
- src/types.ts
- src/config.ts
```ts
} catch (e) {
  if (signal?.aborted) return
  logger.error('model requests failed', e)
}
```
Don't turn generation failures into silent success.
This catch logs and exits the generator, so the caller continues as if the turn finished normally. Mid-stream failures can therefore persist partial progress/state and skip the required retry.
As per coding guidelines, "For streaming retry: catch, sleep 3s, retry once, then propagate."
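The catch → sleep 3 s → retry once → propagate shape the guideline asks for can be sketched as follows; `requestWithRetry` is a hypothetical wrapper, not the plugin's actual helper:

```typescript
// Sketch of the guideline's retry shape: one retry after a 3s sleep,
// a second failure propagates, and only an aborted signal exits quietly.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

async function requestWithRetry<T>(
  requestModel: () => Promise<T>,
  signal?: AbortSignal,
  delayMs = 3000
): Promise<T | undefined> {
  try {
    return await requestModel()
  } catch (e) {
    if (signal?.aborted) return // cancellation is not a failure
    console.error('model request failed, retrying once', e)
    await sleep(delayMs)
    return await requestModel() // a second failure rejects to the caller
  }
}
```

Because the second failure rejects instead of returning, the caller cannot mistake a dead stream for a completed turn.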
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugins/chat.ts` around lines 1538 - 1541, The catch block around the
model request should not swallow errors; instead, if the request fails and
signal?.aborted is false, log the error, sleep 3 seconds, retry the model
request once, and if the retry also fails propagate the error (rethrow) so the
caller can handle it; update the try/catch around the model request (the block
using logger.error and signal?.aborted) to perform this sleep-and-retry-once
logic and only exit silently when signal?.aborted is true.
```ts
for (const [id, cfg] of Object.entries(config.privateConfigs)) {
  if (!cfg.toolCalling) {
    throw new Error(
      `experimentalToolCallReply 依赖 toolCalling,privateConfigs.${id}.toolCalling 不能关闭。`
    )
  }
}

for (const [id, cfg] of Object.entries(config.configs)) {
  if (!cfg.toolCalling) {
    throw new Error(
      `experimentalToolCallReply 依赖 toolCalling,configs.${id}.toolCalling 不能关闭。`
    )
  }
}
```
Only reject explicit `toolCalling: false` overrides.

These per-scope configs are merged onto the globals later in `getConfigAndPresetForGuild()`, so `undefined` means "inherit". The current falsy check also rejects omitted values and can fail startup for valid partial overrides.
Suggested fix
```diff
 for (const [id, cfg] of Object.entries(config.privateConfigs)) {
-  if (!cfg.toolCalling) {
+  if (cfg.toolCalling === false) {
     throw new Error(
       `experimentalToolCallReply 依赖 toolCalling,privateConfigs.${id}.toolCalling 不能关闭。`
     )
   }
 }
 for (const [id, cfg] of Object.entries(config.configs)) {
-  if (!cfg.toolCalling) {
+  if (cfg.toolCalling === false) {
     throw new Error(
       `experimentalToolCallReply 依赖 toolCalling,configs.${id}.toolCalling 不能关闭。`
     )
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/plugins/chat.ts` around lines 1755 - 1768, The current checks in the
startup validation loop over config.privateConfigs and config.configs reject any
falsy toolCalling values (so undefined/inherit is disallowed); change the checks
to only reject explicit false by testing cfg.toolCalling === false. Update the
two locations that throw errors (the loops referencing config.privateConfigs and
config.configs) so they allow undefined and only throw when cfg.toolCalling is
strictly false; this preserves inheritance handled later in
getConfigAndPresetForGuild while enforcing that experimentalToolCallReply
requires toolCalling not be explicitly set to false.
```ts
filteredBuffer.push(
  sticker
    ? `<sticker>${imageUrl}</sticker>`
    : `<image>${imageUrl}</image>`
)
```
Escape media URLs before embedding them in XML-like tags.
`imageUrl`/`url` can contain `&` and other XML metacharacters, especially on signed or storage-backed URLs. These branches now emit them raw, while the file/voice/video branches already escape their URLs.
Suggested fix
```diff
   if (imageUrl) {
     filteredBuffer.push(
       sticker
-        ? `<sticker>${imageUrl}</sticker>`
-        : `<image>${imageUrl}</image>`
+        ? `<sticker>${escapeXml(imageUrl)}</sticker>`
+        : `<image>${escapeXml(imageUrl)}</image>`
     )
   } else if (matchedImage) {
     filteredBuffer.push(matchedImage.formatted)
     usedImages.add(matchedImage.formatted)
@@
-  const formatted = hash ? `[image:${hash}]` : `<image>${url}</image>`
+  const formatted = hash
+    ? `[image:${hash}]`
+    : `<image>${escapeXml(url)}</image>`
```

Also applies to: 375-375
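If the project does not already export an `escapeXml` helper, a minimal version covering the five XML metacharacters could look like this (a sketch; the real helper's name and location are assumptions):

```typescript
// Minimal XML escaper: handles the five metacharacters, with "&" first
// so earlier replacements are not double-escaped by the later ones.
function escapeXml(value: string): string {
  return value
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;')
}

// Signed storage URLs routinely contain "&", which would otherwise break
// the XML-like <image>/<sticker> tags.
const url = 'https://cdn.example/img.png?sig=abc&expires=1'
console.log(`<image>${escapeXml(url)}</image>`)
// <image>https://cdn.example/img.png?sig=abc&amp;expires=1</image>
```

Escaping `&` before the other characters is the one ordering constraint: doing it last would corrupt the entities produced by the earlier replacements.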
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/messages.ts` around lines 263 - 267, The image/sticker branches
insert imageUrl raw into XML-like tags and must escape XML metacharacters; wrap
imageUrl with the same escaping utility used by the file/voice/video branches
(e.g., escapeXml or the project's URL/XML escape helper) before pushing into
filteredBuffer, and apply the same fix to the other occurrence noted (the branch
around the second occurrence).
```ts
}

const bot = session.bot as OneBotBot<Context>
const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
```
Fix Prettier formatting violations.
ESLint/Prettier flagged three formatting issues:
- Lines 147 & 302: Ternary expressions should be multi-line.
- Lines 277–280: Chained `??` operators should be on one line.
🎨 Apply Prettier fixes
```diff
   const bot = session.bot as OneBotBot<Context>
-  const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+  const action = session.isDirect
+    ? 'send_private_msg'
+    : 'send_group_msg'
```

```diff
   if (el.type === 'img') {
     const file = String(
-      el.attrs.src ??
-        el.attrs.url ??
-        el.attrs.imageUrl ??
-        ''
+      el.attrs.src ?? el.attrs.url ?? el.attrs.imageUrl ?? ''
     )
```

```diff
   const bot = session.bot as OneBotBot<Context>
-  const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+  const action = session.isDirect
+    ? 'send_private_msg'
+    : 'send_group_msg'
```

Also applies to: 277-280, 302-302
🧰 Tools
🪛 ESLint
[error] 147-147: Replace ·?·'send_private_msg' with ⏎················?·'send_private_msg'⏎···············
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/send.ts` at line 147, Prettier flagged formatting issues: convert
the inline ternary that sets action (const action = session.isDirect ?
'send_private_msg' : 'send_group_msg') and the other ternary at the later
occurrence into multi-line ternary expressions so each branch is on its own
line, and collapse the chained nullish-coalescing expressions spanning lines
277–280 into a single line (keep the same operands/order) so the `??` chain is
not broken across lines; locate and update the occurrences by editing the
assignment to action and the other ternary expression, and the `??` chain
expression in send.ts to match Prettier’s expected multiline ternary and
single-line nullish-coalescing formats.
Summary
- The `character_reply` tool handles completion-status updates, message sending, and scheduling of follow-up triggers

Verification
- `yarn tsc --noEmit`
- `yarn fast-build`

Summary by CodeRabbit
New Features
Configuration
Bug Fixes