
feat(reply): support experimental tool-call replies #67

Open
CookSleep wants to merge 1 commit into ChatLunaLab:main from CookSleep:feat/experimental-tool-call-reply

Conversation

@CookSleep (Member) commented Apr 3, 2026

Summary

  • Add an experimental tool-call reply mode to chatluna-character, letting the model handle status updates, message sending, and follow-up trigger scheduling through the character_reply tool
  • Adjust pending-message handling during tool calls and the final reply protocol, keeping this change separate from the existing wait-aggregation PR
  • Document the default preset and the reply-tool configuration options, and add a dedicated tool-call preset file
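The character_reply tool described above can be pictured roughly as follows. This is a hedged sketch: the field names (status, messages, next_reply) and the validator are inferred from the PR description, not taken from the plugin's actual schema.

```typescript
// Hypothetical shape of the character_reply tool input, inferred from the
// PR summary (status update + messages + next-reply scheduling).
interface CharacterReplyInput {
    status?: string                      // updated character status text
    messages: { text?: string; image?: string; sticker?: string }[]
    next_reply?: {
        type: 'message_from_user' | 'time'
        user_id?: string                 // "all" would act as a wildcard
        seconds?: number
        max_wait_seconds?: number
    }
}

// Minimal validation a tool handler might run before acting on the call.
function validateCharacterReply(input: CharacterReplyInput): string[] {
    const errors: string[] = []
    if (!input.messages?.length) errors.push('at least one message is required')
    if (input.next_reply) {
        const nr = input.next_reply
        if (nr.type === 'message_from_user' && !nr.user_id)
            errors.push('message_from_user requires user_id')
        if (nr.type === 'time' && nr.seconds == null)
            errors.push('time trigger requires seconds')
    }
    return errors
}
```

The real tool registration in src/plugins/chat.ts carries more state; this only illustrates the contract the model is asked to follow.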

Verification

  • yarn tsc --noEmit
  • yarn fast-build

Summary by CodeRabbit

  • New Features

    • Experimental tool-call reply mode for structured response handling (requires tool calling enabled)
    • Video message output support
    • Configurable mute keywords for automatic message filtering
  • Configuration

    • New "CHARACTER(工具调用)" preset with enhanced tool-call scenario support and system prompts
    • Updated next-reply condition syntax using attributes (type, user_id, seconds, max_wait_seconds) for improved scheduling flexibility
  • Bug Fixes

    • Improved sticker/image distinction and serialization
    • Enhanced message queue handling for pending message consumption


coderabbitai bot commented Apr 3, 2026

📝 Walkthrough


Added experimental tool-call reply mode via experimentalToolCallReply config flag, enabling structured character-reply tool calls that parse into status/message/next-reply state. Updated streaming pipeline to emit toolCalls arrays, introduced pending-message buffering during tool execution, and extended rendering/send utilities for video support and OneBot integration.
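The pending-message buffering can be sketched as a small queue that captures messages only while a tool call is in flight. The class name PendingMessageQueue comes from the PR; the method names (start, offer, consume) are illustrative assumptions, and the real API in src/service/message.ts differs.

```typescript
// Sketch of the pending-message buffering: while a tool call executes,
// incoming messages are parked here instead of firing new triggers, then
// drained in one batch when the reply is finalized.
interface PendingMessage { content: string; triggerReason?: string }

class PendingMessageQueue {
    private buffer: PendingMessage[] = []
    private active = false

    start(): void {
        this.active = true
        this.buffer = []
    }

    // Returns true when the message was captured; the caller should then
    // skip the normal trigger/waiter path for it.
    offer(msg: PendingMessage): boolean {
        if (!this.active) return false
        this.buffer.push(msg)
        return true
    }

    // Drain everything buffered so far and stop capturing.
    consume(): PendingMessage[] {
        this.active = false
        const drained = this.buffer
        this.buffer = []
        return drained
    }
}
```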

Changes

Cohort / File(s) Summary
Configuration Presets
resources/presets/default-tool-call.yml, resources/presets/default.yml
New tool-call preset with system prompts and character interaction rules; updated default preset with revised <next_reply /> tag-based syntax (replacing string reason), removed mute block, and added video output examples.
Core Configuration & Types
src/config.ts, src/types.ts
Added experimentalToolCallReply config flag; extended CharacterReplyToolField interface for custom reply tool hooks, added toolCalls to streaming chunk types, and extended NextReplyPredicate with maxWaitSeconds field.
Chat Plugin Core Logic
src/plugins/chat.ts
Implemented tool-call-driven reply mode with character_reply tool parsing, pending-message queue (PendingMessageQueue) buffering/consumption, modified streaming to include tool calls, updated prepareMessages with user guidance injection, and integrated reply-tool lifecycle with config validation.
Message Collection Service
src/service/message.ts
Extended MessageCollector with pending-message tracking state, APIs for reply-tool registration, pending-message start/stop/consume, and modified releaseResponseLock to skip already-consumed messages.
Utility: Chain Streaming
src/utils/chain.ts
Extended createChatLunaChain to accept optional extraTools callback for dynamic tool injection, normalized chunk creation to always return content (empty string instead of undefined), and accumulate/emit toolCalls in intermediate/final chunks.
Utility: Elements & Rendering
src/utils/elements.ts, src/utils/render.ts, src/utils/index.ts
Renamed image token type from img to image, added video token support, added video element render handler, and re-exported formatMessageString utility.
Utility: Message Formatting & Sending
src/utils/messages.ts, src/utils/send.ts
Updated formatCompletionMessages to collect all leading system messages, changed media serialization (distinguish sticker vs. image, update audio/video formatting); added OneBot video send rule with file-url extraction and OneBot API integration for send_private_msg/send_group_msg.
Utility: Trigger Parsing
src/utils/triggers.ts
Updated extractNextReplyReasons to synthesize grouped reason tokens from <next_reply /> tag attributes (type, user_id, seconds, max_wait_seconds, group); extended token parsing and evaluation to support time_id with maxWaitSeconds short-circuit and lastMessageTimeByUserId guard logic.
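The attribute-based parsing summarized in the triggers row might look like the following sketch. The attribute names (type, user_id, seconds, max_wait_seconds) come from the PR summary; the token shape and wildcard handling are illustrative, not the actual src/utils/triggers.ts implementation.

```typescript
// Hedged sketch of turning <next_reply /> attributes into a reason token.
interface NextReplyAttrs {
    type: string
    user_id?: string
    seconds?: string
    max_wait_seconds?: string
}

interface ReasonToken {
    kind: 'user' | 'time' | 'any_user'
    userId?: string
    seconds?: number
    maxWaitSeconds?: number
}

function parseNextReplyAttrs(attrs: NextReplyAttrs): ReasonToken {
    if (attrs.type === 'message_from_user') {
        const userId = attrs.user_id?.trim() ?? ''
        // Treat "all" (case-insensitively) as a wildcard for any user,
        // rather than a literal user id.
        if (userId.toLowerCase() === 'all') return { kind: 'any_user' }
        return { kind: 'user', userId }
    }
    return {
        kind: 'time',
        seconds: Number(attrs.seconds ?? 0),
        maxWaitSeconds: attrs.max_wait_seconds
            ? Number(attrs.max_wait_seconds)
            : undefined
    }
}
```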

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant ChatPlugin as Chat Plugin<br/>(chat.ts)
    participant MessageQueue as Message Queue<br/>(PendingMessageQueue)
    participant Agent as Agent/LLM Chain<br/>(chain.ts)
    participant ToolExec as Tool Executor<br/>(character_reply)
    participant MessageCollector as Message Collector<br/>(message.ts)

    User->>ChatPlugin: Message arrives
    activate ChatPlugin
    ChatPlugin->>MessageQueue: Start pending messages
    Note over MessageQueue: Buffer incoming messages
    ChatPlugin->>Agent: Generate response with tools
    activate Agent
    loop Streaming
        Agent->>ToolExec: Execute character_reply tool
        ToolExec->>ToolExec: Parse status/message/next_reply
        ToolExec-->>Agent: Tool result
        Agent->>MessageQueue: Check pending messages
        alt Messages buffered
            MessageQueue-->>Agent: Latest trigger reason
            Agent->>Agent: Schedule next_reply/wake_up
        else No messages
            Agent->>Agent: Use existing reasons
        end
        Agent-->>ChatPlugin: Stream chunk with toolCalls
        ChatPlugin-->>User: Emit response segment
    end
    deactivate Agent
    ChatPlugin->>MessageCollector: Mark pending messages consumed
    MessageCollector->>MessageCollector: Release response lock
    activate MessageCollector
    MessageCollector->>MessageCollector: Skip already-consumed waiters
    MessageCollector-->>ChatPlugin: Proceed with next
    deactivate MessageCollector
    ChatPlugin-->>User: Response complete
    deactivate ChatPlugin

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • PR #27 — Modifies trigger/next-reply parsing and scheduling surface that overlaps with the token synthesis and reason extraction in this PR's trigger updates.
  • PR #38 — Introduces per-session messaging infrastructure and trigger/tool-call foundation that this PR builds upon for pending-message buffering and reply-tool integration.
  • PR #39 — Updates rendering and send pipeline utilities that this PR extends with video support and OneBot integration logic.

Suggested reviewers

  • dingyi222666

Poem

🐰 A rabbit hops through tool-call streams so bright,
With character replies that queue and flow just right,
Pending messages buffered, awaiting their cue,
Next-reply with tags, and video too! 🎬✨

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title clearly and specifically describes the main change, adding experimental tool-call reply support, which aligns with the core objective of introducing a new tool-call-driven reply mode.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@CookSleep force-pushed the feat/experimental-tool-call-reply branch from 2311b38 to 6552f4e on April 3, 2026 05:29

gemini-code-assist bot left a comment


Code Review

This pull request introduces an experimental tool-calling reply mechanism (experimentalToolCallReply), allowing models to manage state updates and message replies through structured tools rather than XML blocks. Key changes include the addition of a new tool-calling preset, a PendingMessageQueue to handle incoming messages during tool execution, and enhanced support for video and sticker elements across various adapters. The review feedback highlights critical issues regarding the potential loss of context when multiple tool calls occur in a single turn, the accidental removal of retry logic in the model response stream, and a type-checking risk that could lead to data loss in message chunks.

Comment on lines +1995 to +2000
lastResponseMessage =
    copyOfConfig.experimentalToolCallReply &&
    chunk.toolCalls?.length
        ? new AIMessage(chunk.responseContent)
        : chunk.responseMessage
await ctx.chatluna_character.broadcastOnBot(

Severity: high

When experimentalToolCallReply is enabled, lastResponseMessage is repeatedly overwritten inside the loop. If the model makes multiple tool calls within a single reply (producing several intermediate chunks), only the last chunk's content is recorded in lastResponseMessage and later stored in history (line 2039), so the earlier tool-call content is lost from the context of subsequent conversations. Consider accumulating each chunk's renderedContent and constructing the complete AIMessage for history once the loop ends.
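One way to implement the accumulation this comment asks for, as a sketch only: the names below follow the comment (renderedContent), while the real loop in chat.ts carries more state.

```typescript
// Accumulate rendered content across intermediate tool-call chunks so the
// history entry reflects the whole turn, not just the last chunk.
interface StreamChunk { renderedContent: string; toolCalls?: unknown[] }

function collectHistoryContent(chunks: StreamChunk[]): string {
    const parts: string[] = []
    for (const chunk of chunks) {
        if (chunk.renderedContent.trim().length > 0) {
            parts.push(chunk.renderedContent)
        }
    }
    // The joined text is what would back the final AIMessage stored in history.
    return parts.join('\n')
}
```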

Comment on lines 1488 to 1542
): AsyncGenerator<StreamedParsedResponseChunk> {
    for (let retryCount = 0; retryCount < 2; retryCount++) {
        if (signal?.aborted) return
        let emittedAny = false

        try {
            const lastMessage =
                completionMessages[completionMessages.length - 1]
            const historyMessages = completionMessages.slice(0, -1)

            const systemMessage =
                chain != null ? historyMessages.shift() : undefined

            if (chain) {
                for await (const responseChunk of streamAgentResponseContents(
                    ctx,
                    chain,
                    session,
                    model,
                    config,
                    presetName,
                    systemMessage,
                    historyMessages,
                    lastMessage,
                    signal,
                    messageQueue,
                    onAgentEvent
                )) {
                    emittedAny = true

                    yield await parseResponseContent(
                        ctx,
                        session,
                        config,
                        responseChunk
                    )
                }

                return
            }

            const responseMessage = await model.invoke(
                completionMessages,
                createStreamConfig(session, model, presetName, signal)
            )
            const responseContent = getMessageContent(responseMessage.content)

            logger.debug(`model response:\n${responseContent}`)

            emittedAny = true

            yield await parseResponseContent(ctx, session, config, {
                responseMessage,
                responseContent,
                isIntermediate: false
            })
            return
        } catch (e) {
            if (signal?.aborted) return
            logger.error('model requests failed', e)
            if (emittedAny || retryCount === 1) return
            await sleep(3000)
        }
    }
}

Severity: medium

In the refactored streamModelResponse function, the original retry logic (retryCount < 2) was removed. As a result, the plugin no longer retries when a model request hits transient network jitter or an API rate limit, which reduces robustness. Consider restoring the retry mechanism to improve stability.
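A minimal retry wrapper in the spirit of the removed logic might look like this. withRetry is a hypothetical helper for illustration, not part of the plugin:

```typescript
// Retry a failing async operation a bounded number of times, sleeping
// between attempts; rethrow once the retry budget is exhausted.
async function withRetry<T>(
    attempt: () => Promise<T>,
    retries = 1,
    delayMs = 3000
): Promise<T> {
    for (let i = 0; ; i++) {
        try {
            return await attempt()
        } catch (e) {
            if (i >= retries) throw e
            await new Promise((resolve) => setTimeout(resolve, delayMs))
        }
    }
}
```

In the plugin's case the loop would additionally bail out early once any chunk was emitted or the abort signal fired, as in the excerpt above.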


 return new AIMessageChunk({
-    content
+    content: text.trim().length < 1 ? '' : content

Severity: medium

The logic text.trim().length < 1 ? '' : content is risky. If content is an array containing tool calls rather than a plain string, getMessageContent(content) may return an empty string, causing the entire content to be replaced with '' and losing the original tool-call information. Consider applying this treatment only when content is known to be a string.

Suggested change
content: text.trim().length < 1 ? '' : content
content: (typeof content === 'string' && text.trim().length < 1) ? '' : content


const chunkQueue = createAsyncChunkQueue<ChatLunaChainStreamChunk>()
let buf = ''
const toolCalls: ChatLunaChainStreamChunk['toolCalls'] = []

Severity: medium

The variable toolCalls is declared and populated (line 305) but never read or used. This is dead code; consider removing it to keep the code clean.


coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/plugins/filter.ts (1)

237-256: ⚠️ Potential issue | 🟠 Major

messageWait bursts are still skewing the activity trigger path.

Once Lines 814-825 put the group into wait mode, later messages in that buffered burst still go through updateIncomingMessageStats() and the activity-score recompute. That means the delayed wait reply does not actually isolate the burst from the tuned activity model, so the same burst can prime an extra activity-triggered reply right after it. You’ll want to decide wait mode before updating activity stats, and skip both accumulation and score recomputation while info.messageWait is active.

Based on learnings, activity scoring algorithm constants in src/plugins/filter.ts are tuned and changes have large behavioral impact.

Also applies to: 749-825, 864-881
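The reordering suggested above can be sketched like this, with the group state reduced to the two fields that matter here (the real state in filter.ts has many more, and its tuned activity-score constants are untouched):

```typescript
// Decide wait mode before touching activity stats: messages belonging to a
// buffered "wait" burst must not feed the activity trigger.
interface GroupInfo { messageWait: boolean; messageTimestamps: number[] }

function recordIncomingMessage(info: GroupInfo, now: number): boolean {
    // Skip both timestamp accumulation and (in the real code) the
    // activity-score recompute while the wait burst is active.
    if (info.messageWait) return false
    info.messageTimestamps.push(now)
    return true
}
```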

src/utils/messages.ts (1)

309-321: ⚠️ Potential issue | 🟠 Major

Don’t require chatluna_file_url for incoming audio/video context.

chatluna_file_url is the synthetic attr added by the reply renderers, but adapter-provided audio/video elements usually arrive with native src/url attrs only. With the current guard those attachments serialize to nothing, so the model loses the media reference in history.

🐛 Suggested fix
-            const url = element.attrs['chatluna_file_url']
+            const url =
+                element.attrs['chatluna_file_url'] ??
+                element.attrs['src'] ??
+                element.attrs['url']
             if (!url) {
                 continue
             }
src/utils/chain.ts (1)

208-217: ⚠️ Potential issue | 🟡 Minor

Reset extraRef on session-less calls.

Line 215 returns before touching extraRef, so a later invoke()/stream() without configurable.session reuses the previous session's extra tools.

Suggested fix
             if (!session) {
+                extraRef.value = []
                 return toolMask
             }

Also applies to: 236-237

🧹 Nitpick comments (1)
src/utils/messages.ts (1)

145-154: Avoid mutating the caller-owned messages array here.

This shift() loop destructively removes the leading system prompts from the input array. If the same array is reused for retries, logging, or another formatter pass, those messages are gone on the second use.

♻️ Non-mutating version
-    while (messages[0]?.getType() === 'system') {
-        const message = messages.shift()
+    let firstNonSystem = 0
+    while (messages[firstNonSystem]?.getType() === 'system') {
+        const message = messages[firstNonSystem]!
         systemMessages.push(message)
         currentTokens += await model.getNumTokens(
             getMessageContent(message.content)
         )
+        firstNonSystem++
     }
@@
-    for (let index = messages.length - 1; index >= 0; index--) {
+    for (let index = messages.length - 1; index >= firstNonSystem; index--) {
         const message = messages[index]
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/plugins/chat.ts`:
- Around line 316-343: The schema for the message object allows mixed sibling
fields (e.g., text together with image/sticker/at/face or parts), but
buildXmlMessage() currently returns on the first non-text branch and drops text;
update buildXmlMessage (and any helpers that render message objects) to preserve
and render mixed payloads by: detecting when text exists alongside other media
and either (a) emit the text first then emit the media nodes (or wrap each
sibling as separate parts in the same message stream) or (b) convert the
combined fields into a parts array preserving order before rendering; ensure
handling of the existing parts array remains compatible and that the renderer no
longer short-circuits after encountering the first non-text field.
- Around line 2009-2011: Currently stopPendingMessages(session) is called in the
finally block before the response lock is handed off, which allows a newer
trigger to enqueue a waiter and take the lock, breaking the “latest-wins”
behavior; move the call to service.stopPendingMessages(session) so it executes
after the lock handoff/replay logic (i.e., after releaseResponseLock()/the
replay of buffered triggers) so pending capture is stopped only once the old
turn has fully handed off the lock. Update both occurrences around the block
that contains releaseResponseLock() and the replay (the 2059–2069 region) to
call stopPendingMessages(session) after the replay/lock release instead of in
the finally prior to handoff.

In `@src/service/message.ts`:
- Around line 64-70: The willConsume flag on _activePendingMessages is set by
setPendingMessagesWillConsume but never honored when new messages arrive: update
the incoming-message handler (the code that currently calls active.append(...)
and returns early) to first check active.willConsume — if true, append and
return; if false, do not divert the message to active.append but let it continue
through the normal trigger/waiter flow; also ensure
setPendingMessagesWillConsume resets willConsume (e.g., after a final
character_reply) and that any queued messages are flushed back into the normal
processing path instead of remaining diverted.

In `@src/utils/render.ts`:
- Around line 134-145: In renders.video.process, the code rebuilds a fresh video
element from child text and drops any existing attributes (match.extra or
el.attrs) and overwrites src; update process (the function defined in
src/utils/render.ts) to collect and preserve existing attributes: start from
el.attrs (falling back to match.extra if available), ensure src is set from
el.attrs or the extracted url only if missing, copy through all other attributes
into the created video node (video.attrs), and still set chatluna_file_url as
before; reference getElementText, renders.video.process, match.extra, el.attrs,
and h('video') when making the change.

In `@src/utils/send.ts`:
- Around line 122-129: The OneBot path drops a leading quote because the split
function returns a segment with start pointing at a preceding 'quote' but the
OneBot send code only uses the video element; update the OneBot send logic (the
block handling part.elements and sending via OneBot) to check part.elements[0]
for type 'quote' and, when present, prepend a reply segment constructed from
that quote (e.g., create a segment { type: 'reply', id: quote.id } or the
project's equivalent) before the video segment so the quoted reference is
preserved; keep the existing split behavior but ensure the OneBot message
assembly includes the extracted reply segment whenever part.elements[0].type ===
'quote'.

In `@src/utils/triggers.ts`:
- Around line 30-34: The parser in triggers.ts currently turns user_id="all"
into token "id_all", which then looks up messageTimestampsByUserId['all']
instead of treating "all" as a wildcard; change the logic inside the
message_from_user branch (the code that extracts userId and assigns token) so
that if userId.trim().toLowerCase() === 'all' you do not set token to "id_all"
(either leave token undefined or set a designated wildcard token) so downstream
checks against messageTimestampsByUserId treat it as "any user"; apply the same
adjustment to the other occurrences referenced (the similar user_id parsing at
the blocks around the earlier lines 147-149 and later 275-277) so "all" is
handled case-insensitively and treated as a wildcard rather than a literal user
id.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8af5bb06-77f2-49af-9c95-11481da2934f

📥 Commits

Reviewing files that changed from the base of the PR and between 2cfa4fb and 2311b38.

📒 Files selected for processing (15)
  • resources/presets/default-tool-call.yml
  • resources/presets/default.yml
  • src/config.ts
  • src/plugins/chat.ts
  • src/plugins/filter.ts
  • src/service/message.ts
  • src/service/trigger.ts
  • src/types.ts
  • src/utils/chain.ts
  • src/utils/elements.ts
  • src/utils/index.ts
  • src/utils/messages.ts
  • src/utils/render.ts
  • src/utils/send.ts
  • src/utils/triggers.ts

Comment on lines +64 to +70
private _activePendingMessages: Record<
    string,
    {
        willConsume: boolean
        append: (message: Message, triggerReason?: string) => void
    }
> = {}

⚠️ Potential issue | 🟠 Major

willConsume is dead state right now.

setPendingMessagesWillConsume() updates active.willConsume, but Line 699 still appends every incoming message and returns early regardless. After a final character_reply, new messages are still diverted into the pending queue instead of falling back to the normal trigger/waiter flow.

Suggested fix
         const active = this._activePendingMessages[groupId]
         if (active) {
-            active.append(message, triggerReason)
-            return true
+            if (active.willConsume) {
+                active.append(message, triggerReason)
+                return true
+            }
         }

Also applies to: 146-154, 697-701


Comment on lines +134 to +145
renders.video = {
    parse: createMatch,
    render: (match) => [
        h('video', match.extra ?? {}, [h.text(match.content)])
    ],
    process: (el) => {
        const url = getElementText(el.children).trim()
        const video = h('video', { src: url })
        video.attrs['chatluna_file_url'] = url

        return [video]
    }

⚠️ Potential issue | 🟠 Major

Preserve src and passthrough attrs when normalizing videos.

This process() step rebuilds a fresh <video> from child text only. Any attrs carried through match.extra are dropped, and an already-normalized video element with src on el.attrs gets rewritten as src="".

🐛 Suggested fix
     renders.video = {
         parse: createMatch,
         render: (match) => [
             h('video', match.extra ?? {}, [h.text(match.content)])
         ],
         process: (el) => {
-            const url = getElementText(el.children).trim()
-            const video = h('video', { src: url })
+            const url = String(
+                getElementText(el.children).trim() ||
+                    el.attrs['src'] ||
+                    ''
+            )
+            const video = h('video', { ...el.attrs, src: url })
             video.attrs['chatluna_file_url'] = url

             return [video]
         }
     }

Comment on lines +122 to +129
split: (elements, idx, start) => ({
    type: 'video',
    start:
        idx > start && elements[idx - 1]?.type === 'quote'
            ? idx - 1
            : idx,
    end: idx + 1
}),

⚠️ Potential issue | 🟡 Minor



OneBot video sends drop the leading quote element.

The split function includes a preceding quote element (lines 122–129), but the OneBot send path (lines 146–169) constructs a message with only the video segment. When a quoted video is sent on OneBot, the quote reference is lost.

Apply the suggested fix to extract and include a reply segment if part.elements[0] is a quote:

Suggested fix
             const bot = session.bot as OneBotBot<Context>
             const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+            const message: OneBotMessageSegment[] = []
+
+            if (part.elements[0]?.type === 'quote') {
+                message.push({
+                    type: 'reply',
+                    data: {
+                        id: String(part.elements[0].attrs.id ?? '')
+                    }
+                })
+            }
+
+            message.push({
+                type: 'video',
+                data: { file }
+            })
+
             const data = (await bot.internal._request(
                 action,
                 session.isDirect
                     ? {
                           user_id: Number(session.userId),
-                          message: [
-                              {
-                                  type: 'video',
-                                  data: { file }
-                              }
-                          ]
+                          message
                       }
                     : {
                           group_id: Number(session.guildId),
-                          message: [
-                              {
-                                  type: 'video',
-                                  data: { file }
-                              }
-                          ]
+                          message
                       }
             )) as OneBotSendMessageResponse
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/send.ts` around lines 122 - 129, The OneBot path drops a leading
quote because the split function returns a segment with start pointing at a
preceding 'quote' but the OneBot send code only uses the video element; update
the OneBot send logic (the block handling part.elements and sending via OneBot)
to check part.elements[0] for type 'quote' and, when present, prepend a reply
segment constructed from that quote (e.g., create a segment { type: 'reply', id:
quote.id } or the project's equivalent) before the video segment so the quoted
reference is preserved; keep the existing split behavior but ensure the OneBot
message assembly includes the extracted reply segment whenever
part.elements[0].type === 'quote'.

Comment on lines +30 to +34
if (type === 'message_from_user') {
const userId = attributes.match(/\buser_id\s*=\s*['"]([^'"]+)['"]/i)?.[1]
if (userId?.trim()) {
token = `id_${userId.trim()}`
}

⚠️ Potential issue | 🟠 Major

message_from_user with user_id="all" never fires.

The new tag parser turns this into id_all, but Lines 275-277 treat all as a literal user ID and look up messageTimestampsByUserId['all']. The preset/tool docs now advertise all as “any user”, so this path silently does nothing.

Suggested fix
         const lastMessageTimeByUserId =
             info.messageTimestampsByUserId?.[predicate.userId] ?? 0
-        return lastMessageTimeByUserId >= sentAt
+        if (predicate.userId === 'all') {
+            return info.lastUserMessageTime >= sentAt
+        }
+        return lastMessageTimeByUserId >= sentAt

Also applies to: 147-149, 275-277

🧰 Tools
🪛 ESLint

[error] 31-31: Replace /\buser_id\s*=\s*['"]([^'"]+)['"]/i with ⏎················/\buser_id\s*=\s*['"]([^'"]+)['"]/i⏎············

(prettier/prettier)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/triggers.ts` around lines 30 - 34, The parser in triggers.ts
currently turns user_id="all" into token "id_all", which then looks up
messageTimestampsByUserId['all'] instead of treating "all" as a wildcard; change
the logic inside the message_from_user branch (the code that extracts userId and
assigns token) so that if userId.trim().toLowerCase() === 'all' you do not set
token to "id_all" (either leave token undefined or set a designated wildcard
token) so downstream checks against messageTimestampsByUserId treat it as "any
user"; apply the same adjustment to the other occurrences referenced (the
similar user_id parsing at the blocks around the earlier lines 147-149 and later
275-277) so "all" is handled case-insensitively and treated as a wildcard rather
than a literal user id.
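To make the intended canonicalization concrete, here is a minimal sketch. `ANY_USER_TOKEN` and `tokenForMessageFromUser` are illustrative names, not the repository's actual API; the real parser in src/utils/triggers.ts may differ in shape.

```typescript
// Hypothetical wildcard sentinel shared by all token producers
const ANY_USER_TOKEN = 'any_user'

function tokenForMessageFromUser(attributes: string): string | undefined {
    const userId = attributes
        .match(/\buser_id\s*=\s*['"]([^'"]+)['"]/i)?.[1]
        ?.trim()
    if (!userId) return undefined
    // Treat "all" (case-insensitively) as a wildcard, not a literal user id
    if (userId.toLowerCase() === 'all') return ANY_USER_TOKEN
    return `id_${userId}`
}
```

With this shape, the evaluator can special-case `ANY_USER_TOKEN` against `lastUserMessageTime` instead of looking up a nonexistent `'all'` key.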


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (5)
src/utils/send.ts (1)

146-169: ⚠️ Potential issue | 🟡 Minor

OneBot video sends drop the leading quote element.

The split function includes a preceding quote element (lines 125–127), but the OneBot send path constructs a message with only the video segment. When a quoted video is sent on OneBot, the quote reference is lost.

The correct pattern is demonstrated in lines 235–243 of this same file (the OneBot image handling): check if part.elements[0]?.type === 'quote' and prepend a reply segment before the video segment.

🛡️ Proposed fix to preserve quote
             const bot = session.bot as OneBotBot<Context>
             const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+            const message: OneBotMessageSegment[] = []
+
+            if (part.elements[0]?.type === 'quote') {
+                message.push({
+                    type: 'reply',
+                    data: {
+                        id: String(part.elements[0].attrs.id ?? '')
+                    }
+                })
+            }
+
+            message.push({
+                type: 'video',
+                data: { file }
+            })
+
             const data = (await bot.internal._request(
                 action,
                 session.isDirect
                     ? {
                           user_id: Number(session.userId),
-                          message: [
-                              {
-                                  type: 'video',
-                                  data: { file }
-                              }
-                          ]
+                          message
                       }
                     : {
                           group_id: Number(session.guildId),
-                          message: [
-                              {
-                                  type: 'video',
-                                  data: { file }
-                              }
-                          ]
+                          message
                       }
             )) as OneBotSendMessageResponse
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/send.ts` around lines 146 - 169, The OneBot video send path (in the
block building `action`/`data` using `bot.internal._request` for
`send_private_msg`/`send_group_msg`) omits a leading `quote` element so quoted
videos lose their reply reference; modify the construction of the message
payload to detect a leading quote (check `part.elements[0]?.type === 'quote'`)
and prepend a `reply`/quote segment before the `video` segment (same pattern
used in the OneBot image handling) so the resulting `message` array includes the
quote then the video for both private and group sends.
src/plugins/chat.ts (2)

2009-2011: ⚠️ Potential issue | 🟠 Major

Stop pending capture after the lock handoff.

Turning it off before releaseResponseLock() opens a window where a newer trigger can queue as a normal waiter, and the buffered-trigger replay below can then overtake it. That breaks latest-wins ordering.

Based on learnings: The response lock in service/message.ts uses a "latest-wins" strategy; only the most recent waiter is resolved when multiple requests queue.

Also applies to: 2062-2069

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/chat.ts` around lines 2009 - 2011, The call to
service.stopPendingMessages(session) must happen after releasing the response
lock to avoid a race where a newer trigger becomes a normal waiter and gets
overtaken; move the stopPendingMessages(session) call to immediately after the
releaseResponseLock(...) invocation in this module (and the duplicate occurrence
around the block at the other location noted), ensuring both occurrences are
adjusted so stopPendingMessages is executed only after releaseResponseLock
completes.

636-687: ⚠️ Potential issue | 🟠 Major

Mixed character_reply.messages[] payloads still drop text.

This renderer still returns on the first non-text field, so a valid tool payload like { text: '说明', image: 'https://...' } becomes image-only. That also makes the new text+image example in resources/presets/default.yml impossible to reproduce through the tool path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/chat.ts` around lines 636 - 687, Current renderer returns early
on the first non-text field (e.g., checking args.image/args.sticker/etc.) which
drops any accompanying args.text; change the rendering to accumulate parts into
a single message instead of returning immediately: inside the handler that
inspects args (references: args, quote, escape(), isHttpUrl(), and the checks
for args.at, args.face, args.sticker, args.image, args.file, args.video,
args.markdown, args.voice), build an array of XML fragments (validating URLs
with isHttpUrl where used and using escape for values) and push text content
(escape(args.text)) in addition to other tags, then return `<message${quote}>` +
joined fragments + `</message>` so mixed payloads (e.g., text+image) are
preserved.
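The accumulate-instead-of-return-early shape can be sketched as follows. `escapeXml`, `ReplyArgs`, and `renderToolMessage` are simplified stand-ins for the real helpers and tool schema in src/plugins/chat.ts, not the project's actual code.

```typescript
const escapeXml = (s: string) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')

interface ReplyArgs {
    text?: string
    image?: string
    sticker?: string
}

function renderToolMessage(args: ReplyArgs, quote = ''): string {
    const parts: string[] = []
    // Push every present field instead of returning on the first match,
    // so mixed payloads like { text, image } keep both fragments
    if (args.text) parts.push(escapeXml(args.text))
    if (args.image) parts.push(`<image>${escapeXml(args.image)}</image>`)
    if (args.sticker) parts.push(`<sticker>${escapeXml(args.sticker)}</sticker>`)
    return `<message${quote}>${parts.join('')}</message>`
}
```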
src/service/message.ts (1)

697-701: ⚠️ Potential issue | 🟠 Major

Honor willConsume before diverting messages.

setPendingMessagesWillConsume() is a no-op as long as every incoming message is still appended and returned here. After a final character_reply, newer triggers never fall back to the normal waiter/cooldown path.

Suggested fix
         const active = this._activePendingMessages[groupId]
         if (active) {
-            active.append(message, triggerReason)
-            return true
+            if (active.willConsume) {
+                active.append(message, triggerReason)
+                return true
+            }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/service/message.ts` around lines 697 - 701, The current logic always
diverts messages into this._activePendingMessages[groupId] via active.append,
which prevents honoring the willConsume flag set by
setPendingMessagesWillConsume; update the branch so you check the pending
object's willConsume (or the flag set by setPendingMessagesWillConsume) before
calling active.append — only call active.append(message, triggerReason) and
return true when active exists AND active.willConsume is false; when willConsume
is true, do not append and let the code fall through to the normal
waiter/cooldown path (or return false as appropriate). Ensure the flag name used
matches the setter/getter from setPendingMessagesWillConsume and that you
reference the same object on this._activePendingMessages[groupId].
src/utils/triggers.ts (1)

30-34: ⚠️ Potential issue | 🟠 Major

message_from_user user_id="all" still encodes as a literal ID.

This emits id_all; extractNextReplyReasonsFromTool() emits the same token, and the evaluator treats it as the real user id "all". The documented "any user" condition therefore never fires unless evaluation special-cases this token.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/triggers.ts` around lines 30 - 34, The current code encodes
message_from_user user_id="all" as the literal token `id_all`, which prevents
the documented "any user" condition from matching; change the mapping so that
when attributes.match(...)[1].trim().toLowerCase() === 'all' you produce a
canonical wildcard token (e.g. 'any_user' or a shared sentinel) instead of
`id_all`, updating the assignment to token in this block and making the same
canonicalization in extractNextReplyReasonsFromTool so both producers emit the
identical wildcard token rather than the literal "all" id.
🧹 Nitpick comments (1)
src/utils/send.ts (1)

7-9: Inline the 1-line helper per coding guidelines.

This helper is only called once (line 230). As per coding guidelines, "Do NOT create extra functions for short logic; if a function body would be 1-5 lines, inline it at the call site."

♻️ Inline the helper
-function isOneBotImageElement(el: h) {
-    return el.type === 'img'
-}
-
 export interface SendPart {

Then at line 230:

         if (
             session.platform === 'onebot' &&
             part.type === 'default' &&
-            part.elements.some(isOneBotImageElement)
+            part.elements.some((el) => el.type === 'img')
         ) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/send.ts` around lines 7 - 9, Remove the one-line helper
isOneBotImageElement and replace its single call with the expression it returns;
i.e., inline the predicate el.type === 'img' directly at the call site where
isOneBotImageElement(...) is used, then delete the isOneBotImageElement function
declaration. Ensure the inlined expression uses the same local variable name as
the original call so types flow unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/plugins/chat.ts`:
- Around line 1538-1541: The catch block around the model request should not
swallow errors; instead, if the request fails and signal?.aborted is false, log
the error, sleep 3 seconds, retry the model request once, and if the retry also
fails propagate the error (rethrow) so the caller can handle it; update the
try/catch around the model request (the block using logger.error and
signal?.aborted) to perform this sleep-and-retry-once logic and only exit
silently when signal?.aborted is true.
- Around line 1755-1768: The current checks in the startup validation loop over
config.privateConfigs and config.configs reject any falsy toolCalling values (so
undefined/inherit is disallowed); change the checks to only reject explicit
false by testing cfg.toolCalling === false. Update the two locations that throw
errors (the loops referencing config.privateConfigs and config.configs) so they
allow undefined and only throw when cfg.toolCalling is strictly false; this
preserves inheritance handled later in getConfigAndPresetForGuild while
enforcing that experimentalToolCallReply requires toolCalling not be explicitly
set to false.

In `@src/utils/messages.ts`:
- Around line 263-267: The image/sticker branches insert imageUrl raw into
XML-like tags and must escape XML metacharacters; wrap imageUrl with the same
escaping utility used by the file/voice/video branches (e.g., escapeXml or the
project's URL/XML escape helper) before pushing into filteredBuffer, and apply
the same fix to the other occurrence noted (the branch around the second
occurrence).

In `@src/utils/send.ts`:
- Line 147: Prettier flagged formatting issues: convert the inline ternary that
sets action (const action = session.isDirect ? 'send_private_msg' :
'send_group_msg') and the other ternary at the later occurrence into multi-line
ternary expressions so each branch is on its own line, and collapse the chained
nullish-coalescing expressions spanning lines 277–280 into a single line (keep
the same operands/order) so the `??` chain is not broken across lines; locate
and update the occurrences by editing the assignment to action and the other
ternary expression, and the `??` chain expression in send.ts to match Prettier’s
expected multiline ternary and single-line nullish-coalescing formats.

---

Duplicate comments:
In `@src/plugins/chat.ts`:
- Around line 2009-2011: The call to service.stopPendingMessages(session) must
happen after releasing the response lock to avoid a race where a newer trigger
becomes a normal waiter and gets overtaken; move the
stopPendingMessages(session) call to immediately after the
releaseResponseLock(...) invocation in this module (and the duplicate occurrence
around the block at the other location noted), ensuring both occurrences are
adjusted so stopPendingMessages is executed only after releaseResponseLock
completes.
- Around line 636-687: Current renderer returns early on the first non-text
field (e.g., checking args.image/args.sticker/etc.) which drops any accompanying
args.text; change the rendering to accumulate parts into a single message
instead of returning immediately: inside the handler that inspects args
(references: args, quote, escape(), isHttpUrl(), and the checks for args.at,
args.face, args.sticker, args.image, args.file, args.video, args.markdown,
args.voice), build an array of XML fragments (validating URLs with isHttpUrl
where used and using escape for values) and push text content
(escape(args.text)) in addition to other tags, then return `<message${quote}>` +
joined fragments + `</message>` so mixed payloads (e.g., text+image) are
preserved.

In `@src/service/message.ts`:
- Around line 697-701: The current logic always diverts messages into
this._activePendingMessages[groupId] via active.append, which prevents honoring
the willConsume flag set by setPendingMessagesWillConsume; update the branch so
you check the pending object's willConsume (or the flag set by
setPendingMessagesWillConsume) before calling active.append — only call
active.append(message, triggerReason) and return true when active exists AND
active.willConsume is true; when willConsume is false, do not append and let the
code fall through to the normal waiter/cooldown path (or return false as
appropriate). Ensure the flag name used matches the setter/getter from
setPendingMessagesWillConsume and that you reference the same object on
this._activePendingMessages[groupId].

In `@src/utils/send.ts`:
- Around line 146-169: The OneBot video send path (in the block building
`action`/`data` using `bot.internal._request` for
`send_private_msg`/`send_group_msg`) omits a leading `quote` element so quoted
videos lose their reply reference; modify the construction of the message
payload to detect a leading quote (check `part.elements[0]?.type === 'quote'`)
and prepend a `reply`/quote segment before the `video` segment (same pattern
used in the OneBot image handling) so the resulting `message` array includes the
quote then the video for both private and group sends.

In `@src/utils/triggers.ts`:
- Around line 30-34: The current code encodes message_from_user user_id="all" as
the literal token `id_all`, which prevents the documented "any user" condition
from matching; change the mapping so that when
attributes.match(...)[1].trim().toLowerCase() === 'all' you produce a canonical
wildcard token (e.g. 'any_user' or a shared sentinel) instead of `id_all`,
updating the assignment to token in this block and making the same
canonicalization in extractNextReplyReasonsFromTool so both producers emit the
identical wildcard token rather than the literal "all" id.

---

Nitpick comments:
In `@src/utils/send.ts`:
- Around line 7-9: Remove the one-line helper isOneBotImageElement and replace
its single call with the expression it returns; i.e., inline the predicate
el.type === 'img' directly at the call site where isOneBotImageElement(...) is
used, then delete the isOneBotImageElement function declaration. Ensure the
inlined expression uses the same local variable name as the original call so
types flow unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8d961780-2e80-4638-9e3a-0406b2ca2df1

📥 Commits

Reviewing files that changed from the base of the PR and between 2311b38 and 6552f4e.

📒 Files selected for processing (13)
  • resources/presets/default-tool-call.yml
  • resources/presets/default.yml
  • src/config.ts
  • src/plugins/chat.ts
  • src/service/message.ts
  • src/types.ts
  • src/utils/chain.ts
  • src/utils/elements.ts
  • src/utils/index.ts
  • src/utils/messages.ts
  • src/utils/render.ts
  • src/utils/send.ts
  • src/utils/triggers.ts
✅ Files skipped from review due to trivial changes (3)
  • src/utils/elements.ts
  • resources/presets/default-tool-call.yml
  • src/utils/render.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/utils/index.ts
  • src/types.ts
  • src/config.ts

Comment on lines +1538 to 1541
} catch (e) {
if (signal?.aborted) return
logger.error('model requests failed', e)
}

⚠️ Potential issue | 🟠 Major

Don't turn generation failures into silent success.

This catch logs and exits the generator, so the caller continues as if the turn finished normally. Mid-stream failures can therefore persist partial progress/state and skip the required retry.

As per coding guidelines, "For streaming retry: catch, sleep 3s, retry once, then propagate."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/chat.ts` around lines 1538 - 1541, The catch block around the
model request should not swallow errors; instead, if the request fails and
signal?.aborted is false, log the error, sleep 3 seconds, retry the model
request once, and if the retry also fails propagate the error (rethrow) so the
caller can handle it; update the try/catch around the model request (the block
using logger.error and signal?.aborted) to perform this sleep-and-retry-once
logic and only exit silently when signal?.aborted is true.
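The sleep-and-retry-once shape from the cited guideline can be sketched like this. `requestWithRetry` and `requestModel` are hypothetical stand-ins, not the actual call in src/plugins/chat.ts.

```typescript
const sleep = (ms: number) =>
    new Promise<void>((resolve) => setTimeout(resolve, ms))

async function requestWithRetry<T>(
    requestModel: () => Promise<T>,
    signal?: AbortSignal,
    retryDelayMs = 3000
): Promise<T | undefined> {
    try {
        return await requestModel()
    } catch (e) {
        if (signal?.aborted) return undefined // only abort exits silently
        await sleep(retryDelayMs)
        return await requestModel() // a second failure propagates to the caller
    }
}
```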

Comment on lines +1755 to +1768
for (const [id, cfg] of Object.entries(config.privateConfigs)) {
if (!cfg.toolCalling) {
throw new Error(
`experimentalToolCallReply 依赖 toolCalling,privateConfigs.${id}.toolCalling 不能关闭。`
)
}
}

for (const [id, cfg] of Object.entries(config.configs)) {
if (!cfg.toolCalling) {
throw new Error(
`experimentalToolCallReply 依赖 toolCalling,configs.${id}.toolCalling 不能关闭。`
)
}
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Only reject explicit toolCalling: false overrides.

These per-scope configs are merged onto the globals later in getConfigAndPresetForGuild(), so undefined means "inherit". The current falsy check also rejects omitted values and can fail startup for valid partial overrides.

Suggested fix
         for (const [id, cfg] of Object.entries(config.privateConfigs)) {
-            if (!cfg.toolCalling) {
+            if (cfg.toolCalling === false) {
                 throw new Error(
                     `experimentalToolCallReply 依赖 toolCalling,privateConfigs.${id}.toolCalling 不能关闭。`
                 )
             }
         }
 
         for (const [id, cfg] of Object.entries(config.configs)) {
-            if (!cfg.toolCalling) {
+            if (cfg.toolCalling === false) {
                 throw new Error(
                     `experimentalToolCallReply 依赖 toolCalling,configs.${id}.toolCalling 不能关闭。`
                 )
             }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/chat.ts` around lines 1755 - 1768, The current checks in the
startup validation loop over config.privateConfigs and config.configs reject any
falsy toolCalling values (so undefined/inherit is disallowed); change the checks
to only reject explicit false by testing cfg.toolCalling === false. Update the
two locations that throw errors (the loops referencing config.privateConfigs and
config.configs) so they allow undefined and only throw when cfg.toolCalling is
strictly false; this preserves inheritance handled later in
getConfigAndPresetForGuild while enforcing that experimentalToolCallReply
requires toolCalling not be explicitly set to false.

Comment on lines +263 to +267
filteredBuffer.push(
sticker
? `<sticker>${imageUrl}</sticker>`
: `<image>${imageUrl}</image>`
)

⚠️ Potential issue | 🟡 Minor

Escape media URLs before embedding them in XML-like tags.

imageUrl/url can contain & and other XML metacharacters, especially on signed or storage-backed URLs. These branches now emit them raw, while the file/voice/video branches already escape their URLs.

Suggested fix
             if (imageUrl) {
                 filteredBuffer.push(
                     sticker
-                        ? `<sticker>${imageUrl}</sticker>`
-                        : `<image>${imageUrl}</image>`
+                        ? `<sticker>${escapeXml(imageUrl)}</sticker>`
+                        : `<image>${escapeXml(imageUrl)}</image>`
                 )
             } else if (matchedImage) {
                 filteredBuffer.push(matchedImage.formatted)
                 usedImages.add(matchedImage.formatted)
@@
-        const formatted = hash ? `[image:${hash}]` : `<image>${url}</image>`
+        const formatted = hash
+            ? `[image:${hash}]`
+            : `<image>${escapeXml(url)}</image>`

Also applies to: 375-375

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/messages.ts` around lines 263 - 267, The image/sticker branches
insert imageUrl raw into XML-like tags and must escape XML metacharacters; wrap
imageUrl with the same escaping utility used by the file/voice/video branches
(e.g., escapeXml or the project's URL/XML escape helper) before pushing into
filteredBuffer, and apply the same fix to the other occurrence noted (the branch
around the second occurrence).
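For reference, a minimal `escapeXml` consistent with what the suggested fix above assumes; the project's actual escaping helper may differ in name and entity coverage.

```typescript
function escapeXml(value: string): string {
    // Ampersand first, so already-produced entities are not double-escaped
    return value
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&apos;')
}
```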

}

const bot = session.bot as OneBotBot<Context>
const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'

⚠️ Potential issue | 🟡 Minor

Fix Prettier formatting violations.

ESLint/Prettier flagged three formatting issues:

  • Lines 147 & 302: Ternary expressions should be multi-line.
  • Lines 277–280: Chained ?? operators should be on one line.
🎨 Apply Prettier fixes
             const bot = session.bot as OneBotBot<Context>
-            const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+            const action = session.isDirect
+                ? 'send_private_msg'
+                : 'send_group_msg'
                 if (el.type === 'img') {
                     const file = String(
-                        el.attrs.src ??
-                            el.attrs.url ??
-                            el.attrs.imageUrl ??
-                            ''
+                        el.attrs.src ?? el.attrs.url ?? el.attrs.imageUrl ?? ''
                     )
             const bot = session.bot as OneBotBot<Context>
-            const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
+            const action = session.isDirect
+                ? 'send_private_msg'
+                : 'send_group_msg'

Also applies to: 277-280, 302-302

🧰 Tools
🪛 ESLint

[error] 147-147: Replace ·?·'send_private_msg' with ⏎················?·'send_private_msg'⏎···············

(prettier/prettier)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/send.ts` at line 147, Prettier flagged formatting issues: convert
the inline ternary that sets action (const action = session.isDirect ?
'send_private_msg' : 'send_group_msg') and the other ternary at the later
occurrence into multi-line ternary expressions so each branch is on its own
line, and collapse the chained nullish-coalescing expressions spanning lines
277–280 into a single line (keep the same operands/order) so the `??` chain is
not broken across lines; locate and update the occurrences by editing the
assignment to action and the other ternary expression, and the `??` chain
expression in send.ts to match Prettier’s expected multiline ternary and
single-line nullish-coalescing formats.
