
feat: add message-wait aggregation triggering for group chats #65

Open
CookSleep wants to merge 3 commits into ChatLunaLab:main from CookSleep:feat/group-wait-trigger

Conversation

@CookSleep
Member

@CookSleep CookSleep commented Apr 1, 2026

Summary

  • Add a new "message wait time (seconds)" option to the global and per-group chat configuration, and switch the fixed-interval trigger to silence-based wait aggregation when the interval is 0.
  • Let group messages containing only an @ or a nickname also enter wait aggregation when the fixed interval is non-zero, so the bot does not trigger prematurely while a user is still sending follow-up messages.
  • Disable the group activity-score trigger and its statistics when the fixed-interval trigger is 0, to keep it from interfering with the wait-aggregation logic.
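The silence-wait behavior described above can be sketched as a small aggregator (a hypothetical illustration, not the plugin's actual implementation; the name SilenceAggregator is invented): each message resets a deadline, and a scheduler tick flushes the whole batch once the group has been quiet for the configured wait time.

```typescript
// Hypothetical sketch of silence-based wait aggregation; names are invented
// for illustration and do not come from the PR's code.
class SilenceAggregator {
    private buffer: string[] = []
    private deadline = Infinity

    constructor(private waitMs: number) {}

    // Record an incoming message at time `now` (ms) and reset the window.
    push(message: string, now: number) {
        this.buffer.push(message)
        this.deadline = now + this.waitMs
    }

    // Scheduler tick: once the group has been silent past the deadline,
    // return the aggregated batch and clear the buffer.
    poll(now: number): string[] | undefined {
        if (this.buffer.length === 0 || now < this.deadline) return undefined
        const batch = this.buffer
        this.buffer = []
        this.deadline = Infinity
        return batch
    }
}
```

With a 10-second wait, two messages sent 3 seconds apart are delivered as one batch roughly 10 seconds after the last one.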

Summary by CodeRabbit

  • New Features

    • New messageWaitTime setting (default 10s) to control aggregated response timing.
    • Experimental tool-call reply flow: models can return structured reply/tool outputs that drive follow-ups.
    • Video message support (send/receive) and improved image/sticker handling.
  • Improvements

    • Pending-message capture during streaming to accumulate and synthesize late inputs for the current turn.
    • Refined trigger/scheduling logic to reduce spurious immediate replies and honor direct-trigger semantics.
  • Documentation

    • Updated default presets and instruction examples, including next-reply schema changes.

@coderabbitai

coderabbitai bot commented Apr 1, 2026

📝 Walkthrough

Walkthrough

Adds experimental tool-call reply flow, pending-message capture/consumption, and expanded media/tag handling; introduces per-group message-wait aggregation (messageWait + messageWaitTime), new reply-tool field types and tool-call propagation through streaming, plus OneBot video/image send handling and related APIs.

Changes

Cohort / File(s) — Summary

  • Configuration (src/config.ts): Added top-level experimentalToolCallReply flag; added/updated the messageWaitTime schema and clarified messageInterval/messageWaitTime semantics across global/group/guild/private config objects.
  • Trigger / filter logic (src/plugins/filter.ts): Introduced info.messageWait handling, replaced the zero-interval finder with findMessageWaitTriggerReason, added isOnlyDirectTrigger gating, and adjusted scheduler/pending-work checks to use messageWait.
  • Chat plugin / agent flow (src/plugins/chat.ts): Added experimental tool-call reply integration, PendingMessageQueue, reply-tool creation/handling, streaming tool-call wiring, pending-message capture/consumption, and config validation for tool calling.
  • Message service / collector (src/service/message.ts): Added CharacterReplyToolField support, APIs to register reply-tool fields, and a per-session pending-message lifecycle (start/setWillConsume/stop/markConsumed) plus response-waiter messageKey handling.
  • Types / schema (src/types.ts): Added the CharacterReplyToolField interface, messageWaitTime in GuildConfig/PrivateConfig, optional messageWait in GroupInfo, maxWaitSeconds in time_id predicates, and toolCalls in streamed chunk types.
  • Trigger store init (src/service/trigger.ts): createDefaultGroupInfo now initializes messageWait: false.
  • Chain / LLM utils (src/utils/chain.ts): createChatLunaChain now accepts extraTools?: (session) => StructuredTool[]; streams include toolCalls accumulation; final chunk emission simplified.
  • Message formatting / rendering (src/utils/messages.ts, src/utils/elements.ts, src/utils/render.ts): Added/exported formatMessageString; standardized image/sticker handling; added video element support with rendering/processing changes; updated mapping and token normalization.
  • Send pipeline / OneBot (src/utils/send.ts): Added OneBot-compatible image/video send logic, a new video send rule, element-to-OneBot segment mapping, and the helper isOneBotImageElement.
  • Stream/chain helpers barrel (src/utils/index.ts): Re-exported formatMessageString.
  • Presets / docs (resources/presets/default-tool-call.yml, resources/presets/default.yml): Added a new default tool-call preset file; updated default preset instructions, <next_reply /> schema examples, and output-format docs.
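The toolCalls accumulation mentioned for src/utils/chain.ts typically merges partial streamed chunks by index; the sketch below illustrates that pattern under assumed chunk shapes (the ToolCallChunk type is invented for this example, not the PR's actual type).

```typescript
// Illustrative accumulation of streamed tool-call chunks; the ToolCallChunk
// shape is assumed for this sketch and is not the PR's actual type.
interface ToolCallChunk {
    index: number
    name?: string
    args?: string
}

function accumulateToolCalls(
    chunks: ToolCallChunk[]
): { name: string; args: string }[] {
    const byIndex = new Map<number, { name: string; args: string }>()
    for (const chunk of chunks) {
        const current = byIndex.get(chunk.index) ?? { name: '', args: '' }
        // The model usually sends the name once, then argument fragments.
        if (chunk.name) current.name = chunk.name
        if (chunk.args) current.args += chunk.args
        byIndex.set(chunk.index, current)
    }
    return [...byIndex.keys()]
        .sort((a, b) => a - b)
        .map((i) => byIndex.get(i)!)
}
```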

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant Chat as Chat Plugin
    participant Agent as LLM/Agent
    participant Queue as PendingMessageQueue
    participant Collector as MessageCollector

    User->>Chat: send message
    Chat->>Collector: deliver message (may append to pending)
    alt pending capture active
        Collector->>Queue: append incoming message
        Chat-->>User: (suppress immediate trigger)
    else normal flow
        Chat->>Agent: start streaming response (may include tool calls)
    end

    Agent-->>Chat: stream chunks (with toolCalls)
    Chat->>Queue: start/stop capture around stream
    alt agent emits character_reply tool call
        Chat->>Collector: mark pending willConsume(true)
        Collector->>Queue: drain and mark consumed
    end

    Agent-->>Chat: final response chunk
    Chat->>User: send rendered output (uses toolCalls if present)
    Queue->>Chat: if takeLatestTrigger() requested -> synthesize human update trigger
    Chat->>Agent: optionally trigger follow-up collection
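The pending-capture step in the diagram can be illustrated with a minimal queue (a sketch under assumed semantics; the real PendingMessageQueue in src/plugins/chat.ts may differ): while a stream is active, incoming messages are buffered instead of triggering, and the chat plugin drains them into the current turn.

```typescript
// Hypothetical sketch of pending-message capture around a streaming reply;
// method names mirror the walkthrough's description but are illustrative.
class PendingMessageQueue {
    private items: string[] = []
    private capturing = false

    start() { this.capturing = true }   // called when streaming begins
    stop() { this.capturing = false }   // called when streaming ends

    // Returns true if the message was captured, meaning the caller should
    // suppress the normal trigger path for it.
    offer(message: string): boolean {
        if (!this.capturing) return false
        this.items.push(message)
        return true
    }

    // Hand captured messages to the current turn and clear the buffer.
    drain(): string[] {
        const out = this.items
        this.items = []
        return out
    }
}
```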

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested reviewers

  • dingyi222666

Poem

🐰 I nibble code and count the hops,
When tools call out or chatter stops.
I queue the crumbs and wait a beat,
Then bundle messages up neat.
A tiny rabbit, making flows complete.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately and concisely summarizes the main change (adding message-wait aggregation triggering for group chats), directly reflects the PR's primary objective, and matches the changeset.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request implements a message aggregation mechanism for group chats, primarily controlled by a new messageWaitTime configuration and a messageWait state. When the fixed message interval is set to 0, or when a message consists only of a mention or nickname, the system now waits for a period of inactivity before triggering. The changes include updates to the configuration schema, state initialization in GroupInfo, and significant logic adjustments in the filtering plugin to handle these new trigger conditions. Feedback was provided regarding the repetition of the aggregation mode check, suggesting it be extracted into a helper function for better maintainability.

Comment on lines +34 to +37
!(
    config.enableFixedIntervalTrigger !== false &&
    config.messageInterval === 0
)

medium

The condition to check if the aggregation mode (fixed interval 0) is active is repeated multiple times throughout this file (lines 34-37, 239-242, 396-399, 818-821, 855-858). It would be cleaner and more maintainable to extract this into a helper function or a boolean variable within the copyOfConfig object.
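A minimal sketch of the suggested extraction (the helper name isAggregationMode and the TriggerConfig shape are illustrative, not taken from the PR):

```typescript
// Sketch of the reviewer's suggested helper; names are illustrative.
interface TriggerConfig {
    enableFixedIntervalTrigger?: boolean
    messageInterval: number
}

// "Aggregation mode": the fixed-interval trigger is enabled but the interval
// is 0, so replies are gated on silence-based aggregation instead.
function isAggregationMode(config: TriggerConfig): boolean {
    return (
        config.enableFixedIntervalTrigger !== false &&
        config.messageInterval === 0
    )
}
```

Each of the repeated call sites could then read `if (isAggregationMode(copyOfConfig)) { … }`, keeping the semantics in one place.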


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/types.ts (1)

242-247: ⚠️ Potential issue | 🟡 Minor

Restore the EOF newline.

Prettier/ESLint are already failing on this file, so CI will stay red until the final newline is back.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types.ts` around lines 242 - 247, The file is missing a trailing newline
at EOF which causes Prettier/ESLint failures; open the declaration for module
'koishi' (the interface Tables containing chathub_character_variable and
chathub_character_wake_up_reply / CharacterVariableRecord and WakeUpReplyRecord)
and add a single newline character after the final closing brace so the file
ends with a newline.
src/plugins/filter.ts (1)

793-813: ⚠️ Potential issue | 🟠 Major

Fix isOnlyDirectTrigger to properly detect bare custom nickname triggers.

The current code only detects bare @/quote cases via session.stripped.content, not bare custom nickname cases. Since Koishi's session.stripped only strips prefixes and @mentions it knows about—not custom bot nicknames—a message containing only the bot's custom nickname will still have non-empty session.stripped.content and incorrectly fail the empty-content check. This breaks wait aggregation (messageWait) for nickname-only messages.

Separate the two cases: use isAppel && session.stripped.content.trim().length < 1 for @mentions/quotes, and plainText === value for bare custom nicknames matched directly against the extracted plaintext.

Suggested change
+        const plainText = plainTextContent.trim()
         const isDirectTrigger =
             isAppel ||
             (copyOfConfig.isNickname &&
                 currentPreset.nick_name.some((value) =>
                     plainTextContent.startsWith(value)
                 )) ||
             (copyOfConfig.isNickNameWithContent &&
                 currentPreset.nick_name.some((value) =>
                     plainTextContent.includes(value)
                 ))
 
         const isOnlyDirectTrigger =
-            isDirectTrigger && session.stripped.content.trim().length < 1
+            (isAppel && session.stripped.content.trim().length < 1) ||
+            currentPreset.nick_name.some((value) => plainText === value)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/filter.ts` around lines 793 - 813, The isOnlyDirectTrigger logic
incorrectly uses session.stripped.content for custom nicknames; change it to
distinguish `@/quote` mentions from bare custom nickname triggers by computing
isOnlyDirectTrigger as (isAppel && session.stripped.content.trim().length < 1)
OR (copyOfConfig.isNickname && currentPreset.nick_name.some(value =>
plainTextContent === value)) so that pure custom-nickname messages
(plainTextContent equals a nick_name entry) count as only-direct triggers; then
keep the existing messageWait updates that use isOnlyDirectTrigger
(info.messageWait logic with copyOfConfig.enableFixedIntervalTrigger and
copyOfConfig.messageInterval) so nickname-only messages properly set
messageWait.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config.ts`:
- Around line 972-973: The file ends with the line "return modified" followed by
a closing brace "}" but is missing a trailing newline; add a single EOF newline
character at the end of the file (i.e., ensure there is a blank line after the
final "return modified" / "}" so Prettier/ESLint stop flagging the file).

In `@src/plugins/filter.ts`:
- Around line 329-345: The interval-count and activity-score trigger branches
must respect the session silence flag info.messageWait; update the logic so any
early-return trigger (e.g., the branch checking copyOfConfig.messageInterval and
the activity-score branch around the later 395-406 region) first checks if
info.messageWait is true and returns undefined if so. Concretely, in the block
using copyOfConfig.messageInterval and in the activity-score evaluation, add a
guard like "if (info.messageWait) return undefined" before computing/returning
the trigger string so that neither messageInterval nor activityScore can fire
while messageWait is set.
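The guard described in this prompt amounts to short-circuiting the trigger search while the wait flag is armed; a hedged sketch (function and parameter names are illustrative, not the plugin's actual signatures):

```typescript
// Illustrative only: suppress interval/activity triggers while a silence
// window (info.messageWait) is armed, so wait aggregation owns the turn.
interface GroupInfoLike { messageWait: boolean }

function findTriggerReason(
    info: GroupInfoLike,
    intervalDue: boolean,
    activityDue: boolean
): string | undefined {
    if (info.messageWait) return undefined
    if (intervalDue) return 'interval'
    if (activityDue) return 'activity'
    return undefined
}
```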


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b65d1525-832e-4398-bbc5-f4e770e5c3b1

📥 Commits

Reviewing files that changed from the base of the PR and between 2cfa4fb and d9516d0.

📒 Files selected for processing (4)
  • src/config.ts
  • src/plugins/filter.ts
  • src/service/trigger.ts
  • src/types.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/plugins/filter.ts`:
- Around line 814-817: The isOnlyDirectTrigger logic should detect true only for
a pure direct mention/nickname without quoted-reply content and must treat exact
nickname-only the same whether isNickname or isNickNameWithContent is set;
change the condition in isOnlyDirectTrigger so it becomes true when (isAppel &&
session.stripped.content.trim().length === 0 && the message is NOT a
quote-reply) OR ((copyOfConfig.isNickname || copyOfConfig.isNickNameWithContent)
&& currentPreset.nick_name.some(v => plainText === v)); use your platform's
property to detect quote-replies (e.g. session.subtype === 'quote' or
session.message?.isQuote) to exclude them from the isAppel branch and ensure
exact nickname-only matches use plainText equality as shown.
- Around line 819-825: The current logic arms info.messageWait for
direct/private chats because isOnlyDirectTrigger is being used to set waits;
change the branches so isOnlyDirectTrigger cannot set messageWait true: when
copyOfConfig.messageInterval === 0 set info.messageWait = !isOnlyDirectTrigger
(so DMs stay immediate), and in the final else remove isOnlyDirectTrigger from
the OR (set info.messageWait = info.messageWait) so only existing wait state or
non-direct scheduling controls aggregation; ensure references are to
copyOfConfig.enableFixedIntervalTrigger, copyOfConfig.messageInterval,
info.messageWait, and isOnlyDirectTrigger.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b282d674-99e0-4936-8169-8e7b4ea72dec

📥 Commits

Reviewing files that changed from the base of the PR and between d9516d0 and 27c846b.

📒 Files selected for processing (3)
  • src/config.ts
  • src/plugins/filter.ts
  • src/types.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/config.ts

Comment on lines +814 to +817
const isOnlyDirectTrigger =
    (isAppel && session.stripped.content.trim().length < 1) ||
    (copyOfConfig.isNickname &&
        currentPreset.nick_name.some((value) => plainText === value))

⚠️ Potential issue | 🟠 Major

isOnlyDirectTrigger no longer matches “only @ or nickname”.

isAppel also covers quote replies, so a pure quote-to-bot message now gets buffered as if it were a bare @. In the other direction, an exact nickname-only message is buffered only when isNickname is enabled; with isNickNameWithContent alone it still fires immediately. Both cases drift from the PR’s stated aggregation rule.

Suggested fix
-        const isOnlyDirectTrigger =
-            (isAppel && session.stripped.content.trim().length < 1) ||
-            (copyOfConfig.isNickname &&
-                currentPreset.nick_name.some((value) => plainText === value))
+        const isOnlyDirectTrigger =
+            (!session.quote &&
+                isAppel &&
+                session.stripped.content.trim().length < 1) ||
+            ((copyOfConfig.isNickname ||
+                copyOfConfig.isNickNameWithContent) &&
+                currentPreset.nick_name.some((value) => plainText === value))

Comment on lines +819 to +825
if (copyOfConfig.enableFixedIntervalTrigger === false) {
    info.messageWait = false
} else if (copyOfConfig.messageInterval === 0) {
    info.messageWait = true
} else {
    info.messageWait = info.messageWait || isOnlyDirectTrigger
}

⚠️ Potential issue | 🟠 Major

Keep wait aggregation out of direct chats.

This branch now arms info.messageWait for private sessions too. That widens a group-only feature into DMs: with fixed interval enabled and messageInterval === 0, the first private message is deferred to the scheduler instead of triggering immediately.

Suggested fix
-        if (copyOfConfig.enableFixedIntervalTrigger === false) {
+        if (session.isDirect || copyOfConfig.enableFixedIntervalTrigger === false) {
             info.messageWait = false
         } else if (copyOfConfig.messageInterval === 0) {
             info.messageWait = true
         } else {
             info.messageWait = info.messageWait || isOnlyDirectTrigger
         }


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/plugins/filter.ts (1)

814-817: ⚠️ Potential issue | 🟠 Major

Restrict isOnlyDirectTrigger to actual bare guild @/nickname messages.

Line 815 still folds pure quote replies into the bare-trigger path, and this helper can still become true for exact nickname-only DMs. That means quote-only replies and private nickname-only messages get buffered like bare guild @/nickname messages, which is broader than the PR rule.

Suggested change
-        const isOnlyDirectTrigger =
-            (isAppel && session.stripped.content.trim().length < 1) ||
-            ((copyOfConfig.isNickname || copyOfConfig.isNickNameWithContent) &&
-                currentPreset.nick_name.some((value) => plainText === value))
+        const isOnlyDirectTrigger =
+            !session.isDirect &&
+            ((!session.quote &&
+                isAppel &&
+                session.stripped.content.trim().length === 0) ||
+                ((copyOfConfig.isNickname ||
+                    copyOfConfig.isNickNameWithContent) &&
+                    currentPreset.nick_name.some((value) => plainText === value)))

Also applies to: 819-825

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/filter.ts` around lines 814 - 817, isOnlyDirectTrigger is
currently true for quote-only replies and exact nickname-only DMs; change its
logic to only treat bare guild mentions/nickname messages as direct triggers by
requiring the message be in a guild channel (e.g. check session.channel?.type
=== 'GUILD' or !session.isDM) and that the message is not a quote-only reply
(ensure session.stripped contains no quote-only content or check a reply/quote
flag), in both the existing isAppel branch and the nickname branch that uses
copyOfConfig.isNickname / copyOfConfig.isNickNameWithContent and
currentPreset.nick_name / plainText so quote-replies and private nickname-only
messages are excluded (apply the same restriction to the other occurrence around
lines 819–825).
🧹 Nitpick comments (1)
src/plugins/filter.ts (1)

31-37: Drop the defensive !== false checks on enableFixedIntervalTrigger.

Lines 35, 240, 405, 419, 831, and 868 are all treating this as tri-state, but the schema already gives it a boolean default. Using copyOfConfig.enableFixedIntervalTrigger directly would make the new wait-mode branches much easier to read.

As per coding guidelines: "Do NOT add defensive/fallback checks; use the most probable type directly instead of guessing what types might be".

Also applies to: 237-242, 404-407, 419-421, 830-833, 867-870
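To illustrate the nitpick (a hypothetical resolver, not koishi's actual schema machinery): once the schema default has been applied, the flag is a plain boolean and the `!== false` comparison adds nothing.

```typescript
// Hypothetical default resolution; with a schema-supplied default the field
// can be read directly as a boolean, making `x !== false` equivalent to `x`.
interface RawConfig { enableFixedIntervalTrigger?: boolean }
interface ResolvedConfig { enableFixedIntervalTrigger: boolean }

function applyDefaults(raw: RawConfig): ResolvedConfig {
    return {
        enableFixedIntervalTrigger: raw.enableFixedIntervalTrigger ?? true
    }
}
```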

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/plugins/filter.ts` around lines 31 - 37, The conditional is using
defensive `!== false` checks around config.enableFixedIntervalTrigger making the
logic tri-state; replace those with direct boolean usage (e.g., use
config.enableFixedIntervalTrigger or copyOfConfig.enableFixedIntervalTrigger) in
the affected branches (the condition combining isDirect,
config.enableActivityScoreTrigger, and messageInterval) so the code relies on
the schema-defaulted boolean; update the other referenced occurrences (near the
checks at the same logical branches) to remove `!== false` and simplify to
direct property checks to improve readability and maintainability.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 761ad6a4-9274-4cef-8d26-3844b45958d2

📥 Commits

Reviewing files that changed from the base of the PR and between 27c846b and e857fb9.

📒 Files selected for processing (1)
  • src/plugins/filter.ts

@CookSleep CookSleep force-pushed the feat/group-wait-trigger branch from 2311b38 to e857fb9 on April 3, 2026 at 05:27

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/utils/chain.ts (1)

215-217: ⚠️ Potential issue | 🟠 Major

Reset session-scoped extra tools on the no-session path.

The early return at Line 215 leaves extraRef.value untouched. After one session installs extra reply tools, a later call without configurable.session will reuse the previous session's tool set.

🛠️ Suggested fix
             if (!session) {
+                extraRef.value = []
                 return toolMask
             }

Also applies to: 237-237

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/chain.ts` around lines 215 - 217, The early return when there is no
session leaves extraRef.value populated from a previous session, causing extra
reply tools to persist; update the no-session path in the function that checks
`session` (the block returning `toolMask`) to clear or reset `extraRef.value`
before returning, and apply the same reset to the other no-session early-return
location around the `configurable.session` check so that `extraRef.value` is
cleared whenever there is no active session.
src/utils/messages.ts (1)

309-321: ⚠️ Potential issue | 🟠 Major

Don't require chatluna_file_url for inbound audio/video serialization.

This attribute is only injected by src/utils/render.ts for outbound rendered tags. broadcast() calls mapElementToString() on raw session.elements, so incoming video/audio elements will now be skipped instead of becoming <video> / <voice> content.

🛠️ Suggested fix
         } else if (element.type === 'video' || element.type === 'audio') {
-            const url = element.attrs['chatluna_file_url']
+            const url =
+                element.attrs['chatluna_file_url'] ??
+                element.attrs.src ??
+                element.attrs.url
             if (!url) {
                 continue
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/messages.ts` around lines 309 - 321, The serialization currently
skips inbound audio/video when chatluna_file_url is missing; update the block
handling element.type === 'video' || 'audio' (in the mapElementToString /
serialization logic) to fallback to other common URL attrs (e.g.
element.attrs['src'] or element.attrs['url']) before skipping: compute const url
= element.attrs['chatluna_file_url'] || element.attrs['src'] ||
element.attrs['url']; only continue if url is falsy, and keep the existing
marker/voice branching that pushes `<voice>` or `<video>` with
escapeXml(String(url)).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@resources/presets/default.yml`:
- Around line 366-367: The example uses two next_reply entries in the wait-user
group where the silence condition uses user_id="all", which teaches a broader
wait pattern; change the silence condition to use the same user_id as the
explicit reply (i.e., use the same user_id value on the <next_reply ...
type="no_message_from_user" user_id="...">) or remove the group-level "all"
variant and show only the single no_message_from_user form so that next_reply
(group="wait-user") consistently represents waiting for the same user rather
than the whole group.

In `@src/config.ts`:
- Around line 408-414: The legacy top-level messageWaitTime is not migrated so
users upgrading lose their custom value; update migrateConfig (and related
LegacyConfig handling) to detect a top-level messageWaitTime, copy its numeric
value into the new group-level fields (where the new messageWaitTime now lives)
before exposing the new group structure, and then remove or mark the legacy key
as migrated to avoid double-applying; ensure the migration preserves the
original value and validates it against the same constraints used by
messageWaitTime (Config) so behavior remains identical after upgrade.

In `@src/plugins/chat.ts`:
- Around line 150-208: The current conversion loop for NextReplyToolGroup builds
tokens by skipping invalid NextReplyToolCondition entries which weakens
semantics; instead, detect any malformed condition in a group's conditions and
drop the entire group (i.e., skip adding to reasons) rather than filtering out
individual conditions. Concretely, inside the for (const item of value) loop
that casts to NextReplyToolGroup and processes group.conditions, introduce a
validation pass over group.conditions that returns false if any condition is
missing required fields (e.g., missing user_id for message_from_user or
no_message_from_user, non-finite seconds, etc. as used when constructing
tokens); if validation fails, continue the outer loop (do not build tokens or
push to reasons). Apply the same change to the analogous conversion blocks
handling the other ranges noted so all NextReplyToolGroup processing
consistently rejects groups with any incomplete NextReplyToolCondition.
- Around line 316-343: The schema for character_reply.messages allows multiple
sibling payloads (text, quote/at/face, sticker, image, parts) but
buildXmlMessage() currently returns on the first matching branch, causing silent
data loss; fix by either (A) making the message schema mutually exclusive using
oneOf/anyOf variants for the allowed shapes (text-only, quote-only,
sticker-only, image-only, parts-only) so validation fails when multiple siblings
are present, or (B) normalize inputs in buildXmlMessage() (the function named
buildXmlMessage) to explicitly merge or prioritize fields (e.g., concatenate
text and parts, attach at/quote metadata, include image/sticker) or throw a
validation error when conflicting fields are present; apply the same change to
the other similar message schema usage referenced in this file to ensure no
payloads are silently dropped.
- Around line 1755-1769: The checks in the startup validation treat undefined as
false and reject partial overrides; change both guards to only fail when
toolCalling is explicitly false (i.e., use cfg.toolCalling === false) so omitted
values inherit the global setting, and ensure any per-group merge follows the
pattern Object.assign({}, config, guildConfig) when combining global config with
each entry (do not add helper functions).
- Around line 678-684: The current serialization of args.voice always emits
id="undefined" because voice.id is optional; in the branch that builds the
string (where args.voice is cast to voice and escape() and quote are used)
change the serialization to conditionally include the id attribute only when
voice.id is non-null/defined (e.g., check voice.id !== undefined && voice.id !==
null) and escape voice.id before inserting; keep escaping voice.text and
preserve quote. Target the block that references args.voice, the local voice
variable, escape(), and quote to apply this conditional attribute emission.

In `@src/service/message.ts`:
- Around line 53-70: Reset the pending-message state when the service is reset:
ensure _activePendingMessages and _consumedPendingMessages are cleared (and
optionally _pendingCooldownTriggers if cooldowns should not persist) inside the
reset paths (clear(), clearAll, and dispose) so no stale append callbacks or
skip keys survive; locate the methods handling resets and set
_activePendingMessages = {} and _consumedPendingMessages = {} (and clear
_pendingCooldownTriggers) as part of their teardown logic.
- Around line 146-154: The willConsume flag set by setPendingMessagesWillConsume
is never read, causing buffered messages appended via active.append to be
discarded by queue.takeLatestTrigger; update the pending-message flow so that
either (A) the flag is respected: read state.willConsume in the pending
handler/queue.takeLatestTrigger path and, when true, let buffered messages
bypass cooldown/trigger filtering and be processed normally, or (B) always
replay all buffered messages when the pending handler finishes: when clearing
state in the pending-message completion code (the logic around
queue.takeLatestTrigger and the finally block in the chat handler), re-insert or
dispatch every message in state.buffer (not just the last triggerReason message)
back into the normal trigger/cooldown path; locate
setPendingMessagesWillConsume, active.append, and queue.takeLatestTrigger to
implement the chosen fix.

In `@src/utils/send.ts`:
- Around line 227-327: This fast-path assumes every element in part.elements is
one of the handled types and silently drops unknown types; before serializing,
add a guard that checks part.elements only contains the allowed types
('quote','text','at','face','img') (or passes isOneBotImageElement for image
detection) and if any element has an unhandled type, skip this OneBot fast-path
so normal session.send() fallback handles the mixed fragment; update the check
near the branch that uses session.platform, part.type and part.elements and bail
out early (do not build message or call bot.internal._request) when unknown
elements are present.
- Around line 121-183: The OneBot branch in send(...) discards the leading quote
pulled into the video part by split(...), causing loss of reply context; update
the OneBot send logic in send to include the part.elements (including any
leading quote element) when building the message payload instead of only
serializing a single video object—i.e., detect and preserve any part.elements[0]
of type 'quote' and serialize the full sequence (quote + video) into the message
array passed to bot.internal._request for actions
'send_private_msg'/'send_group_msg' so OneBot retains the reply reference.
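The element-type guard asked for in the fast-path comment above can be sketched like this; `'img'` stands in for the `isOneBotImageElement` check, which is assumed to live elsewhere in `send.ts`:

```typescript
// Sketch of an allow-list guard for the OneBot fast path: if any element
// type falls outside the handled set, the caller should bail out and let
// the normal session.send() fallback handle the mixed fragment.
const ONEBOT_FAST_PATH_TYPES = new Set(['quote', 'text', 'at', 'face', 'img'])

function canUseOneBotFastPath(elements: { type: string }[]): boolean {
    return elements.every((el) => ONEBOT_FAST_PATH_TYPES.has(el.type))
}
```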

---

Outside diff comments:
In `@src/utils/chain.ts`:
- Around line 215-217: The early return when there is no session leaves
extraRef.value populated from a previous session, causing extra reply tools to
persist; update the no-session path in the function that checks `session` (the
block returning `toolMask`) to clear or reset `extraRef.value` before returning,
and apply the same reset to the other no-session early-return location around
the `configurable.session` check so that `extraRef.value` is cleared whenever
there is no active session.

In `@src/utils/messages.ts`:
- Around line 309-321: The serialization currently skips inbound audio/video
when chatluna_file_url is missing; update the block handling element.type ===
'video' || 'audio' (in the mapElementToString / serialization logic) to fallback
to other common URL attrs (e.g. element.attrs['src'] or element.attrs['url'])
before skipping: compute const url = element.attrs['chatluna_file_url'] ||
element.attrs['src'] || element.attrs['url']; only continue if url is falsy, and
keep the existing marker/voice branching that pushes `<voice>` or `<video>` with
escapeXml(String(url)).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 41ef2b74-cf72-4edd-88ac-1fc6b7378532

📥 Commits

Reviewing files that changed from the base of the PR and between e857fb9 and 2311b38.

📒 Files selected for processing (13)
  • resources/presets/default-tool-call.yml
  • resources/presets/default.yml
  • src/config.ts
  • src/plugins/chat.ts
  • src/service/message.ts
  • src/types.ts
  • src/utils/chain.ts
  • src/utils/elements.ts
  • src/utils/index.ts
  • src/utils/messages.ts
  • src/utils/render.ts
  • src/utils/send.ts
  • src/utils/triggers.ts
✅ Files skipped from review due to trivial changes (2)
  • src/utils/index.ts
  • resources/presets/default-tool-call.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/types.ts

Comment on lines +366 to +367
<next_reply group="wait-user" type="message_from_user" user_id="" />
<next_reply group="wait-user" type="no_message_from_user" user_id="all" seconds="60" />

⚠️ Potential issue | 🟡 Minor

This next_reply example teaches a broader wait pattern than the prose above describes.

With user_id="all", the silence timer starts immediately and watches the whole group, so this does not mean “wait for this user to finish sending”. The model is likely to learn the wrong pattern from this example. Use the same user_id on the silence condition, or just show the single no_message_from_user form instead.
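A scoped variant that matches the prose would tie both conditions to the same user; the user ID shown here is a hypothetical placeholder:

```xml
<next_reply group="wait-user" type="message_from_user" user_id="123456" />
<next_reply group="wait-user" type="no_message_from_user" user_id="123456" seconds="60" />
```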


Comment on lines +408 to 414
messageWaitTime: Schema.number()
    .default(10)
    .min(0)
    .max(300)
    .description(
        '发言等待时长(秒):仅在固定间隔触发开启且消息间隔为 0 时,或固定间隔不为 0 但本次触发消息只有@或昵称时生效,当收到一条消息后,连续 N 秒没有新消息才触发一次,用于应对用户需要一次性发送多条消息的场景'
    )

⚠️ Potential issue | 🟠 Major

Migrate the legacy messageWaitTime key before exposing the new group fields.

Config/LegacyConfig still allow a top-level messageWaitTime, but migrateConfig() never copies or deletes it. Upgrades from the old flat config will silently fall back to 10 seconds here instead of preserving the user's existing value.

🛠️ Suggested fix
 type CommonMigration = {
     key:
         | 'maxMessages'
+        | 'messageWaitTime'
         | 'maxTokens'
         | 'image'
         | 'imageInputMaxCount'
         | 'imageInputMaxSize'
@@
 const commonMigrations: CommonMigration[] = [
     { key: 'maxMessages', privateDefault: 40, groupDefault: 40 },
+    { key: 'messageWaitTime', privateDefault: 10, groupDefault: 10 },
     { key: 'maxTokens', privateDefault: 20000, groupDefault: 20000 },
@@
 const legacyKeys = [
     'defaultPreset',
     'model',
     'maxMessages',
+    'messageWaitTime',
     'maxTokens',

Also applies to: 479-485


Comment on lines +150 to +208
for (const item of value) {
    if (!item || typeof item !== 'object' || Array.isArray(item)) {
        continue
    }

    const group = item as NextReplyToolGroup
    if (!Array.isArray(group.conditions)) {
        continue
    }

    const tokens = group.conditions
        .map((it) => {
            if (!it || typeof it !== 'object' || Array.isArray(it)) {
                return undefined
            }

            const condition = it as NextReplyToolCondition
            if (condition.type === 'message_from_user') {
                if (
                    typeof condition.user_id === 'string' &&
                    condition.user_id.trim()
                ) {
                    return `id_${condition.user_id.trim()}`
                }

                return undefined
            }

            if (condition.type === 'no_message_from_user') {
                if (
                    typeof condition.seconds === 'number' &&
                    Number.isFinite(condition.seconds) &&
                    condition.seconds > 0 &&
                    typeof condition.user_id === 'string' &&
                    condition.user_id.trim()
                ) {
                    if (condition.user_id.trim() === 'all') {
                        return `time_${condition.seconds}s`
                    }

                    if (
                        typeof condition.max_wait_seconds === 'number' &&
                        Number.isFinite(condition.max_wait_seconds) &&
                        condition.max_wait_seconds > 0
                    ) {
                        return `time_${condition.seconds}s_id_${condition.user_id.trim()}_max_${condition.max_wait_seconds}s`
                    }

                    return `time_${condition.seconds}s_id_${condition.user_id.trim()}`
                }
            }

            return undefined
        })
        .filter((it) => typeof it === 'string')

    if (tokens.length > 0) {
        reasons.push(tokens.join('&'))
    }

⚠️ Potential issue | 🟠 Major

Drop malformed next_reply groups instead of weakening them.

conditions only requires type, and both converters currently skip invalid conditions one-by-one while keeping the rest of the same AND group. That broadens the trigger semantics; for example, a missing user_id can collapse an intended user-scoped wait into a global silence trigger. Reject the whole group once one condition is incomplete, or tighten the schema so incomplete conditions never validate.

Also applies to: 231-289, 477-509
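The all-or-nothing validation asked for above can be sketched as follows; the condition shape mirrors `NextReplyToolCondition` but is simplified here, and the exact required fields are taken from the token-building logic in the excerpt:

```typescript
// Simplified condition shape for illustration only.
type Condition = {
    type: string
    user_id?: string
    seconds?: number
}

function isValidCondition(c: Condition): boolean {
    if (c.type === 'message_from_user') {
        return typeof c.user_id === 'string' && c.user_id.trim().length > 0
    }
    if (c.type === 'no_message_from_user') {
        return (
            typeof c.user_id === 'string' &&
            c.user_id.trim().length > 0 &&
            typeof c.seconds === 'number' &&
            Number.isFinite(c.seconds) &&
            c.seconds > 0
        )
    }
    return false
}

// A group is kept only when every condition validates; otherwise the
// whole AND-group is dropped instead of being silently weakened.
function keepGroup(conditions: Condition[]): boolean {
    return conditions.length > 0 && conditions.every(isValidCondition)
}
```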


Comment on lines +316 to +343
const message = {
    type: 'object',
    properties: {
        text: {
            type: 'string',
            description: 'Text content'
        },
        quote: {
            type: 'string',
            description: 'Platform message ID to quote'
        },
        sticker: {
            type: 'string',
            description: 'HTTP(S) sticker URL'
        },
        image: {
            type: 'string',
            description: 'HTTP(S) image URL'
        },
        parts: {
            type: 'array',
            description: 'Multiple parts inside one message, joined in order.',
            items: {
                ...part
            }
        }
    }
}

⚠️ Potential issue | 🟠 Major

The character_reply.messages contract currently allows silent data loss.

The schema permits sibling fields like text, at, face, image, and parts, but buildXmlMessage() returns on the first matching branch. Payloads like { text, at }, { text, image }, or { parts, text } therefore lose content with no error. Either make these shapes mutually exclusive in the schema or normalize them before rendering.

Also applies to: 627-687
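Option (B) from the fix suggestion, normalizing sibling fields instead of returning on the first match, can be sketched like this; the field names mirror the schema above, while the merge order and output markup are assumptions, not the plugin's actual behavior:

```typescript
// Sketch: collect every populated field so no sibling payload is dropped.
type ReplyMessage = {
    text?: string
    sticker?: string
    image?: string
    parts?: string[]
}

function normalizeMessage(msg: ReplyMessage): string[] {
    const out: string[] = []
    if (typeof msg.text === 'string' && msg.text.length > 0) out.push(msg.text)
    if (Array.isArray(msg.parts)) out.push(...msg.parts)
    if (typeof msg.sticker === 'string') out.push(`<sticker url="${msg.sticker}" />`)
    if (typeof msg.image === 'string') out.push(`<img src="${msg.image}" />`)
    return out
}
```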

🧰 Tools
🪛 ESLint

[error] 337-337: Insert ⏎···················

(prettier/prettier)

🪛 GitHub Check: CodeFactor

[warning] 337-337: src/plugins/chat.ts#L337
Insert ⏎··················· (prettier/prettier)


Comment on lines +678 to +684
if (
    args.voice &&
    typeof args.voice === 'object' &&
    !Array.isArray(args.voice)
) {
    const voice = args.voice as Record<string, unknown>
    return `<message${quote}><voice id="${escape(voice.id, true)}">${escape(voice.text)}</voice></message>`

⚠️ Potential issue | 🟡 Minor

Omit voice.id when it is not provided.

The schema marks voice.id optional, but this always serializes it as id="undefined". That turns “use the default voice” into a literal voice id and can break downstream voice selection.

Suggested fix
     if (
         args.voice &&
         typeof args.voice === 'object' &&
         !Array.isArray(args.voice)
     ) {
         const voice = args.voice as Record<string, unknown>
-        return `<message${quote}><voice id="${escape(voice.id, true)}">${escape(voice.text)}</voice></message>`
+        const id =
+            typeof voice.id === 'string' && voice.id.length > 0
+                ? ` id="${escape(voice.id, true)}"`
+                : ''
+        return `<message${quote}><voice${id}>${escape(voice.text)}</voice></message>`
     }

Comment on lines +1755 to +1769
for (const [id, cfg] of Object.entries(config.privateConfigs)) {
    if (!cfg.toolCalling) {
        throw new Error(
            `experimentalToolCallReply 依赖 toolCalling,privateConfigs.${id}.toolCalling 不能关闭。`
        )
    }
}

for (const [id, cfg] of Object.entries(config.configs)) {
    if (!cfg.toolCalling) {
        throw new Error(
            `experimentalToolCallReply 依赖 toolCalling,configs.${id}.toolCalling 不能关闭。`
        )
    }
}

⚠️ Potential issue | 🔴 Critical

Don't reject inherited toolCalling values in partial overrides.

privateConfigs and configs are merged later with the global config, so an override that omits toolCalling should inherit the global value. The current !cfg.toolCalling check treats undefined as disabled and prevents startup for valid partial configs whenever experimentalToolCallReply is on.

Suggested fix
         for (const [id, cfg] of Object.entries(config.privateConfigs)) {
-            if (!cfg.toolCalling) {
+            if (cfg.toolCalling === false) {
                 throw new Error(
                     `experimentalToolCallReply 依赖 toolCalling,privateConfigs.${id}.toolCalling 不能关闭。`
                 )
             }
         }

         for (const [id, cfg] of Object.entries(config.configs)) {
-            if (!cfg.toolCalling) {
+            if (cfg.toolCalling === false) {
                 throw new Error(
                     `experimentalToolCallReply 依赖 toolCalling,configs.${id}.toolCalling 不能关闭。`
                 )
             }
         }
As per coding guidelines, use consistent per-group config merging pattern: `const merged = Object.assign({}, config, guildConfig)` without creating helper functions.
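A small demonstration of why `!cfg.toolCalling` is too strict under that merge pattern; the config values here are illustrative, not taken from the plugin:

```typescript
// With Object.assign merging, an omitted key inherits the global value,
// so only an explicit `false` in the override actually disables the feature.
const globalConfig = { toolCalling: true, maxMessages: 40 }
const guildConfig: { toolCalling?: boolean; maxMessages?: number } = {
    maxMessages: 20
}
const merged = Object.assign({}, globalConfig, guildConfig)
// merged.toolCalling stays true even though guildConfig omits the key,
// while `!guildConfig.toolCalling` would wrongly report it as disabled.
```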

Comment on lines +53 to +70
private _consumedPendingMessages: Record<string, Set<string>> = {}

private _imageSizeCache: Record<string, number> = {}

private _imageSizeCacheCount = 0

private _pendingCooldownTriggers: Record<string, PendingCooldownTrigger> =
    {}

private _replyToolFields: CharacterReplyToolField[] = []

private _activePendingMessages: Record<
    string,
    {
        willConsume: boolean
        append: (message: Message, triggerReason?: string) => void
    }
> = {}

⚠️ Potential issue | 🟠 Major

Clear the new pending-message state on reset paths.

_activePendingMessages holds live append callbacks and _consumedPendingMessages holds skip keys, but clear(), clear-all, and dispose never remove them. After a reset, a stale callback can keep intercepting messages for the next conversation.

🧹 Suggested fix
         ctx.on('dispose', () => {
             for (const timer of Object.values(this._cooldownTriggerTimers)) {
                 clearTimeout(timer)
             }
             this._cooldownTriggerTimers = {}
+            this._activePendingMessages = {}
+            this._consumedPendingMessages = {}
         })
@@
             try {
                 this._messages[groupId] = []
                 this._groupTemp[groupId] = newTemp(clearedAt)
+                delete this._activePendingMessages[groupId]
+                delete this._consumedPendingMessages[groupId]
@@
         try {
             const clearedAt = new Date()
             this._messages = {}
             this._groupTemp = Object.fromEntries(
                 groupIds.map((groupId) => [groupId, newTemp(clearedAt)])
             )
+            this._activePendingMessages = {}
+            this._consumedPendingMessages = {}

Comment on lines +146 to +154
setPendingMessagesWillConsume(session: Session, willConsume: boolean) {
    const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
    const state = this._activePendingMessages[key]
    if (!state) {
        return
    }

    state.willConsume = willConsume
}

⚠️ Potential issue | 🟠 Major

❓ Verification inconclusive

Script executed:

#!/bin/bash
rg -n -C5 '\b(startPendingMessages|setPendingMessagesWillConsume|stopPendingMessages|markConsumedPendingMessages)\b' src/plugins/chat.ts src/service/message.ts
rg -n -C5 '\bwillConsume\b|\btriggerCollect\s*\(|\bpullHistory\s*\(' src/plugins/chat.ts src/service/message.ts

Repository: ChatLunaLab/chatluna-character


Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

src/service/message.ts-130-
src/service/message.ts-131-    getReplyToolFields() {
src/service/message.ts-132-        return this._replyToolFields
src/service/message.ts-133-    }
src/service/message.ts-134-
src/service/message.ts:135:    startPendingMessages(
src/service/message.ts-136-        session: Session,
src/service/message.ts-137-        append: (message: Message, triggerReason?: string) => void
src/service/message.ts-138-    ) {
src/service/message.ts-139-        const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-140-        this._activePendingMessages[key] = {
src/service/message.ts-141-            willConsume: false,
src/service/message.ts-142-            append
src/service/message.ts-143-        }
src/service/message.ts-144-    }
src/service/message.ts-145-
src/service/message.ts:146:    setPendingMessagesWillConsume(session: Session, willConsume: boolean) {
src/service/message.ts-147-        const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-148-        const state = this._activePendingMessages[key]
src/service/message.ts-149-        if (!state) {
src/service/message.ts-150-            return
src/service/message.ts-151-        }
src/service/message.ts-152-
src/service/message.ts-153-        state.willConsume = willConsume
src/service/message.ts-154-    }
src/service/message.ts-155-
src/service/message.ts:156:    stopPendingMessages(session: Session) {
src/service/message.ts-157-        const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-158-        delete this._activePendingMessages[key]
src/service/message.ts-159-    }
src/service/message.ts-160-
src/service/message.ts-161-    mute(session: Session, time: number) {
--
src/service/message.ts-310-        } finally {
src/service/message.ts-311-            unlock()
src/service/message.ts-312-        }
src/service/message.ts-313-    }
src/service/message.ts-314-
src/service/message.ts:315:    markConsumedPendingMessages(session: Session, messages: Message[]) {
src/service/message.ts-316-        const groupId = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-317-        const consumed =
src/service/message.ts-318-            this._consumedPendingMessages[groupId] ?? new Set<string>()
src/service/message.ts-319-
src/service/message.ts-320-        for (const message of messages) {
--
src/plugins/chat.ts-1896-            let hasEmptyReplies = false
src/plugins/chat.ts-1897-            let hasNonEmptyReplies = false
src/plugins/chat.ts-1898-            queue = new PendingMessageQueue(
src/plugins/chat.ts-1899-                copyOfConfig.enableMessageId,
src/plugins/chat.ts-1900-                (messages) => {
src/plugins/chat.ts:1901:                    service.markConsumedPendingMessages(session, messages)
src/plugins/chat.ts-1902-                }
src/plugins/chat.ts-1903-            )
src/plugins/chat.ts-1904-
src/plugins/chat.ts:1905:            service.startPendingMessages(session, (message, reason) => {
src/plugins/chat.ts-1906-                queue?.pushRaw(message, reason)
src/plugins/chat.ts-1907-            })
src/plugins/chat.ts-1908-
src/plugins/chat.ts-1909-            try {
src/plugins/chat.ts-1910-                for await (const chunk of streamModelResponse(
--
src/plugins/chat.ts-1921-                        if (event.type === 'round-decision') {
src/plugins/chat.ts-1922-                            const willConsume =
src/plugins/chat.ts-1923-                                'willConsumePendingMessages' in event
src/plugins/chat.ts-1924-                                    ? event.willConsumePendingMessages === true
src/plugins/chat.ts-1925-                                    : event.canContinue === true
src/plugins/chat.ts:1926:                            service.setPendingMessagesWillConsume(
src/plugins/chat.ts-1927-                                session,
src/plugins/chat.ts-1928-                                willConsume
src/plugins/chat.ts-1929-                            )
src/plugins/chat.ts-1930-                            return
src/plugins/chat.ts-1931-                        }
--
src/plugins/chat.ts-1938-                        if (!action) {
src/plugins/chat.ts-1939-                            return
src/plugins/chat.ts-1940-                        }
src/plugins/chat.ts-1941-
src/plugins/chat.ts-1942-                        if (action.tool !== 'character_reply') {
src/plugins/chat.ts:1943:                            service.setPendingMessagesWillConsume(session, true)
src/plugins/chat.ts-1944-                            return
src/plugins/chat.ts-1945-                        }
src/plugins/chat.ts-1946-
src/plugins/chat.ts-1947-                        const args =
src/plugins/chat.ts-1948-                            action.toolInput &&
src/plugins/chat.ts-1949-                            typeof action.toolInput === 'object' &&
src/plugins/chat.ts-1950-                            !Array.isArray(action.toolInput)
src/plugins/chat.ts-1951-                                ? (action.toolInput as Record<string, unknown>)
src/plugins/chat.ts-1952-                                : {}
src/plugins/chat.ts:1953:                        service.setPendingMessagesWillConsume(
src/plugins/chat.ts-1954-                            session,
src/plugins/chat.ts-1955-                            args.is_final === false
src/plugins/chat.ts-1956-                        )
src/plugins/chat.ts-1957-                    }
src/plugins/chat.ts-1958-                )) {
--
src/plugins/chat.ts-2005-                    if (sendResult.breakSay) {
src/plugins/chat.ts-2006-                        break
src/plugins/chat.ts-2007-                    }
src/plugins/chat.ts-2008-                }
src/plugins/chat.ts-2009-            } finally {
src/plugins/chat.ts:2010:                service.stopPendingMessages(session)
src/plugins/chat.ts-2011-            }
src/plugins/chat.ts-2012-
src/plugins/chat.ts-2013-            if (!sentAny) {
src/plugins/chat.ts-2014-                if (hasEmptyReplies && !hasNonEmptyReplies) {
src/plugins/chat.ts-2015-                    await registerResponseTriggers(
src/service/message.ts-62-    private _replyToolFields: CharacterReplyToolField[] = []
src/service/message.ts-63-
src/service/message.ts-64-    private _activePendingMessages: Record<
src/service/message.ts-65-        string,
src/service/message.ts-66-        {
src/service/message.ts:67:            willConsume: boolean
src/service/message.ts-68-            append: (message: Message, triggerReason?: string) => void
src/service/message.ts-69-        }
src/service/message.ts-70-    > = {}
src/service/message.ts-71-
src/service/message.ts-72-    private _cooldownTriggerTimers: Record<
--
src/service/message.ts-136-        session: Session,
src/service/message.ts-137-        append: (message: Message, triggerReason?: string) => void
src/service/message.ts-138-    ) {
src/service/message.ts-139-        const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-140-        this._activePendingMessages[key] = {
src/service/message.ts:141:            willConsume: false,
src/service/message.ts-142-            append
src/service/message.ts-143-        }
src/service/message.ts-144-    }
src/service/message.ts-145-
src/service/message.ts:146:    setPendingMessagesWillConsume(session: Session, willConsume: boolean) {
src/service/message.ts-147-        const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-148-        const state = this._activePendingMessages[key]
src/service/message.ts-149-        if (!state) {
src/service/message.ts-150-            return
src/service/message.ts-151-        }
src/service/message.ts-152-
src/service/message.ts:153:        state.willConsume = willConsume
src/service/message.ts-154-    }
src/service/message.ts-155-
src/service/message.ts-156-    stopPendingMessages(session: Session) {
src/service/message.ts-157-        const key = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-158-        delete this._activePendingMessages[key]
--
src/service/message.ts-712-                }
src/service/message.ts-713-            } finally {
src/service/message.ts-714-                unlock()
src/service/message.ts-715-            }
src/service/message.ts-716-
src/service/message.ts:717:            await this.pullHistory(session, message)
src/service/message.ts:718:            const triggered = await this.triggerCollect(
src/service/message.ts-719-                session,
src/service/message.ts-720-                triggerReason,
src/service/message.ts-721-                message
src/service/message.ts-722-            )
src/service/message.ts-723-            return triggered
--
src/service/message.ts-756-        }
src/service/message.ts-757-
src/service/message.ts-758-        return this.isMute(session)
src/service/message.ts-759-    }
src/service/message.ts-760-
src/service/message.ts:761:    async triggerCollect(
src/service/message.ts-762-        session: Session,
src/service/message.ts-763-        triggerReason: string,
src/service/message.ts-764-        message?: Message,
src/service/message.ts-765-        signal?: AbortSignal
src/service/message.ts-766-    ) {
--
src/service/message.ts-788-        )
src/service/message.ts-789-
src/service/message.ts-790-        return true
src/service/message.ts-791-    }
src/service/message.ts-792-
src/service/message.ts:793:    async pullHistory(session: Session, focusMessage: Message) {
src/service/message.ts-794-        const groupId = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
src/service/message.ts-795-        const guildConfig = session.isDirect
src/service/message.ts-796-            ? this._config.privateConfigs[session.userId]
src/service/message.ts-797-            : this._config.configs[session.guildId]
src/service/message.ts-798-        const globalConfig = session.isDirect
--
src/service/message.ts-946-
src/service/message.ts-947-        if (!pending) {
src/service/message.ts-948-            return
src/service/message.ts-949-        }
src/service/message.ts-950-
src/service/message.ts:951:        await this.pullHistory(pending.session, pending.message)
src/service/message.ts:952:        await this.triggerCollect(
src/service/message.ts-953-            pending.session,
src/service/message.ts-954-            pending.triggerReason,
src/service/message.ts-955-            pending.message
src/service/message.ts-956-        )
src/service/message.ts-957-    }
--
src/plugins/chat.ts-1917-                    chainPool[chainKey]?.chain.value,
src/plugins/chat.ts-1918-                    signal,
src/plugins/chat.ts-1919-                    queue,
src/plugins/chat.ts-1920-                    (event) => {
src/plugins/chat.ts-1921-                        if (event.type === 'round-decision') {
src/plugins/chat.ts:1922:                            const willConsume =
src/plugins/chat.ts-1923-                                'willConsumePendingMessages' in event
src/plugins/chat.ts-1924-                                    ? event.willConsumePendingMessages === true
src/plugins/chat.ts-1925-                                    : event.canContinue === true
src/plugins/chat.ts-1926-                            service.setPendingMessagesWillConsume(
src/plugins/chat.ts-1927-                                session,
src/plugins/chat.ts:1928:                                willConsume
src/plugins/chat.ts-1929-                            )
src/plugins/chat.ts-1930-                            return
src/plugins/chat.ts-1931-                        }
src/plugins/chat.ts-1932-
src/plugins/chat.ts-1933-                        if (event.type !== 'tool-call') {
--
src/plugins/chat.ts-2059-        } finally {
src/plugins/chat.ts-2060-            await service.releaseResponseLock(session)
src/plugins/chat.ts-2061-
src/plugins/chat.ts-2062-            const pending = queue?.takeLatestTrigger()
src/plugins/chat.ts-2063-            if (pending) {
src/plugins/chat.ts:2064:                await service.triggerCollect(
src/plugins/chat.ts-2065-                    session,
src/plugins/chat.ts-2066-                    pending.triggerReason!,
src/plugins/chat.ts-2067-                    pending.message
src/plugins/chat.ts-2068-                )
src/plugins/chat.ts-2069-            }

Script executed:

sed -n '690,710p' src/service/message.ts

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

        }

        const triggerReason = await this._addMessage(session, message, {
            filterExpiredMessages: true,
            processImages: config
        })

        const active = this._activePendingMessages[groupId]
        if (active) {
            active.append(message, triggerReason)
            return true
        }

        if (triggerReason && !this.isMute(session)) {
            const unlock = await this._lockByGroupId(groupId)
            try {
                delete this._pendingCooldownTriggers[groupId]

                const timer = this._cooldownTriggerTimers[groupId]
                if (timer) {
                    clearTimeout(timer)

Script executed:

rg -n 'broadcast\s*\(' src/service/message.ts | head -20

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

587:    async broadcast(session: Session) {

Script executed:

fd -e ts -e js | xargs grep -l 'PendingMessageQueue' | head -10

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

src/plugins/chat.ts

Script executed:

sed -n '587,650p' src/service/message.ts

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

    async broadcast(session: Session) {
        const groupId = `${session.isDirect ? 'private' : 'group'}:${session.isDirect ? session.userId : session.guildId}`
        this.ctx.chatluna_character_trigger.setLastSession(session)
        const guildConfig = session.isDirect
            ? this._config.privateConfigs[session.userId]
            : this._config.configs[session.guildId]
        const globalConfig = session.isDirect
            ? this._config.globalPrivateConfig
            : this._config.globalGroupConfig
        const config = Object.assign(
            {},
            this._config,
            globalConfig,
            guildConfig
        )

        const elements = session.elements
            ? session.elements
            : [h.text(session.content)]

        attachMultimodalFileLimit(elements, config.multimodalFileInputMaxSize)

        const hasMultimodalFile = elements.some(
            (element) =>
                element.type === 'file' ||
                element.type === 'video' ||
                element.type === 'audio'
        )

        const preMessage =
            config.image || hasMultimodalFile
                ? await this.ctx.chatluna.messageTransformer.transform(
                      session,
                      elements,
                      config.model
                  )
                : undefined

        const images = config.image
            ? await getImages(this.ctx, config.model, session, preMessage)
            : undefined

        const content = mapElementToString(
            session,
            session.content,
            elements,
            images
        )

        if (content.length < 1) {
            return
        }

        const quote = session.quote
            ? {
                  content: await (async () => {
                      const quoted = (this._messages[groupId] ?? []).find(
                          (msg) =>
                              msg.messageId != null &&
                              String(msg.messageId) === String(session.quote.id)
                      )
                      if (quoted) {
                          return quoted.content
                      }

Script executed:

rg -n 'class PendingMessageQueue' src/plugins/chat.ts -A 100

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

79:class PendingMessageQueue extends MessageQueue {
80-    private _messages: {
81-        message: Message
82-        triggerReason?: string
83-    }[] = []
84-
85-    constructor(
86-        private _enableMessageId: boolean,
87-        private _onDrain?: (messages: Message[]) => void
88-    ) {
89-        super()
90-    }
91-
92-    pushRaw(message: Message, triggerReason?: string) {
93-        this._messages.push({ message, triggerReason })
94-        return true
95-    }
96-
97-    drain() {
98-        const result = super.drain()
99-
100-        if (this._messages.length < 1) {
101-            return result
102-        }
103-
104-        const entries = this._messages.splice(0)
105-        const messages = entries.map((entry) => entry.message)
106-        this._onDrain?.(messages)
107-
108-        result.push(
109-            new HumanMessage(
110-                'New messages arrived while using tools. Treat them as the latest updates in this turn.\n\n' +
111-                    messages
112-                        .map((message) =>
113-                            formatMessageString(message, this._enableMessageId)
114-                        )
115-                        .join('\n\n')
116-            )
117-        )
118-
119-        return result
120-    }
121-
122-    get pending() {
123-        return super.pending || this._messages.length > 0
124-    }
125-
126-    takeLatestTrigger() {
127-        for (let i = this._messages.length - 1; i >= 0; i--) {
128-            const entry = this._messages[i]
129-            if (!entry.triggerReason) {
130-                continue
131-            }
132-
133-            this._messages = []
134-            return entry
135-        }
136-    }
137-}
138-
139-function extractNextReplyReasonsFromTool(value: unknown) {
140-    if (typeof value === 'string' && value.trim()) {
141-        return [value.trim()]
142-    }
143-
144-    if (!Array.isArray(value)) {
145-        return []
146-    }
147-
148-    const reasons: string[] = []
149-
150-    for (const item of value) {
151-        if (!item || typeof item !== 'object' || Array.isArray(item)) {
152-            continue
153-        }
154-
155-        const group = item as NextReplyToolGroup
156-        if (!Array.isArray(group.conditions)) {
157-            continue
158-        }
159-
160-        const tokens = group.conditions
161-            .map((it) => {
162-                if (!it || typeof it !== 'object' || Array.isArray(it)) {
163-                    return undefined
164-                }
165-
166-                const condition = it as NextReplyToolCondition
167-                if (condition.type === 'message_from_user') {
168-                    if (
169-                        typeof condition.user_id === 'string' &&
170-                        condition.user_id.trim()
171-                    ) {
172-                        return `id_${condition.user_id.trim()}`
173-                    }
174-
175-                    return undefined
176-                }
177-
178-                if (condition.type === 'no_message_from_user') {
179-                    if (

Script executed:

rg -n '\bwillConsume\b' src/service/message.ts src/plugins/chat.ts

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

src/plugins/chat.ts:1922:                            const willConsume =
src/plugins/chat.ts:1928:                                willConsume
src/service/message.ts:67:            willConsume: boolean
src/service/message.ts:141:            willConsume: false,
src/service/message.ts:146:    setPendingMessagesWillConsume(session: Session, willConsume: boolean) {
src/service/message.ts:153:        state.willConsume = willConsume

Script executed:

sed -n '2000,2080p' src/plugins/chat.ts

Repository: ChatLunaLab/chatluna-character
Exit code: 0

stdout:

                    await ctx.chatluna_character.broadcastOnBot(
                        session,
                        sendResult.sentMessages
                    )

                    if (sendResult.breakSay) {
                        break
                    }
                }
            } finally {
                service.stopPendingMessages(session)
            }

            if (!sentAny) {
                if (hasEmptyReplies && !hasNonEmptyReplies) {
                    await registerResponseTriggers(
                        ctx,
                        session,
                        key,
                        copyOfConfig,
                        nextReplyReasons,
                        wakeUpReplies
                    )
                }
                return
            }

            const persistedMessages = service.getMessages(key) ?? latestMessages
            if (persistedMessages.length > count) {
                temp.status = latestStatus
                await service.persistStatus(
                    session,
                    latestStatus,
                    persistedMessages[persistedMessages.length - 1]
                )
            }

            temp.completionMessages.push(persistedHumanMessage)
            if (lastResponseMessage) {
                temp.completionMessages.push(lastResponseMessage)
            }

            trimCompletionMessages(
                temp.completionMessages,
                copyOfConfig.modelCompletionCount
            )

            await registerResponseTriggers(
                ctx,
                session,
                key,
                copyOfConfig,
                nextReplyReasons,
                wakeUpReplies
            )

            service.muteAtLeast(session, copyOfConfig.coolDownTime * 1000)
        } catch (e) {
            logger.error(e)
        } finally {
            await service.releaseResponseLock(session)

            const pending = queue?.takeLatestTrigger()
            if (pending) {
                await service.triggerCollect(
                    session,
                    pending.triggerReason!,
                    pending.message
                )
            }
        }
    })
}

willConsume flag is set but never used; buffered messages are lost when willConsume remains false.

Lines 146–154 write willConsume, but the flag is never read anywhere in the codebase. In the finally block (chat.ts lines 2062–2069), queue.takeLatestTrigger() unconditionally picks only the last message with a triggerReason and discards all others, regardless of willConsume. When willConsume stays false, messages buffered via active.append() (broadcast method, line 697) that lack a triggerReason (or all buffered messages, if none has one) are permanently lost and never re-injected into the normal trigger/cooldown path.

Either use willConsume to control whether pending messages bypass the normal flow, or replay all buffered messages unconditionally when the pending message handler completes.

Also applies to: 697–701, 2062–2069

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/service/message.ts` around lines 146 - 154, The willConsume flag set by
setPendingMessagesWillConsume is never read, causing buffered messages appended
via active.append to be discarded by queue.takeLatestTrigger; update the
pending-message flow so that either (A) the flag is respected: read
state.willConsume in the pending handler/queue.takeLatestTrigger path and, when
true, let buffered messages bypass cooldown/trigger filtering and be processed
normally, or (B) always replay all buffered messages when the pending handler
finishes: when clearing state in the pending-message completion code (the logic
around queue.takeLatestTrigger and the finally block in the chat handler),
re-insert or dispatch every message in state.buffer (not just the last
triggerReason message) back into the normal trigger/cooldown path; locate
setPendingMessagesWillConsume, active.append, and queue.takeLatestTrigger to
implement the chosen fix.
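The second remediation option can be illustrated with a minimal sketch: replay everything buffered on completion, not just the last trigger entry. `PendingBuffer`, `Msg`, and `drainAll` are illustrative names invented here, not the plugin's actual API; the real change would live around `stopPendingMessages` and `takeLatestTrigger`.

```typescript
// Hypothetical model of the pending-message buffer; names are illustrative.
type Msg = { content: string; triggerReason?: string }

class PendingBuffer {
    private buffer: Msg[] = []
    willConsume = false

    append(msg: Msg) {
        this.buffer.push(msg)
    }

    // Option (B): on completion, hand back *every* buffered message so none
    // are dropped — unlike takeLatestTrigger, which keeps only the last
    // entry that carries a triggerReason and discards the rest.
    drainAll(): Msg[] {
        return this.buffer.splice(0)
    }
}

const buf = new PendingBuffer()
buf.append({ content: 'first' }) // no triggerReason — currently lost
buf.append({ content: 'second', triggerReason: 'mention' })
console.log(buf.drainAll().length) // 2: both survive for re-injection
```

Under option (A), `drainAll` would instead consult `willConsume` and skip re-injection when the streaming round already consumed the buffered messages.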

Comment on lines +121 to +183
video: {
    split: (elements, idx, start) => ({
        type: 'video',
        start:
            idx > start && elements[idx - 1]?.type === 'quote'
                ? idx - 1
                : idx,
        end: idx + 1
    }),
    send: async (session, part) => {
        const el = part.elements[part.elements.length - 1]
        const file = String(el.attrs['chatluna_file_url'] ?? '')
        if (file.length < 1) {
            logger.warn('video send skipped: missing file')
            return []
        }

        if (session.platform !== 'onebot') {
            el.attrs['src'] = file
            const result = await session.send(part.elements)
            return Array.isArray(result)
                ? result.map((id) => String(id))
                : [String(result)]
        }

        const bot = session.bot as OneBotBot<Context>
        const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
        const data = (await bot.internal._request(
            action,
            session.isDirect
                ? {
                      user_id: Number(session.userId),
                      message: [
                          {
                              type: 'video',
                              data: { file }
                          }
                      ]
                  }
                : {
                      group_id: Number(session.guildId),
                      message: [
                          {
                              type: 'video',
                              data: { file }
                          }
                      ]
                  }
        )) as OneBotSendMessageResponse
        if (data.status !== 'ok') {
            const msg = data.wording || data.message || 'unknown error'
            throw new Error(`${action} failed: ${msg}`)
        }

        const messageId = String(
            data.data?.message_id ?? data.message_id ?? ''
        ).trim()
        if (messageId.length < 1) {
            throw new Error(`${action} did not return message_id`)
        }

        return [messageId]
    }

⚠️ Potential issue | 🟡 Minor

Preserve quotes on OneBot video sends.

split() pulls a leading quote into the video part, but the OneBot branch only serializes the video segment. Quoted video replies will lose their reply reference on OneBot.

🛠️ Suggested fix
         send: async (session, part) => {
             const el = part.elements[part.elements.length - 1]
             const file = String(el.attrs['chatluna_file_url'] ?? '')
@@
 
             const bot = session.bot as OneBotBot<Context>
             const action =
                 session.isDirect ? 'send_private_msg' : 'send_group_msg'
+            const message: OneBotMessageSegment[] = []
+            if (part.elements[0]?.type === 'quote') {
+                message.push({
+                    type: 'reply',
+                    data: {
+                        id: String(part.elements[0].attrs.id ?? '')
+                    }
+                })
+            }
+            message.push({
+                type: 'video',
+                data: { file }
+            })
             const data = (await bot.internal._request(
                 action,
                 session.isDirect
                     ? {
                           user_id: Number(session.userId),
-                          message: [
-                              {
-                                  type: 'video',
-                                  data: { file }
-                              }
-                          ]
+                          message
                       }
                     : {
                           group_id: Number(session.guildId),
-                          message: [
-                              {
-                                  type: 'video',
-                                  data: { file }
-                              }
-                          ]
+                          message
                       }
             )) as OneBotSendMessageResponse
🧰 Tools
🪛 ESLint

[error] 147-147: Replace ·?·'send_private_msg' with ⏎················?·'send_private_msg'⏎···············

(prettier/prettier)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/send.ts` around lines 121 - 183, The OneBot branch in send(...)
discards the leading quote pulled into the video part by split(...), causing
loss of reply context; update the OneBot send logic in send to include the
part.elements (including any leading quote element) when building the message
payload instead of only serializing a single video object—i.e., detect and
preserve any part.elements[0] of type 'quote' and serialize the full sequence
(quote + video) into the message array passed to bot.internal._request for
actions 'send_private_msg'/'send_group_msg' so OneBot retains the reply
reference.

Comment on lines +227 to +327
if (
    session.platform === 'onebot' &&
    part.type === 'default' &&
    part.elements.some(isOneBotImageElement)
) {
    const message: OneBotMessageSegment[] = []

    for (const el of part.elements) {
        if (el.type === 'quote') {
            message.push({
                type: 'reply',
                data: {
                    id: String(el.attrs.id ?? '')
                }
            })
            continue
        }

        if (el.type === 'text') {
            message.push({
                type: 'text',
                data: {
                    text: String(el.attrs.content ?? '')
                }
            })
            continue
        }

        if (el.type === 'at') {
            message.push({
                type: 'at',
                data: {
                    qq: String(el.attrs.id ?? '')
                }
            })
            continue
        }

        if (el.type === 'face') {
            message.push({
                type: 'face',
                data: {
                    id: String(el.attrs.id ?? '')
                }
            })
            continue
        }

        if (el.type === 'img') {
            const file = String(
                el.attrs.src ??
                    el.attrs.url ??
                    el.attrs.imageUrl ??
                    ''
            )
            if (file.length < 1) {
                continue
            }

            message.push({
                type: 'image',
                data: {
                    file,
                    url: file,
                    sub_type: el.attrs.sticker ? 1 : 0
                }
            })
        }
    }

    if (message.length < 1) {
        continue
    }

    const bot = session.bot as OneBotBot<Context>
    const action = session.isDirect ? 'send_private_msg' : 'send_group_msg'
    const data = (await bot.internal._request(
        action,
        session.isDirect
            ? {
                  user_id: Number(session.userId),
                  message
              }
            : {
                  group_id: Number(session.guildId),
                  message
              }
    )) as OneBotSendMessageResponse
    if (data.status !== 'ok') {
        const msg = data.wording || data.message || 'unknown error'
        throw new Error(`${action} failed: ${msg}`)
    }

    const messageId = String(
        data.data?.message_id ?? data.message_id ?? ''
    ).trim()
    if (messageId.length > 0) {
        ids.push(messageId)
    }
    continue
}

⚠️ Potential issue | 🟠 Major

Guard the OneBot image fast-path against mixed fragments.

This branch only serializes quote, text, at, face, and img. Any other element that lands in the same fragment as an image is silently dropped, because the continue at Line 326 skips the normal session.send() fallback. That's reachable for mixed-media replies.

🧰 Tools
🪛 ESLint

[error] 277-280: Replace ⏎····························el.attrs.url·??⏎····························el.attrs.imageUrl·??⏎··························· with ·el.attrs.url·??·el.attrs.imageUrl·??

(prettier/prettier)


[error] 302-302: Replace ·?·'send_private_msg' with ⏎················?·'send_private_msg'⏎···············

(prettier/prettier)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/send.ts` around lines 227 - 327, This fast-path assumes every
element in part.elements is one of the handled types and silently drops unknown
types; before serializing, add a guard that checks part.elements only contains
the allowed types ('quote','text','at','face','img') (or passes
isOneBotImageElement for image detection) and if any element has an unhandled
type, skip this OneBot fast-path so normal session.send() fallback handles the
mixed fragment; update the check near the branch that uses session.platform,
part.type and part.elements and bail out early (do not build message or call
bot.internal._request) when unknown elements are present.
