fail steps with oversized return values instead of getting stuck #236
ianmacartney merged 10 commits into main from
Conversation
This stack of pull requests is managed by Graphite.
📝 Walkthrough

Adds an 800KB return-value size check that converts oversized successful step results into failed RunResults with descriptive errors; integrates this check into event and step result assignment paths and includes example workflows plus tests that assert workflows fail when return values are too large.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Workflow as Workflow Step
    participant EventComp as Event Component
    participant Oversize as oversizedValues.checkForOversizedResult
    participant DB as Database (steps/flows)
    participant Pool as onComplete Handler
    Workflow->>DB: persist step completion (runResult)
    EventComp->>Oversize: checkForOversizedResult(ctx, result, {stepId})
    Oversize-->>EventComp: RunResult (success or converted failed)
    EventComp->>DB: write journalEntry.step.runResult
    Pool->>DB: load journalEntry
    Pool->>Oversize: checkForOversizedResult(ctx, args.result, {stepId})
    Oversize-->>Pool: RunResult (possibly failed)
    Pool->>DB: replace step with updated runResult / enqueue onComplete
```
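The core conversion described in the walkthrough can be sketched in plain TypeScript. This is a simplified stand-in, not the PR's actual code: `RunResult` is reduced to three variants, and the size is measured as UTF-8 bytes of the JSON encoding rather than Convex's internal document-size accounting.

```typescript
// Simplified RunResult shape; the real type lives in the workflow component.
type RunResult =
  | { kind: "success"; returnValue: unknown }
  | { kind: "failed"; error: string }
  | { kind: "canceled" };

const MAX_RETURN_VALUE_SIZE = 800 << 10; // 800 KiB, per the walkthrough

// Convert an un-persistable success into a descriptive failure so the
// workflow fails cleanly instead of getting stuck with the step in progress.
function checkForOversizedResult(result: RunResult): RunResult {
  if (result.kind !== "success") return result;
  const size = new TextEncoder().encode(
    JSON.stringify(result.returnValue),
  ).length;
  if (size <= MAX_RETURN_VALUE_SIZE) return result;
  return {
    kind: "failed",
    error: `Step return value too large (${size} bytes > ${MAX_RETURN_VALUE_SIZE})`,
  };
}
```

Failed and canceled results pass through untouched; only an oversized success is rewritten.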
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Force-pushed 144c287 to 5dc3ce0, cd6d849 to 874320f, and 5e7187e to c7a6a3f.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/component/event.ts (1)
141-143: ⚠️ Potential issue | 🟠 Major

Handle oversized payloads in the `"created" -> "sent"` path too.

The new storage flow only runs once an event is already waiting. If `send()` lands in the `"created"` branch, line 142 still writes `args.result` straight into `events.state.result`, so a >1 MiB success payload can still fail on the event document before any waiter exists.

Also applies to: 154-158
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/event.ts` around lines 141 - 143, When send() transitions an event from "created" to "sent" it currently writes args.result directly into events.state.result via ctx.db.patch(event._id, ...), which will still fail for >1 MiB payloads; instead use the same large-payload storage flow used elsewhere: if args.result exceeds the inline size threshold, store it in external storage (or create a storage pointer) and write that pointer/metadata into events.state.result rather than the raw payload. Update the ctx.db.patch call in the "created" -> "sent" path (and mirror the same change for the other branch referenced around lines 154-158) so both use the shared store-large-result helper/path and write a pointer when needed instead of args.result.
🧹 Nitpick comments (1)
src/component/schema.ts (1)
126-132: Make the lookup key match the table shape.

This schema allows both `"returnValue"` and `"args"` rows for the same step, but the only index is `stepId`. The current read/delete helpers become ambiguous as soon as both kinds exist for one step. Either key this table by `(stepId, kind)` now or constrain it to one row per step.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/schema.ts` around lines 126 - 132, The oversizedValues table currently allows two kinds ("returnValue" and "args") but only indexes by stepId, causing ambiguous lookups; update the table definition in defineTable("oversizedValues") so the index includes both stepId and kind (change .index("stepId", ["stepId"]) to .index("stepId_kind", ["stepId","kind"])), or alternatively restrict the kind field to a single literal if you intend one row per step (replace v.union(...) with a single v.literal(...)); update any read/delete helpers to use the new composite key (stepId + kind) if you choose the composite index.
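The composite-index fix the comment suggests can be sketched as a Convex table definition. This is a hypothetical shape, not the PR's actual schema: the `storageId` field is invented for illustration, and the `.index("stepId_kind", ...)` line is the point of the change. It assumes the `convex` package is installed; it is a config fragment, not standalone runnable code.

```typescript
import { defineTable } from "convex/server";
import { v } from "convex/values";

// Hypothetical oversizedValues table keyed by (stepId, kind) so read/delete
// helpers stay unambiguous when both a returnValue and an args row exist
// for the same step.
const oversizedValues = defineTable({
  stepId: v.id("steps"),
  kind: v.union(v.literal("returnValue"), v.literal("args")),
  storageId: v.id("_storage"), // invented field: pointer to the stored blob
}).index("stepId_kind", ["stepId", "kind"]);
```

A lookup then narrows on both fields, e.g. `.withIndex("stepId_kind", (q) => q.eq("stepId", stepId).eq("kind", "returnValue"))`.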
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/component/oversizedValues.ts`:
- Around line 63-64: The current serialization uses JSON.stringify(args.value)
which loses Convex non-JSON types; change serialization to
JSON.stringify(convexToJson(args.value)) before creating the Blob and storing
via ctx.storage.store(blob), and when reading/deserializing the stored payload
(where you currently use JSON.parse(...) and convert back), replace that with
jsonToConvex(JSON.parse(...)) so Int64/bigint and bytes are preserved; update
the code paths around the Blob creation (variable blob) and the corresponding
storage retrieval/deserialization logic to use convexToJson/jsonToConvex
respectively.
In `@src/component/pool.ts`:
- Around line 151-155: The event emission still uses the original
args.result.kind instead of the possibly-transformed result returned by
storeOversizedResult, so change any uses that emit the stepCompleted payload to
reference journalEntry.step.runResult (or its .kind) after calling
storeOversizedResult; specifically update the stepCompleted emission(s)
surrounding journalEntry.step.runResult = await storeOversizedResult(ctx,
args.result, { stepId }) and the later emission block (lines ~159-166) to derive
state/kind from journalEntry.step.runResult rather than args.result to ensure
oversized results that became "failed" are reported correctly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 119d2889-273a-4228-b2de-f70b8fbabb3a
⛔ Files ignored due to path filters (1)
- src/component/_generated/api.ts is excluded by !**/_generated/**
📒 Files selected for processing (4)
- src/component/event.ts
- src/component/oversizedValues.ts
- src/component/pool.ts
- src/component/schema.ts
Force-pushed c7a6a3f to f8e09c8.
```typescript
if (!sizeError) {
  return result;
}
await ctx.scheduler.runAfter(0, internal.oversizedValues.store, {
```
@goffrie how bad is it to be (ab)using scheduled functions as an escape hatch here?
i really do not like this, it only works by accident.
if you need this right now then I guess it's ok as a stopgap, but we should look into making ctx.storage.store work in mutations instead
I can have it just throw for now and mark it as a failure and put it on them to recover the data.
This fallback doesn't transparently work - it's marked as failure, then they need to call a function from an action to get the original result.
I didn't expect writing to storage from a mutation would be on the table! That would be preferable. I guess in Convex Kernel land, args are cheaper in a fork, since it's all in-memory, so maybe this pattern would work there.
It makes me think of if mutations could append to file storage as a way of modeling [audit] logging / metrics
ok now it just throws and I made a stacked diff with the saving in case a customer asks
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/component/pool.ts (1)
151-170: ⚠️ Potential issue | 🟡 Minor

Use the transformed result when reporting completion status.

`checkForOversizedResult()` can turn an oversized `"success"` into a persisted `"failed"`, but this block still logs `args.result.kind`. That makes `stepCompleted` and the follow-up error log report success even though the stored step state is failed.

🩹 Proposed fix

```diff
 console.event("stepCompleted", {
   workflowId,
   workflowName: workflow.name,
-  status: args.result.kind,
+  status: journalEntry.step.runResult.kind,
   stepName: journalEntry.step.name,
   stepNumber: journalEntry.stepNumber,
   durationMs: journalEntry.step.completedAt - journalEntry.step.startedAt,
 });
 @@
 console.error(
-  `Workflow: ${workflowId} already ${workflow.runResult.kind} when completing ${stepId} with status ${args.result.kind}`,
+  `Workflow: ${workflowId} already ${workflow.runResult.kind} when completing ${stepId} with status ${journalEntry.step.runResult.kind}`,
 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/pool.ts` around lines 151 - 170, The code reports completion using the original args.result instead of the possibly-transformed persisted result; change the reporting to read the persisted run result (journalEntry.step.runResult) after checkForOversizedResult returns. Update the console.event("stepCompleted", {... status: ...}) to use journalEntry.step.runResult.kind, and use that same persisted kind when constructing the subsequent error log comparison and message (replace usages of args.result.kind with journalEntry.step.runResult.kind and ensure the duration and other metadata still come from journalEntry).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/component/oversizedValues.ts`:
- Around line 21-25: checkReturnValueSize currently measures only the raw
returnValue, ignoring the wrapper/metadata that will be persisted and can push
the stored document over MAX_RETURN_VALUE_SIZE; update checkReturnValueSize to
measure the actual persisted shape (e.g., call getConvexSize on the wrapper like
{ kind: "success", returnValue } or the full step document shape you persist)
and compare that size to MAX_RETURN_VALUE_SIZE (alternatively reserve a fixed
headroom if you prefer); reference the existing getConvexSize,
MAX_RETURN_VALUE_SIZE and the checkReturnValueSize function to implement this
change so the persisted payload size is validated rather than just the bare
returnValue.
- Around line 43-47: The scheduled call to
ctx.scheduler.runAfter(internal.oversizedValues.store) defers creation of the
oversizedValues record, allowing the step to be marked failed and persisted
before the blob record exists; this can lead to oversizedValues.read() throwing
and deleteSteps() removing the step while leaving orphaned blobs. Fix by
persisting the oversized value synchronously (call the
internal.oversizedValues.save/store logic and await its completion) before
returning/marking the step failed or, alternatively, set a pending/pendingStore
flag on the oversizedValues record and persist a compensated state that
deleteSteps() respects; update usage sites around ctx.scheduler.runAfter,
internal.oversizedValues.store, oversizedValues.read, deleteSteps, and the save
mutation to ensure the oversized record exists (or is marked pending) before the
failure is persisted.
Run ID: 8987f9b7-20d1-419e-8fd5-9e1abbd17026
⛔ Files ignored due to path filters (1)
- src/component/_generated/api.ts is excluded by !**/_generated/**
📒 Files selected for processing (5)
- src/component/event.ts
- src/component/oversizedValues.ts
- src/component/pool.ts
- src/component/schema.ts
- src/component/workflow.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/component/event.ts
- src/component/schema.ts
Force-pushed b85afe6 to 58172fc, 970b03f to 3f9148d, 58172fc to 777bef8, and 777bef8 to db88352.
@cursor bugbot

You need to increase your spend limit or enable usage-based billing to run background agents. Go to Cursor
Force-pushed db88352 to 25a835e.
When a step returns a value >1MB, the db.replace call fails and the workflow gets stuck with the step permanently in progress. Instead, check the size and mark the step as failed with a truncated preview of the data (128KB prefix + suffix). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
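The truncation strategy from the commit message can be sketched as a plain string helper. `PREVIEW_SIZE` matches the 128KB figure above; the omission-marker text is an assumption, not the PR's exact wording.

```typescript
const PREVIEW_SIZE = 128 << 10; // 128 KiB of prefix plus 128 KiB of suffix

// Keep the head and tail of the serialized value so the failure message
// stays well under the 1 MiB document limit while remaining inspectable.
function truncatedPreview(json: string): string {
  if (json.length <= 2 * PREVIEW_SIZE) return json;
  const omitted = json.length - 2 * PREVIEW_SIZE;
  return (
    json.slice(0, PREVIEW_SIZE) +
    `... [${omitted} characters omitted] ...` +
    json.slice(-PREVIEW_SIZE)
  );
}
```

Short values pass through unchanged, so the helper is safe to call unconditionally when building the error message.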
Force-pushed 25a835e to 52b97d5.
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/component/oversizedValues.ts (1)
6-7: Inconsistent units in comments (`KiB` vs `KB`).

Both constants use `<< 10` (binary kibibytes), but `PREVIEW_SIZE` is documented as "128 KB". Align the comment with `MAX_RETURN_VALUE_SIZE` for consistency.

📝 Proposed tweak

```diff
-const PREVIEW_SIZE = 128 << 10; // 128 KB
+const PREVIEW_SIZE = 128 << 10; // 128 KiB
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/component/oversizedValues.ts` around lines 6 - 7, Comment inconsistency: PREVIEW_SIZE uses bit-shift (<< 10) like MAX_RETURN_VALUE_SIZE but its comment says "128 KB"; update the PREVIEW_SIZE comment to use "KiB" (e.g., "128 KiB") to match the binary unit used by MAX_RETURN_VALUE_SIZE and keep units consistent for MAX_RETURN_VALUE_SIZE and PREVIEW_SIZE.

example/convex/oversized.test.ts (2)
87-89: Calling `finishAllScheduledFunctions` twice — intentional or papering over ordering?

Two back-to-back drains of the scheduled-function queue suggest the second pass exists to flush callbacks scheduled by the first. If that's load-bearing for the test, a brief comment explaining what each pass advances (event delivery → workflow completion → onComplete) would help future readers. Otherwise, consider a single drain in a loop until quiescent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/oversized.test.ts` around lines 87 - 89, The test currently calls t.finishAllScheduledFunctions(vi.runAllTimers) twice after invoking internal.oversized.sendBigEvent — clarify intent by either adding a brief inline comment explaining what each pass flushes (e.g., first pass delivers events, second lets workflow onComplete run) or replace the two calls with a single loop that repeatedly calls t.finishAllScheduledFunctions(vi.runAllTimers) until no scheduled functions remain (quiescent), so the test no longer relies on an implicit ordering; update references to t.finishAllScheduledFunctions and vi.runAllTimers accordingly.
44-45: Brittle assertion on the exact byte count `900002`.

The literal `"900002 bytes"` couples the test to the current `getConvexSize` overhead for a 900 000-char ASCII string. Any internal change to size accounting (or a switch to a multi-byte payload) silently breaks this without indicating a real regression. Consider asserting on the structural parts only — e.g. matching `/Step return value too large \(\d+ bytes\)/` — and leaving the exact size out.

📝 Suggested change

```diff
-  expect(status.error).toContain("Step return value too large");
-  expect(status.error).toContain("900002 bytes");
+  expect(status.error).toMatch(/Step return value too large \(\d+ bytes\)/);
```

Also applies to: 96-97

🤖 Prompt for AI Agents
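For illustration, the relaxed pattern the reviewer suggests, checked against sample messages. The parenthesized `(N bytes)` format is an assumption borrowed from the review's regex, not a string confirmed by the PR.

```typescript
// Matches any byte count, so size-accounting changes don't flake the test.
const oversizedError = /Step return value too large \(\d+ bytes\)/;

function matchesOversized(message: string): boolean {
  return oversizedError.test(message);
}
```

In the test, `expect(status.error).toMatch(oversizedError)` would replace the two `toContain` assertions.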
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/oversized.test.ts` around lines 44 - 45, The test in example/convex/oversized.test.ts is brittle because it asserts the exact byte count ("900002 bytes") in status.error; change the assertions (around the expect calls that reference "Step return value too large" and "900002 bytes", including the other occurrences at lines ~96-97) to assert the structural message only — e.g. use a regex match like /Step return value too large \(\d+ bytes\)/ or separate assertions that the message contains "Step return value too large" and a digit sequence with "bytes" — so the test verifies format without depending on an exact byte number.

example/convex/oversized.ts (1)
77-119: Duplication between `startLargeReturn` and `startEventWorkflow`.

The two starter mutations differ only in the workflow reference and the `in` label. Fine for an example, but a small helper would make intent clearer if you expect to add more scenarios.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/oversized.ts` around lines 77 - 119, Extract the duplicated logic in startLargeReturn and startEventWorkflow into a small helper (e.g., startWorkflow) that accepts the workflow reference and the "in" label, calls workflow.start with the existing options (onComplete: internal.oversized.onComplete, startAsync: true), inserts into ctx.db.insert("flows") with the given label and workflowId, and returns the workflowId; then make startLargeReturn and startEventWorkflow simple thin wrappers that call this helper with internal.oversized.largeReturnWorkflow/"largeReturn" and internal.oversized.eventWorkflow/"eventWorkflow" respectively, preserving args: {} and returns: vWorkflowId and using internalMutation as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/component/oversizedValues.ts`:
- Around line 9-25: The embedded JSON preview in checkReturnValueSize (via
truncatedPreview) can push the persisted error object over the DB 1MiB limit;
change this by either (A) removing the embedded preview from the returned error
string in checkReturnValueSize and only include numeric info (size and
MAX_RETURN_VALUE_SIZE), or (B) if you must keep a preview, replace PREVIEW_SIZE
usage with a small byte cap (e.g., a few KB) and make truncatedPreview compute
byte length (use TextEncoder or Buffer.byteLength) when slicing so you never
embed more than that small byte cap; update functions truncatedPreview and
checkReturnValueSize and the PREVIEW_SIZE/PREVIEW_BYTES symbol usage
accordingly.
Run ID: eaada59b-ef11-4760-a207-0526b33a25ba
⛔ Files ignored due to path filters (2)
- example/convex/_generated/api.d.ts is excluded by !**/_generated/**
- src/component/_generated/api.ts is excluded by !**/_generated/**
📒 Files selected for processing (5)
- example/convex/oversized.test.ts
- example/convex/oversized.ts
- src/component/event.ts
- src/component/oversizedValues.ts
- src/component/pool.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/component/event.ts
- src/component/pool.ts
Actionable comments posted: 1
🧹 Nitpick comments (3)
example/convex/oversized.ts (1)
58-74: Example `onComplete` skips generation/context checks — acceptable for this test fixture, call out in code.

Compared to the standard `onCompleteHandler` in `src/component/pool.ts`, this example handler does no `generationNumber`/`onCompleteContext` validation and will overwrite `flows.out` unconditionally. Fine for the test scenarios in `oversized.test.ts`, but a brief comment noting that this is an intentionally minimal example would help readers who copy from it. No functional change needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/oversized.ts` around lines 58 - 74, The example onComplete handler skips generationNumber and onCompleteContext validation and overwrites flows.out unconditionally; add a brief inline comment above the onComplete export (referencing the onComplete function/handler) noting this is an intentionally minimal test fixture that omits generation/context checks and should not be used as-is in production, so readers copying it understand the behavior and limitations.

example/convex/oversized.test.ts (2)
96-97: Brittle byte-count assertion.

`toContain("900002 bytes")` is tightly coupled to the exact serialized size (900000 bytes of `"y"` + 2 bytes of JSON quoting). Any future change to how the event payload is measured (e.g., counting the envelope, including other fields, using bytes of UTF-8 vs. JSON) will silently flake this single test. Consider relaxing to a regex like `/\d{6,}\s*bytes/` or asserting `> 900000`. Optional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/oversized.test.ts` around lines 96 - 97, The assertion is brittle because it expects the exact byte count string "900002 bytes"; update the test around the `status2.error` checks to relax the byte-count assertion by matching a regex like /\d{6,}\s*bytes/ or by parsing the number from the message and asserting it is > 900000 (keep the existing `toContain("Step return value too large")` check). Locate the assertions against `status2.error` and replace the exact string check with one of the suggested relaxed checks so the test won't fail if serialization/envelope/encoding details change.
18-57: Consider asserting `onComplete` was actually invoked, not just that `flows.out` is set.

The test titles say "...and calls onComplete", and verifying `flows.out.kind === "failed"` is a reasonable proxy since only the example's `onComplete` writes there. Good coverage. One small enhancement: if the oversized-size-check path is ever re-wired to patch `flows` from some other code path, these tests would still pass. A spy/counter on the `onComplete` mutation (or a distinctive sentinel in the `context`) would make the "onComplete was called" guarantee explicit. Optional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example/convex/oversized.test.ts` around lines 18 - 57, The test currently infers onComplete ran by inspecting flows.out but should assert onComplete was invoked explicitly; update the test that calls workflow.start (the invocation using internal.oversized.largeReturnWorkflow with onComplete: internal.oversized.onComplete) to include a distinctive sentinel in the start context or attach a spy/counter to internal.oversized.onComplete, then after await t.finishAllScheduledFunctions(vi.runAllTimers) assert that the sentinel was written (e.g., a unique field in the flows row) or that the spy/counter shows a call; reference internal.oversized.onComplete, workflow.start and the flows row for where to write/read the sentinel or check the spy.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@example/convex/oversized.test.ts`:
- Around line 53-56: Replace the paired expect() null-checks with assert() calls
for consistent TypeScript narrowing: instead of expect(flow).not.toBeNull();
expect(flow!.out).not.toBeNull(); use assert(flow) and assert(flow.out) (or the
existing assert pattern) so subsequent accesses like flow.out.kind and
flow.out.error are safely narrowed; apply the same change to the second test
where flow and flow.out are checked to mirror the pattern used elsewhere in this
file.
Run ID: ecfbfc2a-f36f-494c-89be-f5473b270e0f
📒 Files selected for processing (2)
- example/convex/oversized.test.ts
- example/convex/oversized.ts
```typescript
journalEntry.step.runResult = await checkForOversizedResult(
  ctx,
  args.result,
  { stepId },
);
```
nit for safety: if you store the run result in a separate variable, you don't need to do `journalEntry.step.runResult!` below
```diff
-  journalEntry.step.runResult = await checkForOversizedResult(
-    ctx,
-    args.result,
-    { stepId },
-  );
+  const runResult = await checkForOversizedResult(
+    ctx,
+    args.result,
+    { stepId },
+  );
+  journalEntry.step.runResult = runResult;
```
```typescript
const PREVIEW_SIZE = 128 << 10; // 128 KB

function truncatedPreview(returnValue: unknown): string {
  const json = JSON.stringify(convexToJson(returnValue as Value));
```
I think that this could fail if the return value contains a BigInt. Maybe consider wrapping the JSON.stringify call in a try/catch?
convexToJson will turn bigint (any Convex Value) into a json-serializable value
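To illustrate why that pre-conversion matters: a bare `bigint` makes `JSON.stringify` throw, so mapping values to a JSON-safe shape first (as `convexToJson` does for Convex values) avoids needing a try/catch. The `toJsonish` helper below is a simplified stand-in, not Convex's actual encoding; the `$integer`-as-decimal-string representation is invented for illustration (Convex encodes int64 differently).

```typescript
// Recursively replace bigints with a JSON-serializable tagged object.
function toJsonish(value: unknown): unknown {
  if (typeof value === "bigint") {
    return { $integer: value.toString() }; // stand-in tag, not Convex's format
  }
  if (Array.isArray(value)) return value.map(toJsonish);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, toJsonish(v)]),
    );
  }
  return value;
}

// JSON.stringify throws a TypeError when it meets a bigint anywhere in the value.
function stringifyThrows(value: unknown): boolean {
  try {
    JSON.stringify(value);
    return false;
  } catch {
    return true;
  }
}
```

With the conversion applied first, the serialized preview round-trips without exceptions.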
```typescript
_opts: {
  stepId: Id<"steps">;
},
```
Nit: this parameter can be removed since it’s unused
yeah I think there's a stacked branch that uses it and ctx. this version doesn't even need to be async, but I wanted to ensure the callsites had enough info for future use.

When a step returns a value >1MiB, the db.replace call fails and the
workflow gets stuck with the step permanently in progress. Instead,
check the size and mark the step as failed with a truncated preview
of the data (128KB prefix + suffix).
Fixes #227
Summary by CodeRabbit
New Features
Tests