feat: Massive performance overhaul for editor playback #1447
base: main
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Added LZ4 compression/decompression and created_at timestamps to WS frames, introduced pipelined GPU readbacks and double-buffered textures, expanded decoder pixel-format support and caching, implemented YUV→RGBA GPU converters, and added broad telemetry/instrumentation across rendering, playback, and socket paths.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Camera as Camera / Producer
    participant FrameWS as frame_ws (compress + send)
    participant WS as WebSocket
    participant Client as Browser (socket.ts)
    participant Renderer as Canvas Renderer
    Camera->>FrameWS: produce WSFrame {pixels, stride, w, h, created_at}
    FrameWS->>FrameWS: compress_frame_data(...) (prepend size + LZ4)
    FrameWS->>WS: send(ArrayBuffer packed payload)
    WS->>Client: deliver(ArrayBuffer)
    Client->>Client: decompressLz4(ArrayBuffer) -> header + pixel bytes
    Client->>Client: parse header (w,h,stride,created_at) and build ImageData
    Client->>Renderer: draw ImageData to canvas
    Note over FrameWS,Client: Both sides record timings (latency, compress/decompress, render) and emit periodic metrics
```
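For readers unfamiliar with the wire format, here is a minimal Rust sketch of the size-prepended LZ4 packing described above. It assumes `lz4_flex`'s `compress_prepend_size`/`decompress_size_prepended` helpers (the crate this PR adds) and a hypothetical 12-byte trailing metadata footer; the actual field layout in `frame_ws.rs` may differ.

```rust
use lz4_flex::{compress_prepend_size, decompress_size_prepended};

// Hypothetical packing: pixel bytes plus a trailing footer of
// width, height, stride (u32 LE each); lz4_flex prepends the
// uncompressed size as a u32 LE when compressing the block.
fn pack_frame(pixels: &[u8], width: u32, height: u32, stride: u32) -> Vec<u8> {
    let mut payload = Vec::with_capacity(pixels.len() + 12);
    payload.extend_from_slice(pixels);
    payload.extend_from_slice(&width.to_le_bytes());
    payload.extend_from_slice(&height.to_le_bytes());
    payload.extend_from_slice(&stride.to_le_bytes());
    compress_prepend_size(&payload)
}

fn unpack_frame(packet: &[u8]) -> Result<Vec<u8>, lz4_flex::block::DecompressError> {
    decompress_size_prepended(packet)
}
```

The browser-side `decompressLz4` in `socket.ts` mirrors this: it reads the leading u32 size, decompresses the block, and then peels the metadata off the tail.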
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/utils/socket.ts (1)
91-99: Potential buffer offset issue with DataView.

When `decompressed` is created from `lz4.decompressBlock`, its `buffer` property references the underlying `ArrayBuffer`, but `decompressed.byteOffset` may not be 0. The `DataView` at line 99 uses `metadataOffset` relative to the start of `buffer`, not relative to `decompressed.byteOffset`. This could cause incorrect metadata reads if the underlying buffer has a non-zero offset.

```diff
-  const buffer = decompressed.buffer;
   const clamped = new Uint8ClampedArray(decompressed);
   if (clamped.length < 12) {
     console.error("Received frame too small to contain metadata");
     return;
   }
   const metadataOffset = clamped.length - 12;
-  const meta = new DataView(buffer, metadataOffset, 12);
+  const meta = new DataView(
+    clamped.buffer,
+    clamped.byteOffset + metadataOffset,
+    12,
+  );
```
🧹 Nitpick comments (16)
crates/rendering/src/lib.rs (1)
1795-1797: No-op method calls.

These `copy_to_texture` calls are no-ops in both `DisplayLayer` and `CameraLayer`. If this is scaffolding for future functionality, consider adding a TODO tracking issue. If not needed, remove them.

crates/rendering/src/layers/display.rs (1)
97-97: Empty method implementation.

This no-op `copy_to_texture` appears to be API scaffolding. Consider documenting its intended purpose or removing it if unused.

crates/rendering/src/layers/camera.rs (1)
111-111: Empty method implementation.

Same observation as `DisplayLayer`: this is a no-op placeholder.

crates/rendering/src/frame_pipeline.rs (4)
108-123: Buffer resizing recreates all three buffers.

When size increases, all three buffers are recreated. This is correct but worth noting that it temporarily increases memory pressure. Consider adding a small growth factor to reduce reallocations for slight size changes.
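A rough sketch of the growth-factor idea, assuming a hypothetical `PipelinedReadback` shape with three buffers and a tracked capacity; the real struct in `frame_pipeline.rs` may differ:

```rust
struct PipelinedReadback {
    buffers: [wgpu::Buffer; 3],
    capacity: u64,
}

impl PipelinedReadback {
    fn ensure_size(&mut self, device: &wgpu::Device, required: u64) {
        if required <= self.capacity {
            return;
        }
        // Grow by ~1.5x so small size bumps do not trigger repeated reallocation.
        let new_capacity = required.max(self.capacity + self.capacity / 2);
        self.buffers = std::array::from_fn(|_| {
            device.create_buffer(&wgpu::BufferDescriptor {
                label: Some("pipelined readback"),
                size: new_capacity,
                usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
                mapped_at_creation: false,
            })
        });
        self.capacity = new_capacity;
    }
}
```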
264-289: Texture size update recreates both textures unconditionally.

Consider adding a size check to avoid unnecessary recreation when dimensions haven't changed:

```diff
 pub fn update_texture_size(&mut self, device: &wgpu::Device, width: u32, height: u32) {
+    if self.textures.0.width() == width && self.textures.0.height() == height {
+        return;
+    }
+
     let make_texture = || {
```
439-443: Consider defensive handling instead of expect().

While the logic ensures a pending readback exists after `submit_readback`, using `expect()` creates a panic risk if the code is refactored. Consider returning an error instead:

```diff
-        let pending = session
-            .pipelined_readback
-            .take_pending()
-            .expect("just submitted a readback");
+        let pending = session
+            .pipelined_readback
+            .take_pending()
+            .ok_or(RenderingError::BufferMapWaitingFailed)?;
```
388-389: Unused variable `has_previous`.

The `has_previous` variable is declared but only used in debug log output, not for control flow. Consider either removing it or using it in the tracing logs for consistency.

```diff
-        let previous_pending = session.pipelined_readback.take_pending();
-        let has_previous = previous_pending.is_some();
+        let previous_pending = session.pipelined_readback.take_pending();
```

Or include it in tracing:

```rust
tracing::debug!(has_previous = previous_pending.is_some(), ...);
```
357-362: Consider destructuring for a clearer closure pattern.

The closure and tuple access could be more readable by destructuring:

```diff
-        if let Some(most_recent_prev_frame) =
-            cache.iter().rev().find(|v| *v.0 < requested_frame)
+        if let Some((&_frame_num, cached)) =
+            cache.iter().rev().find(|(&k, _)| k < requested_frame)
             && let Some(sender) = sender.take()
         {
-            (sender)(most_recent_prev_frame.1.data().clone());
+            (sender)(cached.data().clone());
         }
```
404-407: Remove commented-out code.

As per coding guidelines, commented-out code should be removed. If this debug information is needed, convert it to a tracing call; otherwise delete it.

```diff
-    // info!(
-    //     "sending previous frame {} for {requested_frame}",
-    //     last_sent_frame.0
-    // );
```
apps/desktop/src/utils/socket.ts (1)
16-29: Consider validating decompression output length.

The size-equality heuristic at line 21 assumes uncompressed data if `dataAfterSize === uncompressedSize`. While rare, LZ4 can produce compressed output equal to or larger than input for incompressible data.

Additionally, `lz4.decompressBlock` returns the number of bytes written. Consider validating this return value to ensure decompression succeeded completely.

```diff
 function decompressLz4(compressedBuffer: ArrayBuffer): Uint8Array {
   const view = new DataView(compressedBuffer);
   const uncompressedSize = view.getUint32(0, true);
   const dataAfterSize = compressedBuffer.byteLength - 4;
   if (dataAfterSize === uncompressedSize) {
     return new Uint8Array(compressedBuffer.slice(4));
   }
   const compressedData = new Uint8Array(compressedBuffer, 4);
   const output = new Uint8Array(uncompressedSize);
-  lz4.decompressBlock(compressedData, output, 0, compressedData.length, 0);
+  const bytesWritten = lz4.decompressBlock(compressedData, output, 0, compressedData.length, 0);
+  if (bytesWritten !== uncompressedSize) {
+    throw new Error(`LZ4 decompression size mismatch: expected ${uncompressedSize}, got ${bytesWritten}`);
+  }
   return output;
 }
```

apps/desktop/src-tauri/src/frame_ws.rs (3)
49-50: Remove duplicate `println!` statements.

Using both `println!` and `tracing::info!` for the same event is redundant. Prefer `tracing` exclusively for consistent, structured logging with proper log levels and filtering.

```diff
-    println!("socket connection established");
     tracing::info!("Socket connection established");
```
161-163: Remove redundant `println!` statements throughout the file.

Multiple locations use `println!` alongside `tracing` (lines 49, 162, 181, 211, 331, 350). Use `tracing` consistently for all logging.

```diff
     let elapsed = now.elapsed();
-    println!("Websocket closing after {elapsed:.2?}");
     tracing::info!("Websocket closing after {elapsed:.2?}");
```
170-172: Consider propagating bind errors.

The `.unwrap()` on `TcpListener::bind` will panic if the port is unavailable. Consider returning a `Result` from this function to allow callers to handle bind failures gracefully.
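A minimal sketch of the suggested error propagation, assuming a tokio listener and a hypothetical `start_frame_socket` entry point:

```rust
use tokio::net::TcpListener;

async fn start_frame_socket(port: u16) -> std::io::Result<TcpListener> {
    // Propagate bind failures (port in use, permission denied) instead of panicking.
    let listener = TcpListener::bind(("127.0.0.1", port)).await?;
    tracing::info!(port, "frame websocket listening");
    Ok(listener)
}
```

Callers can then decide whether to retry on another port or surface the error to the UI.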
91-100: Perf counters and periodic/final logging look consistent

The counters (`frames_rendered`, `frames_dropped`, per-frame times, maxima) and both the final Stop log and periodic 2-second metrics log are internally consistent and guard against divide-by-zero; this is a solid instrumentation pass.

If you find yourself extending these metrics further, consider extracting a small helper to compute/emit the summary (avg/max) so the Stop and periodic paths stay in lockstep, but that's optional; a sketch follows below.
Also applies to: 122-142, 280-293
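For illustration, a small helper along those lines (names are hypothetical; the actual counters live as locals in `editor.rs`):

```rust
use std::time::Duration;

#[derive(Default)]
struct PerfAccumulator {
    frames: u64,
    total_us: u128,
    max_us: u128,
}

impl PerfAccumulator {
    fn record(&mut self, elapsed: Duration) {
        let us = elapsed.as_micros();
        self.frames += 1;
        self.total_us += us;
        self.max_us = self.max_us.max(us);
    }

    fn log_summary(&self, label: &str) {
        // Guard against divide-by-zero when nothing was rendered.
        if self.frames > 0 {
            tracing::info!(
                label,
                frames = self.frames,
                avg_us = (self.total_us / u128::from(self.frames)) as u64,
                max_us = self.max_us as u64,
                "render timing summary"
            );
        }
    }
}
```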
323-323: Awaiting `finished_rx` is fine; ignoring cancellation is acceptable here

Switching from `.await.ok()` to `let _ = finished_rx.await;` keeps the semantics (we wait for the renderer to signal completion but don't treat a dropped sender as an error), while avoiding the extra `ok()` call. This is reasonable for a fire-and-forget render request where a canceled oneshot just means "renderer went away".

If you ever need to debug unexpected cancellations, you could match on the `Result` and log a `debug!` on the error path, but that's optional.
96-99: RwLock usage is safe; consider async-aware lock and clearer poison handling (optional)

The `Arc<RwLock<HashSet<u32>>>` protecting `in_flight_frames` is used carefully (short read/write sections, no lock held across `.await`), so there are no obvious race or deadlock issues.

Two optional cleanups you might consider:

- Using `tokio::sync::RwLock` instead of `std::sync::RwLock` would make the intent clearer in this async context and avoid blocking an executor thread if contention ever grows.
- The current `read().map(...).unwrap_or(false)` / `write().ok()` patterns silently treat lock poisoning as "no entry in set". If poisoning is unexpected here, logging in the error branch instead of silently ignoring it could make debugging easier.

These are not blockers, but worth considering if you continue iterating on this path; see the sketch after this comment.
Also applies to: 122-124, 144-148, 170-172, 199-203, 420-424, 552-554
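A sketch of the async-aware variant, assuming `tokio::sync::RwLock` (which also sidesteps poisoning entirely); the function names are illustrative:

```rust
use std::{collections::HashSet, sync::Arc};
use tokio::sync::RwLock;

type InFlightFrames = Arc<RwLock<HashSet<u32>>>;

async fn is_in_flight(frames: &InFlightFrames, frame_number: u32) -> bool {
    frames.read().await.contains(&frame_number)
}

async fn mark_in_flight(frames: &InFlightFrames, frame_number: u32) -> bool {
    // Returns false if the frame was already being decoded.
    frames.write().await.insert(frame_number)
}
```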
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)

- `Cargo.lock` is excluded by `!**/*.lock`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (16)

- `.claude/settings.local.json` (1 hunks)
- `apps/desktop/package.json` (2 hunks)
- `apps/desktop/src-tauri/Cargo.toml` (1 hunks)
- `apps/desktop/src-tauri/src/camera_legacy.rs` (2 hunks)
- `apps/desktop/src-tauri/src/editor_window.rs` (3 hunks)
- `apps/desktop/src-tauri/src/frame_ws.rs` (6 hunks)
- `apps/desktop/src-tauri/src/screenshot_editor.rs` (2 hunks)
- `apps/desktop/src/utils/socket.ts` (4 hunks)
- `crates/editor/src/editor.rs` (9 hunks)
- `crates/editor/src/playback.rs` (10 hunks)
- `crates/rendering/src/composite_frame.rs` (3 hunks)
- `crates/rendering/src/decoder/avassetreader.rs` (9 hunks)
- `crates/rendering/src/frame_pipeline.rs` (2 hunks)
- `crates/rendering/src/layers/camera.rs` (4 hunks)
- `crates/rendering/src/layers/display.rs` (3 hunks)
- `crates/rendering/src/lib.rs` (8 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs:
- Use `rustfmt` and workspace clippy lints for Rust code formatting and linting
- Use snake_case for Rust module names and kebab-case for crate names
- Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
- Always handle `Result`/`Option` or types marked `#[must_use]`; never ignore them (Rust compiler lint: `unused_must_use` = deny)
- Never write `let _ = async_fn()` which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
- Use `saturating_sub` instead of `-` for `Duration` to avoid panics (Clippy: `unchecked_duration_subtraction` = deny)
- Merge nested `if` statements: use `if a && b { }` instead of `if a { if b { } }` (Clippy: `collapsible_if` = deny)
- Don't call `.clone()` on `Copy` types; just copy them directly (Clippy: `clone_on_copy` = deny)
- U...
Files:
- apps/desktop/src-tauri/src/screenshot_editor.rs
- apps/desktop/src-tauri/src/camera_legacy.rs
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/playback.rs
- crates/rendering/src/layers/camera.rs
- apps/desktop/src-tauri/src/editor_window.rs
- crates/editor/src/editor.rs
- crates/rendering/src/composite_frame.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/frame_pipeline.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (`//`, `/* */`, `///`, `//!`, `#`, etc.); code must be self-explanatory through naming, types, and structure
Files:
- apps/desktop/src-tauri/src/screenshot_editor.rs
- apps/desktop/src-tauri/src/camera_legacy.rs
- apps/desktop/src/utils/socket.ts
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/playback.rs
- crates/rendering/src/layers/camera.rs
- apps/desktop/src-tauri/src/editor_window.rs
- crates/editor/src/editor.rs
- crates/rendering/src/composite_frame.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/frame_pipeline.rs
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use TanStack Query v5 for all client-side server state and data fetching in TypeScript files
Files:
apps/desktop/src/utils/socket.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js,jsx}: Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Follow camelCase naming for variables and functions; PascalCase for components; hooks must start with 'use' prefix
Use Biome for linting and formatting; match existing formatting conventions in the codebase
Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Use static skeletons for loading states that mirror content; avoid bouncing animations
Memoize expensive work, code-split naturally, and use Next/Image for remote assets
**/*.{ts,tsx,js,jsx}: Use 2-space indent for TypeScript files; format with Biome using `pnpm format`
Use Biome for code formatting and linting; run `pnpm format` regularly
Use kebab-case for file names (e.g., `user-menu.tsx`); use PascalCase for components
Files:
apps/desktop/src/utils/socket.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Never add any form of comments to code (single-line `//`, multi-line `/* */`, JSDoc `/** */`, or any other comment syntax); code must be self-explanatory through naming, types, and structure
Directory naming must use lowercase-dashed convention
Files:
apps/desktop/src/utils/socket.ts
apps/desktop/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/desktop/**/*.ts: Use @tanstack/solid-query for server state management in SolidJS components
Use generated commands and events from tauri_specta for IPC; never manually construct IPC calls
Listen directly to generated events from tauri_specta and use typed event interfaces
Files:
apps/desktop/src/utils/socket.ts
🧠 Learnings (6)
📚 Learning: 2025-12-07T14:29:19.166Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.166Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-12-07T14:29:19.165Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.165Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-12-07T14:29:40.721Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.721Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use 2-space indent for TypeScript files; format with Biome using `pnpm format`
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-10-10T02:06:05.438Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1165
File: apps/web-cluster/Dockerfile:15-15
Timestamp: 2025-10-10T02:06:05.438Z
Learning: Node.js version 24 and later includes built-in TypeScript support (type-stripping) and can run .ts files directly without requiring a separate TypeScript runtime or transpilation step for basic type annotations. The command `node script.ts` works natively in Node.js v24+.
Applied to files:
apps/desktop/package.json
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
- apps/desktop/src-tauri/src/camera_legacy.rs
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/playback.rs
- crates/editor/src/editor.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/decoder/avassetreader.rs
📚 Learning: 2025-12-07T14:29:19.165Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.165Z
Learning: Applies to **/*.rs : Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Applied to files:
crates/editor/src/editor.rs
🧬 Code graph analysis (4)
crates/editor/src/editor.rs (1)
- crates/editor/src/editor_instance.rs (1): `new` (36-189)
crates/rendering/src/lib.rs (1)
- crates/rendering/src/frame_pipeline.rs (1): `finish_encoder` (379-492)
crates/rendering/src/layers/display.rs (2)
- crates/rendering/src/composite_frame.rs (4): `new` (94-117), `create_frame_texture` (179-198), `default` (41-67), `bind_group` (153-177)
- crates/rendering/src/layers/camera.rs (2): `new` (19-49), `render` (113-121)
crates/rendering/src/decoder/avassetreader.rs (2)
- crates/rendering/src/decoder/mod.rs (2): `new` (20-26), `data` (28-30)
- crates/video-decode/src/avassetreader.rs (1): `frames` (135-139)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Typecheck
- GitHub Check: Cursor Bugbot
- GitHub Check: Analyze (rust)
🔇 Additional comments (36)
.claude/settings.local.json (1)
34-34: Clarify the purpose of the new `git checkout` permission and consider restricting the pattern.

The addition of `"Bash(git checkout:*)"` grants broad permission to run any `git checkout` command without restrictions. While this may be intentional, the wildcard pattern is permissive and could allow unintended operations.

Purpose verification: The PR focuses on video playback performance improvements. Why does this configuration change need git checkout permissions? Is this required for CI/CD, development workflows, or build processes?

Pattern restriction: If this permission is necessary, consider using a more restrictive pattern. For example:

- `"Bash(git checkout:origin/*)"` to limit to remote branches
- `"Bash(git checkout:refs/tags/*)"` to limit to tags only
- `"Bash(git checkout:main)"` or `"Bash(git checkout:develop)"` for specific branches

crates/rendering/src/composite_frame.rs (1)
9-9: Good optimization: Persistent sampler reuse.Moving sampler creation from per-bind-group to struct initialization eliminates redundant GPU resource allocation. The sampler configuration is appropriate for video frame compositing.
Also applies to: 102-116
crates/rendering/src/lib.rs (1)
1750-1763: Well-structured performance tracing.

The `tracing::debug!` instrumentation with structured fields is good for performance monitoring. The conditional logging (only when exceeding thresholds) avoids log spam.

Also applies to: 1927-1936
crates/rendering/src/layers/display.rs (2)
9-14: Good double-buffering implementation.

The double-buffering pattern correctly prevents GPU contention by alternating between textures. The implementation mirrors `CameraLayer` for consistency across the codebase.

Also applies to: 31-38
49-67: Texture recreation logic is sound.

The conditional texture/view/bind_group recreation when dimensions change is correctly scoped to the `next_texture` slot, avoiding disruption to the currently displayed frame.

crates/rendering/src/layers/camera.rs (2)
9-15: Consistent double-buffering implementation.

The double-buffering pattern matches `DisplayLayer`, maintaining architectural consistency. Resource initialization is correct.

Also applies to: 40-48
63-81: Correct next-buffer texture management.The logic correctly handles dimension changes by recreating only the next buffer's resources, ensuring the current frame remains intact during transitions.
crates/rendering/src/frame_pipeline.rs (4)
7-14: Well-designed pending readback structure.

The `PendingReadback` struct encapsulates all necessary state for async GPU readback completion tracking, including timing for performance analysis.
21-45: Robust polling loop with cooperative yielding.

The poll loop correctly handles the three possible states:
- Success: break and continue
- Empty: poll device and yield periodically
- Closed: return error
The `yield_now()` every 10 polls prevents CPU monopolization; a sketch of the pattern follows below.
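For reference, the shape of that loop is roughly this (a sketch assuming a std `mpsc` channel fed by `map_async` and the pre-v24 wgpu `device.poll(wgpu::Maintain::Poll)` signature; the in-tree loop may differ):

```rust
use std::sync::mpsc::{Receiver, TryRecvError};

async fn wait_for_map(
    device: &wgpu::Device,
    rx: &Receiver<Result<(), wgpu::BufferAsyncError>>,
) -> Result<(), wgpu::BufferAsyncError> {
    let mut polls = 0u32;
    loop {
        match rx.try_recv() {
            Ok(result) => return result,
            Err(TryRecvError::Empty) => {
                device.poll(wgpu::Maintain::Poll);
                polls += 1;
                // Yield every 10 polls so this task cannot monopolize the executor.
                if polls % 10 == 0 {
                    tokio::task::yield_now().await;
                }
            }
            Err(TryRecvError::Disconnected) => return Err(wgpu::BufferAsyncError),
        }
    }
}
```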
66-71: Good performance warning threshold.

Logging a warning when GPU poll exceeds 10ms helps identify bottlenecks in production.
186-190: Ignoring oneshot send result is acceptable here.

The receiver may have been dropped if the wait was cancelled. This pattern is standard for fire-and-forget completion signals.
crates/rendering/src/decoder/avassetreader.rs (2)
146-160: LGTM! Clean wrapper pattern.

The refactored `CachedFrame` implementation properly encapsulates `ProcessedFrame` with a clean interface via `new()` and `data()` methods.
290-331: LGTM! Comprehensive reset instrumentation.

The cache reset logic properly:

- Uses `saturating_sub` to avoid panics (line 287)
- Determines reset necessity based on cache bounds
- Retains frames within the new cache window
- Tracks and logs detailed performance metrics (timing, cleared/retained entries, total resets)

The instrumentation will be valuable for performance analysis.
apps/desktop/src-tauri/src/screenshot_editor.rs (2)
18-18: LGTM!

Import addition for `Instant` is correct and necessary for the timestamp feature.

350-358: LGTM!

The `created_at` timestamp is correctly captured at frame emission time, enabling latency tracking downstream.

apps/desktop/package.json (2)
80-80: LGTM!

Type definitions for `lz4js` are correctly placed in devDependencies.

59-59: LGTM!

The `lz4js` runtime dependency (^0.2.0) correctly specifies the latest available version for client-side decompression of compressed WebSocket frames, with matching type definitions (@types/lz4js@^0.2.1) added as a devDependency.

apps/desktop/src-tauri/src/camera_legacy.rs (2)
1-2: LGTM!

Import for `Instant` correctly added for timestamp support.

61-69: LGTM!

The `created_at` timestamp is correctly captured when constructing the `WSFrame`, consistent with the timestamping pattern across all frame emission paths.

apps/desktop/src-tauri/src/editor_window.rs (3)
1-1: LGTM!

The `time::Instant` import is cleanly integrated into the existing `std` imports.

30-37: LGTM!

The `created_at` timestamp is correctly captured in the prewarm frame emission path.

186-194: LGTM!

The `created_at` timestamp is correctly captured in the on-demand frame creation path, maintaining consistency with the prewarm path.

apps/desktop/src/utils/socket.ts (5)
1-14: LGTM!

The `SocketMetrics` interface is well-defined with appropriate numeric types for tracking frame processing performance.

38-47: LGTM!

Metrics are properly initialized with zeroed counters and the current timestamp.

55-68: LGTM!

Good practice to log a session summary on close. The division is safely guarded by the `framesReceived > 0` check.

128-149: LGTM!

The stride handling correctly accounts for GPU texture row padding, using an efficient direct subarray when stride matches and a row-by-row copy otherwise (see the sketch below).
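The Rust-side equivalent of that unpadding step looks roughly like this (a sketch; the TS code operates on a `Uint8ClampedArray`, but the row arithmetic is the same):

```rust
fn strip_row_padding(
    padded: &[u8],
    padded_bytes_per_row: usize,
    row_bytes: usize,
    height: usize,
) -> Vec<u8> {
    // Fast path: stride already matches the tightly-packed row width.
    if padded_bytes_per_row == row_bytes {
        return padded[..row_bytes * height].to_vec();
    }
    // Slow path: copy each row, discarding the GPU alignment padding.
    let mut pixels = Vec::with_capacity(row_bytes * height);
    for row in padded.chunks(padded_bytes_per_row).take(height) {
        pixels.extend_from_slice(&row[..row_bytes]);
    }
    pixels
}
```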
151-186: LGTM!

The metrics tracking and periodic logging provide valuable performance insights. Note that the compression ratio shown in periodic logs reflects only the most recent frame, not a session average; this is fine for monitoring real-time compression behavior.

apps/desktop/src-tauri/Cargo.toml (1)
apps/desktop/src-tauri/Cargo.toml (1)
110-110: LGTM!

The `lz4_flex` dependency is appropriate for the frame compression feature. Version 0.11 will resolve to 0.11.5 (the latest patch), which is a pure Rust implementation with good performance characteristics.

apps/desktop/src-tauri/src/frame_ws.rs (2)
17-24: LGTM!

The `created_at: Instant` field is a clean addition for latency tracking. The struct is appropriately defined.

151-159: LGTM!

Proper guard against division by zero with `if frames_sent > 0`, and good use of structured logging for session metrics.

crates/editor/src/editor.rs (4)
2-2: Tracing/Instant imports correctly support the new instrumentation

Bringing in `Instant` and `tracing::{debug, info}` is appropriate for the timing and perf logs added below; no issues here.

Also applies to: 10-10
61-61: Channel capacity increase to 8 is a safe backpressure tweak

Bumping the `mpsc` channel size to 8 should reduce renderer backpressure without meaningfully increasing memory usage; this looks fine.
153-179: Batched frame dropping behavior is correct for catch-up

Using `dropped_in_batch` with `try_recv` to always render the latest frame, explicitly sending `finished` on the dropped ones, and incrementing `frames_dropped` only when you actually skip frames gives the right backpressure behavior without leaving any waiters hanging. No issues from a correctness standpoint; a condensed sketch of the pattern follows below.
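A condensed sketch of that drain-to-latest pattern, with hypothetical types standing in for the editor's actual frame/request structs:

```rust
use tokio::sync::{mpsc, oneshot};

struct RenderRequest {
    frame_number: u32,
    finished: oneshot::Sender<()>,
}

fn drain_to_latest(
    rx: &mut mpsc::Receiver<RenderRequest>,
    mut current: RenderRequest,
    frames_dropped: &mut u64,
) -> RenderRequest {
    let mut dropped_in_batch = 0u64;
    // Keep pulling until the channel is empty; only the newest request renders.
    while let Ok(newer) = rx.try_recv() {
        // Signal the skipped request so its waiter is never left hanging.
        current.finished.send(()).ok();
        current = newer;
        dropped_in_batch += 1;
    }
    if dropped_in_batch > 0 {
        *frames_dropped += dropped_in_batch;
        tracing::debug!(dropped_in_batch, "skipped stale frames to catch up");
    }
    current
}
```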
230-241: Per-frame timing and metric accumulation are correct and low-overhead

Capturing `render_start`/`callback_start` with `Instant::now()`, computing `render_time` and `callback_time`, and then aggregating into `total_*_us` and `max_*_us` with debug-level per-frame logs is a good balance of detail and overhead. The periodic 2-second info log leverages these aggregates correctly.

No changes needed here.
Also applies to: 261-293
crates/editor/src/playback.rs (2)
239-246: Warmup and prefetch buffer management look correct and well-bounded

The warmup loop (first-frame timeout, additional warmup until `warmup_target_frames`, and early exit on stop) plus the subsequent sort of `prefetch_buffer` by `frame_number` look sound and should give a good starting cache without risking unbounded waiting.

Similarly, the post-warmup trimming logic that bounds `prefetch_buffer` to `PREFETCH_BUFFER_SIZE` and preferentially evicts the farthest-ahead frames is reasonable given the small buffer size (a sketch of this eviction policy follows below).

No functional issues stand out here.

Also applies to: 254-303, 300-303, 304-311, 335-336, 337-355
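The eviction policy can be stated in a few lines (sketch only; `DecodedFrame` is a stand-in for the real buffered type):

```rust
struct DecodedFrame {
    frame_number: u32,
}

fn trim_prefetch_buffer(buffer: &mut Vec<DecodedFrame>, capacity: usize) {
    // Keep the frames closest to the playhead; drop the farthest-ahead ones.
    buffer.sort_by_key(|frame| frame.frame_number);
    buffer.truncate(capacity);
}
```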
239-246: Playback metrics instrumentation is comprehensive and low-risk

The new counters and logs for:
- total rendered and skipped frames,
- prefetch hit/miss counts and hit‑rate,
- per‑frame timings (fetch, uniforms, render),
- periodic (~2s) metrics, and
- final summary including effective FPS
are well-structured and integrated cleanly with `tracing`. These should be very helpful for diagnosing playback behaviour without affecting correctness.

Looks good as is.
Also applies to: 650-654, 655-663, 709-715, 722-745, 748-771
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
    let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
This block writes to a hardcoded user-specific path (/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log) which:
- Will silently fail on any other machine
- Is clearly development/debugging code not intended for production
- Contains comments violating coding guidelines ("Never add comments to code")
- Uses `.unwrap()` on `duration_since()` which could panic
Remove this entire block. If persistent debug logging is needed, use proper tracing subscribers or configurable file appenders.
```diff
-    // #region agent log
-    use std::io::Write;
-    if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
-        let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
-            latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
-    }
-    // #endregion
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
    let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/frame_ws.rs around lines 131 to 137, remove the
entire debug logging block that opens and writes to the hardcoded path
/Users/macbookuser/.../.cursor/debug.log; delete this development-only code
(including the OpenOptions, writeln! call and the
SystemTime::now().duration_since(...).unwrap()) and replace it with either no
logging or a proper tracing/log call wired to the application's configurable
logging/subscriber (or a configurable file appender) that does not use hardcoded
paths or unwraps; ensure any time calculations handle errors without panicking.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
    let _ = writeln!(f, r#"{{"hypothesisId":"WS_BROADCAST","location":"frame_ws.rs:broadcast_send","message":"WebSocket broadcast frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), frame.width, frame.height, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
Remove duplicate debug logging block.
Same issue as the other debug block—hardcoded local path, comments violating guidelines, and development-only code.
```diff
-    // #region agent log
-    use std::io::Write;
-    if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
-        let _ = writeln!(f, r#"{{"hypothesisId":"WS_BROADCAST","location":"frame_ws.rs:broadcast_send","message":"WebSocket broadcast frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#,
-            latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), frame.width, frame.height, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
-    }
-    // #endregion
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
    let _ = writeln!(f, r#"{{"hypothesisId":"WS_BROADCAST","location":"frame_ws.rs:broadcast_send","message":"WebSocket broadcast frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#,
        latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), frame.width, frame.height, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
}
// #endregion
```
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/frame_ws.rs around lines 262 to 268, there is a
duplicate development-only debug logging block that writes to a hardcoded local
path and includes guideline-violating comments; remove this duplicated block
entirely and ensure only the approved centralized logging mechanism remains (or
retain a single, non-hardcoded, conditional debug log behind a feature flag/ENV
check if needed), delete the hardcoded path and accompanying comment markers,
and run cargo fmt/check to confirm no unused imports or variables remain.
```rust
if dropped_in_batch > 0 {
    frames_dropped += dropped_in_batch as u64;
    debug!(
        dropped_frames = dropped_in_batch,
        total_dropped = frames_dropped,
        "[PERF:EDITOR_RENDER] dropped frames to catch up"
    );

    // #region agent log
    use std::io::Write;
    if let Ok(mut f) = std::fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
    {
        let _ = writeln!(
            f,
            r#"{{"hypothesisId":"A","location":"editor.rs:frames_dropped","message":"Renderer dropped frames due to backpressure","data":{{"dropped_in_batch":{},"total_dropped":{},"rendering_frame":{}}},"timestamp":{}}}"#,
            dropped_in_batch,
            frames_dropped,
            current.frame_number,
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_millis()
        );
    }
    // #endregion
}
```
Remove agent log file I/O and comments from the hot render path
These three “agent log” blocks introduce multiple problems:
- They perform synchronous file I/O (`OpenOptions::open` + `writeln!`) on every frame or drop event in the hottest part of the render loop, which can severely distort the very performance you're measuring.
- They write to a hard-coded, machine-specific absolute path (`/Users/macbookuser/.../.cursor/debug.log`), which will fail on other environments and leaks a personal user path into the codebase.
- They rely on `SystemTime::now().duration_since(UNIX_EPOCH).unwrap()`, which can panic if the system clock is misconfigured.
- They add new `// #region` / `// #endregion` comments in a `.rs` file, which conflicts with the "no comments in Rust code" guideline.
This instrumentation should not ship as part of the renderer loop; the existing tracing‑based perf logs are sufficient and portable.
I recommend removing these blocks entirely and keeping just the tracing metrics. For example:

```diff
-            if dropped_in_batch > 0 {
-                frames_dropped += dropped_in_batch as u64;
-                debug!(
-                    dropped_frames = dropped_in_batch,
-                    total_dropped = frames_dropped,
-                    "[PERF:EDITOR_RENDER] dropped frames to catch up"
-                );
-
-                // #region agent log
-                use std::io::Write;
-                if let Ok(mut f) = std::fs::OpenOptions::new()
-                    .create(true)
-                    .append(true)
-                    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-                {
-                    let _ = writeln!(
-                        f,
-                        r#"{{"hypothesisId":"A","location":"editor.rs:frames_dropped","message":"Renderer dropped frames due to backpressure","data":{{"dropped_in_batch":{},"total_dropped":{},"rendering_frame":{}}},"timestamp":{}}}"#,
-                        dropped_in_batch,
-                        frames_dropped,
-                        current.frame_number,
-                        std::time::SystemTime::now()
-                            .duration_since(std::time::UNIX_EPOCH)
-                            .unwrap()
-                            .as_millis()
-                    );
-                }
-                // #endregion
-            }
+            if dropped_in_batch > 0 {
+                frames_dropped += dropped_in_batch as u64;
+                debug!(
+                    dropped_frames = dropped_in_batch,
+                    total_dropped = frames_dropped,
+                    "[PERF:EDITOR_RENDER] dropped frames to catch up"
+                );
+            }
```

```diff
-            // #region agent log
-            use std::io::Write;
-            if let Ok(mut f) = std::fs::OpenOptions::new()
-                .create(true)
-                .append(true)
-                .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-            {
-                let _ = writeln!(
-                    f,
-                    r#"{{"hypothesisId":"A","location":"editor.rs:render_start","message":"Starting GPU render","data":{{"frame_number":{}}},"timestamp":{}}}"#,
-                    current.frame_number,
-                    std::time::SystemTime::now()
-                        .duration_since(std::time::UNIX_EPOCH)
-                        .unwrap()
-                        .as_millis()
-                );
-            }
-            // #endregion
-
             let render_start = Instant::now();
```

```diff
-            // #region agent log
-            if let Ok(mut f) = std::fs::OpenOptions::new()
-                .create(true)
-                .append(true)
-                .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-            {
-                let _ = writeln!(
-                    f,
-                    r#"{{"hypothesisId":"A","location":"editor.rs:render_complete","message":"GPU render complete","data":{{"frame_number":{},"render_time_us":{}}},"timestamp":{}}}"#,
-                    current.frame_number,
-                    render_time.as_micros(),
-                    std::time::SystemTime::now()
-                        .duration_since(std::time::UNIX_EPOCH)
-                        .unwrap()
-                        .as_millis()
-                );
-            }
-            // #endregion
-
             let callback_start = Instant::now();
```

This keeps all of the useful perf metrics while avoiding the portability and perf issues and aligning with the "no comments in Rust" rule.
Also applies to: 211-228, 242-259
🤖 Prompt for AI Agents
In crates/editor/src/editor.rs around lines 181 to 209 (also remove the same
pattern at 211-228 and 242-259), remove the synchronous "agent log" block that
opens a hard-coded file path and writes using writeln!; this code performs
blocking I/O in the hot render path, references a machine-specific absolute
path, uses SystemTime::now().duration_since(...).unwrap() which can panic, and
includes noncompliant region comments. Replace each block by simply keeping the
existing tracing/debug log (frames_dropped metrics) and delete the
OpenOptions/Open+write, the timestamp unwrap, and the // #region / // #endregion
comments so the render loop only emits tracing-based metrics and no file I/O.
```rust
let (playback_position_tx, playback_position_rx) = watch::channel(self.start_frame_number);

let in_flight_frames: Arc<RwLock<HashSet<u32>>> = Arc::new(RwLock::new(HashSet::new()));
let prefetch_in_flight = in_flight_frames.clone();
let main_in_flight = in_flight_frames;
```
In‑flight frame tracking logic looks solid; consider a couple of small robustness tweaks
The shared Arc<RwLock<HashSet<u32>>> in‑flight set and the way it’s consulted from both prefetch and main playback paths look correct and should avoid duplicate decodes while allowing overlap.
Two concrete improvements you may want to make:
- Use saturating add for the prefetch window cap to avoid any chance of `u32` overflow on very long timelines:

```diff
-    let max_prefetch_frame = current_playback_frame + MAX_PREFETCH_AHEAD;
+    let max_prefetch_frame =
+        current_playback_frame.saturating_add(MAX_PREFETCH_AHEAD);
```

- When the in-flight wait times out and you decide to skip that frame (`frame_number = frame_number.saturating_add(1); total_frames_skipped += 1; continue;`), `playback_position_tx` and `frame_request_tx` are not updated, so the prefetch task and any observers see a stale playback position for that skip path only. For consistency with the normal advancement and the catch-up skip branch, consider updating both before `continue`:

```diff
-                } else {
-                    frame_number = frame_number.saturating_add(1);
-                    total_frames_skipped += 1;
-                    continue;
-                }
+                } else {
+                    frame_number = frame_number.saturating_add(1);
+                    total_frames_skipped += 1;
+                    let _ = frame_request_tx.send(frame_number);
+                    let _ = playback_position_tx.send(frame_number);
+                    continue;
+                }
```

This keeps the in-flight and prefetch logic fully aligned with the actual frame index in all advancement paths.
Also applies to: 118-130, 144-152, 170-173, 199-203, 420-471, 494-510, 552-574, 668-670, 719-720
🤖 Prompt for AI Agents
In crates/editor/src/playback.rs around lines 94 to 99 (and also apply the same
fixes at 118-130, 144-152, 170-173, 199-203, 420-471, 494-510, 552-574, 668-670,
719-720): replace any plain addition used to compute prefetch window bounds with
saturating_add to prevent u32 overflow on very long timelines, and when you
handle the in‑flight wait timeout branch where you increment frame_number and
total_frames_skipped then continue, also update playback_position_tx and
frame_request_tx with the new frame_number (and flush/send accordingly) before
the continue so prefetch and observers see the advanced position consistent with
the skip path.
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
    .create(true)
    .append(true)
    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
{
    let _ = writeln!(
        f,
        r#"{{"hypothesisId":"GPU_PIPELINE","location":"frame_pipeline.rs:finish_encoder","message":"Pipelined finish (waited for previous)","data":{{"submit_us":{},"wait_us":{},"total_us":{},"has_previous":true}},"timestamp":{}}}"#,
        submit_time.as_micros(),
        wait_time.as_micros(),
        total_start.elapsed().as_micros(),
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis()
    );
}
// #endregion
```
Remove hardcoded debug log paths and comments.
Same issue as in lib.rs - the hardcoded path /Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log and // #region agent log comments should be removed or made configurable before merging. As per coding guidelines, comments are not permitted.
Also applies to: 459-479
🤖 Prompt for AI Agents
crates/rendering/src/frame_pipeline.rs lines 408-427 (and similarly 459-479):
remove the hardcoded debug file path and the `// #region agent log` comment and
replace this ad-hoc file write with a configurable logging approach; read a log
path from a configuration or environment variable (with a sensible default such
as using the system temp dir or the application's configured log directory), or
better yet use the project's logging/tracing facility to emit structured JSON
events instead of manually opening files, and ensure errors opening/writing the
log are handled (no panics) and that the code no longer contains disallowed
comments.
crates/rendering/src/lib.rs
Outdated
| // #region agent log | ||
| use std::io::Write; | ||
| let texture_update_start = std::time::Instant::now(); | ||
| // #endregion | ||
|
|
||
| session.update_texture_size( | ||
| &self.constants.device, | ||
| uniforms.output_size.0, | ||
| uniforms.output_size.1, | ||
| ); | ||
|
|
||
| // #region agent log | ||
| let texture_update_time = texture_update_start.elapsed(); | ||
| if texture_update_time.as_micros() > 100 { | ||
| if let Ok(mut f) = std::fs::OpenOptions::new() | ||
| .create(true) | ||
| .append(true) | ||
| .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") | ||
| { | ||
| let _ = writeln!( | ||
| f, | ||
| r#"{{"hypothesisId":"E","location":"lib.rs:update_texture_size","message":"Texture size update took significant time","data":{{"time_us":{},"width":{},"height":{}}},"timestamp":{}}}"#, | ||
| texture_update_time.as_micros(), | ||
| uniforms.output_size.0, | ||
| uniforms.output_size.1, | ||
| std::time::SystemTime::now() | ||
| .duration_since(std::time::UNIX_EPOCH) | ||
| .unwrap() | ||
| .as_millis() | ||
| ); | ||
| } | ||
| } | ||
| // #endregion |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove hardcoded debug log path and comments before merging.
The hardcoded path `/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log` won't work on other machines and exposes local file structure. Additionally, the `// #region agent log` comments violate the coding guidelines, which prohibit comments in code.
Consider either:
- Removing this debug logging entirely before merge
- Using an environment variable for the log path (e.g., `std::env::var("CAP_DEBUG_LOG")`)
```diff
-    // #region agent log
-    use std::io::Write;
-    let texture_update_start = std::time::Instant::now();
-    // #endregion
+    let texture_update_start = std::time::Instant::now();
```
+ let texture_update_start = std::time::Instant::now();Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In crates/rendering/src/lib.rs around lines 1556 to 1588, remove the hardcoded
debug log path and the `// #region agent log` comments; either delete the entire
debug logging block before merging or replace the path with a configurable
option (e.g., read std::env::var("CAP_DEBUG_LOG") and only write if set) and
fall back to stderr/stdout when the env var is absent, and eliminate the
prohibited region comments so the logging obeys coding guidelines.
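If the team does want to keep an opt-in escape hatch, an env-gated variant would look something like this (a sketch; `CAP_DEBUG_LOG` is the hypothetical variable suggested above):

```rust
use std::{fs::OpenOptions, io::Write, time::SystemTime};

fn append_debug_event(json_payload: &str) {
    // No env var set means no file I/O at all on user machines.
    let Ok(path) = std::env::var("CAP_DEBUG_LOG") else {
        return;
    };
    if let Ok(mut file) = OpenOptions::new().create(true).append(true).open(path) {
        // Avoid the panicking unwrap: a clock before the epoch just logs 0.
        let millis = SystemTime::now()
            .duration_since(SystemTime::UNIX_EPOCH)
            .map(|d| d.as_millis())
            .unwrap_or(0);
        writeln!(file, r#"{{"ts":{millis},"event":{json_payload}}}"#).ok();
    }
}
```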
crates/rendering/src/lib.rs
Outdated
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
    .create(true)
    .append(true)
    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
{
    let _ = writeln!(
        f,
        r#"{{"hypothesisId":"PREPARE_BREAKDOWN","location":"lib.rs:prepare","message":"Layer prepare breakdown","data":{{"bg_us":{},"blur_us":{},"display_us":{},"cursor_us":{},"camera_us":{},"camera_only_us":{},"text_us":{},"captions_us":{},"total_us":{}}},"timestamp":{}}}"#,
        bg_time.as_micros(),
        blur_time.as_micros(),
        display_time.as_micros(),
        cursor_time.as_micros(),
        camera_time.as_micros(),
        camera_only_time.as_micros(),
        text_time.as_micros(),
        captions_time.as_micros(),
        total_time.as_micros(),
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis()
    );
}
// #endregion
```
Same issue: Hardcoded debug log path and comments.
This block has the same problems - hardcoded path and comment markers. As per coding guidelines, comments should be removed.
crates/rendering/src/lib.rs
Outdated
```rust
// #region agent log
use std::io::Write;
if let Ok(mut f) = std::fs::OpenOptions::new()
    .create(true)
    .append(true)
    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
{
    let _ = writeln!(
        f,
        r#"{{"hypothesisId":"GPU_BREAKDOWN","location":"lib.rs:produce_frame","message":"GPU render breakdown","data":{{"prepare_us":{},"encoder_create_us":{},"render_pass_us":{},"finish_encoder_us":{},"total_us":{}}},"timestamp":{}}}"#,
        prepare_time.as_micros(),
        encoder_create_time.as_micros(),
        render_time.as_micros(),
        finish_time.as_micros(),
        total_time.as_micros(),
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis()
    );
}
// #endregion
```
Same issue: Hardcoded debug log path and comments in produce_frame.
Consistent with previous comments, this debug logging block should either be removed or use a configurable path.
🤖 Prompt for AI Agents
In crates/rendering/src/lib.rs around lines 1904 to 1925, there is a hardcoded
debug log write to "/Users/macbookuser/.../.cursor/debug.log" inside
produce_frame; remove this hardcoded path and debug block or replace it with a
configurable logger: either delete the entire ad-hoc file-writing block, or read
a configured log path (or use the existing app logger) and write the JSON entry
via that logger, ensuring path/config comes from settings/env and fallbacks to
stdout/stderr if unset; also remove or convert the inline comment markers to
normal code comments if keeping the block.
```rust
        let _ = writeln!(f, r#"{{"hypothesisId":"D","location":"frame_ws.rs:ws_send","message":"WebSocket frame sent","data":{{"frame_latency_us":{},"compress_time_us":{},"ws_send_time_us":{},"total_send_time_us":{},"original_bytes":{},"compressed_bytes":{}}},"timestamp":{}}}"#,
            latency_us, compress_time.as_micros(), ws_send_time.as_micros(), send_time.as_micros(), original_size, compressed_size, std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis());
    }
    // #endregion
```
Bug: Debug logging to hardcoded local path accidentally committed
Multiple debug logging blocks write to a hardcoded path /Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log throughout the codebase. This debugging code was accidentally committed and performs unnecessary filesystem I/O operations on every frame or operation. The code will silently fail on user machines since the path won't exist, but still introduces overhead from failed file open attempts. These // #region agent log blocks appear in frame_ws.rs, editor.rs, playback.rs, frame_pipeline.rs, and lib.rs.
Additional Locations (2)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/rendering/src/decoder/avassetreader.rs (1)
404-407: Remove commented-out code.

As per coding guidelines, comments are not permitted. This dead code should be removed.

```diff
     if let Some((sender, last_sent_frame)) =
         last_sent_frame.and_then(|l| Some((sender.take()?, l)))
     {
-        // info!(
-        //     "sending previous frame {} for {requested_frame}",
-        //     last_sent_frame.0
-        // );
-
         (sender)(last_sent_frame);
```
♻️ Duplicate comments (2)
crates/rendering/src/decoder/avassetreader.rs (1)
222-228: Incomplete metrics tracking infrastructure.

These metrics variables (`_cache_hits`, `_cache_misses`, `_total_requests`, etc.) are declared and incremented but never emitted. The timer-reset block at lines 280-283 resets `last_metrics_log` but doesn't log the accumulated stats. Either complete the periodic metrics emission or remove the unused variables; a sketch of the emission follows below.
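Completing the emission could be as simple as this (sketch only; names mirror the underscore-prefixed locals):

```rust
use std::time::{Duration, Instant};

const METRICS_INTERVAL: Duration = Duration::from_secs(2);

struct DecoderCacheMetrics {
    cache_hits: u64,
    cache_misses: u64,
    total_requests: u64,
    last_metrics_log: Instant,
}

impl DecoderCacheMetrics {
    fn maybe_emit(&mut self) {
        if self.last_metrics_log.elapsed() < METRICS_INTERVAL {
            return;
        }
        tracing::info!(
            cache_hits = self.cache_hits,
            cache_misses = self.cache_misses,
            total_requests = self.total_requests,
            "decoder cache metrics"
        );
        // Reset the window so each log line covers roughly two seconds.
        self.cache_hits = 0;
        self.cache_misses = 0;
        self.total_requests = 0;
        self.last_metrics_log = Instant::now();
    }
}
```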
crates/rendering/src/frame_pipeline.rs (1)

408-427: Remove hardcoded debug log paths and comments.

The hardcoded path `/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log` and `// #region agent log` comments must be removed before merging. As per coding guidelines, comments are not permitted, and hardcoded user-specific paths will fail on other systems.
🧹 Nitpick comments (2)
crates/rendering/src/frame_pipeline.rs (1)
264-289: Consider updating pipelined_readback size when textures resize.

`update_texture_size` recreates the double-buffered textures but doesn't proactively resize the `pipelined_readback` buffers. While `submit_readback` will call `ensure_size`, proactively resizing here would maintain consistency and clear any pending readback with stale dimensions.

```diff
 pub fn update_texture_size(&mut self, device: &wgpu::Device, width: u32, height: u32) {
     let make_texture = || {
         device.create_texture(&wgpu::TextureDescriptor {
             // ... existing code ...
         })
     };

     self.textures = (make_texture(), make_texture());
     self.texture_views = (
         self.textures.0.create_view(&Default::default()),
         self.textures.1.create_view(&Default::default()),
     );
+
+    let padded = padded_bytes_per_row((width, height));
+    let new_buffer_size = (padded * height) as u64;
+    self.pipelined_readback.ensure_size(device, new_buffer_size);
 }
```

crates/recording/src/recovery.rs (1)
522-566: Refactor improves readability.

The extraction of `segment_base` makes path construction clearer. The changes work correctly.

Optional: Consider inlining `segment_index` for consistency.

The `segment_index` variable at line 522 is only used once on line 523. For consistency with line 598 (which uses `i` directly), you could simplify:

```diff
-    let segment_index = seg.index;
-    let segment_base = format!("content/segments/segment-{segment_index}");
+    let segment_base = format!("content/segments/segment-{}", seg.index);
```

This reduces an intermediate variable and matches the pattern used in `create_project_config`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- `apps/desktop/src-tauri/src/captions.rs` (31 hunks)
- `apps/desktop/src-tauri/src/lib.rs` (1 hunks)
- `apps/desktop/src-tauri/src/screenshot_editor.rs` (4 hunks)
- `apps/desktop/src-tauri/src/update_project_names.rs` (5 hunks)
- `apps/desktop/src-tauri/src/upload.rs` (1 hunks)
- `crates/recording/src/recovery.rs` (5 hunks)
- `crates/rendering/src/decoder/avassetreader.rs` (9 hunks)
- `crates/rendering/src/frame_pipeline.rs` (2 hunks)
- `crates/utils/src/lib.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- crates/utils/src/lib.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/desktop/src-tauri/src/screenshot_editor.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Use `rustfmt` and workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Always handle `Result`/`Option` or types marked `#[must_use]`; never ignore them (Rust compiler lint: `unused_must_use` = deny)
Never write `let _ = async_fn()` which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
Use `saturating_sub` instead of `-` for `Duration` to avoid panics (Clippy: `unchecked_duration_subtraction` = deny)
Merge nested `if` statements: use `if a && b { }` instead of `if a { if b { } }` (Clippy: `collapsible_if` = deny)
Don't call `.clone()` on `Copy` types; just copy them directly (Clippy: `clone_on_copy` = deny)
U...
Files:
- apps/desktop/src-tauri/src/lib.rs
- apps/desktop/src-tauri/src/upload.rs
- apps/desktop/src-tauri/src/captions.rs
- crates/recording/src/recovery.rs
- crates/rendering/src/decoder/avassetreader.rs
- apps/desktop/src-tauri/src/update_project_names.rs
- crates/rendering/src/frame_pipeline.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (`//`, `/* */`, `///`, `//!`, `#`, etc.); code must be self-explanatory through naming, types, and structure
Files:
- apps/desktop/src-tauri/src/lib.rs
- apps/desktop/src-tauri/src/upload.rs
- apps/desktop/src-tauri/src/captions.rs
- crates/recording/src/recovery.rs
- crates/rendering/src/decoder/avassetreader.rs
- apps/desktop/src-tauri/src/update_project_names.rs
- crates/rendering/src/frame_pipeline.rs
🧠 Learnings (3)
📚 Learning: 2025-12-07T14:29:19.165Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.165Z
Learning: Applies to **/*.rs : Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Applied to files:
- apps/desktop/src-tauri/src/captions.rs
- crates/rendering/src/frame_pipeline.rs
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/rendering/src/decoder/avassetreader.rs
📚 Learning: 2025-12-07T14:29:40.721Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.721Z
Learning: Applies to **/*.rs : Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Applied to files:
crates/rendering/src/frame_pipeline.rs
🧬 Code graph analysis (2)
crates/recording/src/recovery.rs (2)
crates/project/src/meta.rs (1)
- camera_path (211-219)

apps/desktop/src/utils/tauri.ts (2)
- VideoMeta (504-504)
- AudioMeta (362-362)
apps/desktop/src-tauri/src/update_project_names.rs (1)
crates/utils/src/lib.rs (1)
ensure_unique_filename(52-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Cursor Bugbot
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (rust)
🔇 Additional comments (10)
apps/desktop/src-tauri/src/lib.rs (1)
487-487: LGTM! Minor readability improvement. The inline variable interpolation `{attempts}` and `{e}` is functionally equivalent to the previous positional placeholders and improves readability by making it clear which variable corresponds to each placeholder.
20-20: LGTM! Consistent formatting improvements. The changes replace positional format arguments with named interpolation (e.g., `format!("Failed: {e}")` instead of `format!("Failed: {}", e)`), which improves readability and is modern Rust style. All changes are consistent and introduce no behavioral modifications.
Also applies to: 54-54, 157-157, 162-162, 200-200, 263-263, 271-271, 277-277
apps/desktop/src-tauri/src/upload.rs (1)
232-234: LGTM! Correct variable used for orgId parameter. The code correctly uses `org_id` (bound from the `organization_id` parameter) for the `orgId` query parameter. If this previously used `video_id`, this change fixes a bug where the wrong value would have been sent.
64-64: Consistent log formatting improvements throughout the file. The refactoring from placeholder-based formatting (e.g., `"{:?}", var`) to captured variable interpolation (e.g., `"{var:?}"`) is the idiomatic modern Rust style and improves readability. All log statements follow proper logging practices using `log::*` and `tracing::*` macros.
145-159: Clean encapsulation of CachedFrame processing. The refactoring to a private tuple struct with `new()` and `data()` methods provides better encapsulation and ensures frames are always processed consistently before caching.
237-284: Well-instrumented cache hit/miss handling. The timing instrumentation and structured logging provide valuable performance insights. The closure-based deferred execution pattern for cache misses is appropriate.
303-330: Comprehensive reset/seek handling with proper instrumentation. The reset logic correctly determines when a decoder reset is needed based on cache bounds, and the detailed logging of reset statistics (duration, cleared vs retained entries) aids debugging. The cache retention logic properly keeps frames within the new bounds.
crates/rendering/src/frame_pipeline.rs (2)
16-80: Solid async buffer wait implementation. The polling loop with `try_recv()` and `device.poll()` is appropriate for wgpu's async mapping model. The `yield_now()` every 10 iterations prevents starving the async runtime. The performance warning at >10ms is helpful for identifying bottlenecks.
379-491: Well-structured pipelined finish_encoder implementation. The pipelining strategy is sound: submit the current frame's readback, wait for the previous frame's result (if any), and prime the pipeline for the first frame. The `expect` at line 442 is safe since `submit_readback` was just called at lines 398-400.
458-463: LGTM! Improved error message formatting. The inline variable formatting (`{display_output:?}`, `{e}`) is more idiomatic and readable than positional arguments.
```rust
pub fn ensure_size(&mut self, device: &wgpu::Device, required_size: u64) {
    if self.buffer_size < required_size {
        let make_buffer = || {
            Arc::new(device.create_buffer(&wgpu::BufferDescriptor {
                label: Some("Pipelined Readback Buffer"),
                size: required_size,
                usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
                mapped_at_creation: false,
            }))
        };

        self.buffers = [make_buffer(), make_buffer(), make_buffer()];
        self.buffer_size = required_size;
        self.current_index = 0;
    }
}
```
Potential stale buffer reference after resize.
When ensure_size recreates all buffers, any existing pending readback still holds an Arc<wgpu::Buffer> reference to an old buffer. While the Arc keeps the buffer alive, the data in it becomes irrelevant since it was sized for a different output. Consider clearing pending when resizing, or document that callers must drain pending before output size changes.
```diff
 pub fn ensure_size(&mut self, device: &wgpu::Device, required_size: u64) {
     if self.buffer_size < required_size {
+        self.pending = None;
+
         let make_buffer = || {
             Arc::new(device.create_buffer(&wgpu::BufferDescriptor {
```

🤖 Prompt for AI Agents
In crates/rendering/src/frame_pipeline.rs around lines 108 to 123, currently
ensure_size recreates self.buffers but leaves any existing self.pending entries
pointing at old Arc<wgpu::Buffer> objects; while those Arcs keep the buffers
alive, their contents no longer match the new size and are stale. When resizing,
clear or reset self.pending (e.g., self.pending.clear() or replace with a fresh
empty collection) before swapping in the new buffers so no pending readbacks
reference old buffers, or alternatively add a documented precondition that
callers must drain pending before calling ensure_size.
```rust
    texture,
    uniforms,
    prime_encoder,
)?;
```
Bug: GPU pipeline returns previous frame instead of current frame
The pipelined GPU readback implementation causes incorrect frame ordering during playback. On the first call, the code correctly returns frame 1, but then "primes" the pipeline by submitting another readback of the same texture (containing stale frame 1 data). On subsequent calls, finish_encoder returns the previous pending readback rather than the current frame. This causes: (1) frame 1 to be displayed twice at playback start, and (2) all subsequent frames to be returned one frame late (~33ms delay at 30fps). The prime_encoder reads from texture which still contains the previous frame's data rather than the newly rendered frame, causing the callback to receive stale frame data.
Additional Locations (1)
```ts
    lastCanvasMetricsLog = performance.now();
  }
  // #endregion
});
```
Bug: Async canvas render may cause stale frame display
The createImageBitmap call is now asynchronous inside a createEffect, meaning if multiple frames arrive quickly, the .then() callbacks may execute out of order, displaying older frames after newer ones. The previous synchronous ctx.putImageData(frame.data, 0, 0) rendered frames immediately in order. The async bitmap creation lacks any mechanism to discard stale frame renders.
```diff
 export type DialogState = { open: false } | ({ open: boolean } & CurrentDialog);

-export const FPS = 60;
+export const FPS = 30;
```
Bug: FPS reduced from 60 to 30 affects playback timing
The global FPS constant was changed from 60 to 30. This affects all playback and preview timing calculations throughout the editor, including startPlayback, seekTo, and playbackTime conversions. If this was unintentional during performance testing, it will cause playback to run at half the expected frame rate and may cause timing mismatches with recordings that expect 60fps playback.
```rust
    cursor,
    frame_number,
})
.await;
```
Bug: Render frame no longer awaits completion signal
The render_frame method now creates a oneshot channel but assigns the receiver to _finished_rx (unused) and never awaits it. Previously, finished_rx.await.ok() was called to synchronize with frame completion. Removing this wait means callers won't know when rendering actually completes, potentially causing race conditions where frames are submitted faster than they can be processed.
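For context, the removed synchronization is the standard oneshot pattern; this is a sketch with stand-in types (`RendererMessage` and the channel layout are assumptions, not the PR's exact definitions):

```rust
use tokio::sync::{mpsc, oneshot};

enum RendererMessage {
    RenderFrame {
        frame_number: u32,
        finished: oneshot::Sender<()>,
    },
}

async fn render_frame_and_wait(tx: &mpsc::Sender<RendererMessage>, frame_number: u32) {
    let (finished_tx, finished_rx) = oneshot::channel();
    // Hand the sender to the renderer; it signals once the frame is done.
    if tx
        .send(RendererMessage::RenderFrame {
            frame_number,
            finished: finished_tx,
        })
        .await
        .is_err()
    {
        return;
    }
    // Block the caller until the renderer reports completion (or is dropped).
    finished_rx.await.ok();
}
```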
```rust
        self.textures.0.create_view(&Default::default()),
        self.textures.1.create_view(&Default::default()),
    );
}
```
Bug: Textures recreated unconditionally on every render frame
The update_texture_size method unconditionally creates two new GPU textures on every single render call without checking if the size actually changed. Since render() in FrameRenderer calls this method on every frame, this causes significant GPU memory churn and allocation overhead during playback, allocating and deallocating textures at 30-60fps instead of only when the output size changes.
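A minimal sketch of the size guard (field and type names are illustrative):

```rust
struct SessionTextures {
    size: (u32, u32),
}

impl SessionTextures {
    fn update_texture_size(&mut self, width: u32, height: u32) {
        // Early-out: only rebuild GPU textures when the output size changes,
        // instead of reallocating on every rendered frame.
        if self.size == (width, height) {
            return;
        }
        self.size = (width, height);
        // ...recreate the double-buffered textures and views here...
    }
}
```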
Bug: Warmup loop can hang indefinitely without timeout
The playback warmup loop only exits when should_start becomes true (after receiving the first frame) or when stop_rx signals stop. If the prefetch task fails to produce any frames (due to decoder errors, invalid video, etc.), first_frame_time remains None, should_start stays false, and the loop blocks on prefetch_rx.recv() forever. There's no overall timeout to handle this edge case, causing playback to hang indefinitely.
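A bounded warmup could wrap the receive in `tokio::time::timeout`; this sketch assumes `prefetch_rx` is a tokio mpsc receiver, and the 2-second deadline is an arbitrary choice:

```rust
use std::time::Duration;
use tokio::{sync::mpsc, time::timeout};

async fn wait_for_first_frame<T>(prefetch_rx: &mut mpsc::Receiver<T>) -> Option<T> {
    // Give the prefetch task a bounded window to produce the first frame;
    // bail out instead of blocking playback forever on a broken decoder.
    match timeout(Duration::from_secs(2), prefetch_rx.recv()).await {
        Ok(Some(frame)) => Some(frame),
        Ok(None) => None,    // channel closed: prefetch task died
        Err(_elapsed) => None, // no frame within the deadline
    }
}
```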
```ts
if (dataAfterSize === uncompressedSize) {
  return new Uint8Array(compressedBuffer.slice(4));
}
```
Bug: LZ4 decompression heuristic may misdetect uncompressed data
The decompressLz4 function assumes data is uncompressed if dataAfterSize === uncompressedSize. This heuristic is unreliable because compressed data could coincidentally have the same byte length as the claimed uncompressed size. The lz4_flex::compress_prepend_size function on the Rust side always compresses the data, so this shortcut path returning raw data without decompression would produce corrupted frames in edge cases where sizes happen to match.
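For reference, lz4_flex's size-prepended helpers always encode the payload, so the safer client-side contract is to always decompress; a Rust round-trip sketch using the lz4_flex API:

```rust
fn round_trip(frame_bytes: &[u8]) -> Vec<u8> {
    // compress_prepend_size writes the uncompressed length as 4 LE bytes,
    // followed by the LZ4 block; it never emits the raw input unmodified.
    let packed = lz4_flex::compress_prepend_size(frame_bytes);

    let size = u32::from_le_bytes(packed[..4].try_into().unwrap()) as usize;
    assert_eq!(size, frame_bytes.len());

    // The matching decoder reads the header and always decompresses,
    // with no size-equality shortcut.
    lz4_flex::decompress_size_prepended(&packed).expect("valid LZ4 block")
}
```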
Actionable comments posted: 19
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
crates/editor/src/playback.rs (1)
150-207: Shared in-flight set is mutated by both tasks; the prefetch task's `clear()` can break main-loop in-flight waits.
`prefetch_in_flight` and `main_in_flight` are the same `Arc<RwLock<HashSet<u32>>>`, but the prefetch task `clear()`s it on seeks. That can cause the main playback path to stop "seeing" frames as in-flight (and skip/wait decisions become inconsistent), increasing duplicates/skips.
Suggested fix: split into two sets (prefetch vs main) and consult `contains` on prefetch || main, or store `(frame -> enum { Prefetch, Main })` and only clear the prefetch entries on seek, as sketched after this comment.
Also applies to: 545-576
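A sketch of the tagged-map variant (type names are illustrative; the real state lives behind `Arc<RwLock<...>>`):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq)]
enum InFlight {
    Prefetch,
    Main,
}

#[derive(Default)]
struct InFlightFrames(HashMap<u32, InFlight>);

impl InFlightFrames {
    fn contains(&self, frame: u32) -> bool {
        // Both the prefetch task and the playback loop see one merged view.
        self.0.contains_key(&frame)
    }

    fn clear_prefetch_on_seek(&mut self) {
        // A seek only invalidates prefetch work; the main loop's in-flight
        // entries survive, keeping its skip/wait decisions consistent.
        self.0.retain(|_, kind| *kind == InFlight::Main);
    }
}
```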
crates/editor/src/editor_instance.rs (2)
242-267: `playback_active` can get stuck `true` on early returns (invalid FPS / lock contention / start failure).
`start_playback` sets `playback_active = true` before `try_lock()` and before validating/starting playback; multiple branches return without resetting it.

```diff
 pub async fn start_playback(self: &Arc<Self>, fps: u32, resolution_base: XY<u32>) {
-    let _ = self.playback_active.send(true);
-
     let (mut handle, prev) = {
         let Ok(mut state) = self.state.try_lock() else {
             return;
         };
@@
     let playback_handle = match (playback::Playback { ... })
         .start(fps, resolution_base)
         .await
     {
         Ok(handle) => handle,
         Err(PlaybackStartError::InvalidFps) => {
             warn!(fps, "Skipping playback start due to invalid FPS");
+            let _ = self.playback_active.send(false);
             return;
         }
     };
+    let _ = self.playback_active.send(true);
```

(Alternative: send `true` only after `Ok(handle)` and ensure every return path sets `false`.)
Also applies to: 287-293
trueonly afterOk(handle)and ensure every return path setsfalse.)Also applies to: 287-293
321-431: Preview “agent log” file I/O + region comments violate guidelines and will skew perfSame issues as elsewhere: blocking
std::fsI/O in async code, hard-coded absolute path,SystemTime...unwrap(), plus//comments (explicitly disallowed for.rs).Recommended: replace with
trace!/debug!/info!events (or remove entirely).apps/desktop/src/utils/socket.ts (1)
130-150: Remove or use `strideConvertStart` (likely fails with `noUnusedLocals`).
`strideConvertStart` is declared but never used. In strict TS setups this is commonly a build-breaker.

```diff
-const strideConvertStart = performance.now();
 if (strideBytes === expectedRowBytes) {
   pixels = source.subarray(0, expectedLength);
 } else {
```

crates/rendering/src/lib.rs (1)
1769-1799: All three layer copy_to_texture calls are no-ops, not just CameraLayer's. Display, camera, and camera_only layers all have empty implementations of `copy_to_texture()`. These three calls in `RendererLayers::render` produce no effect and should either be removed or their implementations populated based on the intended dataflow.
♻️ Duplicate comments (11)
apps/desktop/src-tauri/src/frame_ws.rs (1)
105-151: Remove hardcoded-file "agent log" block (guideline violation + perf + potential build break).
This block reintroduces: (1) a hardcoded user path, (2) `SystemTime::duration_since(...).unwrap()` panic risk, (3) filesystem I/O on every frame, (4) guideline-violating comments, and (5) likely `#[must_use]` lint violations (e.g., ignoring results via `.ok()` / dropping `Option`). Please delete it entirely and rely on the existing `tracing::debug!` event above.

```diff
-// #region agent log
-use std::io::Write;
-if let Ok(mut file) = std::fs::OpenOptions::new().create(true).append(true).open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log") {
-    let log_entry = serde_json::json!({
-        "location": "frame_ws.rs:ws_send",
-        "message": "websocket frame sent",
-        "data": {
-            "frame_latency_us": latency_us,
-            "send_time_us": send_time.as_micros() as u64,
-            "original_size_bytes": original_size,
-            "compressed_size_bytes": compressed_size,
-            "compression_ratio_pct": format!("{:.1}", (compressed_size as f64 / original_size as f64) * 100.0)
-        },
-        "timestamp": std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis() as u64,
-        "sessionId": "debug-session",
-        "hypothesisId": "A"
-    });
-    writeln!(file, "{}", log_entry).ok();
-}
-// #endregion
```

crates/editor/src/playback.rs (5)
163-373: Remove hard-coded debug log path + blocking file I/O + region comments in async hot path.
The "agent log" blocks do all of the following in the prefetch loop: blocking `std::fs` I/O, a hard-coded absolute path, `SystemTime::duration_since(...).unwrap()`, and `// #region` comments (disallowed). This will regress playback performance and violates repo guidelines. This matches prior review feedback.

```diff
-// #region agent log
-use std::io::Write;
-if let Ok(mut file) = std::fs::OpenOptions::new()
-    .create(true)
-    .append(true)
-    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-{
-    let ts = std::time::SystemTime::now()
-        .duration_since(std::time::UNIX_EPOCH)
-        .unwrap()
-        .as_millis() as u64;
-    writeln!(file, r#"{{...}}"#, ... , ts).ok();
-}
-// #endregion
+debug!(
+    old_frame,
+    new_frame = requested,
+    is_backward = is_backward_seek,
+    seek_distance,
+    in_flight_before,
+    in_flight_after,
+    futures_before,
+    futures_after,
+    "[PERF:PLAYBACK] prefetch task received seek"
+);
```
235-237: Use `saturating_add` for prefetch window bounds (`u32` overflow risk).
`let max_prefetch_frame = current_playback_frame + MAX_PREFETCH_AHEAD;` can overflow on long timelines.

```diff
-let max_prefetch_frame = current_playback_frame + MAX_PREFETCH_AHEAD;
+let max_prefetch_frame = current_playback_frame.saturating_add(MAX_PREFETCH_AHEAD);
```
485-503: Buffer trimming uses non-saturating `u32` addition (overflow risk).
`p.frame_number > frame_number + PREFETCH_BUFFER_SIZE as u32` should use `saturating_add`.

```diff
-p.frame_number > frame_number + PREFETCH_BUFFER_SIZE as u32
+p.frame_number
+    > frame_number.saturating_add(PREFETCH_BUFFER_SIZE as u32)
```
592-595: Skip-`continue` branches advance `frame_number` without updating observers/request channels.
Several branches do `frame_number = frame_number.saturating_add(1); total_frames_skipped += 1; continue;` but don't update `frame_request_tx`/`playback_position_tx`, so prefetch/observers can lag behind actual playback decisions. This was already called out in prior review feedback.

```diff
-frame_number = frame_number.saturating_add(1);
-total_frames_skipped += 1;
-continue;
+frame_number = frame_number.saturating_add(1);
+total_frames_skipped += 1;
+let _ = frame_request_tx.send(frame_number);
+let _ = playback_position_tx.send(frame_number);
+continue;
```

The same change applies to the second skip branch.
Also applies to: 637-645
683-704: Timeout log mismatch + more blocking file I/O in hot path.
This branch uses `max_wait = 150ms`, but logs `"timeout_ms":100`, and again does blocking file I/O + absolute path + unwrap + region comments.

```diff
-// #region agent log
-use std::io::Write;
-if let Ok(mut file) = std::fs::OpenOptions::new()
-    .create(true)
-    .append(true)
-    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-{
-    let ts = std::time::SystemTime::now()
-        .duration_since(std::time::UNIX_EPOCH)
-        .unwrap()
-        .as_millis() as u64;
-    writeln!(
-        file,
-        r#"{{"location":"playback.rs:decoder_timeout","message":"direct decoder call timed out","data":{{"frame":{},"timeout_ms":100}},"timestamp":{} ...}}"#,
-        frame_number, ts
-    ).ok();
-}
-// #endregion
+warn!(
+    frame = frame_number,
+    timeout_ms = max_wait.as_millis() as u64,
+    "[PERF:PLAYBACK] direct decoder call timed out"
+);
```

crates/editor/src/editor.rs (1)
225-244: Remove "agent log" file I/O + region comments in render loop (hot path).
Blocking file I/O, an absolute path, `SystemTime...unwrap()`, and comments in `.rs` are all problems here; this matches prior review feedback.

```diff
-// #region agent log
-use std::io::Write;
-if let Ok(mut file) = std::fs::OpenOptions::new()
-    .create(true)
-    .append(true)
-    .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-{
-    let ts = std::time::SystemTime::now()
-        .duration_since(std::time::UNIX_EPOCH)
-        .unwrap()
-        .as_millis() as u64;
-    writeln!(file, r#"{{...}}"#, ... , ts).ok();
-}
-// #endregion
```

crates/rendering/src/frame_pipeline.rs (3)
403-470: Fix frame ordering: current priming logic can duplicate frame 0 and return frames 1-behind.
The "first frame, primed pipeline" branch submits a second readback from the same `texture` using an empty encoder, which will commonly read the same frame again and shifts output by one. This matches the reported bug in past review notes.

```diff
--- a/crates/rendering/src/frame_pipeline.rs
+++ b/crates/rendering/src/frame_pipeline.rs
@@
-    let previous_pending = session.pipelined_readback.take_pending();
-    let _has_previous = previous_pending.is_some();
+    let previous_pending = session.pipelined_readback.take_pending();
@@
-    let result = if let Some(pending) = previous_pending {
-        let wait_start = Instant::now();
-        let frame = pending.wait(device).await?;
-        let wait_time = wait_start.elapsed();
-
-        tracing::debug!(
-            wait_us = wait_time.as_micros() as u64,
-            total_us = total_start.elapsed().as_micros() as u64,
-            "[PERF:GPU_BUFFER] pipelined finish_encoder (pipelined, waited for previous)"
-        );
-
-        frame
-    } else {
-        let wait_start = Instant::now();
-
-        let pending = session
-            .pipelined_readback
-            .take_pending()
-            .expect("just submitted a readback");
-        let frame = pending.wait(device).await?;
-        let wait_time = wait_start.elapsed();
-
-        let prime_start = Instant::now();
-        let prime_encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
-            label: Some("Pipeline Priming Encoder"),
-        });
-        session.pipelined_readback.submit_readback(
-            device,
-            queue,
-            texture,
-            uniforms,
-            prime_encoder,
-        )?;
-        let prime_time = prime_start.elapsed();
-
-        tracing::debug!(
-            wait_us = wait_time.as_micros() as u64,
-            prime_us = prime_time.as_micros() as u64,
-            total_us = total_start.elapsed().as_micros() as u64,
-            "[PERF:GPU_BUFFER] pipelined finish_encoder (first frame, primed pipeline)"
-        );
-
-        frame
-    };
-
-    Ok(result)
+    if let Some(pending) = previous_pending {
+        pending.wait(device).await
+    } else {
+        let pending = session
+            .pipelined_readback
+            .take_pending()
+            .ok_or(RenderingError::BufferMapWaitingFailed)?;
+        pending.wait(device).await
+    }
```

If you want to keep true pipelining and return the "current" frame in-order, this likely needs an API split (submit vs await) and a flush at the end; a sketch follows below. Happy to propose a concrete refactor if you confirm expected semantics for export vs live preview.
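For illustration, a possible shape for that split, assuming `PendingReadback::wait` keeps its current signature (`Frame`, `PendingReadback`, and `RenderingError` stand for the crate's existing types):

```rust
// Sketch: returns the *previous* frame by construction, so the caller
// must label results with the frame number captured at submit time.
struct OrderedReadback {
    pending: Option<(u32, PendingReadback)>,
}

impl OrderedReadback {
    // Submit the readback for `frame_number`, then hand back the previous
    // frame's pixels so output stays strictly in submission order.
    async fn submit_then_take_previous(
        &mut self,
        frame_number: u32,
        new_pending: PendingReadback,
        device: &wgpu::Device,
    ) -> Result<Option<(u32, Frame)>, RenderingError> {
        match self.pending.replace((frame_number, new_pending)) {
            Some((n, pending)) => Ok(Some((n, pending.wait(device).await?))),
            None => Ok(None),
        }
    }

    // Call once at end-of-stream so the final frame is not stranded.
    async fn flush(
        &mut self,
        device: &wgpu::Device,
    ) -> Result<Option<(u32, Frame)>, RenderingError> {
        match self.pending.take() {
            Some((n, pending)) => Ok(Some((n, pending.wait(device).await?))),
            None => Ok(None),
        }
    }
}
```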
66-89: Remove hardcoded debug-log file writes + banned comment markers.
This adds disallowed `// #region ...` comments and writes to a hardcoded local path in a hot path. Prefer the existing `tracing::*` events (already present) and configure file output via the tracing subscriber rather than ad-hoc file IO. As per coding guidelines, comments are not permitted.

```diff
--- a/crates/rendering/src/frame_pipeline.rs
+++ b/crates/rendering/src/frame_pipeline.rs
@@
-        // #region agent log
-        use std::io::Write;
-        if let Ok(mut file) = std::fs::OpenOptions::new()
-            .create(true)
-            .append(true)
-            .open("/Users/macbookuser/Documents/GitHub/cap/.cursor/debug.log")
-        {
-            let log_entry = serde_json::json!({
-                "location": "frame_pipeline.rs:readback_wait",
-                "message": "GPU readback timing",
-                "data": {
-                    "poll_us": poll_time.as_micros() as u64,
-                    "data_copy_us": data_copy_time.as_micros() as u64,
-                    "total_pipeline_us": total_time.as_micros() as u64,
-                    "poll_count": poll_count
-                },
-                "timestamp": std::time::SystemTime::now().duration_since(std::time::UNIX_EPOCH).unwrap().as_millis() as u64,
-                "sessionId": "debug-session",
-                "hypothesisId": "C"
-            });
-            writeln!(file, "{}", log_entry).ok();
-        }
-        // #endregion
-
         if poll_time.as_millis() > 10 {
             tracing::warn!(
                 poll_time_ms = poll_time.as_millis() as u64,
                 "[PERF:GPU_BUFFER] GPU poll took longer than 10ms - potential bottleneck"
             );
         }
```

Also applies to: 59-65
132-147: Clear pending readback on buffer resize to avoid stale pending referencing old buffers.
`ensure_size` recreates buffers but leaves `self.pending` intact, so callers can later await/read from a stale buffer with mismatched sizing semantics.

```diff
 pub fn ensure_size(&mut self, device: &wgpu::Device, required_size: u64) {
     if self.buffer_size < required_size {
+        self.pending = None;
         let make_buffer = || {
             Arc::new(device.create_buffer(&wgpu::BufferDescriptor {
                 label: Some("Pipelined Readback Buffer"),
                 size: required_size,
```

crates/rendering/src/lib.rs (1)
202-225: Remove hardcoded debug-log blocks + banned comment markers (multiple sites).
These add disallowed comments and write to a hardcoded local file path; the extra file IO/JSON formatting also risks skewing perf measurements in the exact code you're instrumenting. As per coding guidelines, comments are not permitted.
Also applies to: 1736-1765, 1916-1942
🟡 Minor comments (5)
apps/desktop/src/utils/socket.ts (1)
75-90: Timing metrics are inconsistent (parse includes decompress), causing misleading telemetry.
`parseTime` is computed from `frameStart`, but `frameStart` is before decompression, so decompress time is effectively counted twice (once in `decompressTime`, again inside `parseTime`).

```diff
 ws.onmessage = (event) => {
-  const frameStart = performance.now();
+  const frameStart = performance.now();
   const compressedBuffer = event.data as ArrayBuffer;
   const compressedSize = compressedBuffer.byteLength;
   const decompressStart = performance.now();
   let decompressed: Uint8Array;
   try {
     decompressed = decompressLz4(compressedBuffer);
   } catch (e) {
     console.error("Failed to decompress frame:", e);
     return;
   }
   const decompressTime = performance.now() - decompressStart;
+  const parseStart = performance.now();
   const buffer = decompressed.buffer;
   const clamped = new Uint8ClampedArray(decompressed);
@@
-  const parseTime = performance.now() - frameStart;
+  const parseTime = performance.now() - parseStart;
```

Also applies to: 144-145, 151-164
crates/rendering/src/yuv_converter.rs (1)
167-247: Chroma textures sized with `/ 2` assume even dimensions.
If odd frame sizes are possible, allocate chroma textures with `div_ceil(2)` (and adjust copy extents/layouts), or explicitly reject odd sizes for `Nv12`/`Yuv420p`.
crates/rendering/src/layers/camera.rs (1)
If odd frame sizes are possible, allocate chroma textures withdiv_ceil(2)(and adjust copy extents/layouts), or explicitly reject odd sizes forNv12/Yuv420p.crates/rendering/src/layers/camera.rs-70-76 (1)
70-76: Pointer-based “new frame” detection is fragile.
frame_ptrcan theoretically repeat due to allocator reuse; consider tracking a stable frame identifier if available (frame number/PTS) or combining pointer+len at minimum.crates/rendering/src/decoder/mod.rs-155-163 (1)
155-163: RemoveMpscSenderLentrait and implementation. This trait is unused in the codebase and should be deleted to avoid confusion during maintenance.crates/rendering/src/frame_pipeline.rs-209-215 (1)
209-215: Don’t silently ignoretx.send(...)failure.
This drops aResult(and violates the “don’t ignore Result/must_use” guideline). If the receiver is gone, explicitly handle/log it.--- a/crates/rendering/src/frame_pipeline.rs +++ b/crates/rendering/src/frame_pipeline.rs @@ buffer .slice(..) .map_async(wgpu::MapMode::Read, move |result| { - let _ = tx.send(result); + if tx.send(result).is_err() { + tracing::trace!("[PERF:GPU_BUFFER] readback receiver dropped"); + } });
🧹 Nitpick comments (13)
apps/desktop/src-tauri/src/frame_ws.rs (3)
5-11: Guard/optimize LZ4 packing + verify API expectations (prepend-size, metadata placement).
You’re mutatingdata(possibly reallocating 3 times) just to append 12 bytes, then immediately compressing. At leastreserve(12)to avoid extra realloc; also double-check the frontend expectsstride/height/widthto be part of the decompressed tail (since you’re compressing the pixel bytes + metadata together).fn compress_frame_data(mut data: Vec<u8>, stride: u32, height: u32, width: u32) -> Vec<u8> { + data.reserve(12); data.extend_from_slice(&stride.to_le_bytes()); data.extend_from_slice(&height.to_le_bytes()); data.extend_from_slice(&width.to_le_bytes()); lz4_flex::compress_prepend_size(&data) }
55-79: Avoid string formatting in hot-path tracing fields (and handle empty frames).
%format!("{:.1}%", ...)eagerly allocates even if the debug event is disabled, andoriginal_size == 0yieldsinf%. Prefer numeric fields (e.g.,compression_ratio_bpas basis points) guarded byoriginal_size > 0.
220-321: Metrics + lagged accounting are a good addition; watchcontinueskipping periodic metrics.
OnRecvError::Lagged, thecontinueskips the periodic-metrics block, so during sustained lag you may never emit periodic stats. Consider updating metrics/logging beforecontinue(or remove thecontinue).crates/editor/src/playback.rs (2)
69-119:FrameCachelikely cloning very large frame payloads (considerArcstorage / returns)
FrameCache::get()returnsSome(data.clone()), and callers alsosegment_frames.clone()again before insert. IfDecodedSegmentFramesis sizable (likely), this can erase the perf gains of caching.Suggested direction: store
Arc<DecodedSegmentFrames>in the cache (and inPrefetchedFrame) so cache hits don’t clone full frame payloads, while keeping cheap clones.
257-265:next_prefetch_frame += 1should be saturatingOn
u32::MAX(even if rare), this wraps. Prefersaturating_add(1)for consistency with the rest of the file.- next_prefetch_frame += 1; + next_prefetch_frame = next_prefetch_frame.saturating_add(1); ... - next_prefetch_frame += 1; + next_prefetch_frame = next_prefetch_frame.saturating_add(1);Also applies to: 295-296
crates/editor/src/editor_instance.rs (2)
395-427: Prefetch frame math should be saturating; also don’t ignore#[must_use] Optionfromget_frames
let prefetch_frame = frame_number + offset;should besaturating_add.let _ = decoders.get_frames(...).await;discards a#[must_use]Option.- let prefetch_frame = frame_number + offset; + let prefetch_frame = frame_number.saturating_add(offset); ... - let _ = decoders - .get_frames(...) - .await; + let _prefetched = decoders.get_frames(...).await;If you actually want to validate prefetch effectiveness without heavy logging, consider a lightweight counter/metric (via existing perf instrumentation) rather than per-frame logs.
416-427: CancellationToken check won’t cancel an in-flight decodeThe task checks
cancel_token.is_cancelled()only before awaitingget_frames. If cancellation happens after the await starts, it won’t stop work.Use
tokio::select!withcancel_token.cancelled()vs the decode future, and drop the decode result if cancelled.crates/editor/src/editor.rs (1)
277-287:finishedoneshot is now unused overhead (receiver dropped); remove or provide an awaiting API
render_framecreates(finished_tx, _finished_rx)and drops the receiver. Renderer still doescurrent.finished.send(()), but it will always fail. This is extra alloc/work per frame.Two options:
- Remove
finishedfromRendererMessage::RenderFrameentirely.- Keep it, but provide two APIs:
render_frame()(fire-and-forget, no oneshot) andrender_frame_and_wait()(returns/awaits the oneshot).Also, if this relies on tokio/oneshot semantics, please verify against the tokio version in the repo.
apps/desktop/src/utils/socket.ts (1)
55-68: Close logging: avoid unconditional perf logs in production paths.
Even at session end, this may spam logs for typical users; consider gating behind an explicit debug flag (env/setting) consistent with the app’s logging strategy.crates/rendering/src/decoder/ffmpeg.rs (1)
16-46: Avoid deep-cloning frame buffers into_decoded_frame()hot path.
(*self.data).clone()(Lines 30-44) clones the entireVec<u8>each time, negating the benefit of storingArc<Vec<u8>>. Consider addingDecodedFrame::from_arc_*constructors (or changingDecodedFrameto storeArc<[u8]>) soto_decoded_frame()can be zero-copy.crates/rendering/src/yuv_converter.rs (1)
364-406: Cache texture views + bind groups; avoid per-frame bind-group/view churn.
You recreatecreate_view(...)andcreate_bind_group(...)every call (e.g., Lines 364-387, 572-601). Since textures are cached by size/format, you can also cache:
y_view/uv_view/u_view/v_view/output_view- bind groups for NV12 and YUV420p
…and only rebuild them insideensure_textures(...)when textures change.Also applies to: 572-620
crates/rendering/src/layers/display.rs (1)
213-214: Remove or implementcopy_to_texture(currently dead surface area).
An empty public method is easy to misuse; either wire the YUV copy batching through it or drop it until needed.crates/rendering/src/decoder/avassetreader.rs (1)
448-453: Optional: clean up cache scan readability.
cache.iter().rev().find(|v| *v.0 < requested_frame)is hard to read; consider destructuring|(&frame, _)| frame < requested_frame.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- apps/desktop/src-tauri/src/frame_ws.rs (6 hunks)
- apps/desktop/src/routes/editor/Player.tsx (1 hunks)
- apps/desktop/src/routes/editor/context.ts (1 hunks)
- apps/desktop/src/utils/socket.ts (4 hunks)
- crates/editor/src/editor.rs (9 hunks)
- crates/editor/src/editor_instance.rs (8 hunks)
- crates/editor/src/playback.rs (11 hunks)
- crates/rendering/src/decoder/avassetreader.rs (8 hunks)
- crates/rendering/src/decoder/ffmpeg.rs (5 hunks)
- crates/rendering/src/decoder/mod.rs (4 hunks)
- crates/rendering/src/frame_pipeline.rs (2 hunks)
- crates/rendering/src/layers/camera.rs (3 hunks)
- crates/rendering/src/layers/display.rs (2 hunks)
- crates/rendering/src/lib.rs (10 hunks)
- crates/rendering/src/shaders/nv12_to_rgba.wgsl (1 hunks)
- crates/rendering/src/shaders/yuv420p_to_rgba.wgsl (1 hunks)
- crates/rendering/src/yuv_converter.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use TanStack Query v5 for all client-side server state and data fetching in TypeScript files
Files:
- apps/desktop/src/utils/socket.ts
- apps/desktop/src/routes/editor/context.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js,jsx}: Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Follow camelCase naming for variables and functions; PascalCase for components; hooks must start with 'use' prefix
Use Biome for linting and formatting; match existing formatting conventions in the codebase
Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Use static skeletons for loading states that mirror content; avoid bouncing animations
Memoize expensive work, code-split naturally, and use Next/Image for remote assets
**/*.{ts,tsx,js,jsx}: Use 2-space indent for TypeScript files; format with Biome using `pnpm format`
Use Biome for code formatting and linting; run `pnpm format` regularly
Use kebab-case for file names (e.g., `user-menu.tsx`); use PascalCase for components
Files:
- apps/desktop/src/utils/socket.ts
- apps/desktop/src/routes/editor/Player.tsx
- apps/desktop/src/routes/editor/context.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Never add any form of comments to code (single-line `//`, multi-line `/* */`, JSDoc `/** */`, or any other comment syntax); code must be self-explanatory through naming, types, and structure
Directory naming must use lowercase-dashed convention
Files:
- apps/desktop/src/utils/socket.ts
- apps/desktop/src/routes/editor/Player.tsx
- apps/desktop/src/routes/editor/context.ts
apps/desktop/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/desktop/**/*.ts: Use @tanstack/solid-query for server state management in SolidJS components
Use generated commands and events from tauri_specta for IPC; never manually construct IPC calls
Listen directly to generated events from tauri_specta and use typed event interfaces
Files:
- apps/desktop/src/utils/socket.ts
- apps/desktop/src/routes/editor/context.ts
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (`//`, `/* */`, `///`, `//!`, `#`, etc.); code must be self-explanatory through naming, types, and structure
Files:
- apps/desktop/src/utils/socket.ts
- apps/desktop/src/routes/editor/Player.tsx
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/rendering/src/yuv_converter.rs
- crates/editor/src/editor_instance.rs
- crates/editor/src/editor.rs
- apps/desktop/src/routes/editor/context.ts
- crates/rendering/src/frame_pipeline.rs
- crates/editor/src/playback.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/decoder/ffmpeg.rs
- crates/rendering/src/layers/camera.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/decoder/avassetreader.rs
**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Use React Query hooks with Server Actions for mutations and perform precise cache updates using setQueryData/setQueriesData instead of broad invalidations
Files:
apps/desktop/src/routes/editor/Player.tsx
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Use `rustfmt` and workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Always handle `Result`/`Option` or types marked `#[must_use]`; never ignore them (Rust compiler lint: `unused_must_use` = deny)
Never write `let _ = async_fn()` which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
Use `saturating_sub` instead of `-` for `Duration` to avoid panics (Clippy: `unchecked_duration_subtraction` = deny)
Merge nested `if` statements: use `if a && b { }` instead of `if a { if b { } }` (Clippy: `collapsible_if` = deny)
Don't call `.clone()` on `Copy` types; just copy them directly (Clippy: `clone_on_copy` = deny)
U...
Files:
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/rendering/src/yuv_converter.rs
- crates/editor/src/editor_instance.rs
- crates/editor/src/editor.rs
- crates/rendering/src/frame_pipeline.rs
- crates/editor/src/playback.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/decoder/ffmpeg.rs
- crates/rendering/src/layers/camera.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/decoder/avassetreader.rs
🧠 Learnings (4)
📚 Learning: 2025-12-07T14:29:19.180Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.180Z
Learning: Applies to **/*.rs : Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Applied to files:
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/editor.rs
- crates/rendering/src/frame_pipeline.rs
- crates/editor/src/playback.rs
- crates/rendering/src/lib.rs
📚 Learning: 2025-12-07T14:29:40.743Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.743Z
Learning: Applies to **/*.rs : Never use `dbg!()` macro in Rust code; use proper logging instead (Clippy: `dbg_macro` = deny)
Applied to files:
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/rendering/src/frame_pipeline.rs
- crates/rendering/src/lib.rs
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
- apps/desktop/src-tauri/src/frame_ws.rs
- crates/editor/src/editor_instance.rs
- crates/editor/src/editor.rs
- crates/rendering/src/frame_pipeline.rs
- crates/editor/src/playback.rs
- crates/rendering/src/decoder/ffmpeg.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/decoder/avassetreader.rs
📚 Learning: 2025-10-17T05:58:22.586Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
Applied to files:
crates/editor/src/playback.rs
🧬 Code graph analysis (9)
apps/desktop/src/routes/editor/Player.tsx (1)
crates/export/src/mp4.rs (1)
frame(198-206)
apps/desktop/src-tauri/src/frame_ws.rs (1)
apps/desktop/src-tauri/src/camera_legacy.rs (1)
tokio(11-11)
crates/rendering/src/yuv_converter.rs (2)
crates/rendering/src/layers/display.rs (1)
- new (21-46)

crates/rendering/src/decoder/mod.rs (1)
new(31-40)
crates/editor/src/editor_instance.rs (2)
crates/editor/src/playback.rs (2)
- new (76-82)
- spawn (964-1030)

crates/editor/src/editor.rs (1)
spawn(39-73)
crates/editor/src/editor.rs (1)
crates/editor/src/editor_instance.rs (1)
new(38-194)
crates/editor/src/playback.rs (1)
crates/rendering/src/lib.rs (7)
- new (112-182)
- new (391-436)
- new (478-480)
- new (1068-1543)
- new (1560-1565)
- new (1613-1625)
- get_frames (184-232)
crates/rendering/src/decoder/ffmpeg.rs (2)
crates/rendering/src/decoder/avassetreader.rs (5)
- to_decoded_frame (34-52)
- new (61-63)
- new (194-209)
- new (222-227)
- data (211-213)

crates/rendering/src/decoder/mod.rs (4)
- new (31-40)
- new_nv12 (42-51)
- new_yuv420p (53-68)
- data (70-72)
crates/rendering/src/layers/camera.rs (3)
crates/rendering/src/layers/display.rs (2)
- new (21-46)
- render (215-221)

crates/rendering/src/yuv_converter.rs (2)
- new (20-165)
- output_texture (622-624)

crates/rendering/src/composite_frame.rs (3)
- create_frame_texture (179-198)
- default (41-67)
- bind_group (153-177)
crates/rendering/src/decoder/avassetreader.rs (4)
crates/rendering/src/decoder/mod.rs (4)
- format (82-84)
- width (74-76)
- height (78-80)
- data (70-72)

crates/rendering/src/decoder/frame_converter.rs (1)
- copy_rgba_plane (81-92)

crates/rendering/src/decoder/ffmpeg.rs (1)
- to_decoded_frame (28-46)

crates/video-decode/src/ffmpeg.rs (3)
- decoder (103-105)
- decoder (122-124)
- frames (94-101)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Cursor Bugbot
- GitHub Check: Analyze (rust)
🔇 Additional comments (7)
apps/desktop/src-tauri/src/frame_ws.rs (1)
13-20: `created_at: Instant` looks fine; ensure all producers set it consistently.
This file assumes `frame.created_at.elapsed()` is meaningful latency; if any call sites default/refresh `created_at` too late, these metrics become misleading.
crates/editor/src/editor.rs (2)
91-143: Perf metrics instrumentation looks coherent. Final metrics on Stop and periodic metrics are structured and low-overhead relative to the work being measured.
153-193: Queue-drain/drop accounting is reasonable; ensure this behavior is intended. Draining up to ~5ms and dropping older frames biases for "latest frame wins", which matches interactive preview/playback. Just calling out that it's a semantic change (not just perf).
crates/rendering/src/decoder/ffmpeg.rs (1)
279-287: Black-frame metadata looks consistent with RGBA fallback. Format/stride fields match the allocated buffer size (width*height*4).
Format/stride fields match the allocated buffer size (width*height*4).crates/rendering/src/layers/display.rs (1)
109-203: YUV copy path: good structure, but avoid per-frame submit and pass `uv_stride` for NV12.
- Once `YuvToRgbaConverter::convert_nv12` supports `uv_stride`, thread it through here too.
- Consider moving the `copy_texture_to_texture` into the render loop's existing encoder (the empty `copy_to_texture` suggests that direction) to avoid extra encoders/submits per frame.

crates/rendering/src/layers/camera.rs (1)
The two-texture + per-texture bind group setup is consistent withcrates/rendering/src/layers/display.rsand should reduce contention.crates/rendering/src/lib.rs (1)
35-40: Public API updates look coherent (PixelFormat export + DecodedSegmentFrames Clone).
ExportingPixelFormatalongsideDecodedFrameand derivingCloneonDecodedSegmentFramesare consistent with the new multi-format pipeline.Also applies to: 1546-1552
```diff
 export type DialogState = { open: false } | ({ open: boolean } & CurrentDialog);

-export const FPS = 60;
+export const FPS = 30;
```
❓ Verification inconclusive
Script executed:
# First, examine the context.ts file around line 50 and the mentioned usages
cat -n apps/desktop/src/routes/editor/context.ts | head -100

Repository: CapSoftware/Cap
Exit code: 0
stdout:
1 import {
2 createElementBounds,
3 type NullableBounds,
4 } from "@solid-primitives/bounds";
5 import { createContextProvider } from "@solid-primitives/context";
6 import { trackStore } from "@solid-primitives/deep";
7 import { createEventListener } from "@solid-primitives/event-listener";
8 import { createUndoHistory } from "@solid-primitives/history";
9 import { createQuery, skipToken } from "@tanstack/solid-query";
10 import {
11 type Accessor,
12 batch,
13 createEffect,
14 createResource,
15 createSignal,
16 on,
17 onCleanup,
18 } from "solid-js";
19 import { createStore, produce, reconcile, unwrap } from "solid-js/store";
20
21 import { createPresets } from "~/utils/createPresets";
22 import { createCustomDomainQuery } from "~/utils/queries";
23 import { createImageDataWS, createLazySignal } from "~/utils/socket";
24 import {
25 commands,
26 events,
27 type FramesRendered,
28 type MultipleSegments,
29 type ProjectConfiguration,
30 type RecordingMeta,
31 type SceneSegment,
32 type SerializedEditorInstance,
33 type SingleSegment,
34 type TimelineConfiguration,
35 type XY,
36 } from "~/utils/tauri";
37 import type { MaskSegment } from "./masks";
38 import type { TextSegment } from "./text";
39 import { createProgressBar } from "./utils";
40
41 export type CurrentDialog =
42 | { type: "createPreset" }
43 | { type: "renamePreset"; presetIndex: number }
44 | { type: "deletePreset"; presetIndex: number }
45 | { type: "crop"; position: XY<number>; size: XY<number> }
46 | { type: "export" };
47
48 export type DialogState = { open: false } | ({ open: boolean } & CurrentDialog);
49
50 export const FPS = 30;
51
52 export const OUTPUT_SIZE = {
53 x: 1920,
54 y: 1080,
55 };
56
57 export type PreviewQuality = "quarter" | "half" | "full";
58
59 export const DEFAULT_PREVIEW_QUALITY: PreviewQuality = "full";
60
61 const previewQualityScale: Record<PreviewQuality, number> = {
62 full: 1,
63 half: 0.5,
64 quarter: 0.25,
65 };
66
67 export const getPreviewResolution = (quality: PreviewQuality): XY<number> => {
68 const scale = previewQualityScale[quality];
69 const width = (Math.max(2, Math.round(OUTPUT_SIZE.x * scale)) + 1) & ~1;
70 const height = (Math.max(2, Math.round(OUTPUT_SIZE.y * scale)) + 1) & ~1;
71
72 return { x: width, y: height };
73 };
74
75 export type TimelineTrackType = "clip" | "text" | "zoom" | "scene" | "mask";
76
77 export const MAX_ZOOM_IN = 3;
78 const PROJECT_SAVE_DEBOUNCE_MS = 250;
79
80 export type RenderState =
81 | { type: "starting" }
82 | { type: "rendering"; progress: FramesRendered };
83
84 export type CustomDomainResponse = {
85 custom_domain: string | null;
86 domain_verified: boolean | null;
87 };
88
89 export type CornerRoundingType = "rounded" | "squircle";
90
91 type WithCornerStyle<T> = T & { roundingType: CornerRoundingType };
92
93 type EditorTimelineConfiguration = Omit<
94 TimelineConfiguration,
95 "sceneSegments" | "maskSegments"
96 > & {
97 sceneSegments?: SceneSegment[];
98 maskSegments: MaskSegment[];
99 textSegments: TextSegment[];
100 };
Script executed:
# Also get the lines around 556-559 and 758-762 for context
cat -n apps/desktop/src/routes/editor/context.ts | sed -n '550,570p'

Repository: CapSoftware/Cap
Exit code: 0
stdout:
550
551 createEffect(
552 on(
553 () => editorState.playing,
554 (active) => {
555 if (!active)
556 commands.setPlayheadPosition(
557 Math.floor(editorState.playbackTime * FPS),
558 );
559 },
560 ),
561 );
562
563 const totalDuration = () =>
564 project.timeline?.segments.reduce(
565 (acc, s) => acc + (s.end - s.start) / s.timescale,
566 0,
567 ) ?? props.editorInstance.recordingDuration;
568
569 type State = {
570 zoom: number;
Script executed:
# And around 758-762
cat -n apps/desktop/src/routes/editor/context.ts | sed -n '750,770p'

Repository: CapSoftware/Cap
Exit code: 0
stdout:
750 const [ws, wsConnected] = createImageDataWS(
751 instance.framesSocketUrl,
752 setLatestFrame,
753 );
754
755 ws.addEventListener("open", () => {
756 console.log("[Editor] WebSocket open event - emitting initial frame");
757 setIsConnected(true);
758 events.renderFrameEvent.emit({
759 frame_number: 0,
760 fps: FPS,
761 resolution_base: getPreviewResolution(DEFAULT_PREVIEW_QUALITY),
762 });
763 });
764
765 ws.addEventListener("close", () => {
766 setIsConnected(false);
767 });
768
769 return instance;
770 });
Script executed:
# Find backend implementations - search for setPlayheadPosition
find . -type f \( -name "*.ts" -o -name "*.rs" \) | xargs grep -l "setPlayheadPosition" 2>/dev/null | head -20

Repository: CapSoftware/Cap
Exit code: 0
stdout:
./apps/desktop/src/routes/editor/context.ts
./apps/desktop/src/utils/tauri.ts
Script executed:
# Search for renderFrameEvent handling on backend
find . -type f \( -name "*.rs" -o -name "*.ts" \) | xargs grep -l "renderFrameEvent" 2>/dev/null | head -20

Repository: CapSoftware/Cap
Exit code: 0
stdout:
./apps/desktop/src/routes/editor/context.ts
./apps/desktop/src/utils/tauri.ts
Script executed:
# Check tauri.ts for command and event type definitions
cat -n apps/desktop/src/utils/tauri.ts | grep -A 10 "setPlayheadPosition\|renderFrameEvent"

Repository: CapSoftware/Cap
Exit code: 0
stdout:
116 async setPlayheadPosition(frameNumber: number) : Promise<null> {
117 return await TAURI_INVOKE("set_playhead_position", { frameNumber });
118 },
119 async setProjectConfig(config: ProjectConfiguration) : Promise<null> {
120 return await TAURI_INVOKE("set_project_config", { config });
121 },
122 async updateProjectConfigInMemory(config: ProjectConfiguration) : Promise<null> {
123 return await TAURI_INVOKE("update_project_config_in_memory", { config });
124 },
125 async generateZoomSegmentsFromClicks() : Promise<ZoomSegment[]> {
126 return await TAURI_INVOKE("generate_zoom_segments_from_clicks");
--
317 renderFrameEvent: RenderFrameEvent,
318 requestOpenRecordingPicker: RequestOpenRecordingPicker,
319 requestOpenSettings: RequestOpenSettings,
320 requestScreenCapturePrewarm: RequestScreenCapturePrewarm,
321 requestStartRecording: RequestStartRecording,
322 setCaptureAreaPending: SetCaptureAreaPending,
323 targetUnderCursor: TargetUnderCursor,
324 uploadProgressEvent: UploadProgressEvent
325 }>({
326 audioInputLevelChange: "audio-input-level-change",
327 currentRecordingChanged: "current-recording-changed",
--
339 renderFrameEvent: "render-frame-event",
340 requestOpenRecordingPicker: "request-open-recording-picker",
341 requestOpenSettings: "request-open-settings",
342 requestScreenCapturePrewarm: "request-screen-capture-prewarm",
343 requestStartRecording: "request-start-recording",
344 setCaptureAreaPending: "set-capture-area-pending",
345 targetUnderCursor: "target-under-cursor",
346 uploadProgressEvent: "upload-progress-event"
347 })
348
349 /** user-defined constants **/
Script executed:
# Search backend for setPlayheadPosition handler
find . -type f -name "*.rs" | xargs grep -l "setPlayheadPosition" 2>/dev/null | head -10

Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Search backend for renderFrameEvent handler
find . -type f -name "*.rs" | xargs grep -l "renderFrameEvent\|render_frame" 2>/dev/null | head -10

Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/editor/src/audio.rs
./crates/editor/src/editor.rs
./crates/editor/src/editor_instance.rs
./crates/editor/src/playback.rs
./crates/export/src/mp4.rs
Script executed:
# Check for project fps configuration
find . -type f -name "*.rs" -o -name "*.ts" | xargs grep -i "project.*fps\|framerate\|frame.*rate" 2>/dev/null | grep -v node_modules | head -20
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./apps/cli/src/main.rs: // RequestedFormatType::AbsoluteHighestFrameRate,
./apps/cli/src/main.rs: // .map(|f| format!("{}x{}@{}fps", f.resolution().x(), f.resolution().y(), f.frame_rate()))
./apps/desktop/src-tauri/src/captions.rs: frame.set_rate(WHISPER_SAMPLE_RATE);
./apps/desktop/src-tauri/src/captions.rs: for (chunk_idx, chunk) in mixed_samples.chunks(frame_size * channel_count).enumerate() {
./apps/desktop/src-tauri/src/captions.rs: input_frame.set_rate(AudioData::SAMPLE_RATE);
./apps/desktop/src-tauri/src/captions.rs: output_frame.set_rate(WHISPER_SAMPLE_RATE);
./apps/desktop/src-tauri/src/upload.rs: .frame_rate()
./apps/web/app/(org)/dashboard/caps/components/web-recorder-dialog/useWebRecorder.ts: frameRate: { ideal: 30 },
./apps/web/app/(org)/dashboard/caps/components/web-recorder-dialog/web-recorder-constants.ts: frameRate: { ideal: 30 },
./apps/web/components/tools/content.ts: "For best results when slowing down, choose videos with higher frame rates (60fps is ideal). While some choppiness may occur with extreme slowdowns, our tool maintains as much smoothness as possible.",
./crates/audio/src/bin/macos-audio-capture.rs: frame.set_rate(asdb.stream_basic_desc().unwrap().sample_rate as u32);
./crates/audio/src/latency.rs: let fallback = (buffer_size_frames as f64 / sample_rate as f64)
./crates/audio/src/latency.rs: let base_latency = (buffer_frames as f64 / effective_rate).min(ceiling);
./crates/audio/src/latency.rs: let mut latency_secs = total_frames as f64 / effective_rate;
./crates/audio/src/main.rs:// for (index, sample) in buffer.drain(0..frame_size).enumerate() {
./crates/audio/src/main.rs:// for (index, sample) in buffer.drain(0..frame_size).enumerate() {
./crates/camera-avfoundation/examples/cli.rs: let fr_ranges = format.video_supported_frame_rate_ranges();
./crates/camera-avfoundation/examples/cli.rs: max_frame_rate: (
./crates/camera-avfoundation/examples/cli.rs: max_frame_rate: (i64, i32),
./crates/camera-avfoundation/examples/cli.rs: self.max_frame_rate.1 as f32 / self.max_frame_rate.0 as f32,
Script executed:
# Find the set_playhead_position command handler in backend
grep -rn "set_playhead_position\|setPlayheadPosition" --include="*.rs" ./crates ./apps/desktop/src-tauriRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./apps/desktop/src-tauri/src/lib.rs:1483:async fn set_playhead_position(
./apps/desktop/src-tauri/src/lib.rs:2350: set_playhead_position,
Script executed:
# Check playback.rs for renderFrameEvent and frame handling
head -200 ./crates/editor/src/playback.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
use cap_audio::{
FromSampleBytes, LatencyCorrectionConfig, LatencyCorrector, default_output_latency_hint,
};
use cap_media::MediaError;
use cap_media_info::AudioInfo;
use cap_project::{ProjectConfiguration, XY};
use cap_rendering::{DecodedSegmentFrames, ProjectUniforms, RenderVideoConstants};
use cpal::{
BufferSize, SampleFormat, SupportedBufferSize,
traits::{DeviceTrait, HostTrait, StreamTrait},
};
use futures::stream::{FuturesUnordered, StreamExt};
use std::{
collections::{HashMap, HashSet, VecDeque},
sync::{Arc, RwLock},
time::Duration,
};
use tokio::{
sync::{mpsc as tokio_mpsc, watch},
time::Instant,
};
use tracing::{debug, error, info, warn};
use crate::{
audio::{AudioPlaybackBuffer, AudioSegment},
editor,
editor_instance::SegmentMedia,
segments::get_audio_segments,
};
const PREFETCH_BUFFER_SIZE: usize = 120;
const PARALLEL_DECODE_TASKS: usize = 16;
const MAX_PREFETCH_AHEAD: u32 = 150;
const PREFETCH_BEHIND: u32 = 30;
const FRAME_CACHE_SIZE: usize = 90;
#[derive(Debug)]
pub enum PlaybackStartError {
InvalidFps,
}
pub struct Playback {
pub renderer: Arc<editor::RendererHandle>,
pub render_constants: Arc<RenderVideoConstants>,
pub start_frame_number: u32,
pub project: watch::Receiver<ProjectConfiguration>,
pub segment_medias: Arc<Vec<SegmentMedia>>,
}
#[derive(Clone, Copy)]
pub enum PlaybackEvent {
Start,
Frame(u32),
Stop,
}
#[derive(Clone)]
pub struct PlaybackHandle {
stop_tx: watch::Sender<bool>,
event_rx: watch::Receiver<PlaybackEvent>,
}
struct PrefetchedFrame {
frame_number: u32,
segment_frames: DecodedSegmentFrames,
segment_index: u32,
}
struct FrameCache {
frames: HashMap<u32, (DecodedSegmentFrames, u32)>,
order: VecDeque<u32>,
capacity: usize,
}
impl FrameCache {
fn new(capacity: usize) -> Self {
Self {
frames: HashMap::with_capacity(capacity),
order: VecDeque::with_capacity(capacity),
capacity,
}
}
fn get(&mut self, frame_number: u32) -> Option<(DecodedSegmentFrames, u32)> {
if let Some(data) = self.frames.get(&frame_number) {
if let Some(pos) = self.order.iter().position(|&f| f == frame_number) {
self.order.remove(pos);
self.order.push_back(frame_number);
}
Some(data.clone())
} else {
None
}
}
fn insert(
&mut self,
frame_number: u32,
segment_frames: DecodedSegmentFrames,
segment_index: u32,
) {
if self.frames.contains_key(&frame_number) {
if let Some(pos) = self.order.iter().position(|&f| f == frame_number) {
self.order.remove(pos);
}
} else if self.frames.len() >= self.capacity {
if let Some(oldest) = self.order.pop_front() {
self.frames.remove(&oldest);
}
}
self.frames
.insert(frame_number, (segment_frames, segment_index));
self.order.push_back(frame_number);
}
fn len(&self) -> usize {
self.frames.len()
}
}
impl Playback {
pub async fn start(
self,
fps: u32,
resolution_base: XY<u32>,
) -> Result<PlaybackHandle, PlaybackStartError> {
let fps_f64 = fps as f64;
if !(fps_f64.is_finite() && fps_f64 > 0.0) {
warn!(fps, "Invalid FPS provided for playback start");
return Err(PlaybackStartError::InvalidFps);
}
let (stop_tx, mut stop_rx) = watch::channel(false);
stop_rx.borrow_and_update();
let (event_tx, mut event_rx) = watch::channel(PlaybackEvent::Start);
event_rx.borrow_and_update();
let handle = PlaybackHandle {
stop_tx: stop_tx.clone(),
event_rx,
};
let (prefetch_tx, mut prefetch_rx) =
tokio_mpsc::channel::<PrefetchedFrame>(PREFETCH_BUFFER_SIZE * 2);
let (frame_request_tx, mut frame_request_rx) = watch::channel(self.start_frame_number);
let (playback_position_tx, playback_position_rx) = watch::channel(self.start_frame_number);
let in_flight_frames: Arc<RwLock<HashSet<u32>>> = Arc::new(RwLock::new(HashSet::new()));
let prefetch_in_flight = in_flight_frames.clone();
let main_in_flight = in_flight_frames;
let prefetch_stop_rx = stop_rx.clone();
let prefetch_project = self.project.clone();
let prefetch_segment_medias = self.segment_medias.clone();
let prefetch_duration = if let Some(timeline) = &self.project.borrow().timeline {
timeline.duration()
} else {
f64::MAX
};
tokio::spawn(async move {
type PrefetchFuture = std::pin::Pin<
Box<
dyn std::future::Future<Output = (u32, u32, Option<DecodedSegmentFrames>)>
+ Send,
>,
>;
let mut next_prefetch_frame = *frame_request_rx.borrow();
let mut in_flight: FuturesUnordered<PrefetchFuture> = FuturesUnordered::new();
let mut frames_decoded: u32 = 0;
let mut prefetched_behind: HashSet<u32> = HashSet::new();
const INITIAL_PARALLEL_TASKS: usize = 8;
const RAMP_UP_AFTER_FRAMES: u32 = 5;
loop {
if *prefetch_stop_rx.borrow() {
break;
}
if let Ok(true) = frame_request_rx.has_changed() {
let requested = *frame_request_rx.borrow_and_update();
if requested != next_prefetch_frame {
let old_frame = next_prefetch_frame;
let is_backward_seek = requested < old_frame;
let seek_distance = if is_backward_seek {
old_frame - requested
} else {
requested - old_frame
};
next_prefetch_frame = requested;
frames_decoded = 0;
prefetched_behind.clear();
let in_flight_before =
prefetch_in_flight.read().map(|g| g.len()).unwrap_or(0);
let futures_before = in_flight.len();
Script executed:
# Check editor.rs for set_playhead_position implementation
grep -A 20 "set_playhead_position" ./crates/editor/src/editor.rs | head -40
Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Check ProjectConfiguration type to see if it includes fps/framerate
grep -rn "ProjectConfiguration\|project.*config" --include="*.rs" ./crates/editor/src/ | grep -i "struct\|fps\|frame" | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Read the set_playhead_position implementation
sed -n '1483,1550p' ./apps/desktop/src-tauri/src/lib.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
async fn set_playhead_position(
editor_instance: WindowEditorInstance,
frame_number: u32,
) -> Result<(), String> {
editor_instance
.modify_and_emit_state(|state| {
state.playhead_position = frame_number;
})
.await;
Ok(())
}
#[tauri::command]
#[specta::specta]
#[instrument(skip(editor_instance))]
async fn set_project_config(
editor_instance: WindowEditorInstance,
config: ProjectConfiguration,
) -> Result<(), String> {
config.write(&editor_instance.project_path).unwrap();
editor_instance.project_config.0.send(config).ok();
Ok(())
}
#[tauri::command]
#[specta::specta]
#[instrument(skip(editor_instance))]
async fn update_project_config_in_memory(
editor_instance: WindowEditorInstance,
config: ProjectConfiguration,
) -> Result<(), String> {
editor_instance.project_config.0.send(config).ok();
Ok(())
}
#[tauri::command]
#[specta::specta]
#[instrument(skip(editor_instance))]
async fn generate_zoom_segments_from_clicks(
editor_instance: WindowEditorInstance,
) -> Result<Vec<ZoomSegment>, String> {
let meta = editor_instance.meta();
let recordings = &editor_instance.recordings;
let zoom_segments = recording::generate_zoom_segments_for_project(meta, recordings);
Ok(zoom_segments)
}
#[tauri::command]
#[specta::specta]
#[instrument]
async fn list_audio_devices() -> Result<Vec<String>, ()> {
if !permissions::do_permissions_check(false)
.microphone
.permitted()
{
return Ok(vec![]);
}
Ok(MicrophoneFeed::list().keys().cloned().collect())
}
#[derive(Serialize, Type, Debug, Clone)]
pub struct UploadProgress {
Script executed:
# Search for how set_playhead_position uses frameNumber and what fps it expects
grep -A 30 "async fn set_playhead_position" ./apps/desktop/src-tauri/src/lib.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
async fn set_playhead_position(
editor_instance: WindowEditorInstance,
frame_number: u32,
) -> Result<(), String> {
editor_instance
.modify_and_emit_state(|state| {
state.playhead_position = frame_number;
})
.await;
Ok(())
}
#[tauri::command]
#[specta::specta]
#[instrument(skip(editor_instance))]
async fn set_project_config(
editor_instance: WindowEditorInstance,
config: ProjectConfiguration,
) -> Result<(), String> {
config.write(&editor_instance.project_path).unwrap();
editor_instance.project_config.0.send(config).ok();
Ok(())
}
#[tauri::command]
#[specta::specta]
#[instrument(skip(editor_instance))]
async fn update_project_config_in_memory(
Script executed:
# Check ProjectConfiguration structure for fps field
grep -rn "struct ProjectConfiguration\|pub struct ProjectConfiguration" --include="*.rs" ./cratesRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/project/src/configuration.rs:996:pub struct ProjectConfiguration {
Script executed:
# Find where playback is started and what fps is passed
grep -rn "Playback.*start\|playback.*start" --include="*.rs" ./crates ./apps/desktop/src-tauri | grep "\.start" | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/editor/src/playback.rs:148: let (playback_position_tx, playback_position_rx) = watch::channel(self.start_frame_number);
Script executed:
# Read ProjectConfiguration struct definition
sed -n '996,1100p' ./crates/project/src/configuration.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
pub struct ProjectConfiguration {
pub aspect_ratio: Option<AspectRatio>,
pub background: BackgroundConfiguration,
pub camera: Camera,
pub audio: AudioConfiguration,
pub cursor: CursorConfiguration,
pub hotkeys: HotkeysConfiguration,
#[serde(default)]
pub timeline: Option<TimelineConfiguration>,
#[serde(default)]
pub captions: Option<CaptionsData>,
#[serde(default)]
pub clips: Vec<ClipConfiguration>,
#[serde(default)]
pub annotations: Vec<Annotation>,
#[serde(default, skip_serializing)]
pub hidden_text_segments: Vec<usize>,
}
fn camera_config_needs_migration(value: &Value) -> bool {
value
.get("camera")
.and_then(|camera| camera.as_object())
.is_some_and(|camera| {
camera.contains_key("zoom_size")
|| camera.contains_key("advanced_shadow")
|| camera.contains_key("rounding_type")
})
}
impl ProjectConfiguration {
pub fn validate(&self) -> Result<(), AnnotationValidationError> {
for annotation in &self.annotations {
annotation.validate()?;
}
Ok(())
}
pub fn load(project_path: impl AsRef<Path>) -> Result<Self, std::io::Error> {
let project_path = project_path.as_ref();
let config_path = project_path.join("project-config.json");
let config_str = std::fs::read_to_string(&config_path)?;
let parsed_value = serde_json::from_str::<Value>(&config_str).ok();
let config: Self = serde_json::from_str(&config_str)
.map_err(|error| std::io::Error::new(std::io::ErrorKind::InvalidData, error))?;
config
.validate()
.map_err(|error| std::io::Error::new(std::io::ErrorKind::InvalidData, error))?;
if parsed_value
.as_ref()
.map(camera_config_needs_migration)
.unwrap_or(false)
{
match config.write(project_path) {
Ok(_) => {
eprintln!("Updated project-config.json camera keys to camelCase");
}
Err(error) => {
eprintln!("Failed to migrate project-config.json: {error}");
}
}
}
Ok(config)
}
pub fn write(&self, project_path: impl AsRef<Path>) -> Result<(), std::io::Error> {
self.validate()
.map_err(|error| std::io::Error::new(std::io::ErrorKind::InvalidData, error))?;
let temp_path = temp_dir().join(uuid::Uuid::new_v4().to_string());
// Write to temporary file first to ensure readers don't see partial files
std::fs::write(&temp_path, serde_json::to_string_pretty(self)?)?;
std::fs::rename(
&temp_path,
project_path.as_ref().join("project-config.json"),
)?;
Ok(())
}
pub fn get_segment_time(&self, frame_time: f64) -> Option<(f64, &TimelineSegment)> {
self.timeline
.as_ref()
.and_then(|t| t.get_segment_time(frame_time))
}
}
pub const SLOW_SMOOTHING_SAMPLES: usize = 24;
pub const REGULAR_SMOOTHING_SAMPLES: usize = 16;
pub const FAST_SMOOTHING_SAMPLES: usize = 10;
pub const SLOW_VELOCITY_THRESHOLD: f64 = 0.003;
pub const REGULAR_VELOCITY_THRESHOLD: f64 = 0.008;
pub const FAST_VELOCITY_THRESHOLD: f64 = 0.015;
Script executed:
# Search for playhead_position usage in backend
grep -rn "playhead_position" --include="*.rs" ./crates ./apps/desktop/src-tauri | head -30Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/editor/src/editor_instance.rs:176: playhead_position: 0,
./crates/editor/src/editor_instance.rs:250: let start_frame_number = state.playhead_position;
./crates/editor/src/editor_instance.rs:283: state.playhead_position = frame_number;
./crates/editor/src/editor_instance.rs:511: pub playhead_position: u32,
./apps/desktop/src-tauri/src/lib.rs:1244: playhead_position: u32,
./apps/desktop/src-tauri/src/lib.rs:1250: playhead_position: s.playhead_position,
./apps/desktop/src-tauri/src/lib.rs:1483:async fn set_playhead_position(
./apps/desktop/src-tauri/src/lib.rs:1489: state.playhead_position = frame_number;
./apps/desktop/src-tauri/src/lib.rs:2097: state.playhead_position = frame_number;
./apps/desktop/src-tauri/src/lib.rs:2350: set_playhead_position,
Script executed:
# Find renderFrameEvent or render_frame_event handler in backend
grep -rn "render_frame_event\|renderFrameEvent" --include="*.rs" ./apps/desktop/src-tauriRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 1
Script executed:
# Check how WebSocket messages are handled for frame requests
grep -rn "frame_number\|fps" --include="*.rs" ./crates/editor/src/playback.rs | head -40Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
45: pub start_frame_number: u32,
64: frame_number: u32,
84: fn get(&mut self, frame_number: u32) -> Option<(DecodedSegmentFrames, u32)> {
85: if let Some(data) = self.frames.get(&frame_number) {
86: if let Some(pos) = self.order.iter().position(|&f| f == frame_number) {
88: self.order.push_back(frame_number);
98: frame_number: u32,
102: if self.frames.contains_key(&frame_number) {
103: if let Some(pos) = self.order.iter().position(|&f| f == frame_number) {
112: .insert(frame_number, (segment_frames, segment_index));
113: self.order.push_back(frame_number);
124: fps: u32,
127: let fps_f64 = fps as f64;
129: if !(fps_f64.is_finite() && fps_f64 > 0.0) {
130: warn!(fps, "Invalid FPS provided for playback start");
147: let (frame_request_tx, mut frame_request_rx) = watch::channel(self.start_frame_number);
148: let (playback_position_tx, playback_position_rx) = watch::channel(self.start_frame_number);
251: let prefetch_time = frame_num as f64 / fps_f64;
308: let prefetch_time = behind_frame as f64 / fps_f64;
364: frame_number: frame_num,
386: start_frame_number: self.start_frame_number,
388: fps,
392: let frame_duration = Duration::from_secs_f64(1.0 / fps_f64);
393: let mut frame_number = self.start_frame_number;
410: start_frame = self.start_frame_number,
411: fps = fps,
429: r#"{{"location":"playback.rs:start","message":"playback starting","data":{{"start_frame":{},"fps":{},"duration":{}}},"timestamp":{},"sessionId":"debug-session","hypothesisId":"E"}}"#,
430: self.start_frame_number, fps, duration, ts
455: if prefetched.frame_number >= frame_number {
472: .sort_by_key(|p| p.frame_number);
487: if prefetched.frame_number >= frame_number {
494: p.frame_number > frame_number + PREFETCH_BUFFER_SIZE as u32
496: .max_by_key(|(_, p)| p.frame_number)
507: let frame_offset = frame_number.saturating_sub(self.start_frame_number) as f64;
519: let playback_time = frame_number as f64 / fps_f64;
530: let segment_frames_opt = if let Some(cached) = frame_cache.get(frame_number) {
537: .position(|p| p.frame_number == frame_number);
547: .map(|guard| guard.contains(&frame_number))
559: if prefetched.frame_number == frame_number {
562: } else if prefetched.frame_number >= self.start_frame_number {
Script executed:
# Search for where renderFrameEvent is emitted from (frontend) and handled (backend)
grep -rn "render.frame\|render_frame" --include="*.rs" ./apps/desktop/src-tauri ./crates | grep -i "event\|emit\|listen" | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Check WebSocket handler for frame requests - find where fps parameter is received/used
grep -rn "fps\|frame_number" --include="*.rs" ./crates/editor/src/editor_instance.rs | head -30Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
242: pub async fn start_playback(self: &Arc<Self>, fps: u32, resolution_base: XY<u32>) {
250: let start_frame_number = state.playhead_position;
256: start_frame_number,
259: .start(fps, resolution_base)
264: warn!(fps, "Skipping playback start due to invalid FPS");
281: playback::PlaybackEvent::Frame(frame_number) => {
283: state.playhead_position = frame_number;
315: let Some((frame_number, fps, resolution_base)) =
329: trace!("Preview renderer: processing frame {}", frame_number);
344: r#"{{"location":"editor_instance.rs:preview_request","message":"preview renderer requesting frame","data":{{"frame_number":{},"fps":{}}},"timestamp":{},"sessionId":"debug-session","hypothesisId":"C"}}"#,
345: frame_number, fps, ts
354: project.get_segment_time(frame_number as f64 / fps as f64)
358: frame_number
388: r#"{{"location":"editor_instance.rs:preview_prefetch","message":"preview spawning prefetch tasks","data":{{"frame_number":{},"prefetch_count":5}},"timestamp":{},"sessionId":"debug-session","hypothesisId":"C"}}"#,
389: frame_number, ts
399: let prefetch_frame = frame_number + offset;
401: project.get_segment_time(prefetch_frame as f64 / fps as f64)
452: trace!("Preview renderer: rendering frame {}", frame_number);
456: frame_number,
457: fps,
463: .render_frame(segment_frames, uniforms, segment_medias.cursor.clone(), frame_number)
466: warn!("Preview renderer: no frames returned for frame {}", frame_number);
484: pub fn get_total_frames(&self, fps: u32) -> u32 {
485: // Calculate total frames based on actual video duration and fps
493: (fps as f64 * duration).ceil() as u32
Script executed:
# Find the WebSocket connection handler in editor_instance.rs
grep -B5 -A20 "frames_socket_url\|framesSocketUrl" --include="*.rs" ./crates/editor/src/editor_instance.rs | head -60Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Search for how render frame requests are handled from the frontend
grep -rn "frame.request\|FrameRequest" --include="*.rs" ./crates ./apps/desktop/src-tauri | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/editor/src/editor_instance.rs:310: trace!("Preview renderer: waiting for frame request");
./crates/editor/src/playback.rs:147: let (frame_request_tx, mut frame_request_rx) = watch::channel(self.start_frame_number);
./crates/editor/src/playback.rs:170: let mut next_prefetch_frame = *frame_request_rx.borrow();
./crates/editor/src/playback.rs:182: if let Ok(true) = frame_request_rx.has_changed() {
./crates/editor/src/playback.rs:183: let requested = *frame_request_rx.borrow_and_update();
./crates/editor/src/playback.rs:601: let _ = frame_request_tx.send(frame_number);
./crates/editor/src/playback.rs:880: let _ = frame_request_tx.send(frame_number);
Script executed:
# Check TimelineConfiguration and segment handling
grep -rn "struct TimelineConfiguration\|get_segment_time" --include="*.rs" ./crates/project/srcRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/project/src/configuration.rs:700:pub struct TimelineConfiguration {
./crates/project/src/configuration.rs:712: pub fn get_segment_time(&self, frame_time: f64) -> Option<(f64, &TimelineSegment)> {
./crates/project/src/configuration.rs:1081: pub fn get_segment_time(&self, frame_time: f64) -> Option<(f64, &TimelineSegment)> {
./crates/project/src/configuration.rs:1084: .and_then(|t| t.get_segment_time(frame_time))
Script executed:
# See the implementation of get_segment_time
grep -B5 -A15 "pub fn get_segment_time" --include="*.rs" ./crates/project/srcRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 2
stderr:
grep: ./crates/project/src: Is a directory
Script executed:
# Check if there's any source fps or framerate in the recording metadata
grep -rn "struct RecordingMeta\|framerate\|fps" --include="*.rs" ./crates/project/src | head -30Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/project/src/lib.rs:15: pub fps: u32,
./crates/project/src/lib.rs:28: fps: 30,
./crates/project/src/meta.rs:21: #[serde(default = "legacy_static_video_fps")]
./crates/project/src/meta.rs:22: pub fps: u32,
./crates/project/src/meta.rs:27:fn legacy_static_video_fps() -> u32 {
./crates/project/src/meta.rs:62:pub struct RecordingMeta {
./crates/project/src/meta.rs:123: Complete { fps: u32, sample_rate: Option<u32> },
./crates/project/src/meta.rs:228: pub fn min_fps(&self) -> u32 {
./crates/project/src/meta.rs:230: Self::SingleSegment { segment } => segment.display.fps,
./crates/project/src/meta.rs:232: inner.segments.iter().map(|s| s.display.fps).min().unwrap()
./crates/project/src/meta.rs:237: pub fn max_fps(&self) -> u32 {
./crates/project/src/meta.rs:239: Self::SingleSegment { segment } => segment.display.fps,
./crates/project/src/meta.rs:241: inner.segments.iter().map(|s| s.display.fps).max().unwrap()
Script executed:
# Verify render_frame signature and what it expects
grep -B5 -A10 "pub fn render_frame\|pub async fn render_frame" --include="*.rs" ./crates/editor/srcRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 2
stderr:
grep: ./crates/editor/src: Is a directory
Script executed:
# Check RecordingMeta and DisplaySegment struct definition
sed -n '15,35p' ./crates/project/src/lib.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
pub fps: u32,
pub resolution: Resolution,
}
#[derive(Debug, Clone, Serialize, Deserialize, Type)]
pub struct Resolution {
pub width: u32,
pub height: u32,
}
impl Default for RecordingConfig {
fn default() -> Self {
Self {
fps: 30,
resolution: Resolution {
width: 1920,
height: 1080,
},
}
}
}
Script executed:
# Read the complete get_segment_time implementation
sed -n '712,730p' ./crates/project/src/configuration.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
pub fn get_segment_time(&self, frame_time: f64) -> Option<(f64, &TimelineSegment)> {
let mut accum_duration = 0.0;
for segment in self.segments.iter() {
if frame_time < accum_duration + segment.duration() {
return segment
.interpolate_time(frame_time - accum_duration)
.map(|t| (t, segment));
}
accum_duration += segment.duration();
}
None
}
pub fn duration(&self) -> f64 {
self.segments.iter().map(|s| s.duration()).sum()
}
Script executed:
# Find where start_playback is called and what fps is passed
grep -B10 -A5 "start_playback" --include="*.rs" ./crates/editor/src/editor_instance.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
println!("EditorInstance disposed");
}
pub async fn modify_and_emit_state(&self, modify: impl Fn(&mut EditorState)) {
let mut state = self.state.lock().await;
modify(&mut state);
(self.on_state_change)(&state);
}
pub async fn start_playback(self: &Arc<Self>, fps: u32, resolution_base: XY<u32>) {
let _ = self.playback_active.send(true);
let (mut handle, prev) = {
let Ok(mut state) = self.state.try_lock() else {
return;
Script executed:
# Check if there's any synchronization between RecordingMeta fps and preview fps
grep -rn "recording.*fps\|meta.*fps" --include="*.rs" ./crates/editor/src | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Check how DisplaySegment fps is defined
grep -B5 -A10 "struct DisplaySegment\|pub struct DisplaySegment" --include="*.rs" ./crates/project/srcRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 2
stderr:
grep: ./crates/project/src: Is a directory
Script executed:
# Find where start_playback is called and what fps argument is passed
grep -rn "start_playback" --include="*.rs" ./crates ./apps/desktop/src-tauriRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/editor/examples/cli.rs:13: editor.start_playback(30, XY::new(1920, 1080)).await;
./crates/editor/src/editor_instance.rs:242: pub async fn start_playback(self: &Arc<Self>, fps: u32, resolution_base: XY<u32>) {
./apps/desktop/src-tauri/src/lib.rs:1258:async fn start_playback(
./apps/desktop/src-tauri/src/lib.rs:1263: editor_instance.start_playback(fps, resolution_base).await;
./apps/desktop/src-tauri/src/lib.rs:2348: start_playback,
Script executed:
# Check the full context where fps is passed to start_playback in editor_instance
grep -B20 "start_playback" --include="*.rs" ./crates/editor/src/editor_instance.rs | tail -40Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
// // Clear audio data
// if self.audio.lock().unwrap().is_some() {
// println!("Clearing audio data");
// *self.audio.lock().unwrap() = None; // Explicitly drop the audio data
// }
// Cancel any remaining tasks
tokio::task::yield_now().await;
drop(state);
println!("EditorInstance disposed");
}
pub async fn modify_and_emit_state(&self, modify: impl Fn(&mut EditorState)) {
let mut state = self.state.lock().await;
modify(&mut state);
(self.on_state_change)(&state);
}
pub async fn start_playback(self: &Arc<Self>, fps: u32, resolution_base: XY<u32>) {
Script executed:
# Look for where renderFrameEvent is handled/listened to in editor_instance
grep -rn "render_frame\|renderFrameEvent" --include="*.rs" ./crates/editor/srcRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
./crates/editor/src/audio.rs:150: pub fn render_frame(
./crates/editor/src/audio.rs:155: self.render_frame_raw(requested_samples, project)
./crates/editor/src/audio.rs:167: pub fn render_frame_raw(
./crates/editor/src/audio.rs:294: .render_frame(Self::PROCESSING_SAMPLES_COUNT as usize, project);
./crates/editor/src/editor.rs:270: pub async fn render_frame(
./crates/editor/src/editor_instance.rs:463: .render_frame(segment_frames, uniforms, segment_medias.cursor.clone(), frame_number)
./crates/editor/src/playback.rs:771: .render_frame(
Script executed:
# Check if there's a recording metadata field that has the actual recording fps
grep -B5 -A15 "pub struct RecordingMeta" --include="*.rs" ./crates/project/src/meta.rsRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
return Self::MacOS;
}
}
#[derive(Debug, Clone, Serialize, Deserialize, Type)]
pub struct RecordingMeta {
#[serde(default)]
pub platform: Option<Platform>,
#[serde(skip_serializing, default)]
pub project_path: PathBuf,
pub pretty_name: String,
#[serde(default)]
pub sharing: Option<SharingMeta>,
#[serde(flatten)]
pub inner: RecordingMetaInner,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub upload: Option<UploadMeta>,
}
#[derive(Deserialize, Serialize, Clone, Type, Debug)]
pub struct S3UploadMeta {
Script executed:
# Find the Tauri start_playback command implementation
sed -n '1258,1275p' ./apps/desktop/src-tauri/src/lib.rs
Repository: CapSoftware/Cap
Exit code: 0
stdout:
async fn start_playback(
editor_instance: WindowEditorInstance,
fps: u32,
resolution_base: XY<u32>,
) -> Result<(), String> {
editor_instance.start_playback(fps, resolution_base).await;
Ok(())
}
#[tauri::command]
#[specta::specta]
#[instrument(skip(editor_instance))]
async fn stop_playback(editor_instance: WindowEditorInstance) -> Result<(), String> {
let mut state = editor_instance.state.lock().await;
if let Some(handle) = state.playback_task.take() {
handle.stop();
Script executed:
# Check what fps value is in the context.ts when calling renderFrameEvent
# Look at how editorInstance is created and what metadata it has
cat -n apps/desktop/src/routes/editor/context.ts | sed -n '700,800p'
Repository: CapSoftware/Cap
Exit code: 0
stdout:
700
701 let meta;
702
703 if ("segments" in rawMeta) {
704 meta = {
705 ...rawMeta,
706 type: "multiple",
707 } as unknown as MultipleSegments & { type: "multiple" };
708 } else {
709 meta = {
710 ...rawMeta,
711 type: "single",
712 } as unknown as SingleSegment & { type: "single" };
713 }
714
715 return {
716 ...rawMeta,
717 ...meta,
718 prettyName: pretty_name,
719 hasCamera: (() => {
720 if (meta.type === "single") return !!meta.camera;
721 return !!meta.segments[0].camera;
722 })(),
723 hasSystemAudio: (() => {
724 if (meta.type === "single") return false;
725 return !!meta.segments[0].system_audio;
726 })(),
727 hasMicrophone: (() => {
728 if (meta.type === "single") return !!meta.audio;
729 return !!meta.segments[0].mic;
730 })(),
731 };
732 }
733
734 export type TransformedMeta = ReturnType<typeof transformMeta>;
735
736 export const [EditorInstanceContextProvider, useEditorInstanceContext] =
737 createContextProvider(() => {
738 const [latestFrame, setLatestFrame] = createLazySignal<{
739 width: number;
740 data: ImageData;
741 }>();
742
743 const [isConnected, setIsConnected] = createSignal(false);
744
745 const [editorInstance] = createResource(async () => {
746 console.log("[Editor] Creating editor instance...");
747 const instance = await commands.createEditorInstance();
748 console.log("[Editor] Editor instance created, setting up WebSocket");
749
750 const [ws, wsConnected] = createImageDataWS(
751 instance.framesSocketUrl,
752 setLatestFrame,
753 );
754
755 ws.addEventListener("open", () => {
756 console.log("[Editor] WebSocket open event - emitting initial frame");
757 setIsConnected(true);
758 events.renderFrameEvent.emit({
759 frame_number: 0,
760 fps: FPS,
761 resolution_base: getPreviewResolution(DEFAULT_PREVIEW_QUALITY),
762 });
763 });
764
765 ws.addEventListener("close", () => {
766 setIsConnected(false);
767 });
768
769 return instance;
770 });
771
772 const metaQuery = createQuery(() => ({
773 queryKey: ["editor", "meta"],
774 queryFn: editorInstance()
775 ? () => commands.getEditorMeta().then(transformMeta)
776 : skipToken,
777 cacheTime: 0,
778 staleTime: 0,
779 }));
780
781 return {
782 editorInstance,
783 latestFrame,
784 presets: createPresets(),
785 metaQuery,
786 };
787 }, null!);
788
789 function createStoreHistory<T extends Static>(
790 ...[state, setState]: ReturnType<typeof createStore<T>>
791 ) {
792 // not working properly yet
793 // const getDelta = captureStoreUpdates(state);
794
795 const [pauseCount, setPauseCount] = createSignal(0);
796
797 const history = createUndoHistory(() => {
798 if (pauseCount() > 0) return;
799
800 trackStore(state);
Script executed:
# Check the editorInstance type and what framesSocketUrl includes
grep -B10 -A10 "framesSocketUrl\|frames_socket_url" --include="*.ts" apps/desktop/src/routes/editor/context.ts | head -40Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
}>();
const [isConnected, setIsConnected] = createSignal(false);
const [editorInstance] = createResource(async () => {
console.log("[Editor] Creating editor instance...");
const instance = await commands.createEditorInstance();
console.log("[Editor] Editor instance created, setting up WebSocket");
const [ws, wsConnected] = createImageDataWS(
instance.framesSocketUrl,
setLatestFrame,
);
ws.addEventListener("open", () => {
console.log("[Editor] WebSocket open event - emitting initial frame");
setIsConnected(true);
events.renderFrameEvent.emit({
frame_number: 0,
fps: FPS,
resolution_base: getPreviewResolution(DEFAULT_PREVIEW_QUALITY),
Script executed:
# Find what RecordingMeta contains when returned from backend
grep -rn "RecordingMeta\|recording_meta" --include="*.rs" ./crates/editor/src/editor_instance.rs | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
4:use cap_project::StudioRecordingMeta;
6: CursorEvents, ProjectConfiguration, RecordingMeta, RecordingMetaInner, TimelineConfiguration,
34: meta: RecordingMeta,
50: let recording_meta = cap_project::RecordingMeta::load_for_project(&project_path)
53: let RecordingMetaInner::Studio(meta) = &recording_meta.inner else {
58: StudioRecordingMeta::SingleSegment { .. } => 1,
59: StudioRecordingMeta::MultipleSegments { inner } => inner.segments.len(),
70: let mut project = recording_meta.project_config();
75: StudioRecordingMeta::SingleSegment { segment } => {
76: let display_path = recording_meta.path(&segment.display.path);
95: StudioRecordingMeta::MultipleSegments { inner } => inner
100: let display_path = recording_meta.path(&segment.display.path);
134: if let Err(e) = project.write(&recording_meta.project_path) {
144: &recording_meta.project_path,
149: let segments = create_segments(&recording_meta, meta).await?;
154: RenderVideoConstants::new(&recordings.segments, recording_meta.clone(), meta.clone())
163: &recording_meta,
184: meta: recording_meta,
196: pub fn meta(&self) -> &RecordingMeta {
477: fn get_studio_meta(&self) -> &StudioRecordingMeta {
Script executed:
# Check if frontend gets recording metadata with fps
grep -rn "RecordingMeta\|recordingMeta" --include="*.ts" apps/desktop/src/utils/tauri.tsRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
98:async getVideoMetadata(path: string) : Promise<VideoRecordingMetadata> {
149:async getRecordingMeta(path: string, fileType: FileType) : Promise<RecordingMetaWithMetadata> {
155:async listRecordings() : Promise<([string, RecordingMetaWithMetadata])[]> {
158:async listScreenshots() : Promise<([string, RecordingMeta])[]> {
220:async getEditorMeta() : Promise<RecordingMeta> {
427:export type InstantRecordingMeta = { recording: boolean } | { error: string } | { fps: number; sample_rate: number | null }
464:export type RecordingMeta = (StudioRecordingMeta | InstantRecordingMeta) & { platform?: Platform | null; pretty_name: string; sharing?: SharingMeta | null; upload?: UploadMeta | null }
465:export type RecordingMetaWithMetadata = ((StudioRecordingMeta | InstantRecordingMeta) & { platform?: Platform | null; pretty_name: string; sharing?: SharingMeta | null; upload?: UploadMeta | null }) & { mode: RecordingMode; status: StudioRecordingStatus }
492:export type StudioRecordingMeta = { segment: SingleSegment } | { inner: MultipleSegments }
505:export type VideoRecordingMetadata = { duration: number; size: number }
Script executed:
# Check if RecordingMeta.fps is actually populated when loaded
grep -B5 -A25 "pub struct RecordingMeta\|RecordingMetaInner" --include="*.rs" ./crates/project/src/meta.rs | head -80
Repository: CapSoftware/Cap
Exit code: 0
stdout:
return Self::MacOS;
}
}
#[derive(Debug, Clone, Serialize, Deserialize, Type)]
pub struct RecordingMeta {
#[serde(default)]
pub platform: Option<Platform>,
#[serde(skip_serializing, default)]
pub project_path: PathBuf,
pub pretty_name: String,
#[serde(default)]
pub sharing: Option<SharingMeta>,
#[serde(flatten)]
pub inner: RecordingMetaInner,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub upload: Option<UploadMeta>,
}
#[derive(Deserialize, Serialize, Clone, Type, Debug)]
pub struct S3UploadMeta {
pub id: String,
}
#[derive(Clone, Serialize, Deserialize, specta::Type, Debug)]
pub struct VideoUploadInfo {
pub id: String,
pub link: String,
pub config: S3UploadMeta,
}
#[derive(Debug, Clone, Serialize, Deserialize, Type)]
#[serde(tag = "state")]
pub enum UploadMeta {
MultipartUpload {
video_id: String,
file_path: PathBuf,
pre_created_video: VideoUploadInfo,
recording_dir: PathBuf,
},
--
Complete,
}
#[derive(Debug, Clone, Serialize, Deserialize, Type)]
#[serde(untagged, rename_all = "camelCase")]
pub enum RecordingMetaInner {
Studio(StudioRecordingMeta),
Instant(InstantRecordingMeta),
}
impl specta::Flatten for RecordingMetaInner {}
#[derive(Debug, Clone, Serialize, Deserialize, Type)]
#[serde(untagged, rename_all = "camelCase")]
pub enum InstantRecordingMeta {
InProgress { recording: bool },
Failed { error: String },
Complete { fps: u32, sample_rate: Option<u32> },
}
impl RecordingMeta {
pub fn path(&self, relative: &RelativePathBuf) -> PathBuf {
relative.to_path(&self.project_path)
}
pub fn load_for_project(project_path: &Path) -> Result<Self, Box<dyn Error>> {
let meta_path = project_path.join("recording-meta.json");
let mut meta: Self = serde_json::from_str(&std::fs::read_to_string(&meta_path)?)?;
meta.project_path = project_path.to_path_buf();
Ok(meta)
}
pub fn save_for_project(&self) -> Result<(), Either<serde_json::Error, std::io::Error>> {
let meta_path = &self.project_path.join("recording-meta.json");
let meta = serde_json::to_string_pretty(&self).map_err(Either::Left)?;
--
config
}
Script executed:
# Check what happens in the metaQuery - does it include fps?
grep -B5 -A10 "getEditorMeta\|get_editor_meta" --include="*.ts" apps/desktop/src/utils/tauri.tsRepository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
await TAURI_INVOKE("update_auth_plan");
},
async setWindowTransparent(value: boolean) : Promise<void> {
await TAURI_INVOKE("set_window_transparent", { value });
},
async getEditorMeta() : Promise<RecordingMeta> {
return await TAURI_INVOKE("get_editor_meta");
},
async setPrettyName(prettyName: string) : Promise<null> {
return await TAURI_INVOKE("set_pretty_name", { prettyName });
},
async setServerUrl(serverUrl: string) : Promise<null> {
return await TAURI_INVOKE("set_server_url", { serverUrl });
},
async setCameraPreviewState(state: CameraPreviewState) : Promise<null> {
return await TAURI_INVOKE("set_camera_preview_state", { state });
},
Script executed:
# Verify that rendering actually uses the fps parameter to determine frames
grep -B5 -A15 "pub async fn render_frame" --include="*.rs" ./crates/editor/src/editor.rs | head -50Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
impl RendererHandle {
async fn send(&self, msg: RendererMessage) {
let _ = self.tx.send(msg).await;
}
pub async fn render_frame(
&self,
segment_frames: DecodedSegmentFrames,
uniforms: ProjectUniforms,
cursor: Arc<CursorEvents>,
frame_number: u32,
) {
let (finished_tx, _finished_rx) = oneshot::channel();
self.send(RendererMessage::RenderFrame {
segment_frames,
uniforms,
finished: finished_tx,
cursor,
frame_number,
})
Script executed:
# Check how TimelineSegment duration is calculated
grep -B5 -A15 "pub fn duration" --include="*.rs" ./crates/project/src/configuration.rs | head -60Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
Some(self.start + tick * self.timescale)
}
}
/// in seconds
pub fn duration(&self) -> f64 {
(self.end - self.start) / self.timescale
}
}
#[derive(Type, Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct ZoomSegment {
pub start: f64,
pub end: f64,
pub amount: f64,
pub mode: ZoomMode,
}
#[derive(Type, Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
--
}
None
}
pub fn duration(&self) -> f64 {
self.segments.iter().map(|s| s.duration()).sum()
}
}
pub const WALLPAPERS_PATH: &str = "assets/backgrounds/macOS";
#[derive(Type, Serialize, Deserialize, Clone, Debug, Default)]
#[serde(rename_all = "camelCase")]
pub struct CaptionWord {
pub text: String,
pub start: f32,
pub end: f32,
}
#[derive(Type, Serialize, Deserialize, Clone, Debug, Default)]
Script executed:
# Check if segment.duration() is time-based or frame-based
grep -B10 "fn duration\|pub fn duration" --include="*.rs" ./crates/project/src/configuration.rs | grep -A10 "TimelineSegment\|segment"Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
impl TimelineSegment {
fn interpolate_time(&self, tick: f64) -> Option<f64> {
if tick > self.duration() {
None
} else {
Some(self.start + tick * self.timescale)
}
}
/// in seconds
pub fn duration(&self) -> f64 {
--
.map(|t| (t, segment));
}
accum_duration += segment.duration();
}
None
}
pub fn duration(&self) -> f64 {
Script executed:
# Check if there's any frame-to-time conversion that uses source fps vs preview fps
grep -rn "fps\|frame.*time\|time.*frame" --include="*.rs" ./crates/rendering/src | grep -i "convert\|calculation" | head -20Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
Script executed:
# Check how frame_number from playhead_position is used in the rendering pipeline
grep -B5 -A20 "playhead_position" --include="*.rs" ./crates/editor/src/editor_instance.rs | grep -A15 "playhead_position" | head -40Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
playhead_position: 0,
playback_task: None,
preview_task: None,
})),
on_state_change: Box::new(on_state_change),
preview_tx,
project_config: watch::channel(project),
segment_medias: Arc::new(segments),
meta: recording_meta,
playback_active: playback_active_tx,
playback_active_rx,
});
this.state.lock().await.preview_task =
Some(this.clone().spawn_preview_renderer(preview_rx));
--
let start_frame_number = state.playhead_position;
let playback_handle = match (playback::Playback {
segment_medias: self.segment_medias.clone(),
renderer: self.renderer.clone(),
render_constants: self.render_constants.clone(),
start_frame_number,
project: self.project_config.0.subscribe(),
})
.start(fps, resolution_base)
.await
{
Ok(handle) => handle,
Err(PlaybackStartError::InvalidFps) => {
warn!(fps, "Skipping playback start due to invalid FPS");
return;
--
state.playhead_position = frame_number;
})
.await;
}
playback::PlaybackEvent::Stop => {
let _ = this.playback_active.send(false);
Script executed:
# Check the relationship between frame_number parameter and actual frame selection in decode/render
grep -B10 -A20 "frame_number.*segment\|get_segment" --include="*.rs" ./crates/editor/src/editor_instance.rs | head -80Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stdout:
r#"{{"location":"editor_instance.rs:preview_request","message":"preview renderer requesting frame","data":{{"frame_number":{},"fps":{}}},"timestamp":{},"sessionId":"debug-session","hypothesisId":"C"}}"#,
frame_number, fps, ts
)
.ok();
}
// #endregion
let project = self.project_config.1.borrow().clone();
let Some((segment_time, segment)) =
project.get_segment_time(frame_number as f64 / fps as f64)
else {
warn!(
"Preview renderer: no segment found for frame {}",
frame_number
);
break;
};
let segment_medias = &self.segment_medias[segment.recording_clip as usize];
let clip_config = project
.clips
.iter()
.find(|v| v.index == segment.recording_clip);
let clip_offsets = clip_config.map(|v| v.offsets).unwrap_or_default();
let new_cancel_token = CancellationToken::new();
prefetch_cancel_token = Some(new_cancel_token.clone());
let playback_is_active = *self.playback_active_rx.borrow();
if !playback_is_active {
--
.ok();
}
// #endregion
let prefetch_frames_count = 5u32;
let hide_camera = project.camera.hide;
let playback_rx = self.playback_active_rx.clone();
for offset in 1..=prefetch_frames_count {
let prefetch_frame = frame_number + offset;
if let Some((prefetch_segment_time, prefetch_segment)) =
project.get_segment_time(prefetch_frame as f64 / fps as f64)
{
if let Some(prefetch_segment_media) = self
.segment_medias
.get(prefetch_segment.recording_clip as usize)
{
let prefetch_clip_offsets = project
.clips
.iter()
.find(|v| v.index == prefetch_segment.recording_clip)
.map(|v| v.offsets)
.unwrap_or_default();
let decoders = prefetch_segment_media.decoders.clone();
let cancel_token = new_cancel_token.clone();
let playback_rx = playback_rx.clone();
tokio::spawn(async move {
if cancel_token.is_cancelled() || *playback_rx.borrow() {
return;
}
let _ = decoders
.get_frames(
Script executed:
# Check if there's any documentation or comments about fps meaning
grep -B3 -A3 "fps.*preview\|preview.*fps\|frame.*30\|30.*frame" --include="*.rs" ./crates/editor/src | head -50Repository: CapSoftware/Cap
Repository: CapSoftware/Cap
Exit code: 0
stderr:
grep: ./crates/editor/src: Is a directory
Verify FPS=30 is the intended preview framerate and update naming for clarity.
The FPS constant drives frame-index calculations (setPlayheadPosition(Math.floor(playbackTime * FPS)), Line 556-558) and is sent to the backend in frame rendering requests (renderFrameEvent.emit({ fps: FPS, ... }), Line 758-762). The backend converts frame indices back to time using this same fps (frame_number / fps), so the contract must be consistent on both sides. If the preview fps is meant to track the source recording fps (which varies per project) rather than being fixed at 30, the hardcoded constant will cause misalignment. Additionally, rename it to PREVIEW_FPS or similar to make explicit that it is the preview framerate, not the source/project framerate, preventing accidental misuse where the native fps is expected.
🤖 Prompt for AI Agents
In apps/desktop/src/routes/editor/context.ts around line 50, the hardcoded
export const FPS = 30 is ambiguous and may be incorrect if the preview framerate
should track a project's source recording fps; rename it to PREVIEW_FPS (or
PREVIEW_FRAME_RATE) and update references to make explicit it's the
preview/rendering fps, not the source fps. If preview must follow
project-specific fps, change the constant to a configurable value that is set
from the current project's metadata (e.g., default to 30 but override from
project.fps on load) and ensure all uses (playhead math and renderFrameEvent
payloads) read the configured preview fps; otherwise keep a clearly named
constant PREVIEW_FPS and update comments to document the contract with the
backend. Ensure the backend contract still expects the same fps field or update
the emitted payload key if renaming to avoid mismatch.
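For illustration, a minimal sketch of the renamed constant with a project-derived override, assuming the meta shape seen in the scripts above (segments[i].display.fps); the resolvePreviewFps name and fallback behaviour are hypothetical, not existing code:
export const PREVIEW_FPS = 30;

// Hypothetical helper: prefer the recording's own fps when metadata provides one.
export function resolvePreviewFps(meta?: {
  segments?: Array<{ display: { fps: number } }>;
}): number {
  const sourceFps = meta?.segments?.[0]?.display.fps;
  // Fall back to the fixed preview rate when metadata is missing or invalid.
  return sourceFps && Number.isFinite(sourceFps) && sourceFps > 0
    ? sourceFps
    : PREVIEW_FPS;
}
Whatever value is chosen, the same number must feed both the playhead math (Math.floor(playbackTime * previewFps)) and the renderFrameEvent payload, since the backend divides frame_number by that fps to recover time.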
// #region agent log
let canvasRenderCount = 0;
let totalCanvasRenderTime = 0;
let maxCanvasRenderTime = 0;
let lastCanvasMetricsLog = performance.now();
// #endregion
🛠️ Refactor suggestion | 🟠 Major
Remove the newly-added comments and gate this debug instrumentation (hard-coded localhost POST) behind a dev flag.
The // #region ... blocks violate the “no comments” guideline, and the hard-coded http://127.0.0.1:7242/ingest/... + fixed sessionId/hypothesisId should not run in normal builds. Also, sending avgRenderTimeMs/maxRenderTimeMs as strings (toFixed) is awkward for downstream consumers.
function PreviewCanvas() {
const { latestFrame } = useEditorContext();
let canvasRef: HTMLCanvasElement | undefined;
+ let ctx: CanvasRenderingContext2D | null = null;
const [canvasContainerRef, setCanvasContainerRef] =
createSignal<HTMLDivElement>();
const containerBounds = createElementBounds(canvasContainerRef);
- // #region agent log
- let canvasRenderCount = 0;
- let totalCanvasRenderTime = 0;
- let maxCanvasRenderTime = 0;
- let lastCanvasMetricsLog = performance.now();
- // #endregion
+ const canvasRenderMetrics = {
+ count: 0,
+ totalMs: 0,
+ maxMs: 0,
+ lastLogAt: performance.now(),
+ };
+ const metricsEnabled = import.meta.env.DEV;
+ const metricsIngestUrl = metricsEnabled
+ ? "http://127.0.0.1:7242/ingest/966647b7-72f6-4ab7-b76e-6b773ac020d7"
+ : null;
createEffect(() => {
const frame = latestFrame();
if (!frame) return;
if (!canvasRef) return;
- const ctx = canvasRef.getContext("2d", { alpha: false });
+ ctx ??= canvasRef.getContext("2d", { alpha: false });
if (!ctx) return;
- // #region agent log
const renderStart = performance.now();
- // #endregion
createImageBitmap(frame.data).then((bitmap) => {
ctx.drawImage(bitmap, 0, 0);
bitmap.close();
- // #region agent log
const renderTime = performance.now() - renderStart;
- canvasRenderCount++;
- totalCanvasRenderTime += renderTime;
- maxCanvasRenderTime = Math.max(maxCanvasRenderTime, renderTime);
+ canvasRenderMetrics.count++;
+ canvasRenderMetrics.totalMs += renderTime;
+ canvasRenderMetrics.maxMs = Math.max(canvasRenderMetrics.maxMs, renderTime);
if (
- performance.now() - lastCanvasMetricsLog >= 2000 &&
- canvasRenderCount > 0
+ metricsIngestUrl &&
+ performance.now() - canvasRenderMetrics.lastLogAt >= 2000 &&
+ canvasRenderMetrics.count > 0
) {
- const avgTime = totalCanvasRenderTime / canvasRenderCount;
+ const avgTime = canvasRenderMetrics.totalMs / canvasRenderMetrics.count;
fetch(
- "http://127.0.0.1:7242/ingest/966647b7-72f6-4ab7-b76e-6b773ac020d7",
+ metricsIngestUrl,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
location: "Player.tsx:canvas_render",
message: "canvas render metrics",
data: {
- canvasRenderCount,
- avgRenderTimeMs: avgTime.toFixed(2),
- maxRenderTimeMs: maxCanvasRenderTime.toFixed(2),
+ canvasRenderCount: canvasRenderMetrics.count,
+ avgRenderTimeMs: Math.round(avgTime * 100) / 100,
+ maxRenderTimeMs: Math.round(canvasRenderMetrics.maxMs * 100) / 100,
frameWidth: frame.width,
frameHeight: frame.data.height,
},
timestamp: Date.now(),
sessionId: "debug-session",
hypothesisId: "E",
}),
},
).catch(() => {});
- lastCanvasMetricsLog = performance.now();
+ canvasRenderMetrics.lastLogAt = performance.now();
}
- // #endregion
});
});
Also applies to: 478-480, 484-518
🤖 Prompt for AI Agents
In apps/desktop/src/routes/editor/Player.tsx around lines 465-471 (and similarly
478-480, 484-518), remove the newly-added region comments and refactor the debug
instrumentation so it only runs when a development flag is enabled (e.g.
process.env.NODE_ENV === 'development' or a dedicated feature flag), avoid any
hard-coded endpoint, sessionId or hypothesisId by reading them from config/env
or passing them in as optional props, and send numeric metrics (avgRenderTimeMs
and maxRenderTimeMs) as numbers (not toFixed strings) in the POST body; ensure
the instrumentation is no-op in production builds and consolidate repeated logic
into a guarded helper to cover the other line ranges mentioned.
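A sketch of such a guarded helper, assuming a Vite-style env var; VITE_METRICS_INGEST_URL and postDebugMetric are illustrative names, not part of the codebase:
// Resolved once at module load; null outside dev builds, so the helper is a no-op.
const METRICS_URL: string | null = import.meta.env.DEV
  ? (import.meta.env.VITE_METRICS_INGEST_URL ?? null)
  : null;

export function postDebugMetric(
  location: string,
  message: string,
  data: Record<string, number>,
): void {
  if (!METRICS_URL) return;
  fetch(METRICS_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Metrics stay numeric; no toFixed strings in the payload.
    body: JSON.stringify({ location, message, data, timestamp: Date.now() }),
  }).catch(() => {});
}
Call sites would then pass rounded numbers, e.g. postDebugMetric("Player.tsx:canvas_render", "canvas render metrics", { avgRenderTimeMs: Math.round(avgTime * 100) / 100 }), and the whole path reduces to an early return in production.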
 createEffect(() => {
   const frame = latestFrame();
   if (!frame) return;
   if (!canvasRef) return;
-  const ctx = canvasRef.getContext("2d");
-  ctx?.putImageData(frame.data, 0, 0);
+  const ctx = canvasRef.getContext("2d", { alpha: false });
+  if (!ctx) return;
+  // #region agent log
+  const renderStart = performance.now();
+  // #endregion
+  createImageBitmap(frame.data).then((bitmap) => {
+    ctx.drawImage(bitmap, 0, 0);
+    bitmap.close();
+    // #region agent log
+    const renderTime = performance.now() - renderStart;
+    canvasRenderCount++;
+    totalCanvasRenderTime += renderTime;
+    maxCanvasRenderTime = Math.max(maxCanvasRenderTime, renderTime);
+    if (
+      performance.now() - lastCanvasMetricsLog >= 2000 &&
+      canvasRenderCount > 0
+    ) {
+      const avgTime = totalCanvasRenderTime / canvasRenderCount;
+      fetch(
+        "http://127.0.0.1:7242/ingest/966647b7-72f6-4ab7-b76e-6b773ac020d7",
+        {
+          method: "POST",
+          headers: { "Content-Type": "application/json" },
+          body: JSON.stringify({
+            location: "Player.tsx:canvas_render",
+            message: "canvas render metrics",
+            data: {
+              canvasRenderCount,
+              avgRenderTimeMs: avgTime.toFixed(2),
+              maxRenderTimeMs: maxCanvasRenderTime.toFixed(2),
+              frameWidth: frame.width,
+              frameHeight: frame.data.height,
+            },
+            timestamp: Date.now(),
+            sessionId: "debug-session",
+            hypothesisId: "E",
+          }),
+        },
+      ).catch(() => {});
+      lastCanvasMetricsLog = performance.now();
+    }
+    // #endregion
+  });
 });
Prevent out-of-order frame draws and handle createImageBitmap rejection.
As written, a slow createImageBitmap(frame.data) for an older frame can resolve after a newer frame and overwrite the canvas; plus failures can become unhandled promise rejections.
function PreviewCanvas() {
const { latestFrame } = useEditorContext();
let canvasRef: HTMLCanvasElement | undefined;
let ctx: CanvasRenderingContext2D | null = null;
+ let renderToken = 0;
createEffect(() => {
const frame = latestFrame();
if (!frame) return;
if (!canvasRef) return;
ctx ??= canvasRef.getContext("2d", { alpha: false });
if (!ctx) return;
+ const token = ++renderToken;
const renderStart = performance.now();
- createImageBitmap(frame.data).then((bitmap) => {
- ctx.drawImage(bitmap, 0, 0);
- bitmap.close();
+ createImageBitmap(frame.data)
+ .then((bitmap) => {
+ if (token !== renderToken) {
+ bitmap.close();
+ return;
+ }
+ ctx.drawImage(bitmap, 0, 0);
+ bitmap.close();
- const renderTime = performance.now() - renderStart;
- ...
- });
+ const renderTime = performance.now() - renderStart;
+ ...
+ })
+ .catch(() => {});
});
}
Also: please double-check that { alpha: false } is intended with the checkerboard styling; on many browsers it will make the canvas output opaque, which can effectively hide the “transparent background” cue.
Committable suggestion skipped: line range outside the PR's diff.
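For clarity, a self-contained sketch of the token pattern outside the component; drawLatestFrame is a hypothetical name, and only the guard logic matters:
let renderToken = 0;

function drawLatestFrame(ctx: CanvasRenderingContext2D, frame: ImageData): void {
  const token = ++renderToken; // stamp this request with a monotonically increasing id
  createImageBitmap(frame)
    .then((bitmap) => {
      if (token !== renderToken) {
        bitmap.close(); // a newer frame superseded this one; discard without drawing
        return;
      }
      ctx.drawImage(bitmap, 0, 0);
      bitmap.close();
    })
    .catch(() => {
      // swallow decode failures so they never surface as unhandled rejections
    });
}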
@@ -1,20 +1,69 @@
 import { createWS } from "@solid-primitives/websocket";
+import * as lz4 from "lz4js";
🧩 Analysis chain
🌐 Web query:
lz4js decompressBlock signature return value documentation
💡 Result:
The lz4js decompressBlock function signature (from a published JS copy) is:
export function decompressBlock(src: Uint8Array, dst: Uint8Array, sIndex: number, sLength: number, dIndex: number): number
It returns the updated destination index (i.e., the number of bytes written / new dIndex). [1][2]
Sources:
- lz4js implementation (vendor copy showing decompressBlock signature & return). [1]
- lz4js README / API (project overview). [2]
🏁 Script executed:
cd apps/desktop && cat -n src/utils/socket.ts | head -40
Repository: CapSoftware/Cap
Length of output: 1588
🏁 Script executed:
cat -n package.json | grep -A2 -B2 "lz4"
Repository: CapSoftware/Cap
Length of output: 41
🏁 Script executed:
grep -n "lz4" package*.json pnpm-lock.yaml 2>/dev/null | head -5Repository: CapSoftware/Cap
Length of output: 357
🏁 Script executed:
cat -n apps/desktop/src/utils/socket.ts | tail -80
Repository: CapSoftware/Cap
Length of output: 2938
Add safety checks around untrusted uncompressedSize and validate lz4js.decompressBlock return value.
The uncompressedSize read from the buffer at line 18 is untrusted input; allocating new Uint8Array(uncompressedSize) without validation enables OOM/DoS attacks. Additionally, the buffer length is never validated before calling getUint32(0), and the return value from lz4js.decompressBlock (which indicates bytes written in lz4js@0.2.0) is ignored—both should be verified.
The diff provided correctly addresses these gaps: validate the buffer header size, bounds-check uncompressedSize against a reasonable threshold, and verify the decompressed byte count matches expectations.
Note: Lines 179 and 207 contain comments (// #region agent log and // #endregion) that violate the no-comments guideline for .ts files. Remove or refactor.
🤖 Prompt for AI Agents
In apps/desktop/src/utils/socket.ts around line 2 and the referenced
decompression logic, the code reads an untrusted uncompressedSize from the
buffer and allocates without validation and fails to check the return value of
lz4js.decompressBlock; add a guard that the incoming buffer is at least 4 bytes
before calling getUint32(0), parse uncompressedSize and validate it against a
sane maximum (e.g. a configurable constant or a hard cap like a few MBs), ensure
the buffer remaining length is sufficient for the compressed payload, check that
lz4js.decompressBlock returns the expected number of bytes written (and
throw/handle an error if not), and avoid allocating huge Uint8Array if the size
is invalid; also remove or refactor the comment lines at 179 and 207 that
violate the no-comments guideline for .ts files.
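A minimal sketch of those checks, assuming the 4-byte size prefix is little-endian (lz4_flex's compress_prepend_size writes LE) and an illustrative MAX_FRAME_BYTES cap; kept comment-free to respect the repo's no-comments rule for .ts files, with decompressFrame as a hypothetical helper name:

import * as lz4 from "lz4js";

const MAX_FRAME_BYTES = 64 * 1024 * 1024;

export function decompressFrame(buffer: ArrayBuffer): Uint8Array | null {
  if (buffer.byteLength < 5) return null;
  const uncompressedSize = new DataView(buffer).getUint32(0, true);
  if (uncompressedSize === 0 || uncompressedSize > MAX_FRAME_BYTES) return null;
  const src = new Uint8Array(buffer, 4);
  const dst = new Uint8Array(uncompressedSize);
  const written = lz4.decompressBlock(src, dst, 0, src.length, 0);
  return written === uncompressedSize ? dst : null;
}

The caller can then treat null as a dropped frame instead of letting a malformed payload crash the render loop.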
const now = performance.now();
if (now - metrics.lastLogTime >= 2000 && metrics.framesReceived > 0) {
  const avgDecompressTime =
    metrics.totalDecompressTimeMs / metrics.framesReceived;
  const avgParseTime = metrics.totalParseTimeMs / metrics.framesReceived;
  const avgImageDataTime =
    metrics.totalImageDataTimeMs / metrics.framesReceived;
  const compressionRatio = (
    (compressedSize / decompressed.length) *
    100
  ).toFixed(1);
  console.log(
    `[PERF:FRONTEND_WS] periodic - frames: ${metrics.framesReceived}, compressed: ${compressedSize} bytes (${compressionRatio}%), decompressed: ${decompressed.length} bytes, avg decompress: ${avgDecompressTime.toFixed(2)}ms, avg parse: ${avgParseTime.toFixed(2)}ms, avg imageData: ${avgImageDataTime.toFixed(2)}ms, dimensions: ${width}x${height}`,
  );
  // #region agent log
  fetch(
    "http://127.0.0.1:7242/ingest/966647b7-72f6-4ab7-b76e-6b773ac020d7",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        location: "socket.ts:ws_metrics",
        message: "frontend WS metrics",
        data: {
          framesReceived: metrics.framesReceived,
          avgDecompressMs: avgDecompressTime.toFixed(2),
          avgParseMs: avgParseTime.toFixed(2),
          avgImageDataMs: avgImageDataTime.toFixed(2),
          maxDecompressMs: metrics.maxDecompressTimeMs.toFixed(2),
          maxParseMs: metrics.maxParseTimeMs.toFixed(2),
          compressedBytes: compressedSize,
          decompressedBytes: decompressed.length,
          compressionRatio,
          width,
          height,
        },
        timestamp: Date.now(),
        sessionId: "debug-session",
        hypothesisId: "B",
      }),
    },
  ).catch(() => {});
  // #endregion
  metrics.lastLogTime = now;
}
Hardcoded localhost ingest + inline identifiers + added comments violate guidelines and create operational risk.
This introduces an outbound request from a hot path to a fixed endpoint, includes hardcoded IDs/session fields, and adds // comments in a .ts file (disallowed by your repo rules). This should not ship as-is.
- // #region agent log
- fetch(
- "http://127.0.0.1:7242/ingest/966647b7-72f6-4ab7-b76e-6b773ac020d7",
- {
- method: "POST",
- headers: { "Content-Type": "application/json" },
- body: JSON.stringify({
- location: "socket.ts:ws_metrics",
- message: "frontend WS metrics",
- data: {
- framesReceived: metrics.framesReceived,
- avgDecompressMs: avgDecompressTime.toFixed(2),
- avgParseMs: avgParseTime.toFixed(2),
- avgImageDataMs: avgImageDataTime.toFixed(2),
- maxDecompressMs: metrics.maxDecompressTimeMs.toFixed(2),
- maxParseMs: metrics.maxParseTimeMs.toFixed(2),
- compressedBytes: compressedSize,
- decompressedBytes: decompressed.length,
- compressionRatio,
- width,
- height,
- },
- timestamp: Date.now(),
- sessionId: "debug-session",
- hypothesisId: "B",
- }),
- },
- ).catch(() => {});
- // #endregion
  metrics.lastLogTime = now;

📝 Committable suggestion
const now = performance.now();
if (now - metrics.lastLogTime >= 2000 && metrics.framesReceived > 0) {
  const avgDecompressTime =
    metrics.totalDecompressTimeMs / metrics.framesReceived;
  const avgParseTime = metrics.totalParseTimeMs / metrics.framesReceived;
  const avgImageDataTime =
    metrics.totalImageDataTimeMs / metrics.framesReceived;
  const compressionRatio = (
    (compressedSize / decompressed.length) *
    100
  ).toFixed(1);
  console.log(
    `[PERF:FRONTEND_WS] periodic - frames: ${metrics.framesReceived}, compressed: ${compressedSize} bytes (${compressionRatio}%), decompressed: ${decompressed.length} bytes, avg decompress: ${avgDecompressTime.toFixed(2)}ms, avg parse: ${avgParseTime.toFixed(2)}ms, avg imageData: ${avgImageDataTime.toFixed(2)}ms, dimensions: ${width}x${height}`,
  );
  metrics.lastLogTime = now;
}
🤖 Prompt for AI Agents
In apps/desktop/src/utils/socket.ts around lines 165 to 209, remove the
hardcoded synchronous outbound fetch to http://127.0.0.1:7242 and the hardcoded
identifiers/session fields and inline // comments; instead, stop making direct
network calls on the hot path and replace with a non-blocking, rate-limited
telemetry queue or scheduled worker, read the ingest endpoint and any
IDs/session values from configuration/environment variables, strip the
disallowed // comments (or move necessary notes to allowed doc/comment style),
and ensure failures are handled silently without blocking (e.g., enqueue +
background flush with retry/backoff and safe no-op if config is missing).
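One possible shape for that fix: a config-gated, batched telemetry queue where the hot path only pushes into an array and nothing is sent when the endpoint is unset. VITE_METRICS_ENDPOINT and recordMetric are assumed names, not code from this PR, and the block is comment-free to match the repo's .ts guideline:

type MetricEvent = {
  location: string;
  data: Record<string, unknown>;
  timestamp: number;
};

const endpoint = import.meta.env.VITE_METRICS_ENDPOINT as string | undefined;
const queue: MetricEvent[] = [];
const MAX_QUEUE = 100;
const FLUSH_INTERVAL_MS = 5000;

export function recordMetric(event: MetricEvent) {
  if (!endpoint || queue.length >= MAX_QUEUE) return;
  queue.push(event);
}

setInterval(() => {
  if (!endpoint || queue.length === 0) return;
  const batch = queue.splice(0, queue.length);
  void fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  }).catch(() => {});
}, FLUSH_INTERVAL_MS);

Network I/O then happens off the frame callback entirely, and failures drop the batch rather than blocking playback.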
match format {
    PixelFormat::Rgba => {
        let src_bytes_per_row = frame_size.x * 4;

        queue.write_texture(
            wgpu::TexelCopyTextureInfo {
                texture: &self.frame_textures[next_texture],
                mip_level: 0,
                origin: wgpu::Origin3d::ZERO,
                aspect: wgpu::TextureAspect::All,
            },
            frame_data,
            wgpu::TexelCopyBufferLayout {
                offset: 0,
                bytes_per_row: Some(src_bytes_per_row),
                rows_per_image: Some(frame_size.y),
            },
            wgpu::Extent3d {
                width: frame_size.x,
                height: frame_size.y,
                depth_or_array_layers: 1,
            },
        );
    }
Use screen_frame.y_stride() for RGBA and validate dimensions match frame_size.
The RGBA path hardcodes bytes_per_row = frame_size.x * 4 without checking stride or validating that frame_size matches the actual frame dimensions from segment_frames.screen_frame. If frame_size (from config) differs from actual_width/actual_height (from decoded frame), the texture is created with wrong dimensions and the data upload extent will mismatch the actual payload, causing corruption. Use screen_frame.y_stride() like the NV12/Yuv420p branches do, and either validate or derive frame_size from actual frame dimensions.
🤖 Prompt for AI Agents
In crates/rendering/src/layers/display.rs around lines 85 to 108, the RGBA
branch currently hardcodes bytes_per_row = frame_size.x * 4 and creates/upload
extents using frame_size without verifying the actual decoded frame dimensions;
change the RGBA upload to use screen_frame.y_stride() for bytes_per_row (like
the NV12/Yuv420p branches) and validate that screen_frame.width() and
screen_frame.height() (or actual_width/actual_height from the decoded frame)
match frame_size.x and frame_size.y — if they don’t, either derive frame_size
from the actual frame dimensions before creating the texture or return an error.
Ensure rows_per_image and the Extent3d use the validated/derived dimensions so
the buffer layout matches the payload.
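A sketch of that guard, assuming the decoded stride is reported in bytes for the RGBA plane; rgba_copy_layout is an illustrative helper, not code from this PR:

fn rgba_copy_layout(
    width: u32,
    height: u32,
    stride_bytes: u32,
    data_len: usize,
) -> Option<(wgpu::TexelCopyBufferLayout, wgpu::Extent3d)> {
    // Reject frames whose stride cannot hold a full row, or whose buffer
    // cannot hold `height` rows at that stride.
    if width == 0 || height == 0 || stride_bytes < width * 4 {
        return None;
    }
    let needed = stride_bytes as usize * (height as usize - 1) + width as usize * 4;
    if data_len < needed {
        return None;
    }
    Some((
        wgpu::TexelCopyBufferLayout {
            offset: 0,
            bytes_per_row: Some(stride_bytes),
            rows_per_image: Some(height),
        },
        wgpu::Extent3d {
            width,
            height,
            depth_or_array_layers: 1,
        },
    ))
}

The upload then derives its layout from the decoded frame, and a config/decoder mismatch surfaces as a skipped frame instead of corrupted texels.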
@group(0) @binding(0) var y_plane: texture_2d<f32>;
@group(0) @binding(1) var uv_plane: texture_2d<f32>;
@group(0) @binding(2) var output: texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    let coords = global_id.xy;
    let dims = textureDimensions(output);

    if (coords.x >= dims.x || coords.y >= dims.y) {
        return;
    }

    let y_raw = textureLoad(y_plane, coords, 0).r;

    let uv_coords = coords / 2;
    let uv_raw = textureLoad(uv_plane, uv_coords, 0).rg;

    let y = (y_raw - 0.0625) * 1.164;
    let u = uv_raw.r - 0.5;
    let v = uv_raw.g - 0.5;

    let r = y + 1.596 * v;
    let g = y - 0.391 * u - 0.813 * v;
    let b = y + 2.018 * u;

    let color = vec4<f32>(
        clamp(r, 0.0, 1.0),
        clamp(g, 0.0, 1.0),
        clamp(b, 0.0, 1.0),
        1.0
    );

    textureStore(output, coords, color);
}
Guard against UV-plane OOB on odd dimensions (or enforce even sizes).
uv_coords = coords / 2 (Line 16) is only safe if uv_plane is sized to cover all coords (e.g., width/height even, or UV texture uses div_ceil). Otherwise the shader can read out of bounds.
Make NV12/YUV420p range+matrix handling consistent.
This shader applies a limited-range adjustment to Y (Line 19), but the YUV420p shader doesn’t. That will produce visibly different brightness/contrast between formats. Align both shaders (or make the conversion configurable via uniforms/params).
@group(0) @binding(0) var y_plane: texture_2d<f32>;
@group(0) @binding(1) var u_plane: texture_2d<f32>;
@group(0) @binding(2) var v_plane: texture_2d<f32>;
@group(0) @binding(3) var output: texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    let coords = global_id.xy;
    let dims = textureDimensions(output);

    if (coords.x >= dims.x || coords.y >= dims.y) {
        return;
    }

    let y = textureLoad(y_plane, coords, 0).r;

    let uv_coords = coords / 2;
    let u = textureLoad(u_plane, uv_coords, 0).r - 0.5;
    let v = textureLoad(v_plane, uv_coords, 0).r - 0.5;

    let r = y + 1.402 * v;
    let g = y - 0.344 * u - 0.714 * v;
    let b = y + 1.772 * u;

    let color = vec4<f32>(
        clamp(r, 0.0, 1.0),
        clamp(g, 0.0, 1.0),
        clamp(b, 0.0, 1.0),
        1.0
    );

    textureStore(output, coords, color);
}
Guard against U/V-plane OOB on odd dimensions (or enforce even sizes).
uv_coords = coords / 2 (Line 17) assumes chroma planes are sized to cover all luma pixels; if upstream ever produces odd sizes, this can read OOB unless you allocate chroma textures with div_ceil.
Align YUV420p conversion with NV12 (range + matrix).
NV12 path applies a Y offset/scale; YUV420p path doesn’t. Unless that’s intentional (different input ranges), it’ll cause visible mismatch across formats.
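To make both comments concrete, here is a minimal sketch of the NV12 entry point with the chroma clamp and a single BT.601 limited-range path; the identical clamp and Y scaling would go into the YUV420p shader. It assumes limited-range input for both formats (full-range sources would need the offsets moved into a uniform):

@group(0) @binding(0) var y_plane: texture_2d<f32>;
@group(0) @binding(1) var uv_plane: texture_2d<f32>;
@group(0) @binding(2) var output: texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    let coords = global_id.xy;
    let dims = textureDimensions(output);
    if (coords.x >= dims.x || coords.y >= dims.y) {
        return;
    }

    // Clamp chroma coordinates so odd-sized frames never read past the UV plane.
    let uv_dims = textureDimensions(uv_plane);
    let uv_coords = min(coords / 2u, uv_dims - vec2<u32>(1u, 1u));

    // BT.601 limited range, applied identically in every format's shader.
    let y = (textureLoad(y_plane, coords, 0).r - 0.0625) * 1.164;
    let uv = textureLoad(uv_plane, uv_coords, 0).rg - vec2<f32>(0.5, 0.5);

    let rgb = vec3<f32>(
        y + 1.596 * uv.y,
        y - 0.391 * uv.x - 0.813 * uv.y,
        y + 2.018 * uv.x,
    );
    textureStore(output, coords, vec4<f32>(clamp(rgb, vec3<f32>(0.0), vec3<f32>(1.0)), 1.0));
}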
pub fn convert_nv12(
    &mut self,
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    y_data: &[u8],
    uv_data: &[u8],
    width: u32,
    height: u32,
    y_stride: u32,
) -> &wgpu::TextureView {
    self.ensure_textures(device, width, height, PixelFormat::Nv12);

    let y_texture = self.y_texture.as_ref().unwrap();
    let uv_texture = self.uv_texture.as_ref().unwrap();
    let output_texture = self.output_texture.as_ref().unwrap();

    if y_stride == width {
        queue.write_texture(
            wgpu::TexelCopyTextureInfo {
                texture: y_texture,
                mip_level: 0,
                origin: wgpu::Origin3d::ZERO,
                aspect: wgpu::TextureAspect::All,
            },
            y_data,
            wgpu::TexelCopyBufferLayout {
                offset: 0,
                bytes_per_row: Some(width),
                rows_per_image: Some(height),
            },
            wgpu::Extent3d {
                width,
                height,
                depth_or_array_layers: 1,
            },
        );
    } else {
        let mut packed_y = Vec::with_capacity((width * height) as usize);
        for row in 0..height as usize {
            let start = row * y_stride as usize;
            let end = start + width as usize;
            if end <= y_data.len() {
                packed_y.extend_from_slice(&y_data[start..end]);
            }
        }
        queue.write_texture(
            wgpu::TexelCopyTextureInfo {
                texture: y_texture,
                mip_level: 0,
                origin: wgpu::Origin3d::ZERO,
                aspect: wgpu::TextureAspect::All,
            },
            &packed_y,
            wgpu::TexelCopyBufferLayout {
                offset: 0,
                bytes_per_row: Some(width),
                rows_per_image: Some(height),
            },
            wgpu::Extent3d {
                width,
                height,
                depth_or_array_layers: 1,
            },
        );
    }

    queue.write_texture(
        wgpu::TexelCopyTextureInfo {
            texture: uv_texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        uv_data,
        wgpu::TexelCopyBufferLayout {
            offset: 0,
            bytes_per_row: Some(width),
            rows_per_image: Some(height / 2),
        },
        wgpu::Extent3d {
            width: width / 2,
            height: height / 2,
            depth_or_array_layers: 1,
        },
    );
Add uv_stride parameter to convert_nv12 and handle UV row padding like Y data.
Currently, the method repacks Y data when y_stride != width (lines 294-343) but always writes UV data as if tightly packed (lines 344-362), regardless of stride. Many hardware decoders produce padded UV rows, causing silent corruption. Mirror the Y repack logic for UV using uv_stride:
pub fn convert_nv12(
&mut self,
device: &wgpu::Device,
queue: &wgpu::Queue,
y_data: &[u8],
uv_data: &[u8],
width: u32,
height: u32,
y_stride: u32,
+ uv_stride: u32,
) -> &wgpu::TextureView {Then repack uv_data using uv_stride when uv_stride != width.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
crates/rendering/src/yuv_converter.rs around lines 278 to 363: the convert_nv12
signature and UV upload assume tightly-packed UV rows while Y handling already
deals with row padding; add an uv_stride: u32 parameter to convert_nv12, update
all call sites to pass the UV stride, and mirror the Y-plane repack logic for
the UV plane: if uv_stride == width then write uv_data directly, otherwise build
a packed_uv Vec by iterating rows (height/2 rows) and copying width bytes per
row from uv_data using uv_stride as the source row pitch, then call
queue.write_texture with the packed_uv buffer and the same bytes_per_row/extent
values used for the tightly-packed case (extent width/2, height/2).
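A sketch of a shared repack helper that would serve both planes once uv_stride is threaded through; pack_plane is an illustrative name, and returning None on a short buffer also addresses the silent row-skipping flagged below:

fn pack_plane(data: &[u8], row_bytes: usize, stride: usize, rows: usize) -> Option<Vec<u8>> {
    let mut packed = Vec::with_capacity(row_bytes * rows);
    for row in 0..rows {
        let start = row * stride;
        // A short source buffer becomes an explicit None instead of a
        // partially filled upload.
        packed.extend_from_slice(data.get(start..start + row_bytes)?);
    }
    Some(packed)
}

For NV12, the interleaved UV plane would be packed with pack_plane(uv_data, width as usize, uv_stride as usize, (height / 2) as usize), since each UV row carries width bytes (width / 2 chroma pairs of two bytes each).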
let mut packed_y = Vec::with_capacity((width * height) as usize);
for row in 0..height as usize {
    let start = row * y_stride as usize;
    let end = start + width as usize;
    if end <= y_data.len() {
        packed_y.extend_from_slice(&y_data[start..end]);
    }
}
queue.write_texture(
    wgpu::TexelCopyTextureInfo {
        texture: y_texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
        aspect: wgpu::TextureAspect::All,
    },
    &packed_y,
    wgpu::TexelCopyBufferLayout {
        offset: 0,
        bytes_per_row: Some(width),
        rows_per_image: Some(height),
    },
    wgpu::Extent3d {
        width,
        height,
        depth_or_array_layers: 1,
    },
);
Incomplete buffer uploads cause silent frame corruption.
The repack loops silently omit rows when buffer bounds are exceeded (lines 319-322, 452-454, and similar patterns in the UV plane handling). When end > *_data.len(), the row is skipped but write_texture still receives the undersized buffer while Extent3d declares the full texture dimensions. This leaves skipped rows with undefined GPU memory, corrupting the frame.
Validate input buffers upfront and either return an error or zero-fill missing rows to ensure packed buffers are always exactly width*height / half_width*half_height bytes.
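A sketch of that upfront check, assuming row_bytes and stride are byte counts for the plane under validation; swapping the Err arm for zero-fill covers the other option mentioned above:

#[derive(Debug)]
pub struct PlaneTooShort {
    pub needed: usize,
    pub actual: usize,
}

pub fn validate_plane(
    data: &[u8],
    row_bytes: usize,
    stride: usize,
    rows: usize,
) -> Result<(), PlaneTooShort> {
    if rows == 0 {
        return Ok(());
    }
    // The final row only needs `row_bytes`; earlier rows advance by `stride`.
    let needed = stride * (rows - 1) + row_bytes;
    if data.len() < needed {
        return Err(PlaneTooShort { needed, actual: data.len() });
    }
    Ok(())
}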
This PR makes significant improvements to the frame streaming pipeline for the desktop app, focusing on WebSocket frame transmission, compression, and performance monitoring. The main changes add frame compression and latency tracking on both the backend (Rust) and frontend (TypeScript), update dependencies to support LZ4 compression, and enhance performance logging for debugging and optimization.
Note
Overhauls editor playback/rendering and streaming: LZ4-compressed WS frames with end-to-end perf metrics, GPU-pipelined readback and YUV (NV12/YUV420p) paths, smarter prefetch/caching, and default preview at 30 FPS.
- Adds a `created_at` per-frame timestamp and LZ4 compression (`lz4_flex`/`lz4js`) with a stride/size trailer; periodic latency/size metrics; error handling.
- Renders preview frames via `createImageBitmap`.
- Adds `YuvToRgbaConverter`; display/camera layers handle RGBA/NV12/YUV420p and avoid redundant uploads.
- Extends `PixelFormat` (RGBA/NV12/YUV420p) with plane/stride access; macOS AVAssetReader and FFmpeg paths emit matching formats; larger frame cache and smarter resets.
- Sets default preview `FPS` to `30`.
- Adds `lz4_flex` (Rust) and `lz4js` + `@types/lz4js` (TS).

Written by Cursor Bugbot for commit 1a2ffd8. This will update automatically on new commits.
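For reference, a sketch of what the size-prefixed framing in the first bullet could look like on the Rust side, using lz4_flex's compress_prepend_size helper; the 12-byte trailer layout and the compress_frame_data name are assumptions from this summary, not verified against the diff:

fn compress_frame_data(pixels: &[u8], trailer: &[u8; 12]) -> Vec<u8> {
    let mut raw = Vec::with_capacity(pixels.len() + trailer.len());
    raw.extend_from_slice(pixels);
    raw.extend_from_slice(trailer);
    // compress_prepend_size prepends the uncompressed length as a
    // little-endian u32, which is the 4-byte header the frontend must
    // read (and validate) before allocating its output buffer.
    lz4_flex::compress_prepend_size(&raw)
}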