refactor: split StrandService into explicit collaborators #82
flyingrobots merged 49 commits into main from
Conversation
Invariant: every source file in src/ must have 100% line coverage.

- Baseline: 92.12% lines (with barrel file exclusions).
- Ratchet: vitest coverage thresholds with autoUpdate — coverage can only go up.
- @vitest/coverage-v8 added as a devDependency.
- Design doc: the hill is 100% line coverage.
- Strategy: controllers first (smallest LOC), then strand services, then WarpApp/Core/Runtime.

Cycle 0010 — 100-percent-coverage
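The ratchet described above can be sketched in vitest config form. This is an illustrative sketch, not this repo's actual file: the threshold number and option placement are assumptions, though `coverage.thresholds.autoUpdate` is a real vitest option that rewrites the configured threshold upward when a run passes with higher coverage.

```javascript
// vitest.config.js — illustrative coverage-ratchet sketch (not the repo's real config)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8', // requires @vitest/coverage-v8 as a devDependency
      thresholds: {
        lines: 92.12,     // example baseline; the ratchet only moves this up
        autoUpdate: true, // vitest rewrites this value when coverage improves
      },
    },
  },
});
```

With `autoUpdate` enabled, a run that reaches, say, 93% lines bumps the stored threshold, so a later regression below 93% fails CI — coverage can only go up.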
…oller, SubscriptionController, ForkController, CheckpointController

- StrandController (17 tests): pure delegation verification for all 14 methods
- ProvenanceController (24 tests): backward cone BFS, causal sort, degraded/null guards
- SubscriptionController (53 tests): subscribe/watch/notify with glob filtering, polling, replay
- ForkController (38 tests): fork validation, ancestor walking, checkpoint relation, rollback
- CheckpointController (31 tests): checkpoint lifecycle, GC, migration boundary, coverage sync

5569 total tests passing. 1,398 LOC of previously untested code now covered. Cycle 0010 — 100-percent-coverage
- PatchController (62 tests): createPatch auto-materialize, patch/patchMany, _onPatchCommitted eager apply, chain walking, encrypted blob reads, discoverWriters/Ticks, join CRDT merge, writer() delegation
- SyncController (73 tests): frontier, status, sync request/response, trust gate enforcement, direct peer sync, HTTP sync with retry/abort/auth, serve port validation
- QueryController (67 tests): state queries, neighbor lookups (linear + indexed), content attachments (node + edge), query/observer/worldline creation, translationCost

5771 total tests passing. 8/10 controllers now covered. Cycle 0010 — 100-percent-coverage
Covers buildPatchDivergence, compareCoordinates, compareStrand, planCoordinateTransfer, planStrandTransfer, selector normalization, strand resolution, state hash fallback, ceiling filtering, and edge cases including multi-writer frontiers and scope passthrough.
- MaterializeController (62 tests): full/incremental materialize, ceiling time-travel, receipt collection, auto-checkpoint, subscriber notify, view build/invalidate, coordinate materialize, seek cache, index verify
- ComparisonController (61 tests): patch divergence, coordinate/strand comparison, transfer planning, selector normalization, state hash, ceiling filtering

All 10 controllers now have unit tests. 488 new controller tests total. 5894 tests passing across 338 files. Cycle 0010 — 100-percent-coverage
Coverage threshold bumped by vitest autoUpdate after 488 new controller tests. Cycle 0010 — 100-percent-coverage
Covers all 12 public methods, key private methods, and the footprint overlap classification algorithm. Tests verify:

- Strand lifecycle (create, get, list, drop, getOrThrow)
- Braid coordination with base observation matching
- Materialization via JoinReducer with receipt collection
- PatchBuilder wiring and reentrancy guard
- Intent queuing, listing, and tick processing
- Patch collection (base + overlay + braided, deduplication)
- Entity-level provenance queries (patchesFor)
- Commit pipeline (journal, codec fallback, blob storage)
- Braid ref synchronization (create, update, stale cleanup)
- All error paths (10 distinct StrandError codes)
Covers the full public API (analyze) and _hash caching:

- Empty/trivial cases (no writers, no patches, single writer)
- Supersession detection (LWW winner identification)
- Redundancy detection (identical property writes)
- Eventual override detection (NodeAdd + PropSet patterns)
- Option filtering (kind, writerId, entityId, target selector)
- Lamport ceiling and scan budget application
- Evidence levels (summary, standard, full)
- Validation errors (14 rejection cases for malformed options)
- Resolved coordinate structure (frontier kind, digest, ceiling)
- Snapshot hash determinism (identical inputs → identical hash)
- Complex multi-writer scenarios (3 writers, edge ops, causal ordering)
- Truncation diagnostics for budget-exceeded scans
- Trace structure validation (all required fields on traces and losers)
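The LWW (last-writer-wins) supersession idea tested above can be shown with a minimal sketch. The object shapes, field names, and tie-break rule here are illustrative assumptions, not the library's actual API — the point is only that the highest Lamport timestamp wins, with a deterministic tie-break:

```javascript
// Hypothetical LWW winner selection: highest Lamport clock wins;
// ties broken deterministically (here, by lexicographically larger writerId).
const writes = [
  { writerId: 'w1', lamport: 3, value: 'a' },
  { writerId: 'w2', lamport: 5, value: 'b' },
  { writerId: 'w3', lamport: 5, value: 'c' }, // ties w2 on lamport, wins tie-break
];

const winner = writes.reduce((best, w) =>
  w.lamport > best.lamport ||
  (w.lamport === best.lamport && w.writerId > best.writerId)
    ? w
    : best
);

console.log(winner); // { writerId: 'w3', lamport: 5, value: 'c' }
```

Every write that is not the winner is a "loser" with a supersession trace — the kind of structure the trace-validation tests above assert on.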
Phase 4 — cover the orchestration layer:

WarpApp (29 tests):
- All delegation paths: writer, createPatch, patch, patchMany
- Content methods (8): getContent, getContentStream, getContentOid, getContentMeta + edge variants via callInternalRuntimeMethod
- Strand delegation (10): create, get, list, braid, drop, createStrandPatch, patchStrand, queueStrandIntent, listStrandIntents, tickStrand
- Observer overloads and query delegation

WarpCore (16 tests):
- _adopt: early return for existing WarpCore + prototype mutation path
- Content methods (8) with prototype chain resolution through callInternalRuntimeMethod → _queryController
- Effect pipeline accessors with null pipeline

WarpRuntime (8 tests):
- _extractTrustedWriters: trusted/untrusted/empty assessment filtering
- _maxLamportFromState: empty/single/multi-writer frontiers
- temporal.eventually/always: exercises loadAllPatches callback
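The `_maxLamportFromState` cases listed above (empty, single-writer, multi-writer frontiers) boil down to a scan for the largest Lamport clock. A minimal sketch — the function name, frontier shape, and entry fields are illustrative assumptions, not the library's actual state layout:

```javascript
// Hypothetical max-Lamport scan over a per-writer frontier map.
// An empty frontier yields 0; otherwise return the largest lamport seen.
function maxLamportFromFrontier(frontier) {
  let max = 0;
  for (const entry of frontier.values()) {
    if (entry.lamport > max) max = entry.lamport; // keep the running maximum
  }
  return max;
}

console.log(maxLamportFromFrontier(new Map())); // 0 (empty frontier)
console.log(maxLamportFromFrontier(new Map([['alice', { lamport: 4 }]]))); // 4 (single writer)
console.log(maxLamportFromFrontier(new Map([
  ['alice', { lamport: 4 }],
  ['bob', { lamport: 9 }],
]))); // 9 (multi-writer)
```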
Add focused tests for streamUtils, PatchSession, missing port stubs, and PatchJournalPort coverage gaps. Refresh the vitest lines threshold to the passing baseline and capture the WarpRuntime.open options-class follow-up in METHOD backlog.
Add focused tests for error/type utility modules, trust verdict derivation, internal runtime method lookup, default codec and crypto fallback paths, glob matching, and small state/stream branches. Refresh the vitest line threshold to the new passing baseline.
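One of the branches mentioned above is glob matching. A minimal sketch of the idea — this helper, its semantics (`*` matches any run of characters), and the `node:`-style subjects are illustrative assumptions, not the repo's actual implementation:

```javascript
// Hypothetical glob-to-RegExp conversion: escape regex metacharacters,
// then let '*' match any run of non-separator characters.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, '\\$&'); // escape regex chars, keep '*'
  return new RegExp('^' + escaped.replace(/\*/g, '[^/]*') + '$');
}

console.log(globToRegExp('node:*').test('node:alpha')); // true
console.log(globToRegExp('node:*').test('edge:alpha')); // false
```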
📝 Walkthrough

This PR decomposes the monolithic StrandService into four collaborator classes, adds shared strand utilities and typedefs, introduces a coverage-ratchet mechanism and helper script, updates CI/typecheck/coverage tooling, adds many design/retro/backlog docs, and adds hundreds of unit tests across controllers and domain services.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Facade as StrandService<br/>(Facade)
    participant Descriptor as StrandDescriptorStore
    participant PatchSvc as StrandPatchService
    participant Materializer as StrandMaterializer
    participant IntentSvc as StrandIntentService
    participant Graph as Graph<br/>(Persistence)
    rect rgba(100,150,255,0.5)
        Note over Facade,Graph: patch(strandId, build)
        Facade->>PatchSvc: patch(strandId, build)
        PatchSvc->>Descriptor: readDescriptorByOid(oid, strandId)
        Descriptor->>Graph: read blob / refs
        Graph-->>Descriptor: descriptor blob
        Descriptor-->>PatchSvc: normalized descriptor
        PatchSvc->>Materializer: materializeDescriptor(descriptor)
        Materializer->>Graph: collect patches (frontier/overlay/braids)
        Graph-->>Materializer: patch entries
        Materializer->>Materializer: reduceV5 -> state
        Materializer-->>PatchSvc: materialized state
        PatchSvc->>Graph: write patch blob & commit
        Graph-->>PatchSvc: commit SHA
        PatchSvc->>Descriptor: writeDescriptor(updated)
        Descriptor->>Graph: update overlay/braid refs
        Graph-->>Descriptor: success
        Descriptor-->>PatchSvc: success
        PatchSvc-->>Facade: patch SHA
    end
    rect rgba(150,200,100,0.5)
        Note over Facade,Graph: tick(strandId)
        Facade->>IntentSvc: tick(strandId)
        IntentSvc->>Descriptor: readDescriptorByOid/metadata
        Descriptor->>Graph: read refs/overlays
        Graph-->>Descriptor: metadata
        Descriptor-->>IntentSvc: descriptor + queue
        IntentSvc->>IntentSvc: classify intents (overlap detection)
        IntentSvc->>PatchSvc: commitQueuedPatch(...) (for admitted intents)
        PatchSvc->>Graph: persist patch + commit
        Graph-->>PatchSvc: commit SHA
        PatchSvc->>Descriptor: writeDescriptor(updated)
        Descriptor->>Graph: persist descriptor
        Graph-->>Descriptor: success
        Descriptor-->>IntentSvc: success
        IntentSvc-->>Facade: tick record / stats
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Actionable comments posted: 11
🧹 Nitpick comments (18)
docs/method/backlog/bad-code/PROTO_state-diff-private-helper-residue.md (1)
23-25: Strengthen evidence traceability with concrete artifact links. Good note overall. Consider adding a direct pointer to the exact coverage artifact (report path, commit SHA, or CI run URL) so this backlog item remains auditable over time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/method/backlog/bad-code/PROTO_state-diff-private-helper-residue.md` around lines 23 - 25, Update the Evidence section in PROTO_state-diff-private-helper-residue to include concrete artifact links: append the exact coverage report file path/name (e.g., coverage/state-diff-report.html), the commit SHA that produced the report, and the CI run URL (or job ID and timestamp) that generated the tranche, and mention the relevant file under test (StateDiff.js) so the backlog item is auditable; ensure each bullet contains one concrete link or identifier (report path, commit SHA, CI URL) rather than generic text.

docs/method/backlog/asap/VIZ_cut-git-warp-visualization-surface-in-favor-of-warp-ttd.md (1)
28-37: Add explicit migration scope and completion criteria for this backlog item. The proposal is clear, but it would be easier to execute/review with concrete "done" criteria (what is removed, what is retained, and how operator docs are updated).
Suggested addition
```diff
 ## Practical shape:
@@
 - update docs so `git-warp` points operators to `warp-ttd` for rich visualization and playback work
+
+## Acceptance criteria
+
+- Inventory `src/visualization/` and mark each module as: remove / retain (substrate export only).
+- Remove duplicated CLI visualization paths from `git-warp`.
+- Keep stable substrate-facing export interfaces consumed by `warp-ttd`.
+- Update operator docs with canonical `warp-ttd` entrypoint and migration note.
+- Add/adjust tests to lock retained export contracts.
```

Also applies to: 46-50
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/method/backlog/asap/VIZ_cut-git-warp-visualization-surface-in-favor-of-warp-ttd.md` around lines 28 - 37, Add explicit migration scope and concrete completion criteria to the "Practical shape" section: enumerate exactly which files/dirs under src/visualization/ will be removed (e.g., specific renderers and CLI display layers duplicated by warp-ttd), list the substrate-facing data/export surfaces that must be retained and their expected API/format for warp-ttd consumption, and add a verification checklist that git-warp docs are updated to point operators to warp-ttd for visualization/playback (include links or doc IDs and acceptance tests such as "no duplicate renderer code remains" and "git-warp docs contain warp-ttd examples"). Ensure the terms used match the proposal: src/visualization/, warp-ttd, renderers, CLI display layers, git-warp.

test/unit/domain/services/StreamingBitmapIndexBuilder.test.js (1)
207-220: The "sorted frontier" claim is not actually enforced by this assertion. At line 213, `toEqual` on objects validates values, not key order. This test can still pass if `frontier` keys are emitted unsorted. Add an explicit key-order assertion.

Suggested assertion hardening
```diff
 expect(frontierJson).toEqual({
   version: 1,
   writerCount: 2,
   frontier: {
     'writer-a': 'aaaa',
     'writer-b': 'bbbb',
   },
 });
+expect(Object.keys(frontierJson.frontier)).toEqual(['writer-a', 'writer-b']);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/StreamingBitmapIndexBuilder.test.js` around lines 207 - 220, The test currently only checks frontierJson values with toEqual, which doesn't enforce key order; update the assertion to explicitly verify the key order of the frontier object produced by the code that creates frontierJson (use the same writtenBlobs and oidMatch selection used to build frontierJson) — for example, assert that Object.keys(frontierJson.frontier) equals the expected ordered array (['writer-a','writer-b']) or compare a deterministic serialization (e.g., JSON.stringify(frontierJson.frontier)) against the expected ordered JSON string; reference frontierJson, writtenBlobs and oidMatch to locate where to add this key-order assertion.

test/unit/domain/services/IncrementalIndexUpdater.test.js (1)
732-881: Consider documenting the intent behind private method testing. This suite directly tests private methods (`_handleNodeRemove`, `_purgeNodeEdges`, `_handleEdgeAdd`, etc.) via type casts to `any`. While this provides thorough coverage of guard paths and early-return boundaries, it creates coupling to internal implementation details.

Given the PR objectives mention "seam-lock tests on extracted collaborators," this appears intentional. Consider adding a brief comment at the top of this `describe` block explaining that these tests are intentionally locking internal behavior to detect accidental changes during future refactors.

📝 Suggested documentation comment
```diff
 describe('internal guard paths', () => {
+  // NOTE: These tests intentionally access private methods to lock down
+  // internal early-return/no-op behavior and cache reconciliation boundaries.
+  // This coupling is accepted as a seam-lock strategy (cycle 0011).
   it('no-ops node removal when the node has no allocated global id', () => {
```
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/IncrementalIndexUpdater.test.js` around lines 732 - 881, Add a short explanatory comment at the top of the describe('internal guard paths', ...) block clarifying that these tests intentionally access private methods (e.g., _handleNodeRemove, _purgeNodeEdges, _handleEdgeAdd, _handleEdgeRemove, _flushEdgeShards, _loadLabels, _getOrBuildAliveEdgeAdjacency, _removeEdgeKeyFromAdjacency) via type casts to `any` to lock exported collaborator behavior and catch accidental refactors; place the comment immediately above the describe call so future readers know the coupling is deliberate and should be preserved or updated when refactoring.

test/unit/domain/services/GraphTraversal.test.js (1)
932-1050: Good coverage of internal edge cases, but note the maintenance trade-off. Testing private helpers (`_findTopoCycleWitness`, `_biAStarExpand`, `_reconstructPath`, `_shouldUpdatePredecessor`) provides valuable coverage for complex internal logic. However, these tests are coupled to implementation details — future refactors that preserve public behavior may still break these tests.

Consider adding a brief comment at the top of this describe block noting that these tests are intentionally covering internal implementation to ensure edge-case correctness, so future maintainers understand the purpose.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/GraphTraversal.test.js` around lines 932 - 1050, Add a short explanatory comment above the "GraphTraversal private helpers" describe block stating that these specs intentionally test private helpers (_findTopoCycleWitness, _biAStarExpand, _reconstructPath, _shouldUpdatePredecessor) to lock down edge-case behavior despite coupling to implementation, so future maintainers know the tests are deliberate and should be preserved during refactors; place the comment immediately before the describe('GraphTraversal private helpers', ...) declaration.

test/unit/domain/services/controllers/ProvenanceController.test.js (2)
121-124: Reuse the same rejection promise in these assertions. Each pair currently invokes the controller twice, so `_ensureFreshState`, logging, and any future stateful setup run twice as well. Capture the promise once and assert both the error type and code against that single execution.

♻️ Suggested pattern
```diff
-await expect(ctrl.patchesFor('node:a')).rejects.toThrow(QueryError);
-await expect(ctrl.patchesFor('node:a')).rejects.toMatchObject({
+const promise = ctrl.patchesFor('node:a');
+await expect(promise).rejects.toThrow(QueryError);
+await expect(promise).rejects.toMatchObject({
   code: 'E_PROVENANCE_DEGRADED',
 });
```

Also applies to: 130-133, 221-224, 230-233, 352-355
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/controllers/ProvenanceController.test.js` around lines 121 - 124, The tests call ctrl.patchesFor(...) multiple times causing duplicate executions; fix by capturing the single rejection promise once (e.g., const p = ctrl.patchesFor('node:a')) and then assert both the error class and properties against that same promise (expect(p).rejects.toThrow(QueryError); expect(p).rejects.toMatchObject({ code: 'E_PROVENANCE_DEGRADED' })); apply the same change to the other similar pairs (the assertions around lines corresponding to 'node:b' and the other failing cases) so _ensureFreshState and logging run only once per test.
146-146: Reset the shared stubs, not just their call history.
`detectMessageKind`, `decodePatchMessage`, `mockReplay`, and `reduceV5` are module-scoped mocks whose implementations change across tests. In Vitest, `vi.clearAllMocks()` leaves those `mockReturnValue` stubs in place, so this file can become order-sensitive as more cases are added. Prefer resetting the mutable shared mocks in `beforeEach` and then re-seeding the defaults each suite needs.

♻️ Suggested pattern
```diff
 beforeEach(() => {
   vi.clearAllMocks();
+  detectMessageKind.mockReset();
+  decodePatchMessage.mockReset();
+  mockReplay.mockReset();
+  reduceV5.mockReset();
   host = createHost();
   ctrl = new ProvenanceController(host);
```

Also applies to: 258-258, 368-368
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/controllers/ProvenanceController.test.js` at line 146, Replace the single vi.clearAllMocks() with a reset of the shared mock implementations and re-seed defaults: call vi.resetAllMocks() (or vi.resetModules() if you need to re-import) in the beforeEach, then explicitly set the default mockReturnValue/implementation for detectMessageKind, decodePatchMessage, mockReplay, and reduceV5 so each test suite starts with a fresh, deterministic stubbed behavior; ensure these reseeding calls are placed in the same beforeEach that used to call vi.clearAllMocks().

docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md (1)
6-10: Add language specifier to fenced code block. The error message code block lacks a language identifier. Adding `text` or `plaintext` satisfies markdownlint MD040 and improves rendering consistency.

📝 Proposed fix
````diff
-```
+```text
 error TS2353: Object literal may only specify known properties, and 'timeoutMs' does not exist in type '{ graphName: string; persistence: GraphPersistencePort; writerId: string; ... }'.
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md` around lines 6 - 10, The fenced code block showing the TypeScript error lacks a language specifier; update the markdown in docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md by adding a language tag (e.g., `text` or `plaintext`) to the error message fenced block so it satisfies markdownlint MD040 and renders consistently.

test/unit/domain/services/controllers/CheckpointController.test.js (1)

155-155: Make the clone stub structural, not shallow. `{ ...s }` preserves the nested `Map` instances, so these GC tests won't catch mutations leaking back into the original cached state — the exact behavior `cloneStateV5()` is meant to prevent.

Suggested test stub change

```diff
-cloneStateV5Mock.mockImplementation((s) => ({ ...s }));
+cloneStateV5Mock.mockImplementation((s) => ({
+  ...s,
+  nodeAlive: new Map(s.nodeAlive),
+  edgeAlive: new Map(s.edgeAlive),
+  prop: new Map(s.prop),
+  observedFrontier: new Map(s.observedFrontier),
+}));
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/controllers/CheckpointController.test.js` at line 155, The test stub for cloneStateV5Mock uses a shallow spread ({ ...s }) which preserves nested Map instances and fails to detect mutation leaks; update cloneStateV5Mock.mockImplementation to perform a structural deep clone that recreates Maps (and other containers) so nested Map instances are copied (e.g., recursively clone objects/arrays and for Map create new Map([...]) or equivalent), ensuring the stub mirrors cloneStateV5() behavior and will catch leaks in the cached state during the GC tests.

test/unit/domain/services/controllers/ComparisonController.test.js (1)

929-945: Avoid locking in duplicate patch-chain loads here. The behavior under test is the comparison result, not `3 writers × 2 sides` of internal IO. `toHaveBeenCalledTimes(6)` will fail as soon as live/live comparisons dedupe shared frontier collection, which would be a valid optimization.
Prefer asserting the unique tips consulted or the resulting comparison payload instead.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/controllers/ComparisonController.test.js` around lines 929 - 945, The test should not assert a fixed call count for host._loadPatchChainFromSha (which ties to internal IO) but should verify behavior: replace the expect(...toHaveBeenCalledTimes(6)) with an assertion that host._loadPatchChainFromSha was invoked for each unique tip SHA from the mocked frontier (e.g., 'sha-a','sha-b','sha-c') when controller.compareCoordinates is called, or alternatively assert the produced comparison payload is correct; locate this in the same test that mocks host.getFrontier, host.materializeCoordinate and calls controller.compareCoordinates and change the assertion to check calls include those unique SHAs (or validate the result) rather than a hard-coded 6 calls.

test/unit/domain/WarpGraph.test.js (1)

243-284: Consider testing via the public sync API rather than private `_createSyncTrustGate()`. This test accesses the private `_createSyncTrustGate()` method directly. While it achieves coverage, testing through the public sync flow would be more resilient to internal refactoring. That said, the test correctly:

- Mocks `AuditVerifierService.prototype.evaluateTrust` with proper cleanup via `mockRestore()`
- Verifies the trust gate correctly forwards `{ pin, mode, writerIds }` to audit evaluation
- Validates the rejection verdict mapping when untrusted writers are present

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/WarpGraph.test.js` around lines 243 - 284, Replace the direct call to the private _createSyncTrustGate() by exercising the public sync API on the graph so the trust gate is invoked via normal flow; remove usage of graph._createSyncTrustGate() and instead perform the public operation that triggers trust evaluation (e.g., call the graph.sync / graph.performSync / graph.syncWriters method used by the runtime to check writers) while keeping the same mock spy on AuditVerifierService.prototype.evaluateTrust and the same assertions against evaluateTrustSpy and the returned verdict; ensure you still open the runtime with trust: { mode: 'enforce', pin: 'pin-123' } and assert evaluateTrustSpy was called with 'events' and { pin, mode, writerIds: ['alice','bob'] } and the final result equals the expected rejection mapping.

test/unit/domain/services/controllers/ForkController.test.js (1)

249-303: Consider whether testing private methods is intentional here. Testing `_isAncestor()` directly (prefixed with underscore, indicating internal) couples tests to implementation details. If this is intentional to ensure coverage of the ancestry-walk logic (including cycle detection), the approach is reasonable. However, if the method signature changes in a refactor, these tests will need updates. If these are considered "seam-lock" tests per the PR objectives (locking extraction seams), then this is appropriate and aligned with the cycle's goals.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/services/controllers/ForkController.test.js` around lines 249 - 303, Decide whether testing the internal method _isAncestor is intentional; if not, remove or replace these tests to exercise public behavior (call the public ForkController API that relies on ancestry instead of ctrl._isAncestor) so tests don't couple to implementation details, otherwise explicitly expose/annotate the internal seam for testing (e.g., export the internal function or add a clear comment) and keep the existing tests. Locate the tests referencing ctrl._isAncestor and the implementation of _isAncestor (and the ForkError type E_FORK_CYCLE_DETECTED) and either rewrite the tests to use the public method that triggers ancestry checks or add an explicit test-only export/annotation to make the seam intentional.

src/domain/services/strand/StrandMaterializer.js (2)

198-214: Remove dead statement `void state`. Line 213 has `void state;` which appears to be leftover dead code. The `state` parameter is already used in line 105 via `_setMaterializedState(state)`, so this void statement serves no purpose.

🧹 Proposed fix

```diff
   }
   this._graph._provenanceDegraded = false;
-  void state;
  }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/domain/services/strand/StrandMaterializer.js` around lines 198 - 214, In _syncGraphMaterialization remove the dead no-op statement "void state;" at the end of the function; this local no-op is unused (state is already handled elsewhere such as in _setMaterializedState) so simply delete the "void state;" line in the _syncGraphMaterialization method of StrandMaterializer.js to clean up dead code.

---

101-111: Add error handling for `getFrontier()` failure. The `materializeDescriptor` method awaits `getFrontier()` at line 108, which is documented to throw if listing refs fails. If `getFrontier()` throws after `_setMaterializedState()` completes (line 105), the graph will have a persisted state but `_lastFrontier` won't be updated, leaving the graph in an inconsistent state. Consider wrapping the `getFrontier()` call in a try-catch to either recover gracefully or provide better error context.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/domain/services/strand/StrandMaterializer.js` around lines 101 - 111, materializeDescriptor currently awaits this._graph.getFrontier() after persisting state via this._graph._setMaterializedState(), which can throw and leave the graph inconsistent; wrap the getFrontier() call in a try-catch inside materializeDescriptor (reference method name materializeDescriptor and members _setMaterializedState and _lastFrontier) so that on failure you handle/annotate the error: set this._graph._lastFrontier to null (or leave a well-defined sentinel), log or augment the thrown error with context (e.g., descriptor id or descriptor.summary) and rethrow so callers see the cause, ensuring cached fields (_cachedCeiling, _cachedFrontier) remain in a consistent state before propagating the error.

src/domain/services/strand/StrandIntentService.js (1)

296-303: Consider freezing the copied arrays for full immutability. The `_freezeQueuedIntentSnapshots` method copies the inner arrays (`reads`, `writes`, `contentBlobOids`) but doesn't freeze them. Callers could still mutate these arrays, potentially causing subtle bugs.
If the intent is full immutability for "safe public consumption", consider freezing the arrays too:

❄️ Proposed enhancement for deeper immutability

```diff
 _freezeQueuedIntentSnapshots(intents) {
   return intents.map((intent) =>
     Object.freeze({
       ...intent,
-      reads: [...intent.reads],
-      writes: [...intent.writes],
-      contentBlobOids: [...intent.contentBlobOids],
+      reads: Object.freeze([...intent.reads]),
+      writes: Object.freeze([...intent.writes]),
+      contentBlobOids: Object.freeze([...intent.contentBlobOids]),
     }));
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/domain/services/strand/StrandIntentService.js` around lines 296 - 303, The _freezeQueuedIntentSnapshots function copies inner arrays (reads, writes, contentBlobOids) but only freezes the outer object, leaving those arrays mutable; update _freezeQueuedIntentSnapshots so each copied inner array is frozen (e.g., Object.freeze([...intent.reads])) before freezing the returned intent object to ensure full immutability for callers and avoid accidental mutation of intent.reads, intent.writes, or intent.contentBlobOids.

src/domain/services/strand/StrandService.js (1)

307-331: Consider extracting `openDetachedReadGraph` to a shared utility. This function clones graph configuration options and opens a detached read-only graph instance. While functional, it accesses many private `_`-prefixed properties of `WarpRuntime`. If this pattern is needed elsewhere or the graph API evolves, consider making this a method on `WarpRuntime` itself (e.g., `graph.openDetached({ autoMaterialize: false })`) to encapsulate the configuration cloning logic.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@src/domain/services/strand/StrandService.js` around lines 307 - 331, The helper function openDetachedReadGraph duplicates logic to clone many private WarpRuntime properties and open a read-only instance; extract this into a single encapsulated API on the runtime (e.g., add a method on WarpRuntime like openDetached or openDetachedReadGraph) so callers no longer touch _-prefixed internals. Implement a public instance method on the WarpRuntime class that builds the opts object (preserving fields set in openDetachedReadGraph: persistence, graphName, writerId, autoMaterialize:false, onDeleteWithData, clock, audit:false, trustConfig and conditional gcPolicy/checkpointPolicy/logger/crypto/codec/patchJournal/seekCache/blobStorage/patchBlobStorage/checkpointStore) and calls GraphClass.open(opts); then replace usages of the standalone openDetachedReadGraph with the new instance method to centralize future changes and avoid direct access to private properties.

src/domain/services/strand/StrandDescriptorStore.js (1)

515-525: Potential redundant ref operations in `_syncOneBraidRef`. When `headPatchSha` is null/empty and the ref exists, the code reads the ref (line 522) then deletes it (line 523). Since `deleteRef` is typically idempotent, you could simplify by always calling `deleteRef` when `headPatchSha` is null/empty, avoiding the extra `readRef` call.
<details> <summary>♻️ Suggested simplification</summary> ```diff async _syncOneBraidRef(strandId, readOverlay, nextRefs) { const ref = this.buildBraidRef(strandId, readOverlay.strandId); nextRefs.add(ref); if (readOverlay.headPatchSha !== null && readOverlay.headPatchSha.length > 0) { await this._graph._persistence.updateRef(ref, readOverlay.headPatchSha); return; } - if ((await this._graph._persistence.readRef(ref)) !== null) { - await this._graph._persistence.deleteRef(ref); - } + // deleteRef is typically idempotent; no need to check existence first + await this._graph._persistence.deleteRef(ref); } ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@src/domain/services/strand/StrandDescriptorStore.js` around lines 515 - 525, In _syncOneBraidRef, avoid the redundant readRef before deleting: when readOverlay.headPatchSha is null/empty, call this._graph._persistence.deleteRef(ref) directly instead of first awaiting readRef; keep building the ref with buildBraidRef and adding to nextRefs as-is, and preserve the existing updateRef path when headPatchSha is present (use the same ref variable and _graph._persistence.updateRef/deleteRef methods). ``` </details> </blockquote></details> <details> <summary>src/domain/services/strand/StrandPatchService.js (1)</summary><blockquote> `311-326`: **_freezeQueuedIntent correctly validates and freezes the intent, but accesses a private builder property.** The method validates the patch (ensuring non-empty operations), freezes the returned object, and normalizes arrays. However, line 324 accesses `builder._contentBlobs` — a private internal property of PatchBuilderV2. This creates unnecessary coupling to the builder's internal structure. Expose `contentBlobs` via a public getter on `PatchBuilderV2` (e.g., `get contentBlobs()`) rather than accessing the private `_contentBlobs` property directly. 
<details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@src/domain/services/strand/StrandPatchService.js` around lines 311 - 326, _replace access to the private builder property by adding a public getter on PatchBuilderV2 and using it in _freezeQueuedIntent: implement a get contentBlobs() { return this._contentBlobs; } (or equivalent) on the PatchBuilderV2 class, then change the reference in _freezeQueuedIntent from builder._contentBlobs to builder.contentBlobs so normalizeStringArray uses the public API; keep validation, freezing, and naming unchanged._ ``` </details> </blockquote></details> </blockquote></details> <details> <summary>🤖 Prompt for all review comments with AI agents</summary>Verify each finding against the current code and only fix it if needed.
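The getter suggestion can be sketched as follows. The class body here is a hypothetical stand-in for PatchBuilderV2 (its real internals are more involved); only the public-getter pattern is the point:

```javascript
// Hypothetical stand-in for PatchBuilderV2; only the getter pattern matters here.
class PatchBuilderV2 {
  constructor() {
    /** @type {string[]} */
    this._contentBlobs = [];
  }

  addContentBlob(oid) {
    this._contentBlobs.push(oid);
    return this;
  }

  // Public read surface, so collaborators such as _freezeQueuedIntent
  // never reach into the _-prefixed field directly.
  get contentBlobs() {
    return this._contentBlobs;
  }
}

const builder = new PatchBuilderV2().addContentBlob('oid-1').addContentBlob('oid-2');
console.log(builder.contentBlobs); // ['oid-1', 'oid-2']
```

Callers keep working against `builder.contentBlobs`; if the internal field is later renamed or restructured, only the getter changes.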
Inline comments:
In
@docs/design/0011-strand-service-boundary-split/strand-service-boundary-split.md:
- Line 120: Choose a single spelling for "façade"/"facade" and make it
consistent across the document; search for both forms and update every
occurrence (including the sentence "Reduce `StrandService` to a façade that
composes those collaborators." and the other mentions around the previous
usages) so all instances use the chosen spelling, and ensure headings, captions,
and any cross-references are updated to match.

In
@docs/invariants/full-coverage.md:
- Around line 23-25: The documented vitest command npx vitest run test/unit/
--coverage --coverage.thresholds.lines=100 currently hard-codes a 100% threshold
and conflicts with the surrounding "ratchet" (progressive enforcement) wording;
update the doc to explicitly state whether that command represents the future
lock-state (100% required) or the current CI gate (current threshold), and
either replace the hard-coded command with a representative current-gate command
or show both commands (current CI threshold and future lock-state) along with a
short note linking them to the ratchet policy; apply the same
clarification/update to the other instances referenced in the surrounding
ratchet text (the block around lines 33-38).

In
@docs/method/backlog/asap/VIZ_cut-git-warp-visualization-surface-in-favor-of-warp-ttd.md:
- Around line 11-12: The docs contain a machine-specific local path string
~/git/warp-ttd; replace that instance with a canonical repository locator
(e.g., a github/gitlab org/repo URL or your project's internal canonical path)
so readers can resolve it consistently; update the sentence that currently reads
"even though~/git/warp-ttdnow exists..." to reference the canonical locator
(for example "even though the dedicated debugging and visualization tool
(org/warp-ttd) now exists") and ensure any surrounding wording about CLI
presentation paths remains accurate.

In
@src/domain/services/strand/StrandDescriptorStore.js:
- Around line 93-110: normalizeReadOverlays assumes every array element is a
non-null object and directly accesses entry['strandId'] etc., which can throw
for null/primitive elements; update the function to first filter/validate
entries by ensuring each entry is a non-null object (typeof entry === 'object'
&& entry !== null) before mapping, and either skip invalid entries or provide
safe defaults for strandId/overlayId/kind/headPatchSha/patchCount; reference the
local variables entries and overlay and the function normalizeReadOverlays when
making the change so the mapping only runs on validated objects and doesn't
crash on null/primitives.
- Around line 255-283: The catch-all in readDescriptorByOid is re-wrapping a
StrandError (like the E_STRAND_GRAPH_MISMATCH thrown when descriptor.graphName
!== this._graph._graphName) into E_STRAND_CORRUPT; modify the logic in
readDescriptorByOid so that after calling parseStrandBlob (or inside the catch)
you detect if the thrown error is already a StrandError with code
'E_STRAND_GRAPH_MISMATCH' (or any existing StrandError) and rethrow it
unchanged, otherwise wrap non-StrandError parse failures as the existing
E_STRAND_CORRUPT; refer to the readDescriptorByOid method, parseStrandBlob call,
and the StrandError constructor/codes to implement this conditional rethrow.

In
@test/unit/domain/services/AuditVerifierService.test.js:
- Around line 1169-1186: The test title and assertions disagree: the spec
asserts result.mode === 'signed_evidence_v1' but the title says the default is
"warn"; update the test title in the "defaults trust policy mode to warn when
mode is omitted" test to accurately describe the behavior (e.g., "defaults trust
policy mode to signed_evidence_v1 when mode is omitted") so the test name
matches the assertion that verify.evaluateTrust returns result.mode
'signed_evidence_v1' (references: createVerifier, verifier.evaluateTrust,
result.mode).
- Around line 191-202: The test shows persistence.readRef throwing is currently
treated as an empty/valid chain; change verifyChain (created by createVerifier)
to catch errors from persistence.readRef instead of treating them as "no ref"
and return a non-VALID failure state (e.g., set result.status to 'UNAVAILABLE'
or similar error enum, leave tipCommit null and set receiptsVerified to null or
an error sentinel) or rethrow the error so callers can distinguish storage
failures; specifically, locate the call to persistence.readRef inside
verifyChain, wrap it in a try/catch, and map the catch to the new error status
rather than returning { status: 'VALID', receiptsVerified: 0 }.

In
@test/unit/domain/services/controllers/MaterializeController.test.js:
- Around line 611-627: The test doesn't assert that the ceiling filter actually
excluded the lamport-10 patch; update the test that calls ctrl.materialize({
ceiling: 5 }) to give the lamport:10 fakePatchEntry a unique side-effect (e.g.,
a unique node id or property) and then assert that this unique node/property is
NOT present after materialization (inspect host._provenanceIndex or the
materialized output exposed by ctrl/materialize). Specifically, change the
patches passed to host._loadPatchChainFromSha so the high-lamport patch created
by fakePatchEntry('sha10') introduces a distinct node/property, call
ctrl.materialize({ ceiling: 5 }), and then assert the provenance index or
returned materialized structure does not contain that distinct node/property
(use existing helpers/assertions used elsewhere in the test suite to query
ProvenanceIndex).

In
@test/unit/domain/services/controllers/QueryController.test.js:
- Around line 525-530: The test currently assumes an empty string match is
allowed and will fail later; instead enforce the intended contract that match
must be non-empty by updating the assertion on ctrl.observer({ match: '' }) to
expect a validation failure: call ctrl.observer with { match: '' } and assert it
rejects (or throws) with the validation message 'observer config.match must be a
non-empty string or non-empty array of strings' so the test explicitly verifies
the validation in the observer() path.

In
@test/unit/domain/services/controllers/SubscriptionController.test.js:
- Around line 253-256: The test currently asserts that ctrl.watch('', {
onChange: vi.fn() }) does not throw, but the API contract requires non-empty
pattern strings; change the test to treat the empty-string case as invalid by
asserting that ctrl.watch('', { onChange: vi.fn() }) throws (or otherwise
rejects) and update the test description to "rejects an empty string pattern" so
the behavior of SubscriptionController.watch is consistent with other validation
paths.

In
@test/unit/domain/WarpApp.delegation.test.js:
- Around line 52-65: Declare explicit types for mockRuntime and mockCore (e.g.,
let mockRuntime: ReturnType<typeof createMockRuntime> and let mockCore:
ReturnType<typeof createMockCore> or the specific interfaces) so they are not
implicit any; when constructing app = new WarpApp(mockCore) assert the mock core
satisfies WarpCore using a type assertion (e.g., mockCore as unknown as
WarpCore) to satisfy the checker; and avoid assigning to the private _runtime
directly by casting app to any or using (app as unknown as { _runtime?: () =>
typeof mockRuntime })._runtime = () => mockRuntime (or provide a public
test-only setter) and similarly assert core() overrides via a type-safe cast
(e.g., (app as any).core = () => mockCore) so the test compiles without
TypeScript errors while keeping identities createMockRuntime, createMockCore,
WarpApp, _runtime, core, and WarpCore as references.
Nitpick comments:
In @docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md:
- Around line 6-10: The fenced code block showing the TypeScript error lacks a
language specifier; update the markdown in
docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md by adding a
language tag (e.g., `text` or `plaintext`) to the error message fenced block
so it satisfies markdownlint MD040 and renders consistently.

In
@docs/method/backlog/asap/VIZ_cut-git-warp-visualization-surface-in-favor-of-warp-ttd.md:
- Around line 28-37: Add explicit migration scope and concrete completion
criteria to the "Practical shape" section: enumerate exactly which files/dirs
under src/visualization/ will be removed (e.g., specific renderers and CLI
display layers duplicated by warp-ttd), list the substrate-facing data/export
surfaces that must be retained and their expected API/format for warp-ttd
consumption, and add a verification checklist that git-warp docs are updated to
point operators to warp-ttd for visualization/playback (include links or doc IDs
and acceptance tests such as "no duplicate renderer code remains" and "git-warp
docs contain warp-ttd examples"). Ensure the terms used match the proposal:
src/visualization/, warp-ttd, renderers, CLI display layers, git-warp.

In
@docs/method/backlog/bad-code/PROTO_state-diff-private-helper-residue.md:
- Around line 23-25: Update the Evidence section in
PROTO_state-diff-private-helper-residue to include concrete artifact links:
append the exact coverage report file path/name (e.g.,
coverage/state-diff-report.html), the commit SHA that produced the report, and
the CI run URL (or job ID and timestamp) that generated the tranche, and mention
the relevant file under test (StateDiff.js) so the backlog item is auditable;
ensure each bullet contains one concrete link or identifier (report path, commit
SHA, CI URL) rather than generic text.

In
@src/domain/services/strand/StrandDescriptorStore.js:
- Around line 515-525: In _syncOneBraidRef, avoid the redundant readRef before
deleting: when readOverlay.headPatchSha is null/empty, call
this._graph._persistence.deleteRef(ref) directly instead of first awaiting
readRef; keep building the ref with buildBraidRef and adding to nextRefs as-is,
and preserve the existing updateRef path when headPatchSha is present (use the
same ref variable and _graph._persistence.updateRef/deleteRef methods).

In
@src/domain/services/strand/StrandIntentService.js:
- Around line 296-303: The _freezeQueuedIntentSnapshots function copies inner
arrays (reads, writes, contentBlobOids) but only freezes the outer object,
leaving those arrays mutable; update _freezeQueuedIntentSnapshots so each copied
inner array is frozen (e.g., Object.freeze([...intent.reads])) before freezing
the returned intent object to ensure full immutability for callers and avoid
accidental mutation of intent.reads, intent.writes, or intent.contentBlobOids.

In
@src/domain/services/strand/StrandMaterializer.js:
- Around line 198-214: In _syncGraphMaterialization remove the dead no-op
statement "void state;" at the end of the function; this local no-op is unused
(state is already handled elsewhere such as in _setMaterializedState) so simply
delete the "void state;" line in the _syncGraphMaterialization method of
StrandMaterializer.js to clean up dead code.
- Around line 101-111: materializeDescriptor currently awaits
this._graph.getFrontier() after persisting state via
this._graph._setMaterializedState(), which can throw and leave the graph
inconsistent; wrap the getFrontier() call in a try-catch inside
materializeDescriptor (reference method name materializeDescriptor and members
_setMaterializedState and _lastFrontier) so that on failure you handle/annotate
the error: set this._graph._lastFrontier to null (or leave a well-defined
sentinel), log or augment the thrown error with context (e.g., descriptor id or
descriptor.summary) and rethrow so callers see the cause, ensuring cached fields
(_cachedCeiling, _cachedFrontier) remain in a consistent state before
propagating the error.

In
@src/domain/services/strand/StrandPatchService.js:
- Around line 311-326: Replace access to the private builder property by adding
a public getter on PatchBuilderV2 and using it in _freezeQueuedIntent: implement
a get contentBlobs() { return this._contentBlobs; } (or equivalent) on the
PatchBuilderV2 class, then change the reference in _freezeQueuedIntent from
builder._contentBlobs to builder.contentBlobs so normalizeStringArray uses the
public API; keep validation, freezing, and naming unchanged.

In
@src/domain/services/strand/StrandService.js:
- Around line 307-331: The helper function openDetachedReadGraph duplicates
logic to clone many private WarpRuntime properties and open a read-only
instance; extract this into a single encapsulated API on the runtime (e.g., add
a method on WarpRuntime like openDetached or openDetachedReadGraph) so callers
no longer touch _-prefixed internals. Implement a public instance method on the
WarpRuntime class that builds the opts object (preserving fields set in
openDetachedReadGraph: persistence, graphName, writerId, autoMaterialize:false,
onDeleteWithData, clock, audit:false, trustConfig and conditional
gcPolicy/checkpointPolicy/logger/crypto/codec/patchJournal/seekCache/blobStorage/patchBlobStorage/checkpointStore)
and calls GraphClass.open(opts); then replace usages of the standalone
openDetachedReadGraph with the new instance method to centralize future changes
and avoid direct access to private properties.

In
@test/unit/domain/services/controllers/CheckpointController.test.js:
- Line 155: The test stub for cloneStateV5Mock uses a shallow spread ({ ...s })
which preserves nested Map instances and fails to detect mutation leaks; update
cloneStateV5Mock.mockImplementation to perform a structural deep clone that
recreates Maps (and other containers) so nested Map instances are copied (e.g.,
recursively clone objects/arrays and for Map create new Map([...]) or
equivalent), ensuring the stub mirrors cloneStateV5() behavior and will catch
leaks in the cached state during the GC tests.

In
@test/unit/domain/services/controllers/ComparisonController.test.js:
- Around line 929-945: The test should not assert a fixed call count for
host._loadPatchChainFromSha (which ties to internal IO) but should verify
behavior: replace the expect(...toHaveBeenCalledTimes(6)) with an assertion that
host._loadPatchChainFromSha was invoked for each unique tip SHA from the mocked
frontier (e.g., 'sha-a','sha-b','sha-c') when controller.compareCoordinates is
called, or alternatively assert the produced comparison payload is correct;
locate this in the same test that mocks host.getFrontier,
host.materializeCoordinate and calls controller.compareCoordinates and change
the assertion to check calls include those unique SHAs (or validate the result)
rather than a hard-coded 6 calls.

In
@test/unit/domain/services/controllers/ForkController.test.js:
- Around line 249-303: Decide whether testing the internal method _isAncestor is
intentional; if not, remove or replace these tests to exercise public behavior
(call the public ForkController API that relies on ancestry instead of
ctrl._isAncestor) so tests don’t couple to implementation details, otherwise
explicitly expose/annotate the internal seam for testing (e.g., export the
internal function or add a clear comment) and keep the existing tests. Locate
the tests referencing ctrl._isAncestor and the implementation of _isAncestor
(and the ForkError type E_FORK_CYCLE_DETECTED) and either rewrite the tests to
use the public method that triggers ancestry checks or add an explicit test-only
export/annotation to make the seam intentional.

In
@test/unit/domain/services/controllers/ProvenanceController.test.js:
- Around line 121-124: The tests call ctrl.patchesFor(...) multiple times
causing duplicate executions; fix by capturing the single rejection promise once
(e.g., const p = ctrl.patchesFor('node:a')) and then assert both the error class
and properties against that same promise (expect(p).rejects.toThrow(QueryError);
expect(p).rejects.toMatchObject({ code: 'E_PROVENANCE_DEGRADED' })); apply the
same change to the other similar pairs (the assertions around lines
corresponding to 'node:b' and the other failing cases) so _ensureFreshState and
logging run only once per test.
- Line 146: Replace the single vi.clearAllMocks() with a reset of the shared
mock implementations and re-seed defaults: call vi.resetAllMocks() (or
vi.resetModules() if you need to re-import) in the beforeEach, then explicitly
set the default mockReturnValue/implementation for detectMessageKind,
decodePatchMessage, mockReplay, and reduceV5 so each test suite starts with a
fresh, deterministic stubbed behavior; ensure these reseeding calls are placed
in the same beforeEach that used to call vi.clearAllMocks().

In
@test/unit/domain/services/GraphTraversal.test.js:
- Around line 932-1050: Add a short explanatory comment above the
"GraphTraversal private helpers" describe block stating that these specs
intentionally test private helpers (_findTopoCycleWitness, _biAStarExpand,
_reconstructPath, _shouldUpdatePredecessor) to lock down edge-case behavior
despite coupling to implementation, so future maintainers know the tests are
deliberate and should be preserved during refactors; place the comment
immediately before the describe('GraphTraversal private helpers', ...)
declaration.

In
@test/unit/domain/services/IncrementalIndexUpdater.test.js:
- Around line 732-881: Add a short explanatory comment at the top of the
describe('internal guard paths', ...) block clarifying that these tests
intentionally access private methods (e.g., _handleNodeRemove, _purgeNodeEdges,
_handleEdgeAdd, _handleEdgeRemove, _flushEdgeShards, _loadLabels,
_getOrBuildAliveEdgeAdjacency, _removeEdgeKeyFromAdjacency) via type casts to
`any` to lock exported collaborator behavior and catch accidental refactors;
place the comment immediately above the describe call so future readers know the
coupling is deliberate and should be preserved or updated when refactoring.

In
@test/unit/domain/services/StreamingBitmapIndexBuilder.test.js:
- Around line 207-220: The test currently only checks frontierJson values with
toEqual, which doesn't enforce key order; update the assertion to explicitly
verify the key order of the frontier object produced by the code that creates
frontierJson (use the same writtenBlobs and oidMatch selection used to build
frontierJson) — for example, assert that Object.keys(frontierJson.frontier)
equals the expected ordered array (['writer-a','writer-b']) or compare a
deterministic serialization (e.g., JSON.stringify(frontierJson.frontier))
against the expected ordered JSON string; reference frontierJson, writtenBlobs
and oidMatch to locate where to add this key-order assertion.

In
@test/unit/domain/WarpGraph.test.js:
- Around line 243-284: Replace the direct call to the private
_createSyncTrustGate() by exercising the public sync API on the graph so the
trust gate is invoked via normal flow; remove usage of
graph._createSyncTrustGate() and instead perform the public operation that
triggers trust evaluation (e.g., call the graph.sync / graph.performSync /
graph.syncWriters method used by the runtime to check writers) while keeping the
same mock spy on AuditVerifierService.prototype.evaluateTrust and the same
assertions against evaluateTrustSpy and the returned verdict; ensure you still
open the runtime with trust: { mode: 'enforce', pin: 'pin-123' } and assert
evaluateTrustSpy was called with 'events' and { pin, mode, writerIds:
['alice','bob'] } and the final result equals the expected rejection mapping.</details> <details> <summary>🪄 Autofix (Beta)</summary> Fix all unresolved CodeRabbit comments on this PR: - [ ] <!-- {"checkboxId": "4b0d0e0a-96d7-4f10-b296-3a18ea78f0b9"} --> Push a commit to this branch (recommended) - [ ] <!-- {"checkboxId": "ff5b1114-7d8c-49e6-8ac1-43f82af23a33"} --> Create a new PR with the fixes </details> --- <details> <summary>ℹ️ Review info</summary> <details> <summary>⚙️ Run configuration</summary> **Configuration used**: defaults **Review profile**: CHILL **Plan**: Pro **Run ID**: `4d9d8436-141e-4361-9ec4-5a4cb70e7ded` </details> <details> <summary>📥 Commits</summary> Reviewing files that changed from the base of the PR and between 7d64a9555e0c24c8105c677cf14a058be346b20e and 39f04c00fb81a8906b69a4ccfddde63c62978875. </details> <details> <summary>⛔ Files ignored due to path filters (1)</summary> * `package-lock.json` is excluded by `!**/package-lock.json` </details> <details> <summary>📒 Files selected for processing (99)</summary> * `AGENTS.md` * `docs/design/0010-100-percent-coverage/100-percent-coverage.md` * `docs/design/0011-strand-service-boundary-split/strand-service-boundary-split.md` * `docs/invariants/full-coverage.md` * `docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md` * `docs/method/backlog/asap/PROTO_conflict-analyzer-pipeline-decomposition.md` * `docs/method/backlog/asap/PROTO_dag-pathfinding-algorithm-split.md` * `docs/method/backlog/asap/PROTO_warpruntime-open-options-class.md` * `docs/method/backlog/asap/VIZ_cut-git-warp-visualization-surface-in-favor-of-warp-ttd.md` * `docs/method/backlog/bad-code/PROTO_bitmap-neighbor-provider-dead-false-branch.md` * `docs/method/backlog/bad-code/PROTO_conflict-analyzer-dead-branches.md` * `docs/method/backlog/bad-code/PROTO_dag-pathfinding-raw-error.md` * `docs/method/backlog/bad-code/PROTO_incremental-index-updater-null-proto-rewrap-dead-branch.md` * 
`docs/method/backlog/bad-code/PROTO_inmemory-graph-adapter-default-hash-unavailable-branch.md` * `docs/method/backlog/bad-code/PROTO_join-reducer-import-time-strategy-validation-residue.md` * `docs/method/backlog/bad-code/PROTO_materialize-controller-seek-cache-error-opacity.md` * `docs/method/backlog/bad-code/PROTO_roaring-loader-fallback-opacity.md` * `docs/method/backlog/bad-code/PROTO_state-diff-private-helper-residue.md` * `docs/method/backlog/bad-code/PROTO_strand-service-dead-branches.md` * `docs/method/backlog/bad-code/PROTO_strand-typedef-corridor.md` * `docs/method/backlog/bad-code/PROTO_streaming-bitmap-index-builder-serialization-tail.md` * `docs/method/backlog/bad-code/PROTO_trust-record-service-unreachable-exhausted-tails.md` * `docs/method/backlog/bad-code/PROTO_wormhole-service-defensive-tail-branches.md` * `docs/method/retro/0010-100-percent-coverage/100-percent-coverage.md` * `docs/method/retro/0011-strand-service-boundary-split/strand-service-boundary-split.md` * `package.json` * `scripts/coverage-ratchet.js` * `src/domain/services/controllers/StrandController.js` * `src/domain/services/strand/StrandDescriptorStore.js` * `src/domain/services/strand/StrandIntentService.js` * `src/domain/services/strand/StrandMaterializer.js` * `src/domain/services/strand/StrandPatchService.js` * `src/domain/services/strand/StrandService.js` * `src/domain/services/strand/strandShared.js` * `src/domain/services/strand/strandTypes.js` * `test/unit/domain/WarpApp.delegation.test.js` * `test/unit/domain/WarpCore.content.test.js` * `test/unit/domain/WarpGraph.coverageGaps.test.js` * `test/unit/domain/WarpGraph.queryBuilder.test.js` * `test/unit/domain/WarpGraph.test.js` * `test/unit/domain/WarpGraph.traverse.test.js` * `test/unit/domain/errors/index.test.js` * `test/unit/domain/services/AuditReceiptService.test.js` * `test/unit/domain/services/AuditVerifierService.test.js` * `test/unit/domain/services/BitmapIndexReader.test.js` * 
`test/unit/domain/services/BitmapNeighborProvider.test.js` * `test/unit/domain/services/CheckpointService.edgeCases.test.js` * `test/unit/domain/services/CheckpointService.test.js` * `test/unit/domain/services/DagPathFinding.test.js` * `test/unit/domain/services/GitGraphAdapter.test.js` * `test/unit/domain/services/GraphTraversal.test.js` * `test/unit/domain/services/GraphTraversal.transitiveClosure.test.js` * `test/unit/domain/services/GraphTraversal.transitiveReduction.test.js` * `test/unit/domain/services/IncrementalIndexUpdater.test.js` * `test/unit/domain/services/JoinReducer.test.js` * `test/unit/domain/services/JoinReducer.trackDiff.test.js` * `test/unit/domain/services/Observer.test.js` * `test/unit/domain/services/StateDiff.test.js` * `test/unit/domain/services/StreamingBitmapIndexBuilder.test.js` * `test/unit/domain/services/VisibleStateScopeV1.test.js` * `test/unit/domain/services/VisibleStateTransferPlannerV5.test.js` * `test/unit/domain/services/WormholeService.test.js` * `test/unit/domain/services/controllers/CheckpointController.test.js` * `test/unit/domain/services/controllers/ComparisonController.test.js` * `test/unit/domain/services/controllers/ForkController.test.js` * `test/unit/domain/services/controllers/MaterializeController.test.js` * `test/unit/domain/services/controllers/PatchController.test.js` * `test/unit/domain/services/controllers/ProvenanceController.test.js` * `test/unit/domain/services/controllers/QueryController.test.js` * `test/unit/domain/services/controllers/StrandController.test.js` * `test/unit/domain/services/controllers/SubscriptionController.test.js` * `test/unit/domain/services/controllers/SyncController.test.js` * `test/unit/domain/services/state/StateHashService.test.js` * `test/unit/domain/services/strand/ConflictAnalyzerService.test.js` * `test/unit/domain/services/strand/StrandService.test.js` * `test/unit/domain/stream/WarpStream.test.js` * `test/unit/domain/trust/TrustRecordService.test.js` * 
`test/unit/domain/trust/verdict.test.js` * `test/unit/domain/types/WarpErrors.test.js` * `test/unit/domain/types/WarpOptions.test.js` * `test/unit/domain/types/WarpPersistence.test.js` * `test/unit/domain/utils/callInternalRuntimeMethod.test.js` * `test/unit/domain/utils/defaultCodec.test.js` * `test/unit/domain/utils/defaultCrypto.unavailable.test.js` * `test/unit/domain/utils/defaultTrustCrypto.test.js` * `test/unit/domain/utils/defaultTrustCrypto.unavailable.test.js` * `test/unit/domain/utils/matchGlob.test.js` * `test/unit/domain/utils/roaring.test.js` * `test/unit/domain/utils/streamUtils.test.js` * `test/unit/domain/warp/PatchSession.test.js` * `test/unit/infrastructure/adapters/CborCheckpointStoreAdapter.test.js` * `test/unit/infrastructure/adapters/GitGraphAdapter.coverage.test.js` * `test/unit/infrastructure/adapters/InMemoryBlobStorageAdapter.test.js` * `test/unit/infrastructure/adapters/InMemoryGraphAdapter.test.js` * `test/unit/ports/IndexStorePort.test.js` * `test/unit/ports/NeighborProviderPort.test.js` * `test/unit/ports/PatchJournalPort.test.js` * `test/unit/scripts/coverage-ratchet.test.js` * `vitest.config.js` </details> </details> <!-- This is an auto-generated comment by CodeRabbit for review status -->
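The last item above asks the test to drive the trust gate through the public sync path while keeping the `evaluateTrust` spy. The spy mechanic it relies on can be shown in plain Node, with a hypothetical service standing in for the real AuditVerifierService:

```javascript
// Hypothetical service; the real AuditVerifierService API may differ.
class AuditVerifierService {
  evaluateTrust(scope, options) {
    return { verdict: 'allow', scope, options };
  }
}

// Minimal hand-rolled equivalent of vi.spyOn(proto, name): wrap the
// prototype method, record each call's arguments, delegate to the original.
function spyOn(proto, name) {
  const original = proto[name];
  const calls = [];
  proto[name] = function (...args) {
    calls.push(args);
    return original.apply(this, args);
  };
  return { calls, restore: () => { proto[name] = original; } };
}

const spy = spyOn(AuditVerifierService.prototype, 'evaluateTrust');
// In the real suite this call would happen deep inside the public sync
// flow rather than directly; the prototype spy observes it either way.
const result = new AuditVerifierService().evaluateTrust('events', {
  pin: 'pin-123',
  mode: 'enforce',
  writerIds: ['alice', 'bob'],
});
spy.restore();

console.log(spy.calls.length); // 1
console.log(spy.calls[0][0]); // 'events'
console.log(result.verdict); // 'allow'
```

Because the spy sits on the prototype, the assertion does not care which code path invoked the method, which is exactly what lets the test switch from calling `_createSyncTrustGate()` directly to exercising the public API.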
> - Move patch-builder / commit flow into `StrandPatchService`.
> - Move queued intent classification, drain, and persistence into
>   `StrandIntentService`.
> - Reduce `StrandService` to a façade that composes those collaborators.
Minor: Inconsistent spelling of "façade" vs "facade".
The document uses both "façade" (lines 120, 179, 197) and "facade" elsewhere. Consider standardizing on one spelling throughout for consistency.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~120-~120: Do not mix variants of the same word (‘façade’ and ‘facade’) within a single text.
Context: ...Service`. - Reduce `StrandService` to a façade that composes those collaborators. ## ...
(EN_WORD_COHERENCY)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/design/0011-strand-service-boundary-split/strand-service-boundary-split.md`
at line 120, Choose a single spelling for "façade"/"facade" and make it
consistent across the document; search for both forms and update every
occurrence (including the sentence "Reduce `StrandService` to a façade that
composes those collaborators." and the other mentions around the previous
usages) so all instances use the chosen spelling, and ensure headings, captions,
and any cross-references are updated to match.
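The target shape this thread keeps referring to (`StrandService` reduced to a façade composing collaborators) can be sketched like this; the collaborator names come from the PR, but every constructor and method body below is hypothetical:

```javascript
// Hypothetical collaborator shapes; only the composition pattern is the point.
class StrandPatchService {
  commit(patch) {
    return { committed: true, patch };
  }
}

class StrandIntentService {
  drain() {
    return [];
  }
}

// The façade owns no domain logic of its own; every public method is a
// one-line delegation to a composed collaborator.
class StrandService {
  constructor() {
    this._patches = new StrandPatchService();
    this._intents = new StrandIntentService();
  }

  commit(patch) {
    return this._patches.commit(patch);
  }

  drainIntents() {
    return this._intents.drain();
  }
}

const service = new StrandService();
console.log(service.commit({ op: 'add' }).committed); // true
console.log(service.drainIntents()); // []
```

The payoff of this shape is that each collaborator can be covered by its own focused test file, which is what the controller-by-controller test tranches in this PR exploit.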
```bash
npx vitest run test/unit/ --coverage --coverage.thresholds.lines=100
```
Align the documented check command with the active ratchet policy.
Line 24 hard-codes a 100% threshold, while the surrounding ratchet text describes progressive enforcement. Please clarify whether this command is the future lock-state check or the current CI gate to avoid runbook confusion.
Also applies to: 33-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/invariants/full-coverage.md` around lines 23 - 25, The documented vitest
command npx vitest run test/unit/ --coverage --coverage.thresholds.lines=100
currently hard-codes a 100% threshold and conflicts with the surrounding
"ratchet" (progressive enforcement) wording; update the doc to explicitly state
whether that command represents the future lock-state (100% required) or the
current CI gate (current threshold), and either replace the hard-coded command
with a representative current-gate command or show both commands (current CI
threshold and future lock-state) along with a short note linking them to the
ratchet policy; apply the same clarification/update to the other instances
referenced in the surrounding ratchet text (the block around the 33-38 lines).
> CLI presentation paths) even though `~/git/warp-ttd` now exists as the
> dedicated debugging and visualization tool.
Replace local-path reference with a canonical repository locator.
`~/git/warp-ttd` is machine-specific and can age poorly in shared docs. Prefer an org/repo URL or canonical internal path so every reader can resolve it.
Suggested doc tweak

```diff
-(`src/visualization/`, ASCII renderers, graph render helpers, related
-CLI presentation paths) even though `~/git/warp-ttd` now exists as the
+(`src/visualization/`, ASCII renderers, graph render helpers, related
+CLI presentation paths) even though `warp-ttd` (canonical repo: `<ORG>/<REPO>`) now exists as the
 dedicated debugging and visualization tool.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/method/backlog/asap/VIZ_cut-git-warp-visualization-surface-in-favor-of-warp-ttd.md`
around lines 11 - 12, The docs contain a machine-specific local path string
`~/git/warp-ttd`; replace that instance with a canonical repository locator
(e.g., a github/gitlab org/repo URL or your project's internal canonical path)
so readers can resolve it consistently; update the sentence that currently reads
"even though `~/git/warp-ttd` now exists..." to reference the canonical locator
(for example "even though the dedicated debugging and visualization tool
(org/warp-ttd) now exists") and ensure any surrounding wording about CLI
presentation paths remains accurate.
```js
function normalizeReadOverlays(value) {
  if (!Array.isArray(value)) {
    return [];
  }
  const entries = /** @type {unknown[]} */ (value);
  return entries
    .map((entry) => {
      const overlay = /** @type {Record<string, unknown>} */ (entry);
      return {
        strandId: /** @type {string} */ (overlay['strandId']),
        overlayId: /** @type {string} */ (overlay['overlayId']),
        kind: /** @type {string} */ (overlay['kind']),
        headPatchSha: /** @type {string|null} */ (overlay['headPatchSha'] ?? null),
        patchCount: /** @type {number} */ (overlay['patchCount']),
      };
    })
    .sort((left, right) => compareStrings(left.strandId, right.strandId));
}
```
Unsafe type coercion in normalizeReadOverlays - missing null/object check.
The function assumes entry is an object but only checks if the outer value is an array. If an array element is null, undefined, or a primitive, accessing properties like entry['strandId'] will throw or produce unexpected results.
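A standalone repro (not project code) of why the per-element guard matters: property access on a primitive merely yields `undefined`, but access on `null` throws, and `typeof null === 'object'` means the null check cannot be folded into the typeof check.

```javascript
// Standalone repro: unguarded element access over a mixed array.
const raw = [{ strandId: 'a' }, null, 42];

// Property access on a boxed primitive yields undefined (no throw)...
const fromNumber = (42)['strandId'];

// ...but property access on null throws a TypeError.
let threwOnNull = false;
try {
  (null)['strandId'];
} catch (err) {
  threwOnNull = err instanceof TypeError;
}

// The proposed guard. Note typeof null === 'object', so the explicit
// null check is load-bearing, not redundant.
const safe = raw.filter((entry) => entry !== null && typeof entry === 'object');
console.log(fromNumber, threwOnNull, safe.length);
```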
🛡️ Proposed fix to add entry validation

```diff
 function normalizeReadOverlays(value) {
   if (!Array.isArray(value)) {
     return [];
   }
   const entries = /** @type {unknown[]} */ (value);
   return entries
+    .filter((entry) => entry !== null && entry !== undefined && typeof entry === 'object')
     .map((entry) => {
       const overlay = /** @type {Record<string, unknown>} */ (entry);
       return {
         strandId: /** @type {string} */ (overlay['strandId']),
         overlayId: /** @type {string} */ (overlay['overlayId']),
         kind: /** @type {string} */ (overlay['kind']),
         headPatchSha: /** @type {string|null} */ (overlay['headPatchSha'] ?? null),
         patchCount: /** @type {number} */ (overlay['patchCount']),
       };
     })
     .sort((left, right) => compareStrings(left.strandId, right.strandId));
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/services/strand/StrandDescriptorStore.js` around lines 93 - 110,
normalizeReadOverlays assumes every array element is a non-null object and
directly accesses entry['strandId'] etc., which can throw for null/primitive
elements; update the function to first filter/validate entries by ensuring each
entry is a non-null object (typeof entry === 'object' && entry !== null) before
mapping, and either skip invalid entries or provide safe defaults for
strandId/overlayId/kind/headPatchSha/patchCount; reference the local variables
entries and overlay and the function normalizeReadOverlays when making the
change so the mapping only runs on validated objects and doesn't crash on
null/primitives.
```js
async readDescriptorByOid(oid, strandId) {
  const buf = await this._graph._persistence.readBlob(oid);
  if (buf === null || buf === undefined) {
    throw new StrandError(`Strand '${strandId}' points to a missing blob`, {
      code: 'E_STRAND_MISSING_OBJECT',
      context: { graphName: this._graph._graphName, strandId, oid },
    });
  }

  try {
    const descriptor = parseStrandBlob(buf, `strand '${strandId}'`);
    if (descriptor.graphName !== this._graph._graphName) {
      throw new StrandError('descriptor graphName does not match the current graph', {
        code: 'E_STRAND_GRAPH_MISMATCH',
      });
    }
    return descriptor;
  } catch (err) {
    throw new StrandError(`Strand '${strandId}' is corrupt`, {
      code: 'E_STRAND_CORRUPT',
      context: {
        graphName: this._graph._graphName,
        strandId,
        oid,
        cause: /** @type {Error} */ (err).message,
      },
    });
  }
}
```
readDescriptorByOid error handling wraps all parse errors uniformly.
The catch block at line 272 wraps any error during parsing as E_STRAND_CORRUPT. However, the graphName mismatch check at line 266-270 throws a StrandError with E_STRAND_GRAPH_MISMATCH, which would then be caught and re-wrapped as E_STRAND_CORRUPT, losing the specific error code.
🐛 Proposed fix to preserve graph mismatch errors

```diff
   async readDescriptorByOid(oid, strandId) {
     const buf = await this._graph._persistence.readBlob(oid);
     if (buf === null || buf === undefined) {
       throw new StrandError(`Strand '${strandId}' points to a missing blob`, {
         code: 'E_STRAND_MISSING_OBJECT',
         context: { graphName: this._graph._graphName, strandId, oid },
       });
     }
+    let descriptor;
     try {
-      const descriptor = parseStrandBlob(buf, `strand '${strandId}'`);
-      if (descriptor.graphName !== this._graph._graphName) {
-        throw new StrandError('descriptor graphName does not match the current graph', {
-          code: 'E_STRAND_GRAPH_MISMATCH',
-        });
-      }
-      return descriptor;
+      descriptor = parseStrandBlob(buf, `strand '${strandId}'`);
     } catch (err) {
       throw new StrandError(`Strand '${strandId}' is corrupt`, {
         code: 'E_STRAND_CORRUPT',
         context: {
           graphName: this._graph._graphName,
           strandId,
           oid,
           cause: /** @type {Error} */ (err).message,
         },
       });
     }
+
+    if (descriptor.graphName !== this._graph._graphName) {
+      throw new StrandError('descriptor graphName does not match the current graph', {
+        code: 'E_STRAND_GRAPH_MISMATCH',
+        context: { expected: this._graph._graphName, actual: descriptor.graphName },
+      });
+    }
+    return descriptor;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/services/strand/StrandDescriptorStore.js` around lines 255 - 283,
The catch-all in readDescriptorByOid is re-wrapping a StrandError (like the
E_STRAND_GRAPH_MISMATCH thrown when descriptor.graphName !==
this._graph._graphName) into E_STRAND_CORRUPT; modify the logic in
readDescriptorByOid so that after calling parseStrandBlob (or inside the catch)
you detect if the thrown error is already a StrandError with code
'E_STRAND_GRAPH_MISMATCH' (or any existing StrandError) and rethrow it
unchanged, otherwise wrap non-StrandError parse failures as the existing
E_STRAND_CORRUPT; refer to the readDescriptorByOid method, parseStrandBlob call,
and the StrandError constructor/codes to implement this conditional rethrow.
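The conditional rethrow the prompt describes can be sketched standalone. `StrandError`, `parseDescriptor`, and `readDescriptor` below are minimal stand-ins, not the project's actual implementations; the point is only that a domain error with a specific code survives the catch-all unchanged.

```javascript
// Minimal stand-in for the project's StrandError.
class StrandError extends Error {
  constructor(message, { code } = {}) {
    super(message);
    this.code = code;
  }
}

// Stand-in parser: one domain failure, one raw parse failure.
function parseDescriptor(buf) {
  if (buf === 'wrong-graph') {
    throw new StrandError('graphName mismatch', { code: 'E_STRAND_GRAPH_MISMATCH' });
  }
  if (buf !== 'ok') throw new SyntaxError('unparseable blob');
  return { graphName: 'events' };
}

function readDescriptor(buf) {
  try {
    return parseDescriptor(buf);
  } catch (err) {
    // Rethrow existing StrandErrors unchanged; wrap only raw parse failures.
    if (err instanceof StrandError) throw err;
    throw new StrandError('corrupt', { code: 'E_STRAND_CORRUPT' });
  }
}

// Helper to observe which code escapes.
function codeOf(fn) {
  try {
    fn();
    return null;
  } catch (err) {
    return err.code;
  }
}

console.log(codeOf(() => readDescriptor('wrong-graph'))); // E_STRAND_GRAPH_MISMATCH
console.log(codeOf(() => readDescriptor('garbage')));     // E_STRAND_CORRUPT
```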
```js
it('defaults trust policy mode to warn when mode is omitted', async () => {
  const persistence = new InMemoryGraphAdapter();
  await seedTrustChain(persistence, 'events', [
    KEY_ADD_1,
    KEY_ADD_2,
    WRITER_BIND_ADD_ALICE,
  ]);

  const verifier = createVerifier(persistence);
  const result = await verifier.evaluateTrust('events', {
    writerIds: ['alice'],
  });

  expect(result.mode).toBe('signed_evidence_v1');
  expect(result.trust.status).toBe('configured');
  expect(result.trust.source).toBe('ref');
  expect(result.trustVerdict).toBe('pass');
});
```
Rename this test or update the assertion.
The title says the omitted default is "warn", but the body pins result.mode === 'signed_evidence_v1'. That mismatch will mislead the next reader about the behavior this test is actually locking down.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/services/AuditVerifierService.test.js` around lines 1169 -
1186, The test title and assertions disagree: the spec asserts result.mode ===
'signed_evidence_v1' but the title says the default is "warn"; update the test
title in the "defaults trust policy mode to warn when mode is omitted" test to
accurately describe the behavior (e.g., "defaults trust policy mode to
signed_evidence_v1 when mode is omitted") so the test name matches the assertion
that verify.evaluateTrust returns result.mode 'signed_evidence_v1' (references:
createVerifier, verifier.evaluateTrust, result.mode).
```js
it('filters patches by Lamport ceiling', async () => {
  const { ctrl, host } = setup();
  const frontier = new Map([['w1', 'tip1']]);
  host.getFrontier.mockResolvedValue(frontier);

  const patches = [
    fakePatchEntry({ lamport: 1, sha: 'sha1' }),
    fakePatchEntry({ lamport: 5, sha: 'sha5' }),
    fakePatchEntry({ lamport: 10, sha: 'sha10' }),
  ];
  host._loadPatchChainFromSha.mockResolvedValue(patches);

  await ctrl.materialize({ ceiling: 5 });

  // Patches with lamport > 5 should be excluded — the code filters in collectPatchesForFrontier
  expect(host._provenanceIndex).toBeInstanceOf(ProvenanceIndex);
});
```
Assert an observable effect of the ceiling filter.
This test never proves that the lamport-10 patch was excluded — _provenanceIndex would still be a ProvenanceIndex even if all three patches were applied. Give the above-ceiling patch a unique node/property effect and assert that it does not appear in the materialized result.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/services/controllers/MaterializeController.test.js` around
lines 611 - 627, The test doesn't assert that the ceiling filter actually
excluded the lamport-10 patch; update the test that calls ctrl.materialize({
ceiling: 5 }) to give the lamport:10 fakePatchEntry a unique side-effect (e.g.,
a unique node id or property) and then assert that this unique node/property is
NOT present after materialization (inspect host._provenanceIndex or the
materialized output exposed by ctrl/materialize). Specifically, change the
patches passed to host._loadPatchChainFromSha so the high-lamport patch created
by fakePatchEntry('sha10') introduces a distinct node/property, call
ctrl.materialize({ ceiling: 5 }), and then assert the provenance index or
returned materialized structure does not contain that distinct node/property
(use existing helpers/assertions used elsewhere in the test suite to query
ProvenanceIndex).
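The shape of the stronger assertion can be sketched standalone. `collectPatchesForFrontier` and the patch objects here are simplified stand-ins for the real controller internals; the point is to assert the absence of the above-ceiling effect, not just the type of the resulting index.

```javascript
// Simplified stand-in for the ceiling filter in the real controller.
function collectPatchesForFrontier(patches, ceiling) {
  return patches.filter((p) => p.lamport <= ceiling);
}

// Give the above-ceiling patch a unique, observable effect.
const patches = [
  { lamport: 1, sha: 'sha1', addsNode: 'n1' },
  { lamport: 5, sha: 'sha5', addsNode: 'n5' },
  { lamport: 10, sha: 'sha10', addsNode: 'above-ceiling' },
];

const applied = collectPatchesForFrontier(patches, 5);
const nodes = new Set(applied.map((p) => p.addsNode));

// The meaningful assertion: the lamport-10 effect must be absent.
console.log(nodes.has('above-ceiling'), nodes.has('n5'));
```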
```js
it('accepts an empty string match without throwing validation error', async () => {
  // Empty string is still typeof 'string', so it passes the match check.
  // The call may fail downstream (e.g. _materializeGraph) but NOT on match validation.
  await expect(ctrl.observer({ match: '' })).rejects.not.toThrow(
    'observer config.match must be a non-empty string or non-empty array of strings',
  );
});
```
Don't lock observer({ match: '' }) into the "reject later" path.
The contract here says match must be non-empty, but this assertion treats '' as acceptable and only requires failure to happen downstream. That makes both a real validation fix and a future successful implementation fail the test; pick the intended contract and assert that directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/services/controllers/QueryController.test.js` around lines
525 - 530, The test currently assumes an empty string match is allowed and will
fail later; instead enforce the intended contract that match must be non-empty
by updating the assertion on ctrl.observer({ match: '' }) to expect a validation
failure: call ctrl.observer with { match: '' } and assert it rejects (or throws)
with the validation message 'observer config.match must be a non-empty string or
non-empty array of strings' so the test explicitly verifies the validation in
the observer() path.
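An up-front validator matching the quoted error message could look like the sketch below. The function name is illustrative, not the project's actual helper; it enforces the "non-empty string or non-empty array of strings" contract before any downstream work runs.

```javascript
// Hypothetical up-front validator for observer config.match.
function validateObserverMatch(match) {
  const nonEmptyString = (v) => typeof v === 'string' && v.length > 0;
  const ok =
    nonEmptyString(match) ||
    (Array.isArray(match) && match.length > 0 && match.every(nonEmptyString));
  if (!ok) {
    // Reject '' and [] here, instead of failing somewhere downstream.
    throw new TypeError(
      'observer config.match must be a non-empty string or non-empty array of strings',
    );
  }
  return match;
}
```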
```js
it('accepts an empty string pattern (matches only empty-string node IDs)', () => {
  // Empty string is a valid non-empty-type string; isValidPattern returns true
  expect(() => ctrl.watch('', { onChange: vi.fn() })).not.toThrow();
});
```
Don't codify watch('') as valid input.
Every other validation path here treats pattern as a non-empty string. Locking in the empty-string loophole makes the API contract inconsistent and turns a future validation fix into a breaking test change.
Suggested test change

```diff
-  it('accepts an empty string pattern (matches only empty-string node IDs)', () => {
-    // Empty string is a valid non-empty-type string; isValidPattern returns true
-    expect(() => ctrl.watch('', { onChange: vi.fn() })).not.toThrow();
-  });
+  it('rejects an empty string pattern', () => {
+    expect(() => ctrl.watch('', { onChange: vi.fn() }))
+      .toThrow('pattern must be a non-empty string');
+  });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/services/controllers/SubscriptionController.test.js` around
lines 253 - 256, The test currently asserts that ctrl.watch('', { onChange:
vi.fn() }) does not throw, but the API contract requires non-empty pattern
strings; change the test to treat the empty-string case as invalid by asserting
that ctrl.watch('', { onChange: vi.fn() }) throws (or otherwise rejects) and
update the test description to "rejects an empty string pattern" so the behavior
of SubscriptionController.watch is consistent with other validation paths.
```js
let mockRuntime;
let mockCore;

beforeEach(() => {
  mockRuntime = createMockRuntime();
  mockCore = createMockCore();

  // Construct WarpApp with a mock core that also acts as runtime
  app = new WarpApp(mockCore);
  // Override _runtime() to return our mock runtime (which has all the methods)
  app._runtime = () => mockRuntime;
  // Override core() to return our mock core
  app.core = () => mockCore;
});
```
Fix TypeScript errors causing pipeline failures.
The test file has several TypeScript issues flagged by the CI pipeline:
- `mockRuntime` and `mockCore` (lines 52-53) have implicit `any` types
- Line 60: the mock object doesn't satisfy the `WarpCore` type
- Line 62: `_runtime` is a private property and cannot be reassigned directly
Add explicit type annotations and use type assertions to satisfy the type checker:
🔧 Proposed fix

```diff
 describe('WarpApp delegation', () => {
   /** @type {WarpApp} */
   let app;
+  /** @type {any} */
   let mockRuntime;
+  /** @type {any} */
   let mockCore;

   beforeEach(() => {
     mockRuntime = createMockRuntime();
     mockCore = createMockCore();

     // Construct WarpApp with a mock core that also acts as runtime
-    app = new WarpApp(mockCore);
+    app = new WarpApp(/** @type {any} */ (mockCore));
     // Override _runtime() to return our mock runtime (which has all the methods)
-    app._runtime = () => mockRuntime;
+    /** @type {any} */ (app)._runtime = () => mockRuntime;
     // Override core() to return our mock core
-    app.core = () => mockCore;
+    /** @type {any} */ (app).core = () => mockCore;
   });
```
🧰 Tools
🪛 GitHub Actions: Release Preflight (PR)
[error] 52-52: TypeScript error TS7034: Variable 'mockRuntime' implicitly has type 'any' in some locations where its type cannot be determined.
🪛 GitHub Check: preflight
[failure] 64-64:
Variable 'mockCore' implicitly has an 'any' type.
[failure] 62-62:
Variable 'mockRuntime' implicitly has an 'any' type.
[failure] 62-62:
Property '_runtime' is private and only accessible within class 'WarpApp'.
[failure] 60-60:
Argument of type '{ createStrand: Mock<() => Promise<{ strandId: string; }>>; getStrand: Mock<() => Promise<{ strandId: string; }>>; listStrands: Mock<() => Promise<{ strandId: string; }[]>>; ... 6 more ...; tickStrand: Mock<...>; }' is not assignable to parameter of type 'WarpCore'.
[failure] 53-53:
Variable 'mockCore' implicitly has type 'any' in some locations where its type cannot be determined.
[failure] 52-52:
Variable 'mockRuntime' implicitly has type 'any' in some locations where its type cannot be determined.
🪛 GitHub Check: type-firewall
[failure] 64-64:
Variable 'mockCore' implicitly has an 'any' type.
[failure] 62-62:
Variable 'mockRuntime' implicitly has an 'any' type.
[failure] 62-62:
Property '_runtime' is private and only accessible within class 'WarpApp'.
[failure] 60-60:
Argument of type '{ createStrand: Mock<() => Promise<{ strandId: string; }>>; getStrand: Mock<() => Promise<{ strandId: string; }>>; listStrands: Mock<() => Promise<{ strandId: string; }[]>>; ... 6 more ...; tickStrand: Mock<...>; }' is not assignable to parameter of type 'WarpCore'.
[failure] 53-53:
Variable 'mockCore' implicitly has type 'any' in some locations where its type cannot be determined.
[failure] 52-52:
Variable 'mockRuntime' implicitly has type 'any' in some locations where its type cannot be determined.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/WarpApp.delegation.test.js` around lines 52 - 65, Declare
explicit types for mockRuntime and mockCore (e.g., let mockRuntime:
ReturnType<typeof createMockRuntime> and let mockCore: ReturnType<typeof
createMockCore> or the specific interfaces) so they are not implicit any; when
constructing app = new WarpApp(mockCore) assert the mock core satisfies WarpCore
using a type assertion (e.g., mockCore as unknown as WarpCore) to satisfy the
checker; and avoid assigning to the private _runtime directly by casting app to
any or using (app as unknown as { _runtime?: () => typeof mockRuntime
})._runtime = () => mockRuntime (or provide a public test-only setter) and
similarly assert core() overrides via a type-safe cast (e.g., (app as any).core
= () => mockCore) so the test compiles without TypeScript errors while keeping
identities createMockRuntime, createMockCore, WarpApp, _runtime, core, and
WarpCore as references.
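The cast-through-`any` override can be sketched standalone. `WarpApp` below is a minimal stand-in, not the project's class; at runtime the JSDoc cast is a plain property assignment that shadows the prototype method, and the annotation exists only to satisfy the checker about the private member.

```javascript
// Minimal stand-in for the project's WarpApp.
class WarpApp {
  constructor(core) {
    this._core = core;
  }
  _runtime() {
    return this._core;
  }
  ping() {
    return this._runtime().ping();
  }
}

const mockRuntime = { ping: () => 'pong' };
const app = new WarpApp({ ping: () => 'real' });

// At runtime this is an ordinary assignment; the JSDoc cast only exists
// so checked JS accepts writing to the private `_runtime` member.
/** @type {any} */ (app)._runtime = () => mockRuntime;

console.log(app.ping()); // 'pong': the override took effect
```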
🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)
72-83: New coverage gate looks good. The job correctly sets up Node.js 22, installs dependencies, and runs `test:coverage:ci`, which enforces the 97.71% line coverage threshold via Vitest without allowing auto-updates. This aligns well with the ratchet mechanism defined in `scripts/coverage-ratchet.js` and `vitest.config.js`. One optional improvement: consider adding `needs: type-firewall` so the coverage job doesn't consume runner time if static analysis already failed.

```diff
 coverage-threshold:
   runs-on: ubuntu-latest
+  needs: type-firewall
   steps:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 72 - 83, Add a dependency so the coverage job doesn't run if static analysis fails: update the CI job named coverage-threshold to include needs: type-firewall (i.e., add the needs key referencing the existing type-firewall job) so the coverage-threshold job will be skipped when the type-firewall job fails.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 72-83: Add a dependency so the coverage job doesn't run if static
analysis fails: update the CI job named coverage-threshold to include needs:
type-firewall (i.e., add the needs key referencing the existing type-firewall
job) so the coverage-threshold job will be skipped when the type-firewall job
fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 01e7ab69-7ed8-48f7-b452-faaa88dfb8bb
📒 Files selected for processing (2)
- `.github/workflows/ci.yml`
- `package.json`
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
Release Preflight
If you tag this commit as |
🧹 Nitpick comments (2)
docs/method/backlog/bad-code/PROTO_js-test-typecheck-drift.md (1)
18-21: Optional wording polish for bullet cadence. Consider varying the opening words for these bullets to reduce repetition and improve scannability.

Proposed wording tweak

```diff
-- It blurs source-type safety with test-fixture polish.
-- It encourages fake fixes like `ts-ignore` or widening everything back to mush.
-- It blocks PRs even when `src/`, the public consumer surface, lint, and runtime
+- This blurs source-type safety with test-fixture polish.
+- It can encourage fake fixes like `ts-ignore` or widening everything back to mush.
+- It also blocks PRs even when `src/`, the public consumer surface, lint, and runtime
 tests are all green.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/method/backlog/bad-code/PROTO_js-test-typecheck-drift.md` around lines 18 - 21, Reword the three bullets that currently start "It blurs source-type safety with test-fixture polish.", "It encourages fake fixes like `ts-ignore` or widening everything back to mush.", and "It blocks PRs even when `src/`, the public consumer surface, lint, and runtime tests are all green." to vary their openings and improve cadence — e.g., switch to active verbs or different lead words (such as "Blurs...", "Encourages...", "Blocks..." or rephrase to "Creates...", "Promotes...", "Prevents..."), keep the meaning intact, and ensure each bullet scans distinctly so readers don't see repetitive "It" leads.

docs/method/backlog/asap/PROTO_conflict-analyzer-pipeline-decomposition.md (1)
26-27: Optional: Consider rephrasing for stylistic variety. Static analysis tools suggest varying the phrase "hard to" for stylistic elevation. However, the current phrasing is clear and direct, making this change entirely optional.

📝 Optional stylistic variation

```diff
-service class. That makes the file hard to test in layers, hard to
-review, and too easy to accidentally couple unrelated phases.
+service class. That makes the file difficult to test in layers, challenging to
+review, and too easy to accidentally couple unrelated phases.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/method/backlog/asap/PROTO_conflict-analyzer-pipeline-decomposition.md` around lines 26 - 27, The sentence repeats "hard to" three times ("hard to test in layers, hard to review, and too easy to accidentally couple unrelated phases"); rewrite it for stylistic variety and clarity by combining and varying phrasing—e.g., "This makes the file difficult to test in layers, challenging to review, and prone to accidental coupling of unrelated phases"—so keep meaning but avoid repeated "hard to" while preserving the original critique.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@docs/method/backlog/asap/PROTO_conflict-analyzer-pipeline-decomposition.md`:
- Around line 26-27: The sentence repeats "hard to" three times ("hard to test
in layers, hard to review, and too easy to accidentally couple unrelated
phases"); rewrite it for stylistic variety and clarity by combining and varying
phrasing—e.g., "This makes the file difficult to test in layers, challenging to
review, and prone to accidental coupling of unrelated phases"—so keep meaning
but avoid repeated "hard to" while preserving the original critique.
In `@docs/method/backlog/bad-code/PROTO_js-test-typecheck-drift.md`:
- Around line 18-21: Reword the three bullets that currently start "It blurs
source-type safety with test-fixture polish.", "It encourages fake fixes like
`ts-ignore` or widening everything back to mush.", and "It blocks PRs even when
`src/`, the public consumer surface, lint, and runtime tests are all green." to
vary their openings and improve cadence — e.g., switch to active verbs or
different lead words (such as "Blurs...", "Encourages...", "Blocks..." or
rephrase to "Creates...", "Promotes...", "Prevents..."), keep the meaning
intact, and ensure each bullet scans distinctly so readers don’t see repetitive
"It" leads.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4b0aeebc-c015-4bd9-a595-90a1e5905653
📒 Files selected for processing (8)
- `.github/workflows/ci.yml`
- `.github/workflows/release-pr.yml`
- `docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md`
- `docs/method/backlog/asap/PROTO_conflict-analyzer-pipeline-decomposition.md`
- `docs/method/backlog/bad-code/PROTO_js-test-typecheck-drift.md`
- `docs/method/backlog/bad-code/PROTO_strand-typedef-corridor.md`
- `test/unit/domain/services/controllers/MaterializeController.test.js`
- `test/unit/domain/services/strand/StrandService.test.js`
✅ Files skipped from review due to trivial changes (4)
- .github/workflows/release-pr.yml
- docs/method/backlog/asap/DX_timeoutms-missing-from-type-surface.md
- docs/method/backlog/bad-code/PROTO_strand-typedef-corridor.md
- test/unit/domain/services/controllers/MaterializeController.test.js
What changed
This closes cycle 0011 by splitting `StrandService` into explicit collaborators and documenting the result:

- `StrandDescriptorStore`
- `StrandMaterializer`
- `StrandPatchService`
- `StrandIntentService`
- `strandShared.js`
- `strandTypes.js`
- `StrandService`

Why

`StrandService` had become a god object mixing descriptor persistence, materialization, patch commit flow, and intent/tick orchestration. This change reduces that responsibility pile while preserving the existing behavior behind the coverage harness from cycle 0010.

Impact

- `StrandService` is now a materially thinner facade
- `main`

Validation

- `npm run lint`
- `npm run typecheck:src`
- `npm run test:coverage`
- 6484 tests passing
- 97.71% line coverage

Summary by CodeRabbit
Documentation
Bug Fixes
Tests
Chores