Distributed Transaction Routing (DTR) #517
base: testnet
Conversation
+ fix relay_tx nodecall payload parameters
+ add signature verification to relay_tx handler
+ add node identification to relay_tx handler
+ remove transaction object from relay_tx payload
+ rename relay service to DTR manager
+ use Promise.allSettled to dispatch tx relays
+ return validators from isValidatorForNextBlock helper
+ add signing algorithm checks for relayer node
Important

Review skipped: Draft detected. Please check the settings in the CodeRabbit UI or the repository's CodeRabbit configuration file. You can disable this status message in the CodeRabbit configuration.

Walkthrough

This PR implements Minimal Deterministic Transaction Relay (DTR), a feature enabling non-validator nodes to relay transactions to validators in production mode. It introduces validator detection, a background retry service, relay endpoint handling, relay transaction reception logic, and supporting state management across the node's networking and consensus layers.
Sequence Diagram

```mermaid
sequenceDiagram
participant Client
participant Node as Non-Validator Node
participant Val as Validator Node
participant Mempool
participant DTR as DTRManager<br/>(Retry Service)
Client->>Node: Submit Transaction
Node->>Node: Check if validator?
alt Is Validator
Node->>Mempool: Add to local mempool
else Is Not Validator
Node->>Val: Relay transaction to validators
par Relay to Multiple Validators
Node->>Val: Attempt relay (random order)
Val->>Val: Validate signature & coherence
alt Validation Success
Val->>Mempool: Add to validator mempool
Val-->>Node: 200 OK
else Validation Fails
Val-->>Node: Error response
end
end
opt All relays fail
Node->>Node: Cache ValidityData locally
end
end
Note over DTR: Periodic Background (every 10s)
loop Every 10 seconds (PROD only)
DTR->>Mempool: Scan for transactions
DTR->>DTR: Recalculate validators<br/>(if block changed)
DTR->>Val: Retry relay with updated<br/>validator set
alt Relay Success
DTR->>Mempool: Remove transaction
DTR->>DTR: Clear cache entry
else Retry Limit Met
DTR->>Mempool: Remove after 10 attempts
end
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Areas requiring extra attention:
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.gitignore (1)
111-118: Remove duplicate gitignore patterns. Lines 111–113 duplicate patterns already present in lines 114–115 and 117. Consolidate these to reduce clutter and improve maintainability.
Apply this diff to remove the duplicates:
```diff
 # Output files
 output/*
 .env
-CLAUDE.md
-data/l2ps/example/config.json
-data/l2ps/example/iv.key
 data/l2ps/example/config.json
 data/l2ps/example/iv.key
 data/l2ps/*
 CLAUDE.md
 GEMINI.md
```
🧹 Nitpick comments (10)
src/libs/blockchain/mempool_v2.ts (1)
213-226: Use the project's logging utility for consistency. The method uses `console.log` for all logging, while the rest of the file and codebase uses the `log` utility (imported on line 11). For consistency and to benefit from log file persistence:

```diff
 static async removeTransaction(txHash: string): Promise<void> {
     try {
         const result = await this.repo.delete({ hash: txHash })
-        if (result.affected > 0) {
-            console.log(`[Mempool] Removed transaction ${txHash} (DTR relay success)`)
+        if (result.affected && result.affected > 0) {
+            log.info(`[Mempool] Removed transaction ${txHash} (DTR relay success)`)
         } else {
-            console.log(`[Mempool] Transaction ${txHash} not found for removal`)
+            log.info(`[Mempool] Transaction ${txHash} not found for removal`)
         }
     } catch (error) {
-        console.log(`[Mempool] Error removing transaction ${txHash}:`, error)
+        log.error(`[Mempool] Error removing transaction ${txHash}: ${error}`)
         throw error
     }
 }
```

src/utilities/sharedState.ts (1)
57-59: Consider adding cache eviction to prevent unbounded memory growth. The `validityDataCache` Map can grow indefinitely if transactions fail to relay or are never cleaned up. For long-running production nodes, this could lead to memory issues. Consider adding a max size limit or TTL-based eviction, or ensure the DTRManager properly cleans up entries after successful relay or max retry attempts.
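As an illustration of the eviction idea only (not part of the PR), a small size- and age-bounded wrapper around a `Map` could look like the sketch below; the `BoundedCache` class, its limits, and the simplified `ValidityData` alias are hypothetical.

```typescript
// Sketch only: a size- and age-bounded cache. The BoundedCache class, maxEntries
// and maxAgeMs values are hypothetical; ValidityData stands in for the SDK type.
type ValidityData = unknown

class BoundedCache {
    private readonly entries = new Map<string, { value: ValidityData; insertedAt: number }>()

    constructor(
        private readonly maxEntries = 1000,
        private readonly maxAgeMs = 10 * 60 * 1000, // 10 minutes
    ) {}

    set(txHash: string, value: ValidityData): void {
        // Evict the oldest entry once the size limit is reached (Map preserves insertion order)
        if (this.entries.size >= this.maxEntries) {
            const oldestKey = this.entries.keys().next().value
            if (oldestKey !== undefined) this.entries.delete(oldestKey)
        }
        this.entries.set(txHash, { value, insertedAt: Date.now() })
    }

    get(txHash: string): ValidityData | undefined {
        const entry = this.entries.get(txHash)
        if (!entry) return undefined
        // Drop stale entries lazily on read
        if (Date.now() - entry.insertedAt > this.maxAgeMs) {
            this.entries.delete(txHash)
            return undefined
        }
        return entry.value
    }

    delete(txHash: string): void {
        this.entries.delete(txHash)
    }
}
```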
src/index.ts (1)
388-403: Consider consolidating shutdown logic and cleaning up other services. The shutdown handlers only stop the DTR service, but other services (MCP server, signaling server) are also started and may benefit from graceful shutdown.

```diff
+function gracefulShutdown(signal: string) {
+    console.log(`[MAIN] Received ${signal}, shutting down gracefully...`)
+    if (getSharedState.PROD) {
+        DTRManager.getInstance().stop()
+    }
+    // Consider also: indexState.mcpServer?.stop(), signaling server cleanup
+    process.exit(0)
+}
+
-process.on("SIGINT", () => {
-    console.log("[DTR] Received SIGINT, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-    process.exit(0)
-})
-
-process.on("SIGTERM", () => {
-    console.log("[DTR] Received SIGTERM, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-    process.exit(0)
-})
+process.on("SIGINT", () => gracefulShutdown("SIGINT"))
+process.on("SIGTERM", () => gracefulShutdown("SIGTERM"))
```

src/libs/consensus/v2/routines/isValidator.ts (1)
6-19: Consider adding error handling for robustness. The function has no try-catch, so any error from `getCommonValidatorSeed()` or `getShard()` will propagate up. Depending on how callers handle this, it may cause issues in the DTR retry loop.

```diff
 export default async function isValidatorForNextBlock(): Promise<{
     isValidator: boolean
     validators: Peer[]
 }> {
+    try {
         const { commonValidatorSeed } = await getCommonValidatorSeed()
         const validators = await getShard(commonValidatorSeed)
         return {
             isValidator: validators.some(
                 peer => peer.identity === getSharedState.publicKeyHex,
             ),
             validators,
         }
+    } catch (error) {
+        console.error("[isValidatorForNextBlock] Error:", error)
+        return { isValidator: false, validators: [] }
+    }
 }
```

Note: This function's correctness depends on fixing the early-return issue in `getShard.ts` (see related comment).
src/libs/network/manageNodeCall.ts (1)

470-474: Add input validation for RELAY_TX before delegating. The handler directly accesses `data.validityData` without checking if `data` exists or has the expected structure. While `receiveRelayedTransaction` may handle invalid input, a defensive check at the boundary is good practice.

```diff
 // REVIEW DTR: Handle relayed transactions from non-validator nodes
 case "RELAY_TX":
+    if (!data?.validityData) {
+        response.result = 400
+        response.response = "Missing validityData in RELAY_TX request"
+        break
+    }
     return await DTRManager.receiveRelayedTransaction(
         data.validityData as ValidityData,
     )
```
src/libs/network/endpointHandlers.ts (2)

432-434: Use `toSorted()` to avoid mutating the original array. The `.sort()` method mutates the array in place. While the `validators` array comes from `isValidatorForNextBlock()`, using `toSorted()` (or spreading first) is safer and more explicit about intent.

```diff
-const availableValidators = validators.sort(
-    () => Math.random() - 0.5,
-) // Random order for load balancing
+const availableValidators = [...validators].sort(
+    () => Math.random() - 0.5,
+) // Random order for load balancing
```
313-313: Track TODO comments for proper typing. These TODO comments indicate deferred work for proper type definitions once l2ps is implemented. Consider creating a tracking issue.
Would you like me to open an issue to track these typing improvements for when l2ps is implemented?
Also applies to: 558-559
dtr_implementation/README.md (1)
51-65: Add language specifier to fenced code block. The code block illustrating resource utilization lacks a language specifier, which prevents proper rendering/highlighting.

````diff
-```
+```text
 Traditional Demos Network:
 ├── Validator Node A: Full Mempool (1000 transactions)
````

src/libs/network/dtr/relayRetryService.ts (2)
35-38: Mark readonly properties appropriately. Per static analysis, `retryAttempts` is never reassigned and `validityDataCache` is a public static that should be readonly to prevent accidental reassignment.

```diff
-private retryAttempts = new Map<string, number>() // txHash -> attempt count
+private readonly retryAttempts = new Map<string, number>() // txHash -> attempt count
 private readonly maxRetryAttempts = 10
 private readonly retryIntervalMs = 10000 // 10 seconds
-public static validityDataCache = new Map<string, ValidityData>()
+public static readonly validityDataCache = new Map<string, ValidityData>()
```
143-145: Improve error logging to avoid `[object Object]` output. Concatenating error objects with strings produces unhelpful output. Use `error.message` or structured logging.

```diff
 } catch (error) {
-    log.error("[DTR RetryService] Error processing mempool: " + error)
+    log.error("[DTR RetryService] Error processing mempool: " + (error instanceof Error ? error.message : String(error)))
 }
```

Apply similar fix to lines 178-181 and 544.
Also applies to: 177-181, 543-545
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (14)
- `.gitignore` (1 hunks)
- `dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md` (1 hunks)
- `dtr_implementation/README.md` (1 hunks)
- `dtr_implementation/validator_status_minimal.md` (1 hunks)
- `src/index.ts` (2 hunks)
- `src/libs/blockchain/mempool_v2.ts` (1 hunks)
- `src/libs/consensus/v2/PoRBFT.ts` (0 hunks)
- `src/libs/consensus/v2/routines/getShard.ts` (1 hunks)
- `src/libs/consensus/v2/routines/isValidator.ts` (1 hunks)
- `src/libs/consensus/v2/routines/mergeMempools.ts` (0 hunks)
- `src/libs/network/dtr/relayRetryService.ts` (1 hunks)
- `src/libs/network/endpointHandlers.ts` (6 hunks)
- `src/libs/network/manageNodeCall.ts` (3 hunks)
- `src/utilities/sharedState.ts` (2 hunks)
💤 Files with no reviewable changes (2)
- src/libs/consensus/v2/PoRBFT.ts
- src/libs/consensus/v2/routines/mergeMempools.ts
🧰 Additional context used
🧬 Code graph analysis (6)
src/libs/blockchain/mempool_v2.ts (1)
src/utilities/logger.ts (1)
error(125-132)
src/libs/consensus/v2/routines/isValidator.ts (4)
src/libs/peer/Peer.ts (1)
Peer(15-346)
src/libs/consensus/v2/routines/getCommonValidatorSeed.ts (1)
getCommonValidatorSeed(58-132)
src/libs/consensus/v2/routines/getShard.ts (1)
getShard(8-58)
src/utilities/sharedState.ts (1)
getSharedState(271-273)
src/libs/network/endpointHandlers.ts (3)
src/utilities/sharedState.ts (1)
getSharedState(271-273)
src/libs/consensus/v2/routines/isValidator.ts (1)
isValidatorForNextBlock(6-19)
src/libs/network/dtr/relayRetryService.ts (1)
DTRManager(31-570)
src/libs/network/dtr/relayRetryService.ts (10)
src/utilities/logger.ts (1)
error(125-132)
src/utilities/sharedState.ts (1)
getSharedState(271-273)
src/libs/consensus/v2/routines/isValidator.ts (1)
isValidatorForNextBlock(6-19)
src/libs/blockchain/mempool_v2.ts (1)
Mempool(18-227)
src/libs/consensus/v2/routines/getCommonValidatorSeed.ts (1)
getCommonValidatorSeed(58-132)
src/libs/consensus/v2/routines/getShard.ts (1)
getShard(8-58)
src/libs/peer/Peer.ts (1)
Peer(15-346)
src/libs/peer/PeerManager.ts (1)
PeerManager(20-423)
src/libs/crypto/hashing.ts (1)
Hashing(15-26)
src/libs/blockchain/transaction.ts (1)
isCoherent(263-273)
src/index.ts (2)
src/utilities/sharedState.ts (1)
getSharedState(271-273)
src/libs/network/dtr/relayRetryService.ts (1)
DTRManager(31-570)
src/libs/network/manageNodeCall.ts (1)
src/libs/network/dtr/relayRetryService.ts (1)
DTRManager(31-570)
🪛 Biome (2.1.2)
src/libs/network/endpointHandlers.ts
[error] 428-429: Unexpected constant condition.
(lint/correctness/noConstantCondition)
🪛 GitHub Check: SonarCloud Code Analysis
src/libs/network/endpointHandlers.ts
[warning] 40-40: Remove this unused import of 'getCommonValidatorSeed'.
[warning] 39-39: Remove this unused import of 'getShard'.
[warning] 313-313: Complete the task associated to this "TODO" comment.
[warning] 427-427: Remove this commented out code.
[warning] 428-428: This always evaluates to truthy. Consider refactoring this code.
[warning] 432-434: Move this array "sort" operation to a separate statement or replace it with "toSorted".
[warning] 424-424: Remove this useless assignment to variable "isValidator".
[warning] 459-459: Complete the task associated to this "TODO" comment.
[warning] 559-559: Complete the task associated to this "TODO" comment.
src/libs/network/dtr/relayRetryService.ts
[warning] 144-144: 'error' will use Object's default stringification format ('[object Object]') when stringified.
[warning] 544-544: 'error' will use Object's default stringification format ('[object Object]') when stringified.
[warning] 180-180: 'error' will use Object's default stringification format ('[object Object]') when stringified.
[warning] 38-38: Make this public static property readonly.
[warning] 35-35: Member 'retryAttempts' is never reassigned; mark it as readonly.
src/libs/network/manageNodeCall.ts
[warning] 1-1: Remove this unused import of 'SigningAlgorithm'.
[warning] 32-32: Remove this unused import of 'ucrypto'.
[warning] 21-21: Remove this unused import of 'isValidatorForNextBlock'.
[warning] 22-22: Remove this unused import of 'TxUtils'.
[warning] 23-23: '@kynesyslabs/demosdk/types' imported multiple times.
[warning] 35-35: Remove this unused import of 'PeerManager'.
[warning] 23-23: Remove this unused import of 'Transaction'.
[warning] 31-31: Remove this unused import of 'hexToUint8Array'.
🪛 LanguageTool
dtr_implementation/README.md
[grammar] ~252-~252: Use a hyphen to join words.
Context: ... relay optimization - Quality-of-service based routing ### **Phase 3: Incentive ...
(QB_NEW_EN_HYPHEN)
🪛 markdownlint-cli2 (0.18.1)
dtr_implementation/README.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (4)
dtr_implementation/README.md (1)
1-273: Documentation looks comprehensive and well-structured. The README provides a clear explanation of DTR's purpose, architecture, security considerations, and implementation details. The flow diagrams and use cases help understand the feature.
src/libs/network/dtr/relayRetryService.ts (2)
31-49: DTRManager singleton with lifecycle management looks well-structured. The singleton pattern, start/stop lifecycle with proper cleanup, and background interval management are implemented correctly.
Also applies to: 55-69, 75-91
360-554: `receiveRelayedTransaction` validation pipeline is thorough. The multi-layer validation (validator check, signing algorithm, known validator, signature, coherence, transaction signature, mempool addition) provides good security guarantees for relayed transactions.
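Purely as an illustrative sketch of the layered-check idea described above (every type and check name below is hypothetical, not the PR's actual `receiveRelayedTransaction` code):

```typescript
// Hypothetical sketch of an ordered validation pipeline with early rejection,
// mirroring the layers listed above. Not the actual implementation.
interface RelayCheckContext {
    isValidatorForNextBlock: boolean
    signingAlgorithmSupported: boolean
    relayerIsKnownValidatorPeer: boolean
    relayerSignatureValid: boolean
    transactionCoherent: boolean
    transactionSignatureValid: boolean
}

function validateRelayedTransaction(ctx: RelayCheckContext): { ok: boolean; reason?: string } {
    // Each layer rejects early with a specific reason, in the same order as the pipeline above
    if (!ctx.isValidatorForNextBlock) return { ok: false, reason: "node is not a validator for the next block" }
    if (!ctx.signingAlgorithmSupported) return { ok: false, reason: "unsupported signing algorithm" }
    if (!ctx.relayerIsKnownValidatorPeer) return { ok: false, reason: "relayer is not a known peer" }
    if (!ctx.relayerSignatureValid) return { ok: false, reason: "invalid relayer signature" }
    if (!ctx.transactionCoherent) return { ok: false, reason: "transaction failed coherence check" }
    if (!ctx.transactionSignatureValid) return { ok: false, reason: "invalid transaction signature" }
    return { ok: true } // only at this point would the transaction be added to the mempool
}
```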
dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md (1)
224-333: Flow diagram is comprehensive and helpful. The ASCII flow diagram clearly illustrates the DTR decision tree and processing paths for both immediate relay and background retry scenarios.
```typescript
export default async function isValidatorForNextBlock(): Promise<boolean> {
    try {
        const { commonValidatorSeed } = await getCommonValidatorSeed()
        const validators = await getShard(commonValidatorSeed)
        const ourIdentity = getSharedState.identity.ed25519.publicKey.toString("hex")
        return validators.some(peer => peer.identity === ourIdentity)
    } catch {
        return false // Conservative fallback
    }
}
```
Documentation signature mismatch with actual implementation.
Same issue as in validator_status_minimal.md - the function signature shown returns Promise<boolean> but the actual implementation returns Promise<{ isValidator: boolean; validators: Peer[] }>.
🤖 Prompt for AI Agents
In dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md around lines 25 to 35, the
documented function signature declares Promise<boolean> but the real
implementation returns Promise<{ isValidator: boolean; validators: Peer[] }>;
update the documentation to match the actual implementation by changing the
signature to return Promise<{ isValidator: boolean; validators: Peer[] }>,
update any example calls and descriptions to reference the isValidator property
and the validators array, and ensure the Peer type is either referenced or
defined in the doc so types are consistent.
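For reference, the corrected snippet the prompt asks for could mirror the implementation shown in the isValidator.ts nitpick above; the import paths below are assumptions based on the code graph section, not the doc's actual wording.

```typescript
// Sketch of the corrected documentation example; mirrors the return shape shown
// in the isValidator.ts nitpick above. Import paths are assumptions taken from
// the code graph section and may differ in the actual doc.
import { Peer } from "src/libs/peer/Peer"
import getCommonValidatorSeed from "src/libs/consensus/v2/routines/getCommonValidatorSeed"
import getShard from "src/libs/consensus/v2/routines/getShard"
import { getSharedState } from "src/utilities/sharedState"

export default async function isValidatorForNextBlock(): Promise<{
    isValidator: boolean
    validators: Peer[]
}> {
    try {
        const { commonValidatorSeed } = await getCommonValidatorSeed()
        const validators = await getShard(commonValidatorSeed)
        return {
            isValidator: validators.some(
                peer => peer.identity === getSharedState.publicKeyHex,
            ),
            validators,
        }
    } catch {
        // Conservative fallback, as suggested in the prompt above
        return { isValidator: false, validators: [] }
    }
}
```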
```markdown
### Total New Files: 2
- `src/libs/consensus/v2/routines/isValidator.ts` (15 lines)
- `src/libs/network/dtr/relayRetryService.ts` (240 lines) - Background retry service

### Total Modified Files: 4
- `src/libs/network/endpointHandlers.ts` (+50 lines) - Enhanced DTR logic with multi-validator retry
- `src/libs/network/manageNodeCall.ts` (+55 lines) - RELAY_TX handler with validation
- `src/libs/blockchain/mempool_v2.ts` (+20 lines) - removeTransaction method
- `src/utilities/sharedState.ts` (+3 lines) - ValidityData cache
- `src/index.ts` (+25 lines) - Service startup and graceful shutdown
```
File count discrepancy: lists 4 modified files but enumerates 5.
The summary says "Total Modified Files: 4" but then lists 5 files (endpointHandlers.ts, manageNodeCall.ts, mempool_v2.ts, sharedState.ts, and index.ts).
```diff
-### Total Modified Files: 4
+### Total Modified Files: 5
```

📝 Committable suggestion
```markdown
### Total New Files: 2
- `src/libs/consensus/v2/routines/isValidator.ts` (15 lines)
- `src/libs/network/dtr/relayRetryService.ts` (240 lines) - Background retry service

### Total Modified Files: 5
- `src/libs/network/endpointHandlers.ts` (+50 lines) - Enhanced DTR logic with multi-validator retry
- `src/libs/network/manageNodeCall.ts` (+55 lines) - RELAY_TX handler with validation
- `src/libs/blockchain/mempool_v2.ts` (+20 lines) - removeTransaction method
- `src/utilities/sharedState.ts` (+3 lines) - ValidityData cache
- `src/index.ts` (+25 lines) - Service startup and graceful shutdown
```
🤖 Prompt for AI Agents
In dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md around lines 129 to 138, the
"Total Modified Files" count is incorrect: it states 4 but lists 5 filenames.
Fix by updating the summary count to match the enumerated files (change "Total
Modified Files: 4" to "Total Modified Files: 5") or remove any file from the
list so the count and list are consistent.
```typescript
export default async function isValidatorForNextBlock(): Promise<boolean> {
    try {
        // Use existing seed generation (unchanged)
        const { commonValidatorSeed } = await getCommonValidatorSeed()

        // Use existing shard selection (unchanged)
        const validators = await getShard(commonValidatorSeed)

        // Use existing identity access (unchanged)
        const ourIdentity = getSharedState.identity.ed25519.publicKey.toString("hex")

        // Simple check if we're in the validator list
        return validators.some(peer => peer.identity === ourIdentity)

    } catch (error) {
        // Conservative fallback - assume we're not validator
        return false
    }
}
```
Documentation does not match actual implementation.
The code example shows isValidatorForNextBlock() returning Promise<boolean>, but based on the relevant code snippet from src/libs/consensus/v2/routines/isValidator.ts (lines 5-18), the actual implementation returns Promise<{ isValidator: boolean; validators: Peer[] }>.
Update the documentation to reflect the actual signature:
```typescript
export default async function isValidatorForNextBlock(): Promise<{
    isValidator: boolean
    validators: Peer[]
}> {
    // ...implementation
}
```

🤖 Prompt for AI Agents
In dtr_implementation/validator_status_minimal.md around lines 20 to 38, the
documented function signature and return value are incorrect; update the example
to match the actual implementation which returns an object with both isValidator
and validators. Change the Promise<boolean> signature to Promise<{ isValidator:
boolean; validators: Peer[] }>, return an object { isValidator:
validators.some(...), validators } instead of a boolean, and update the
catch/fallback to return { isValidator: false, validators: [] } (or an
appropriate empty array) so the documented example matches
src/libs/consensus/v2/routines/isValidator.ts.
src/index.ts
Outdated
```typescript
// mainLoop() // Is an async function so running without waiting send that to the background

// Start DTR relay retry service after background loop initialization
// The service will wait for syncStatus to be true before actually processing
if (getSharedState.PROD) {
    console.log(
        "[DTR] Initializing relay retry service (will start after sync)",
    )
    // Service will check syncStatus internally before processing
    DTRManager.getInstance().start()
}
```
Critical: mainLoop() is commented out, disabling core node functionality.
Line 374 shows mainLoop() commented out with //. This disables the main background loop which is essential for the node's core operations (sync, consensus, peer management, etc.). The DTR service alone cannot substitute for the main loop.
If this is intentional for isolated DTR testing, ensure it's reverted before merging:
```diff
 term.yellow("[MAIN] ✅ Starting the background loop\n")
 // ANCHOR Starting the main loop
-// mainLoop() // Is an async function so running without waiting send that to the background
+mainLoop() // Is an async function so running without waiting sends that to the background
 // Start DTR relay retry service after background loop initialization
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/index.ts around lines 374 to 384, the call to mainLoop() is commented out
which disables the node's primary background loop; uncomment and invoke
mainLoop() so the main background processing (sync, consensus, peer mgmt) runs —
start it in the background (call without awaiting) before initializing the DTR
retry service, and attach a rejection handler to the returned Promise (e.g.,
.catch to log errors). If the omission was intentional for isolated testing,
gate the omission behind a clear flag so production merges always call
mainLoop().
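A minimal sketch of the pattern the prompt describes (fire-and-forget with a rejection handler), assuming `mainLoop()` returns a Promise and that `getSharedState` and `DTRManager` are imported as elsewhere in `src/index.ts`:

```typescript
// Sketch only: start the main loop in the background with a rejection handler,
// then initialize the DTR retry service. Assumes mainLoop() returns a Promise.
mainLoop().catch(error => {
    console.error("[MAIN] mainLoop crashed:", error)
})

if (getSharedState.PROD) {
    console.log("[DTR] Initializing relay retry service (will start after sync)")
    DTRManager.getInstance().start()
}
```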
```typescript
return allPeers
const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
```
Critical: Early return bypasses deterministic shard selection, leaving dead code.
The return allPeers on line 11 makes all subsequent code unreachable, including:
- Filtering for synced peers
- Deterministic validator selection using the seed
- Shard size limiting
- Security warnings for small shards
This breaks the validator selection mechanism that DTR relies on. isValidatorForNextBlock() will now incorrectly identify all online peers as validators instead of the deterministically selected shard.
If this is intentional for testing, it should be reverted before merging to testnet. Otherwise, remove the early return:
```diff
 const allPeers = await PeerManager.getInstance().getOnlinePeers()
-return allPeers
 const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
```

📝 Committable suggestion
```typescript
const allPeers = await PeerManager.getInstance().getOnlinePeers()
const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
```
🤖 Prompt for AI Agents
In src/libs/consensus/v2/routines/getShard.ts around lines 11 to 12, there is an
unintended early return ("return allPeers") that makes the subsequent
deterministic shard-selection logic (filtering for synced peers, using the seed
to pick validators, enforcing shard size limits, and emitting security warnings
for small shards) unreachable; remove the early return so the function proceeds
to: filter allPeers to only online and synced peers, apply the deterministic
selection using the provided seed, trim the selected list to the configured
shard size, and keep the existing small-shard warning logic intact (restore any
removed variables/branches necessary for deterministic selection and
validation).
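Purely as an illustration of the deterministic selection the comment describes (not the repository's actual `getShard.ts` logic), a seeded selection could look like the sketch below; the peer shape, `SHARD_SIZE` constant, and the hash-based scoring are assumptions.

```typescript
import { createHash } from "node:crypto"

// Hypothetical sketch of deterministic shard selection from a seed.
// Peer shape, SHARD_SIZE and the scoring scheme are illustrative assumptions.
interface PeerLike {
    identity: string
    status: { online: boolean }
    sync: { status: boolean }
}

const SHARD_SIZE = 4

export function selectShard(seed: string, allPeers: PeerLike[]): PeerLike[] {
    // 1. Only consider online, synced peers
    const eligible = allPeers.filter(p => p.status.online && p.sync.status)

    // 2. Score each peer deterministically from (seed, identity)
    const scored = eligible.map(peer => ({
        peer,
        score: createHash("sha256").update(seed + peer.identity).digest("hex"),
    }))

    // 3. Sort by score and take the first SHARD_SIZE peers
    scored.sort((a, b) => a.score.localeCompare(b.score))
    const shard = scored.slice(0, SHARD_SIZE).map(s => s.peer)

    // 4. Warn when the shard is small (weaker security margin)
    if (shard.length < SHARD_SIZE) {
        console.warn(`[getShard] Shard smaller than ${SHARD_SIZE}: ${shard.length} peers`)
    }
    return shard
}
```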
```typescript
import getShard from "src/libs/consensus/v2/routines/getShard"
import getCommonValidatorSeed from "src/libs/consensus/v2/routines/getCommonValidatorSeed"
```
🛠️ Refactor suggestion | 🟠 Major
Remove unused imports.
The imports getShard and getCommonValidatorSeed are not used in this file. The DTR logic uses isValidatorForNextBlock() which internally calls these functions and returns both the validator status and the validators list.
```diff
-import getShard from "src/libs/consensus/v2/routines/getShard"
-import getCommonValidatorSeed from "src/libs/consensus/v2/routines/getCommonValidatorSeed"
```
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 40-40: Remove this unused import of 'getCommonValidatorSeed'.
[warning] 39-39: Remove this unused import of 'getShard'.
🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 39 to 40, the imports
getShard and getCommonValidatorSeed are unused; remove those two import
statements (and any trailing commas or leftover spacing) so only actually used
imports remain, ensure there are no side-effect dependencies elsewhere in the
file, and run the linter/TypeScript build to confirm no remaining references.
src/libs/network/endpointHandlers.ts
Outdated
```typescript
const { isValidator, validators } =
    await isValidatorForNextBlock()

// if (!isValidator) {
if (true) {
```
Critical: Debug code left in - if (true) bypasses validator check.
The isValidator variable is destructured but never used because line 428 uses if (true) instead of if (!isValidator). This causes all nodes (including validators) to attempt relaying, which defeats the purpose of DTR and could cause infinite relay loops between validators.
```diff
 const { isValidator, validators } =
     await isValidatorForNextBlock()
-// if (!isValidator) {
-if (true) {
+if (!isValidator) {
     log.debug(
         "[DTR] Non-validator node: attempting relay to all validators",
     )
```

📝 Committable suggestion
```typescript
const { isValidator, validators } =
    await isValidatorForNextBlock()

if (!isValidator) {
    log.debug(
        "[DTR] Non-validator node: attempting relay to all validators",
    )
```
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 427-427: Remove this commented out code.
[warning] 428-428: This always evaluates to truthy. Consider refactoring this code.
[warning] 424-424: Remove this useless assignment to variable "isValidator".
🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 424 to 428, the debug
placeholder `if (true)` bypasses the validator check and ignores the
destructured isValidator variable; change the condition back to use the actual
check (e.g., `if (!isValidator)`) so only non-validator nodes proceed to relay,
ensure isValidator is defined/handled (avoid undefined) and remove the debug
shortcut, and keep validators available if needed for further logic to prevent
validators from attempting to relay and creating loops.
```typescript
for (const result of results) {
    if (result.status === "fulfilled") {
        const response = result.value
        log.debug("response: " + JSON.stringify(response))

        if (response.result == 200) {
            continue
        }

        // TODO: Handle response codes individually
        DTRManager.validityDataCache.set(
            response.extra.peer,
            validatedData,
        )
    }
}
}
```
🧩 Analysis chain
🏁 Script executed:
`cat -n src/libs/network/endpointHandlers.ts | sed -n '440,480p'`

Repository: kynesyslabs/node
Length of output: 2027
🏁 Script executed:
`cat -n src/libs/network/endpointHandlers.ts | sed -n '420,470p'`

Repository: kynesyslabs/node
Length of output: 2564
🏁 Script executed:
rg -n "DTR.*relay\|relay.*mempool\|addTransaction" -B 3 -A 3 src/libs/network/endpointHandlers.ts | head -60Repository: kynesyslabs/node
Length of output: 42
🏁 Script executed:
`cat -n src/libs/network/endpointHandlers.ts | sed -n '450,490p'`

Repository: kynesyslabs/node
Length of output: 1953
DTR relay success does not prevent mempool addition—implement early return when relay succeeds.
The continue statement at line 456 only skips the cache.set operation for successful relays (status 200). After the relay loop completes, the code unconditionally proceeds to mempool addition at line 470, regardless of relay success. For the DTR goal of reducing non-validator mempool storage, a successful relay should skip local mempool addition entirely. Add logic to exit the DTR block early or return from the handler when at least one relay succeeds, preventing the transaction from being added to the local mempool.
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 459-459: Complete the task associated to this "TODO" comment.
🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 450 to 466, the loop uses
"continue" when a relay returns result 200 but does not prevent the handler from
proceeding to mempool addition afterward; change the control flow so that if any
relay response is fulfilled and response.result === 200 you immediately exit the
DTR handling path (e.g., return from the handler or early-return out of the DTR
block) to prevent adding the transaction to the local mempool, otherwise proceed
with the existing cache.set and follow-up logic; ensure any necessary cleanup or
response is preserved when returning early.
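As a sketch of the early-exit idea only (the response shape and helper name are assumptions based on the snippet quoted above, not the actual handler):

```typescript
// Sketch only: decide whether the local node should still add the transaction to its
// mempool after dispatching relays. Assumes Promise.allSettled results whose values
// carry a numeric `result` code, as in the quoted loop above.
interface RelayResponse {
    result: number
    extra?: { peer?: string }
}

function anyRelaySucceeded(results: PromiseSettledResult<RelayResponse>[]): boolean {
    return results.some(
        r => r.status === "fulfilled" && r.value.result === 200,
    )
}

// Illustrative usage inside the handler:
// const results = await Promise.allSettled(relayPromises)
// if (anyRelaySucceeded(results)) {
//     return relayAcceptedResponse // hypothetical: respond without touching the local mempool
// }
// // otherwise fall through to caching ValidityData and/or local handling
```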
```typescript
DTRManager.validityDataCache.set(
    response.extra.peer,
    validatedData,
)
```
Bug: ValidityData is cached with wrong key (peer identity instead of txHash).
The cache key should be the transaction hash, not the peer identity. Using response.extra.peer as the key means:
- Only one transaction can be cached per peer
- The retry service at line 275 looks up by `txHash`, which won't find this data
```diff
 // TODO: Handle response codes individually
 DTRManager.validityDataCache.set(
-    response.extra.peer,
+    validatedData.data.transaction.hash,
     validatedData,
 )
```

📝 Committable suggestion
```typescript
DTRManager.validityDataCache.set(
    validatedData.data.transaction.hash,
    validatedData,
)
```
🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 460 to 463, the code caches
ValidityData using response.extra.peer but it should use the transaction hash;
change the cache key to the txHash (e.g., response.extra.txHash or
validatedData.txHash) instead of response.extra.peer, and add a guard to ensure
txHash is defined before calling DTRManager.validityDataCache.set (log or bail
if missing) so the retry service lookup by txHash will succeed.
```diff
@@ -1,4 +1,4 @@
-import { RPCResponse } from "@kynesyslabs/demosdk/types"
+import { RPCResponse, SigningAlgorithm } from "@kynesyslabs/demosdk/types"
```
🛠️ Refactor suggestion | 🟠 Major
Remove unused imports flagged by static analysis.
Multiple imports are unused in this file, as flagged by SonarCloud:
- `SigningAlgorithm` (line 1)
- `isValidatorForNextBlock` (line 21)
- `TxUtils` (line 22)
- `Transaction` (line 23)
- `hexToUint8Array` (line 31)
- `ucrypto` (line 32)
- `PeerManager` (line 35)
Additionally, @kynesyslabs/demosdk/types is imported twice (lines 1 and 23).
```diff
-import { RPCResponse, SigningAlgorithm } from "@kynesyslabs/demosdk/types"
+import { RPCResponse, ValidityData } from "@kynesyslabs/demosdk/types"
 import { emptyResponse } from "./server_rpc"
 ...
-import isValidatorForNextBlock from "../consensus/v2/routines/isValidator"
-import TxUtils from "../blockchain/transaction"
-import { Transaction, ValidityData } from "@kynesyslabs/demosdk/types"
 import { Twitter } from "../identity/tools/twitter"
 ...
-import {
-    hexToUint8Array,
-    ucrypto,
-    uint8ArrayToHex,
-} from "@kynesyslabs/demosdk/encryption"
-import { PeerManager } from "../peer"
+import { uint8ArrayToHex } from "@kynesyslabs/demosdk/encryption"
 import { DTRManager } from "./dtr/relayRetryService"
```

Also applies to: 21-23, 30-36
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 1-1: Remove this unused import of 'SigningAlgorithm'.
🤖 Prompt for AI Agents
In src/libs/network/manageNodeCall.ts around lines 1 and 21-36, several imports
are unused and one module is imported twice; remove the unused imports
SigningAlgorithm (line 1), isValidatorForNextBlock (line 21), TxUtils (line 22),
Transaction (line 23 — note duplicate import from @kynesyslabs/demosdk/types),
hexToUint8Array (line 31), ucrypto (line 32), and PeerManager (line 35), and
collapse the duplicate @kynesyslabs/demosdk/types import so only RPCResponse
remains imported; update the import statements accordingly to eliminate
unused/duplicate imports.
@coderabbitai generate docstrings
✅ Actions performed: Initiated docstring generation; will generate only if new commits exist.
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @tcsenpai.

* #517 (comment)

The following files were modified:

* `src/index.ts`
* `src/libs/consensus/v2/routines/getShard.ts`
* `src/libs/consensus/v2/routines/isValidator.ts`
* `src/libs/network/manageNodeCall.ts`
📝 Add docstrings to `dtr`
+ fix: synced blocks having no transactions + update RELAY_TX nodecall to accept a list + add dtr waiter key + refactor relayRetryService to DTR Manager + put back sleep 250ms in requestBlocks while loop (sync.ts) + start waitForBlockThenRelay if tx comes in in consensus loop + release dtr waiter after finalize block +
+ add new getBlockTransactions node call + return null if rawTx is nullish (fromRawTransaction) + use getBlockTransactions to download block transactions in sync.ts + try/catch in for loops when running sync gcr edits and tx inserts + delete relay cache entries after consuming them in rpc node + more logging
+ release dtr relay waiter after consensus + bump shard size back to 4 nodes


Summary by CodeRabbit
New Features
Chores