
Conversation

@cwilvx (Contributor) commented Dec 4, 2025

Summary by CodeRabbit

  • New Features

    • Implemented Distributed Transaction Routing (DTR) system for optimized transaction relay across validators
    • Added background retry service to automatically reprocess and relay pending transactions at intervals
    • Introduced validator-aware transaction routing with deterministic validator selection and load balancing
    • Enhanced transaction relay with caching and multi-attempt retry logic for improved reliability
  • Chores

    • Updated repository ignore patterns for configuration and key files
    • Cleaned up unused imports across consensus modules


tcsenpai and others added 10 commits June 24, 2025 13:29
+ fix relay_tx nodecall payload parameters
+ add signature verification to relay_tx handler
+ add node identification to relay_tx handler
+ remove transaction object from relay_tx payload
+ rename relay service to dtr manager
+ use promise.allSettled to dispatch tx relays
+ return validators from isValidatorForNextBlock helper
+ add signing algorithm checks for relayer node
@coderabbitai bot commented Dec 4, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This PR implements Minimal Deterministic Transaction Relay (DTR), a feature enabling non-validator nodes to relay transactions to validators in production mode. It introduces validator detection, a background retry service, relay endpoint handling, relay transaction reception logic, and supporting state management across the node's networking and consensus layers.
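
A minimal sketch of that submission path in TypeScript may help orient the review. All declared names and types below are illustrative stand-ins, not the node's actual API, and it shows the intended behavior (see the early-return note further down):

```typescript
// Sketch only: stand-in shapes for the DTR decision path described above.
type ValidityData = { data: { transaction: { hash: string } } }
type Peer = { identity: string }
type RelayResult = { result: number }

declare function isValidatorForNextBlock(): Promise<{ isValidator: boolean; validators: Peer[] }>
declare function relayToValidator(peer: Peer, vd: ValidityData): Promise<RelayResult>
declare const mempool: { addTransaction(vd: ValidityData): Promise<void> }
declare const validityDataCache: Map<string, ValidityData>

async function handleIncomingTransaction(vd: ValidityData): Promise<void> {
    const { isValidator, validators } = await isValidatorForNextBlock()
    if (isValidator) {
        await mempool.addTransaction(vd) // validators keep the tx locally
        return
    }
    // Non-validators relay to all validators in parallel, tolerating partial failure
    const results = await Promise.allSettled(
        validators.map(v => relayToValidator(v, vd)),
    )
    const relayed = results.some(
        r => r.status === "fulfilled" && r.value.result === 200,
    )
    if (!relayed) {
        // No validator accepted it yet: cache for the background retry service
        validityDataCache.set(vd.data.transaction.hash, vd)
    }
}
```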

Changes

Cohort / File(s) / Summary:

  • Configuration & Documentation (.gitignore, dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md, dtr_implementation/README.md, dtr_implementation/validator_status_minimal.md): Added gitignore entries for CLAUDE.md, config.json, and iv.key. Added comprehensive DTR design documentation covering architecture, security, phases, and performance metrics.
  • Validator Detection (src/libs/consensus/v2/routines/isValidator.ts): New module exporting isValidatorForNextBlock() to determine whether the current node is in the next block's validator set, and getValidatorsForRelay() to retrieve online, synced validators as relay targets.
  • DTR Relay Service (src/libs/network/dtr/relayRetryService.ts): New DTRManager class implementing a background relay retry service with validator caching, retry state management (max 10 attempts), relay transaction handling, and static relay reception validation including signature and coherence checks.
  • Network Routing & Reception (src/libs/network/endpointHandlers.ts, src/libs/network/manageNodeCall.ts): Extended the endpoint handler with production DTR checks, validator status determination, and relay attempts to all validators. Added a RELAY_TX message case in the node call handler for receiving relayed transactions. Updated the NodeCall interface to include data and muid fields (see the interface sketch after this list).
  • Consensus & Routing (src/libs/consensus/v2/PoRBFT.ts, src/libs/consensus/v2/routines/getShard.ts, src/libs/consensus/v2/routines/mergeMempools.ts): Removed unused imports from PoRBFT and mergeMempools. Modified getShard to return all peers immediately (breaking subsequent filtering logic).
  • Mempool & Shared State (src/libs/blockchain/mempool_v2.ts, src/utilities/sharedState.ts): Added a public removeTransaction() method to Mempool. Extended SharedState with a new public validityDataCache (Map) for caching relay transaction validity data.
  • Node Startup & Lifecycle (src/index.ts): Added conditional DTR service initialization and lifecycle management (start/stop) in production mode, with graceful shutdown hooks for SIGINT and SIGTERM.
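
The NodeCall extension mentioned in the Network Routing & Reception row is small enough to sketch. Only the field names data and muid come from this summary; the surrounding fields and all types are illustrative assumptions:

```typescript
// Hedged sketch: only the addition of `data` and `muid` is from the PR summary;
// the other fields and all types are illustrative assumptions.
interface NodeCall {
    message: string                   // e.g. "RELAY_TX"
    content?: unknown                 // pre-existing payload field (assumed)
    data?: { validityData?: unknown } // new: payload for relayed transactions
    muid?: string                     // new: message unique id for correlation
}
```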

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Node as Non-Validator Node
    participant Val as Validator Node
    participant Mempool
    participant DTR as DTRManager<br/>(Retry Service)

    Client->>Node: Submit Transaction
    Node->>Node: Check if validator?
    alt Is Validator
        Node->>Mempool: Add to local mempool
    else Is Not Validator
        Node->>Val: Relay transaction to validators
        par Relay to Multiple Validators
            Node->>Val: Attempt relay (random order)
            Val->>Val: Validate signature & coherence
            alt Validation Success
                Val->>Mempool: Add to validator mempool
                Val-->>Node: 200 OK
            else Validation Fails
                Val-->>Node: Error response
            end
        end
        
        opt All relays fail
            Node->>Node: Cache ValidityData locally
        end
    end

    Note over DTR: Periodic Background (every 10s)
    loop Every 10 seconds (PROD only)
        DTR->>Mempool: Scan for transactions
        DTR->>DTR: Recalculate validators<br/>(if block changed)
        DTR->>Val: Retry relay with updated<br/>validator set
        alt Relay Success
            DTR->>Mempool: Remove transaction
            DTR->>DTR: Clear cache entry
        else Retry Limit Met
            DTR->>Mempool: Remove after 10 attempts
        end
    end
```
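
A compact sketch of the background loop the diagram describes; all helper names and shapes below are stand-ins, not the DTRManager's actual members:

```typescript
// Hedged sketch of the 10-second retry loop shown in the diagram.
type PendingTx = { hash: string }
declare function getPendingTransactions(): Promise<PendingTx[]>
declare function relayToValidators(tx: PendingTx): Promise<boolean>
declare function removeFromMempool(hash: string): Promise<void>

const retryAttempts = new Map<string, number>()
const MAX_ATTEMPTS = 10

setInterval(async () => {
    try {
        for (const tx of await getPendingTransactions()) {
            const attempts = (retryAttempts.get(tx.hash) ?? 0) + 1
            retryAttempts.set(tx.hash, attempts)

            if (await relayToValidators(tx)) {
                await removeFromMempool(tx.hash) // relay succeeded: clean up
                retryAttempts.delete(tx.hash)
            } else if (attempts >= MAX_ATTEMPTS) {
                await removeFromMempool(tx.hash) // give up after 10 attempts
                retryAttempts.delete(tx.hash)
            }
        }
    } catch (error) {
        console.error("[DTR sketch] retry pass failed:", error)
    }
}, 10_000) // every 10 seconds; the real service also gates on PROD and sync status
```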

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Areas requiring extra attention:

  • getShard.ts modification — Early return statement bypasses all subsequent shard filtering and selection logic, potentially breaking validator set computation across the system
  • Relay security validation — DTRManager relay reception includes signature verification, coherence checks, and validator authentication; these critical security paths require careful auditing
  • NodeCall interface expansion — Adding data and muid fields to public interface may affect existing callers and backward compatibility
  • ValidityData caching lifecycle — Ensure proper cleanup and memory bounds for the new validityDataCache Map in SharedState
  • Production-mode activation — Conditional DTR logic based on PROD flag should be thoroughly tested for both enabled and disabled paths
  • Validator detection logic — Integration with getShard.ts and consensus routines; correctness depends on getShard fix


Suggested labels

Possible security concern, Review effort 5/5

Suggested reviewers

  • cwilvx

Poem

🐰 A whisker-twitching DTR dance,
Where relays hop from node to node,
Validators catch each lucky chance,
While retry magic lightens the load!
Background hops keep mempool trim,
Production's future looks quite bright!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped - CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title 'Distributed Transaction Routing (DTR)' directly corresponds to the main feature introduced across the changeset: a comprehensive DTR implementation with validator relay logic, background retry service, and related infrastructure changes.


@tcsenpai (Contributor) commented Dec 5, 2025

@coderabbitai review

@coderabbitai bot commented Dec 5, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai bot left a comment

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.gitignore (1)

111-118: Remove duplicate gitignore patterns.

Lines 111–113 duplicate patterns already present in lines 114–115 and 117. Consolidate these to reduce clutter and improve maintainability.

Apply this diff to remove the duplicates:

 # Output files
 output/*
 .env
-CLAUDE.md
-data/l2ps/example/config.json
-data/l2ps/example/iv.key
 data/l2ps/example/config.json
 data/l2ps/example/iv.key
 data/l2ps/*
 CLAUDE.md
 GEMINI.md
🧹 Nitpick comments (10)
src/libs/blockchain/mempool_v2.ts (1)

213-226: Use the project's logging utility for consistency.

The method uses console.log for all logging, while the rest of the file and codebase uses the log utility (imported on line 11). For consistency and to benefit from log file persistence:

 static async removeTransaction(txHash: string): Promise<void> {
     try {
         const result = await this.repo.delete({ hash: txHash })
         
-        if (result.affected > 0) {
-            console.log(`[Mempool] Removed transaction ${txHash} (DTR relay success)`)
+        if (result.affected && result.affected > 0) {
+            log.info(`[Mempool] Removed transaction ${txHash} (DTR relay success)`)
         } else {
-            console.log(`[Mempool] Transaction ${txHash} not found for removal`)
+            log.info(`[Mempool] Transaction ${txHash} not found for removal`)
         }
     } catch (error) {
-        console.log(`[Mempool] Error removing transaction ${txHash}:`, error)
+        log.error(`[Mempool] Error removing transaction ${txHash}: ${error}`)
         throw error
     }
 }
src/utilities/sharedState.ts (1)

57-59: Consider adding cache eviction to prevent unbounded memory growth.

The validityDataCache Map can grow indefinitely if transactions fail to relay or are never cleaned up. For long-running production nodes, this could lead to memory issues.

Consider adding a max size limit or TTL-based eviction, or ensure the DTRManager properly cleans up entries after successful relay or max retry attempts.
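
As an illustration of that suggestion (not code from this repository), a small size- and TTL-bounded wrapper could look like this:

```typescript
// Hedged sketch: a size- and TTL-bounded Map for validityDataCache.
// Not part of the PR; illustrates the eviction suggestion above.
class BoundedCache<V> {
    private entries = new Map<string, { value: V; expiresAt: number }>()
    constructor(private maxSize = 1000, private ttlMs = 10 * 60_000) {}

    set(key: string, value: V): void {
        // Evict the oldest entry when full (Map preserves insertion order)
        if (this.entries.size >= this.maxSize) {
            const oldest = this.entries.keys().next().value
            if (oldest !== undefined) this.entries.delete(oldest)
        }
        this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs })
    }

    get(key: string): V | undefined {
        const hit = this.entries.get(key)
        if (!hit) return undefined
        if (Date.now() > hit.expiresAt) {
            this.entries.delete(key) // lazy TTL eviction on read
            return undefined
        }
        return hit.value
    }

    delete(key: string): boolean {
        return this.entries.delete(key)
    }
}
```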

src/index.ts (1)

388-403: Consider consolidating shutdown logic and cleaning up other services.

The shutdown handlers only stop the DTR service, but other services (MCP server, signaling server) are also started and may benefit from graceful shutdown.

+function gracefulShutdown(signal: string) {
+    console.log(`[MAIN] Received ${signal}, shutting down gracefully...`)
+    if (getSharedState.PROD) {
+        DTRManager.getInstance().stop()
+    }
+    // Consider also: indexState.mcpServer?.stop(), signaling server cleanup
+    process.exit(0)
+}
+
-process.on("SIGINT", () => {
-    console.log("[DTR] Received SIGINT, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-    process.exit(0)
-})
-
-process.on("SIGTERM", () => {
-    console.log("[DTR] Received SIGTERM, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-    process.exit(0)
-})
+process.on("SIGINT", () => gracefulShutdown("SIGINT"))
+process.on("SIGTERM", () => gracefulShutdown("SIGTERM"))
src/libs/consensus/v2/routines/isValidator.ts (1)

6-19: Consider adding error handling for robustness.

The function has no try-catch, so any error from getCommonValidatorSeed() or getShard() will propagate up. Depending on how callers handle this, it may cause issues in the DTR retry loop.

 export default async function isValidatorForNextBlock(): Promise<{
     isValidator: boolean
     validators: Peer[]
 }> {
+    try {
         const { commonValidatorSeed } = await getCommonValidatorSeed()
         const validators = await getShard(commonValidatorSeed)

         return {
             isValidator: validators.some(
                 peer => peer.identity === getSharedState.publicKeyHex,
             ),
             validators,
         }
+    } catch (error) {
+        console.error("[isValidatorForNextBlock] Error:", error)
+        return { isValidator: false, validators: [] }
+    }
 }

Note: This function's correctness depends on fixing the early-return issue in getShard.ts (see related comment).

src/libs/network/manageNodeCall.ts (1)

470-474: Add input validation for RELAY_TX before delegating.

The handler directly accesses data.validityData without checking if data exists or has the expected structure. While receiveRelayedTransaction may handle invalid input, a defensive check at the boundary is good practice.

 // REVIEW DTR: Handle relayed transactions from non-validator nodes
 case "RELAY_TX":
+    if (!data?.validityData) {
+        response.result = 400
+        response.response = "Missing validityData in RELAY_TX request"
+        break
+    }
     return await DTRManager.receiveRelayedTransaction(
         data.validityData as ValidityData,
     )
src/libs/network/endpointHandlers.ts (2)

432-434: Use toSorted() to avoid mutating the original array.

The .sort() method mutates the array in place. While the validators array comes from isValidatorForNextBlock(), using toSorted() (or spreading first) is safer and more explicit about intent.

-                const availableValidators = validators.sort(
-                    () => Math.random() - 0.5,
-                ) // Random order for load balancing
+                const availableValidators = [...validators].sort(
+                    () => Math.random() - 0.5,
+                ) // Random order for load balancing
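
As a side note, sort(() => Math.random() - 0.5) is also a statistically biased shuffle. If uniform load balancing matters, a Fisher-Yates shuffle over a copy is a safer pattern; a sketch:

```typescript
// Hedged alternative: unbiased Fisher-Yates shuffle over a copy,
// leaving the caller's array untouched.
function shuffled<T>(items: readonly T[]): T[] {
    const out = [...items]
    for (let i = out.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1))
        ;[out[i], out[j]] = [out[j], out[i]]
    }
    return out
}

// const availableValidators = shuffled(validators)
```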

313-313: Track TODO comments for proper typing.

These TODO comments indicate deferred work for proper type definitions once l2ps is implemented. Consider creating a tracking issue.

Would you like me to open an issue to track these typing improvements for when l2ps is implemented?

Also applies to: 558-559

dtr_implementation/README.md (1)

51-65: Add language specifier to fenced code block.

The code block illustrating resource utilization lacks a language specifier, which prevents proper rendering/highlighting.

-```
+```text
 Traditional Demos Network:
 ├── Validator Node A: Full Mempool (1000 transactions)
src/libs/network/dtr/relayRetryService.ts (2)

35-38: Mark readonly properties appropriately.

Per static analysis, retryAttempts is never reassigned and validityDataCache is a public static that should be readonly to prevent accidental reassignment.

-    private retryAttempts = new Map<string, number>() // txHash -> attempt count
+    private readonly retryAttempts = new Map<string, number>() // txHash -> attempt count
     private readonly maxRetryAttempts = 10
     private readonly retryIntervalMs = 10000 // 10 seconds
-    public static validityDataCache = new Map<string, ValidityData>()
+    public static readonly validityDataCache = new Map<string, ValidityData>()

143-145: Improve error logging to avoid [object Object] output.

Concatenating error objects with strings produces unhelpful output. Use error.message or structured logging.

         } catch (error) {
-            log.error("[DTR RetryService] Error processing mempool: " + error)
+            log.error("[DTR RetryService] Error processing mempool: " + (error instanceof Error ? error.message : String(error)))
         }

Apply similar fix to lines 178-181 and 544.

Also applies to: 177-181, 543-545

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3dde69a and b8f4d11.

📒 Files selected for processing (14)
  • .gitignore (1 hunks)
  • dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md (1 hunks)
  • dtr_implementation/README.md (1 hunks)
  • dtr_implementation/validator_status_minimal.md (1 hunks)
  • src/index.ts (2 hunks)
  • src/libs/blockchain/mempool_v2.ts (1 hunks)
  • src/libs/consensus/v2/PoRBFT.ts (0 hunks)
  • src/libs/consensus/v2/routines/getShard.ts (1 hunks)
  • src/libs/consensus/v2/routines/isValidator.ts (1 hunks)
  • src/libs/consensus/v2/routines/mergeMempools.ts (0 hunks)
  • src/libs/network/dtr/relayRetryService.ts (1 hunks)
  • src/libs/network/endpointHandlers.ts (6 hunks)
  • src/libs/network/manageNodeCall.ts (3 hunks)
  • src/utilities/sharedState.ts (2 hunks)
💤 Files with no reviewable changes (2)
  • src/libs/consensus/v2/PoRBFT.ts
  • src/libs/consensus/v2/routines/mergeMempools.ts
🧰 Additional context used
🧬 Code graph analysis (6)
src/libs/blockchain/mempool_v2.ts (1)
src/utilities/logger.ts (1)
  • error (125-132)
src/libs/consensus/v2/routines/isValidator.ts (4)
src/libs/peer/Peer.ts (1)
  • Peer (15-346)
src/libs/consensus/v2/routines/getCommonValidatorSeed.ts (1)
  • getCommonValidatorSeed (58-132)
src/libs/consensus/v2/routines/getShard.ts (1)
  • getShard (8-58)
src/utilities/sharedState.ts (1)
  • getSharedState (271-273)
src/libs/network/endpointHandlers.ts (3)
src/utilities/sharedState.ts (1)
  • getSharedState (271-273)
src/libs/consensus/v2/routines/isValidator.ts (1)
  • isValidatorForNextBlock (6-19)
src/libs/network/dtr/relayRetryService.ts (1)
  • DTRManager (31-570)
src/libs/network/dtr/relayRetryService.ts (10)
src/utilities/logger.ts (1)
  • error (125-132)
src/utilities/sharedState.ts (1)
  • getSharedState (271-273)
src/libs/consensus/v2/routines/isValidator.ts (1)
  • isValidatorForNextBlock (6-19)
src/libs/blockchain/mempool_v2.ts (1)
  • Mempool (18-227)
src/libs/consensus/v2/routines/getCommonValidatorSeed.ts (1)
  • getCommonValidatorSeed (58-132)
src/libs/consensus/v2/routines/getShard.ts (1)
  • getShard (8-58)
src/libs/peer/Peer.ts (1)
  • Peer (15-346)
src/libs/peer/PeerManager.ts (1)
  • PeerManager (20-423)
src/libs/crypto/hashing.ts (1)
  • Hashing (15-26)
src/libs/blockchain/transaction.ts (1)
  • isCoherent (263-273)
src/index.ts (2)
src/utilities/sharedState.ts (1)
  • getSharedState (271-273)
src/libs/network/dtr/relayRetryService.ts (1)
  • DTRManager (31-570)
src/libs/network/manageNodeCall.ts (1)
src/libs/network/dtr/relayRetryService.ts (1)
  • DTRManager (31-570)
🪛 Biome (2.1.2)
src/libs/network/endpointHandlers.ts

[error] 428-429: Unexpected constant condition.

(lint/correctness/noConstantCondition)

🪛 GitHub Check: SonarCloud Code Analysis
src/libs/network/endpointHandlers.ts

[warning] 40-40: Remove this unused import of 'getCommonValidatorSeed'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370a&open=AZruYDTcduZKUq2z370a&pullRequest=517


[warning] 39-39: Remove this unused import of 'getShard'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370Z&open=AZruYDTcduZKUq2z370Z&pullRequest=517


[warning] 313-313: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370b&open=AZruYDTcduZKUq2z370b&pullRequest=517


[warning] 427-427: Remove this commented out code.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370d&open=AZruYDTcduZKUq2z370d&pullRequest=517


[warning] 428-428: This always evaluates to truthy. Consider refactoring this code.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370e&open=AZruYDTcduZKUq2z370e&pullRequest=517


[warning] 432-434: Move this array "sort" operation to a separate statement or replace it with "toSorted".

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370f&open=AZruYDTcduZKUq2z370f&pullRequest=517


[warning] 424-424: Remove this useless assignment to variable "isValidator".

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370c&open=AZruYDTcduZKUq2z370c&pullRequest=517


[warning] 459-459: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370h&open=AZruYDTcduZKUq2z370h&pullRequest=517


[warning] 559-559: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370i&open=AZruYDTcduZKUq2z370i&pullRequest=517

src/libs/network/dtr/relayRetryService.ts

[warning] 144-144: 'error' will use Object's default stringification format ('[object Object]') when stringified.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDS-duZKUq2z370V&open=AZruYDS-duZKUq2z370V&pullRequest=517


[warning] 544-544: 'error' will use Object's default stringification format ('[object Object]') when stringified.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDS-duZKUq2z370Y&open=AZruYDS-duZKUq2z370Y&pullRequest=517


[warning] 180-180: 'error' will use Object's default stringification format ('[object Object]') when stringified.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDS-duZKUq2z370W&open=AZruYDS-duZKUq2z370W&pullRequest=517


[warning] 38-38: Make this public static property readonly.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDS-duZKUq2z370U&open=AZruYDS-duZKUq2z370U&pullRequest=517


[warning] 35-35: Member 'retryAttempts' is never reassigned; mark it as readonly.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDS-duZKUq2z370T&open=AZruYDS-duZKUq2z370T&pullRequest=517

src/libs/network/manageNodeCall.ts

[warning] 1-1: Remove this unused import of 'SigningAlgorithm'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370L&open=AZruYDQuduZKUq2z370L&pullRequest=517


[warning] 32-32: Remove this unused import of 'ucrypto'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370R&open=AZruYDQuduZKUq2z370R&pullRequest=517


[warning] 21-21: Remove this unused import of 'isValidatorForNextBlock'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370M&open=AZruYDQuduZKUq2z370M&pullRequest=517


[warning] 22-22: Remove this unused import of 'TxUtils'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370N&open=AZruYDQuduZKUq2z370N&pullRequest=517


[warning] 23-23: '@kynesyslabs/demosdk/types' imported multiple times.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370P&open=AZruYDQuduZKUq2z370P&pullRequest=517


[warning] 35-35: Remove this unused import of 'PeerManager'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370S&open=AZruYDQuduZKUq2z370S&pullRequest=517


[warning] 23-23: Remove this unused import of 'Transaction'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370O&open=AZruYDQuduZKUq2z370O&pullRequest=517


[warning] 31-31: Remove this unused import of 'hexToUint8Array'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370Q&open=AZruYDQuduZKUq2z370Q&pullRequest=517

🪛 LanguageTool
dtr_implementation/README.md

[grammar] ~252-~252: Use a hyphen to join words.
Context: ... relay optimization - Quality-of-service based routing ### **Phase 3: Incentive ...

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
dtr_implementation/README.md

51-51: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md

51-51: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (4)
dtr_implementation/README.md (1)

1-273: Documentation looks comprehensive and well-structured.

The README provides a clear explanation of DTR's purpose, architecture, security considerations, and implementation details. The flow diagrams and use cases help understand the feature.

src/libs/network/dtr/relayRetryService.ts (2)

31-49: DTRManager singleton with lifecycle management looks well-structured.

The singleton pattern, start/stop lifecycle with proper cleanup, and background interval management are implemented correctly.

Also applies to: 55-69, 75-91


360-554: receiveRelayedTransaction validation pipeline is thorough.

The multi-layer validation (validator check, signing algorithm, known validator, signature, coherence, transaction signature, mempool addition) provides good security guarantees for relayed transactions.
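
For reference, a hedged outline of that ordering; every helper name below is a stand-in for the node's actual routines, and the status codes are illustrative:

```typescript
// Sketch of the receiveRelayedTransaction validation order described above.
declare function isValidatorForNextBlock(): Promise<{ isValidator: boolean }>
declare function isSupportedSigningAlgorithm(vd: unknown): boolean
declare function isFromKnownValidatorSet(vd: unknown): Promise<boolean>
declare function verifyRelaySignature(vd: unknown): Promise<boolean>
declare function isCoherent(vd: unknown): boolean
declare function verifyTransactionSignature(vd: unknown): Promise<boolean>
declare function addToMempool(vd: unknown): Promise<void>

async function receiveRelayedTransaction(vd: unknown): Promise<number> {
    // Only nodes in the next validator set accept relayed transactions
    if (!(await isValidatorForNextBlock()).isValidator) return 403
    if (!isSupportedSigningAlgorithm(vd)) return 400
    if (!(await isFromKnownValidatorSet(vd))) return 403
    if (!(await verifyRelaySignature(vd))) return 401
    if (!isCoherent(vd)) return 400
    if (!(await verifyTransactionSignature(vd))) return 401
    await addToMempool(vd)
    return 200
}
```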

dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md (1)

224-333: Flow diagram is comprehensive and helpful.

The ASCII flow diagram clearly illustrates the DTR decision tree and processing paths for both immediate relay and background retry scenarios.

Comment on lines +25 to +35
```typescript
export default async function isValidatorForNextBlock(): Promise<boolean> {
    try {
        const { commonValidatorSeed } = await getCommonValidatorSeed()
        const validators = await getShard(commonValidatorSeed)
        const ourIdentity = getSharedState.identity.ed25519.publicKey.toString("hex")
        return validators.some(peer => peer.identity === ourIdentity)
    } catch {
        return false // Conservative fallback
    }
}
```

⚠️ Potential issue | 🟡 Minor

Documentation signature mismatch with actual implementation.

Same issue as in validator_status_minimal.md - the function signature shown returns Promise<boolean> but the actual implementation returns Promise<{ isValidator: boolean; validators: Peer[] }>.

🤖 Prompt for AI Agents
In dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md around lines 25 to 35, the
documented function signature declares Promise<boolean> but the real
implementation returns Promise<{ isValidator: boolean; validators: Peer[] }>;
update the documentation to match the actual implementation by changing the
signature to return Promise<{ isValidator: boolean; validators: Peer[] }>,
update any example calls and descriptions to reference the isValidator property
and the validators array, and ensure the Peer type is either referenced or
defined in the doc so types are consistent.

Comment on lines 129 to 138
### Total New Files: 2
- `src/libs/consensus/v2/routines/isValidator.ts` (15 lines)
- `src/libs/network/dtr/relayRetryService.ts` (240 lines) - Background retry service

### Total Modified Files: 4
- `src/libs/network/endpointHandlers.ts` (+50 lines) - Enhanced DTR logic with multi-validator retry
- `src/libs/network/manageNodeCall.ts` (+55 lines) - RELAY_TX handler with validation
- `src/libs/blockchain/mempool_v2.ts` (+20 lines) - removeTransaction method
- `src/utilities/sharedState.ts` (+3 lines) - ValidityData cache
- `src/index.ts` (+25 lines) - Service startup and graceful shutdown

⚠️ Potential issue | 🟡 Minor

File count discrepancy: lists 4 modified files but enumerates 5.

The summary says "Total Modified Files: 4" but then lists 5 files (endpointHandlers.ts, manageNodeCall.ts, mempool_v2.ts, sharedState.ts, and index.ts).

-### Total Modified Files: 4
+### Total Modified Files: 5
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
### Total New Files: 2
- `src/libs/consensus/v2/routines/isValidator.ts` (15 lines)
- `src/libs/network/dtr/relayRetryService.ts` (240 lines) - Background retry service
### Total Modified Files: 4
- `src/libs/network/endpointHandlers.ts` (+50 lines) - Enhanced DTR logic with multi-validator retry
- `src/libs/network/manageNodeCall.ts` (+55 lines) - RELAY_TX handler with validation
- `src/libs/blockchain/mempool_v2.ts` (+20 lines) - removeTransaction method
- `src/utilities/sharedState.ts` (+3 lines) - ValidityData cache
- `src/index.ts` (+25 lines) - Service startup and graceful shutdown
### Total New Files: 2
- `src/libs/consensus/v2/routines/isValidator.ts` (15 lines)
- `src/libs/network/dtr/relayRetryService.ts` (240 lines) - Background retry service
### Total Modified Files: 5
- `src/libs/network/endpointHandlers.ts` (+50 lines) - Enhanced DTR logic with multi-validator retry
- `src/libs/network/manageNodeCall.ts` (+55 lines) - RELAY_TX handler with validation
- `src/libs/blockchain/mempool_v2.ts` (+20 lines) - removeTransaction method
- `src/utilities/sharedState.ts` (+3 lines) - ValidityData cache
- `src/index.ts` (+25 lines) - Service startup and graceful shutdown
🤖 Prompt for AI Agents
In dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md around lines 129 to 138, the
"Total Modified Files" count is incorrect: it states 4 but lists 5 filenames.
Fix by updating the summary count to match the enumerated files (change "Total
Modified Files: 4" to "Total Modified Files: 5") or remove any file from the
list so the count and list are consistent.

Comment on lines +20 to +38
```typescript
export default async function isValidatorForNextBlock(): Promise<boolean> {
    try {
        // Use existing seed generation (unchanged)
        const { commonValidatorSeed } = await getCommonValidatorSeed()

        // Use existing shard selection (unchanged)
        const validators = await getShard(commonValidatorSeed)

        // Use existing identity access (unchanged)
        const ourIdentity = getSharedState.identity.ed25519.publicKey.toString("hex")

        // Simple check if we're in the validator list
        return validators.some(peer => peer.identity === ourIdentity)

    } catch (error) {
        // Conservative fallback - assume we're not validator
        return false
    }
}
```

⚠️ Potential issue | 🟡 Minor

Documentation does not match actual implementation.

The code example shows isValidatorForNextBlock() returning Promise<boolean>, but based on the relevant code snippet from src/libs/consensus/v2/routines/isValidator.ts (lines 5-18), the actual implementation returns Promise<{ isValidator: boolean; validators: Peer[] }>.

Update the documentation to reflect the actual signature:

```typescript
export default async function isValidatorForNextBlock(): Promise<{
    isValidator: boolean
    validators: Peer[]
}> {
    // ...implementation
}
```
🤖 Prompt for AI Agents
In dtr_implementation/validator_status_minimal.md around lines 20 to 38, the
documented function signature and return value are incorrect; update the example
to match the actual implementation which returns an object with both isValidator
and validators. Change the Promise<boolean> signature to Promise<{ isValidator:
boolean; validators: Peer[] }>, return an object { isValidator:
validators.some(...), validators } instead of a boolean, and update the
catch/fallback to return { isValidator: false, validators: [] } (or an
appropriate empty array) so the documented example matches
src/libs/consensus/v2/routines/isValidator.ts.

src/index.ts Outdated
Comment on lines 374 to 384
```typescript
// mainLoop() // Is an async function so running without waiting send that to the background

// Start DTR relay retry service after background loop initialization
// The service will wait for syncStatus to be true before actually processing
if (getSharedState.PROD) {
    console.log(
        "[DTR] Initializing relay retry service (will start after sync)",
    )
    // Service will check syncStatus internally before processing
    DTRManager.getInstance().start()
}
```

⚠️ Potential issue | 🔴 Critical

Critical: mainLoop() is commented out, disabling core node functionality.

Line 374 shows mainLoop() commented out with //. This disables the main background loop which is essential for the node's core operations (sync, consensus, peer management, etc.). The DTR service alone cannot substitute for the main loop.

If this is intentional for isolated DTR testing, ensure it's reverted before merging:

 term.yellow("[MAIN] ✅ Starting the background loop\n")
 // ANCHOR Starting the main loop
-// mainLoop() // Is an async function so running without waiting send that to the background
+mainLoop() // Is an async function so running without waiting sends that to the background

 // Start DTR relay retry service after background loop initialization

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/index.ts around lines 374 to 384, the call to mainLoop() is commented out
which disables the node's primary background loop; uncomment and invoke
mainLoop() so the main background processing (sync, consensus, peer mgmt) runs —
start it in the background (call without awaiting) before initializing the DTR
retry service, and attach a rejection handler to the returned Promise (e.g.,
.catch to log errors). If the omission was intentional for isolated testing,
gate the omission behind a clear flag so production merges always call
mainLoop().

Comment on lines 11 to 12
```typescript
return allPeers
const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
```

⚠️ Potential issue | 🔴 Critical

Critical: Early return bypasses deterministic shard selection, leaving dead code.

The return allPeers on line 11 makes all subsequent code unreachable, including:

  • Filtering for synced peers
  • Deterministic validator selection using the seed
  • Shard size limiting
  • Security warnings for small shards

This breaks the validator selection mechanism that DTR relies on. isValidatorForNextBlock() will now incorrectly identify all online peers as validators instead of the deterministically selected shard.

If this is intentional for testing, it should be reverted before merging to testnet. Otherwise, remove the early return:

 const allPeers = await PeerManager.getInstance().getOnlinePeers()
-return allPeers
 const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
📝 Committable suggestion


Suggested change
return allPeers
const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
const allPeers = await PeerManager.getInstance().getOnlinePeers()
const peers = allPeers.filter(peer => peer.status.online && peer.sync.status)
🤖 Prompt for AI Agents
In src/libs/consensus/v2/routines/getShard.ts around lines 11 to 12, there is an
unintended early return ("return allPeers") that makes the subsequent
deterministic shard-selection logic (filtering for synced peers, using the seed
to pick validators, enforcing shard size limits, and emitting security warnings
for small shards) unreachable; remove the early return so the function proceeds
to: filter allPeers to only online and synced peers, apply the deterministic
selection using the provided seed, trim the selected list to the configured
shard size, and keep the existing small-shard warning logic intact (restore any
removed variables/branches necessary for deterministic selection and
validation).

Comment on lines +39 to +40
```typescript
import getShard from "src/libs/consensus/v2/routines/getShard"
import getCommonValidatorSeed from "src/libs/consensus/v2/routines/getCommonValidatorSeed"
```

🛠️ Refactor suggestion | 🟠 Major

Remove unused imports.

The imports getShard and getCommonValidatorSeed are not used in this file. The DTR logic uses isValidatorForNextBlock() which internally calls these functions and returns both the validator status and the validators list.

-import getShard from "src/libs/consensus/v2/routines/getShard"
-import getCommonValidatorSeed from "src/libs/consensus/v2/routines/getCommonValidatorSeed"
📝 Committable suggestion


Suggested change
import getShard from "src/libs/consensus/v2/routines/getShard"
import getCommonValidatorSeed from "src/libs/consensus/v2/routines/getCommonValidatorSeed"
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis

[warning] 40-40: Remove this unused import of 'getCommonValidatorSeed'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370a&open=AZruYDTcduZKUq2z370a&pullRequest=517


[warning] 39-39: Remove this unused import of 'getShard'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370Z&open=AZruYDTcduZKUq2z370Z&pullRequest=517

🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 39 to 40, the imports
getShard and getCommonValidatorSeed are unused; remove those two import
statements (and any trailing commas or leftover spacing) so only actually used
imports remain, ensure there are no side-effect dependencies elsewhere in the
file, and run the linter/TypeScript build to confirm no remaining references.

Comment on lines 424 to 428
```typescript
const { isValidator, validators } =
    await isValidatorForNextBlock()

// if (!isValidator) {
if (true) {
```

⚠️ Potential issue | 🔴 Critical

Critical: Debug code left in - if (true) bypasses validator check.

The isValidator variable is destructured but never used because line 428 uses if (true) instead of if (!isValidator). This causes all nodes (including validators) to attempt relaying, which defeats the purpose of DTR and could cause infinite relay loops between validators.

             const { isValidator, validators } =
                 await isValidatorForNextBlock()

-            // if (!isValidator) {
-            if (true) {
+            if (!isValidator) {
                 log.debug(
                     "[DTR] Non-validator node: attempting relay to all validators",
                 )
📝 Committable suggestion


Suggested change
const { isValidator, validators } =
await isValidatorForNextBlock()
// if (!isValidator) {
if (true) {
const { isValidator, validators } =
await isValidatorForNextBlock()
if (!isValidator) {
log.debug(
"[DTR] Non-validator node: attempting relay to all validators",
)
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis

[warning] 427-427: Remove this commented out code.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370d&open=AZruYDTcduZKUq2z370d&pullRequest=517


[warning] 428-428: This always evaluates to truthy. Consider refactoring this code.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370e&open=AZruYDTcduZKUq2z370e&pullRequest=517


[warning] 424-424: Remove this useless assignment to variable "isValidator".

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370c&open=AZruYDTcduZKUq2z370c&pullRequest=517

🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 424 to 428, the debug
placeholder `if (true)` bypasses the validator check and ignores the
destructured isValidator variable; change the condition back to use the actual
check (e.g., `if (!isValidator)`) so only non-validator nodes proceed to relay,
ensure isValidator is defined/handled (avoid undefined) and remove the debug
shortcut, and keep validators available if needed for further logic to prevent
validators from attempting to relay and creating loops.

Comment on lines 450 to 466
```typescript
for (const result of results) {
    if (result.status === "fulfilled") {
        const response = result.value
        log.debug("response: " + JSON.stringify(response))

        if (response.result == 200) {
            continue
        }

        // TODO: Handle response codes individually
        DTRManager.validityDataCache.set(
            response.extra.peer,
            validatedData,
        )
    }
}
}
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```sh
cat -n src/libs/network/endpointHandlers.ts | sed -n '440,480p'
```

Repository: kynesyslabs/node

Length of output: 2027


🏁 Script executed:

```sh
cat -n src/libs/network/endpointHandlers.ts | sed -n '420,470p'
```

Repository: kynesyslabs/node

Length of output: 2564


🏁 Script executed:

```sh
rg -n "DTR.*relay\|relay.*mempool\|addTransaction" -B 3 -A 3 src/libs/network/endpointHandlers.ts | head -60
```

Repository: kynesyslabs/node

Length of output: 42


🏁 Script executed:

```sh
cat -n src/libs/network/endpointHandlers.ts | sed -n '450,490p'
```

Repository: kynesyslabs/node

Length of output: 1953


DTR relay success does not prevent mempool addition—implement early return when relay succeeds.

The continue statement at line 456 only skips the cache.set operation for successful relays (status 200). After the relay loop completes, the code unconditionally proceeds to mempool addition at line 470, regardless of relay success. For the DTR goal of reducing non-validator mempool storage, a successful relay should skip local mempool addition entirely. Add logic to exit the DTR block early or return from the handler when at least one relay succeeds, preventing the transaction from being added to the local mempool.
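
A hedged sketch of the suggested control flow, wrapped as a helper so it is self-contained (all names are stand-ins for the handler's locals):

```typescript
// Sketch of the fix: report relay success so the caller can return early
// and skip local mempool addition. Shapes are illustrative assumptions.
function handleRelayResults(
    results: PromiseSettledResult<{ result: number }>[],
    txHash: string,
    cache: Map<string, unknown>,
    validatedData: unknown,
): boolean {
    let anyRelaySucceeded = false
    for (const r of results) {
        if (r.status !== "fulfilled") continue
        if (r.value.result === 200) {
            anyRelaySucceeded = true
            continue
        }
        // Failed relay: cache by tx hash so the retry service can retry it
        cache.set(txHash, validatedData)
    }
    // Caller should return early (skip local mempool addition) when true
    return anyRelaySucceeded
}
```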

🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis

[warning] 459-459: Complete the task associated to this "TODO" comment.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDTcduZKUq2z370h&open=AZruYDTcduZKUq2z370h&pullRequest=517

🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 450 to 466, the loop uses
"continue" when a relay returns result 200 but does not prevent the handler from
proceeding to mempool addition afterward; change the control flow so that if any
relay response is fulfilled and response.result === 200 you immediately exit the
DTR handling path (e.g., return from the handler or early-return out of the DTR
block) to prevent adding the transaction to the local mempool, otherwise proceed
with the existing cache.set and follow-up logic; ensure any necessary cleanup or
response is preserved when returning early.

Comment on lines +460 to +463
```typescript
DTRManager.validityDataCache.set(
    response.extra.peer,
    validatedData,
)
```

⚠️ Potential issue | 🔴 Critical

Bug: ValidityData is cached with wrong key (peer identity instead of txHash).

The cache key should be the transaction hash, not the peer identity. Using response.extra.peer as the key means:

  1. Only one transaction can be cached per peer
  2. The retry service at line 275 looks up by txHash, which won't find this data
                            // TODO: Handle response codes individually
                            DTRManager.validityDataCache.set(
-                               response.extra.peer,
+                               validatedData.data.transaction.hash,
                                validatedData,
                            )
📝 Committable suggestion


Suggested change
DTRManager.validityDataCache.set(
response.extra.peer,
validatedData,
)
DTRManager.validityDataCache.set(
validatedData.data.transaction.hash,
validatedData,
)
🤖 Prompt for AI Agents
In src/libs/network/endpointHandlers.ts around lines 460 to 463, the code caches
ValidityData using response.extra.peer but it should use the transaction hash;
change the cache key to the txHash (e.g., response.extra.txHash or
validatedData.txHash) instead of response.extra.peer, and add a guard to ensure
txHash is defined before calling DTRManager.validityDataCache.set (log or bail
if missing) so the retry service lookup by txHash will succeed.

```diff
@@ -1,4 +1,4 @@
-import { RPCResponse } from "@kynesyslabs/demosdk/types"
+import { RPCResponse, SigningAlgorithm } from "@kynesyslabs/demosdk/types"
```

🛠️ Refactor suggestion | 🟠 Major

Remove unused imports flagged by static analysis.

Multiple imports are unused in this file, as flagged by SonarCloud:

  • SigningAlgorithm (line 1)
  • isValidatorForNextBlock (line 21)
  • TxUtils (line 22)
  • Transaction (line 23)
  • hexToUint8Array (line 31)
  • ucrypto (line 32)
  • PeerManager (line 35)

Additionally, @kynesyslabs/demosdk/types is imported twice (lines 1 and 23).

-import { RPCResponse, SigningAlgorithm } from "@kynesyslabs/demosdk/types"
+import { RPCResponse, ValidityData } from "@kynesyslabs/demosdk/types"
 import { emptyResponse } from "./server_rpc"
 ...
-import isValidatorForNextBlock from "../consensus/v2/routines/isValidator"
-import TxUtils from "../blockchain/transaction"
-import { Transaction, ValidityData } from "@kynesyslabs/demosdk/types"
 import { Twitter } from "../identity/tools/twitter"
 ...
-import {
-    hexToUint8Array,
-    ucrypto,
-    uint8ArrayToHex,
-} from "@kynesyslabs/demosdk/encryption"
-import { PeerManager } from "../peer"
+import { uint8ArrayToHex } from "@kynesyslabs/demosdk/encryption"
 import { DTRManager } from "./dtr/relayRetryService"

Also applies to: 21-23, 30-36

🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis

[warning] 1-1: Remove this unused import of 'SigningAlgorithm'.

See more on https://sonarcloud.io/project/issues?id=kynesyslabs_node&issues=AZruYDQuduZKUq2z370L&open=AZruYDQuduZKUq2z370L&pullRequest=517

🤖 Prompt for AI Agents
In src/libs/network/manageNodeCall.ts around lines 1 and 21-36, several imports
are unused and one module is imported twice; remove the unused imports
SigningAlgorithm (line 1), isValidatorForNextBlock (line 21), TxUtils (line 22),
Transaction (line 23 — note duplicate import from @kynesyslabs/demosdk/types),
hexToUint8Array (line 31), ucrypto (line 32), and PeerManager (line 35), and
collapse the duplicate @kynesyslabs/demosdk/types import so only RPCResponse
remains imported; update the import statements accordingly to eliminate
unused/duplicate imports.

@tcsenpai (Contributor) commented Dec 5, 2025

@coderabbitai generate docstrings

@coderabbitai bot commented Dec 5, 2025

✅ Actions performed

Initiated docstring generation; will generate only if new commits exist.

@coderabbitai bot commented Dec 5, 2025

Note

Docstrings generation - SUCCESS
Generated docstrings for this pull request at #521

Docstrings generation was requested by @tcsenpai.

* #517 (comment)

The following files were modified:

* `src/index.ts`
* `src/libs/consensus/v2/routines/getShard.ts`
* `src/libs/consensus/v2/routines/isValidator.ts`
* `src/libs/network/manageNodeCall.ts`
@coderabbitai bot mentioned this pull request Dec 5, 2025


tcsenpai and others added 2 commits December 6, 2025 10:20
+ fix: synced blocks having no transactions
+ update RELAY_TX nodecall to accept a list
+ add dtr waiter key
+ refactor relayRetryService to DTR Manager
+ put back sleep 250ms in requestBlocks while loop (sync.ts)
+ start waitForBlockThenRelay if a tx comes in during the consensus loop
+ release dtr waiter after finalize block

+ add new getBlockTransactions node call
+ return null if rawTx is nullish (fromRawTransaction)
+ use getBlockTransactions to download block transactions in sync.ts
+ try/catch in for loops when running sync gcr edits and tx inserts
+ delete relay cache entries after consuming them in rpc node
+ more logging


+ release dtr relay waiter after consensus
+ bump shard size back to 4 nodes






@sonarqubecloud

Quality Gate failed

Failed conditions
1 Security Hotspot

See analysis details on SonarQube Cloud
