
Conversation


@julienrbrt julienrbrt commented Jan 9, 2026

Blocked on #2891 being merged.

ref: #2906

Use DA height hints to verify forced inclusion txs in P2P blocks.
With the retriever from #2952, this should not slow anything down (currently blocking).

@julienrbrt julienrbrt requested review from alpe and tac0turtle January 9, 2026 14:27
@julienrbrt julienrbrt marked this pull request as draft January 9, 2026 14:27

github-actions bot commented Jan 9, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build | Format | Lint | Breaking | Updated (UTC)
✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Jan 9, 2026, 2:41 PM


claude bot commented Jan 9, 2026

Claude finished @julienrbrt's task in 1m 32s


Comprehensive Code Review

I've reviewed this PR which adds forced inclusion transaction verification for P2P blocks using DA height hints. Overall, the implementation is solid and well-tested. Here are my findings:


1. Code Quality & Architecture ✅

Strengths:

  • Clean refactoring of verifyForcedInclusionTxs to accept daHeight parameter directly (block/internal/syncing/syncer.go:866)
  • Proper separation of concerns - DA height hints stored separately from state
  • Idiomatic Go: proper use of maps, atomic operations, and existing patterns

Observations:

  • The p2pHeightHints map (syncer.go:113) grows unbounded until hints are consumed. While cleanup happens at syncer.go:647, consider edge cases where blocks are never processed (though unlikely in practice)
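One way to bound the map, sketched here with hypothetical names (`pruneHints` and `appliedHeight` are illustrative, not identifiers from the PR), is to drop any hint at or below the height the syncer has already applied:

```go
package main

import "fmt"

// pruneHints is a hypothetical helper: it removes DA height hints for block
// heights the syncer has already applied, so the map cannot grow without
// bound if some blocks are never processed through the normal path.
func pruneHints(hints map[uint64]uint64, appliedHeight uint64) {
	for h := range hints {
		if h <= appliedHeight {
			delete(hints, h) // this hint can no longer be consumed
		}
	}
}

func main() {
	hints := map[uint64]uint64{3: 100, 7: 101, 12: 102}
	pruneHints(hints, 7)
	fmt.Println(len(hints)) // only the hint for height 12 survives
}
```

Calling this whenever the applied height advances keeps the map's size proportional to the number of in-flight heights.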

2. Error Handling & Resilience ⚠️

Critical Issue - Gemini's Review Comment:

I've reviewed the discussion between Gemini and the PR author about error handling at syncer.go:649-655. Let me provide context:

Current behavior (syncer.go:649-655):

if err := s.verifyForcedInclusionTxs(currentDaHeight, data); err != nil {
    s.logger.Error().Err(err).Uint64("height", nextHeight).Msg("forced inclusion verification failed")
    if errors.Is(err, errMaliciousProposer) {
        s.cache.RemoveHeaderDAIncluded(headerHash)
        return err
    }
}
// Continues to apply block even if verification failed with non-malicious error

Analysis:

The author's position (that transient DA layer errors shouldn't crash the node) has merit, but the current implementation has a significant flaw:

  1. Transient errors are silently ignored - The error is logged but the block is accepted anyway
  2. Security vs. Availability tradeoff - This prioritizes availability over security
  3. No retry mechanism - The block is permanently accepted even if verification failed due to temporary DA unavailability

Recommended Fix:

The block should be marked as pending and retried, not permanently accepted:

if err := s.verifyForcedInclusionTxs(currentDaHeight, data); err != nil {
    s.logger.Error().Err(err).Uint64("height", nextHeight).Msg("forced inclusion verification failed")
    if errors.Is(err, errMaliciousProposer) {
        s.cache.RemoveHeaderDAIncluded(headerHash)
        return err
    }
    // For transient errors, re-queue the block for retry rather than accepting it
    return fmt.Errorf("forced inclusion verification failed, will retry: %w", err)
}

This ensures:

  • Transient DA layer errors don't permanently accept potentially invalid blocks
  • The existing pending event mechanism (syncer.go:582) will retry the block
  • Node doesn't halt, but also doesn't compromise security

Reference: The discussion at syncer.go:654 shows this was flagged by Gemini's review.


3. Security 🔒

Positive:

  • ✅ Malicious sequencer detection is properly implemented
  • ✅ Grace period boundary checks are correct (syncer.go:939)
  • ✅ Forced inclusion transaction verification extended to P2P blocks

Concerns:

  • ⚠️ The error handling issue above is a security concern - accepting blocks when verification fails due to DA unavailability could allow invalid blocks to be accepted
  • ℹ️ The p2pHeightHints map is not protected by mutex, but this appears safe since it's only accessed from the main processing loop (single-threaded)

4. Performance & Resource Efficiency ⚡

Positive:

  • ✅ Efficient use of DA height hints to avoid sequential scanning
  • ✅ Map lookup is O(1) for height hint retrieval
  • ✅ Cleanup of hints after use (syncer.go:647)

Minor concern:

  • The p2pHeightHints map could accumulate entries if blocks are received out of order and never processed. Consider adding periodic cleanup or bounds checking.

5. Testing & Reliability 🧪

Excellent test coverage:

  • ✅ Two comprehensive P2P-specific tests added (syncer_forced_inclusion_test.go:1057-1287)
  • ✅ Tests cover both success and failure scenarios
  • ✅ All existing tests updated to match new function signature
  • ✅ Tests properly verify malicious sequencer detection

Test quality observations:

  • Tests use proper mocking and isolation
  • Edge cases are well covered (grace period boundaries, missing transactions)
  • Tests validate both immediate inclusion and deferred inclusion scenarios

Codecov Report: 60% patch coverage with 8 lines missing coverage is reasonable for this type of feature.


6. Documentation & Developer Experience 📚

Good:

  • ✅ Clear comment explaining the purpose of p2pHeightHints (syncer.go:113)
  • ✅ Existing documentation in verifyForcedInclusionTxs remains accurate

Could improve:

  • Add a comment at syncer.go:643-648 explaining the DA height hint selection logic
  • Document the cleanup strategy for the hints map

7. Additional Observations

Concurrency Safety:

  • The p2pHeightHints map is accessed without synchronization, but this appears intentional and safe since all access happens in the single-threaded processHeightEvent → trySyncNextBlock call path
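If that single-goroutine assumption ever changes, a mutex-guarded wrapper would both document and enforce it. This is a hypothetical alternative (`hintStore`, `set`, `take` are not names from the PR), not something the current code needs:

```go
package main

import (
	"fmt"
	"sync"
)

// hintStore is a hypothetical, mutex-guarded alternative to a bare map.
// Unnecessary while all access stays on one goroutine, but it makes the
// safety assumption explicit.
type hintStore struct {
	mu    sync.Mutex
	hints map[uint64]uint64
}

func newHintStore() *hintStore {
	return &hintStore{hints: make(map[uint64]uint64)}
}

func (s *hintStore) set(height, daHeight uint64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.hints[height] = daHeight
}

// take returns the hint for height, if any, deleting it in the same
// critical section so each hint is consumed exactly once.
func (s *hintStore) take(height uint64) (uint64, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	da, ok := s.hints[height]
	delete(s.hints, height)
	return da, ok
}

func main() {
	s := newHintStore()
	s.set(10, 200)
	da, ok := s.take(10)
	fmt.Println(da, ok) // hint consumed
	_, ok = s.take(10)
	fmt.Println(ok) // second take finds nothing
}
```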

Integration with existing code:

  • ✅ Properly integrates with existing DA retrieval mechanism
  • ✅ Removes TODO comment at syncer.go:588 indicating dependency resolution

Summary & Recommendations

Must Address:

  1. Error handling issue - Transient DA errors should trigger retry, not silent acceptance of blocks (syncer.go:649-655)

Should Consider:

  1. Add bounds or periodic cleanup for p2pHeightHints map to prevent unbounded growth in edge cases
  2. Add inline comment explaining DA height hint selection logic

Nice to Have:

  1. Document the thread safety assumptions for p2pHeightHints access

Overall Assessment: This is a well-implemented feature with comprehensive tests. The main concern is the error handling behavior that could compromise security in favor of availability. Once the error handling is addressed, this should be ready to merge.


@gemini-code-assist

Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a mechanism to utilize Data Availability (DA) height hints from P2P events to enhance the verification process for forced inclusion transactions. By storing and applying these specific DA height hints, the system can more accurately and efficiently validate transactions, ensuring that the verification logic aligns with the most relevant DA information available, without introducing performance bottlenecks due to a related retriever improvement.

Highlights

  • P2P DA Height Hint Storage: A new map, p2pHeightHints, has been added to the Syncer struct to store Data Availability (DA) height hints associated with specific block heights received via P2P events.
  • Dynamic DA Height for Verification: The trySyncNextBlock function now prioritizes using a stored P2P DA height hint for verifying forced inclusion transactions. If a hint exists for the next block height, it is used; otherwise, it falls back to the current state's DA height.
  • Refactored Forced Inclusion Verification: The verifyForcedInclusionTxs function has been refactored to directly accept a daHeight parameter, simplifying its interface and allowing for more flexible DA height input during verification.
  • Test Updates: All relevant test cases in syncer_forced_inclusion_test.go have been updated to reflect the new signature of the verifyForcedInclusionTxs function, ensuring continued correctness.
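The hint-selection logic described in the highlights can be sketched as a small fallback function. Names here (`daHeightFor`, `stateDAHeight`) are illustrative, not the PR's identifiers:

```go
package main

import "fmt"

// daHeightFor prefers the stored P2P hint for the next block height and
// otherwise falls back to the current state's DA height, mirroring the
// selection logic described above.
func daHeightFor(hints map[uint64]uint64, nextHeight, stateDAHeight uint64) uint64 {
	if da, ok := hints[nextHeight]; ok {
		return da
	}
	return stateDAHeight
}

func main() {
	hints := map[uint64]uint64{5: 120}
	fmt.Println(daHeightFor(hints, 5, 100)) // hint present: 120
	fmt.Println(daHeightFor(hints, 6, 100)) // no hint: falls back to 100
}
```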


@julienrbrt julienrbrt marked this pull request as ready for review January 9, 2026 14:28
@julienrbrt julienrbrt mentioned this pull request Jan 9, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly extends forced inclusion transaction verification to blocks received from P2P by using DA height hints. The changes are logical, introducing a mechanism to store and use these hints and refactoring verifyForcedInclusionTxs to be called for P2P blocks. However, I've identified a critical issue in the error handling logic. Currently, transient errors during verification are ignored, which could lead to the node accepting an invalid block. My review includes a suggestion to fix this.

Comment on lines +650 to 654
s.logger.Error().Err(err).Uint64("height", nextHeight).Msg("forced inclusion verification failed")
if errors.Is(err, errMaliciousProposer) {
    s.cache.RemoveHeaderDAIncluded(headerHash)
    return err
}

Severity: critical

Failure to verify forced inclusion transactions should not be ignored. Currently, only errMaliciousProposer errors are returned from trySyncNextBlock, while other errors (e.g., transient DA layer connection issues) are only logged. This could lead to the node accepting a block that is invalid because it's missing forced transactions. All errors from verifyForcedInclusionTxs should be returned so the caller can handle them, for instance by retrying the block processing later.

Suggested change (replace the quoted lines so every verification error is returned):

s.logger.Error().Err(err).Uint64("height", nextHeight).Msg("forced inclusion verification failed")
if errors.Is(err, errMaliciousProposer) {
    s.cache.RemoveHeaderDAIncluded(headerHash)
}
return err

@julienrbrt (Member, PR author) replied:
The current behavior is correct: a fetch failure caused by the sync node having issues, or by the DA layer being down, should not crash the node.


codecov bot commented Jan 9, 2026

Codecov Report

❌ Patch coverage is 95.00% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 59.24%. Comparing base (544c0b9) to head (1f5a654).

Files with missing lines | Patch % | Lines
block/internal/syncing/syncer.go | 95.00% | 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@                 Coverage Diff                  @@
##           alex/2609_hints    #2963       +/-   ##
====================================================
+ Coverage                 0   59.24%   +59.24%     
====================================================
  Files                    0       92       +92     
  Lines                    0     8914     +8914     
====================================================
+ Hits                     0     5281     +5281     
- Misses                   0     3041     +3041     
- Partials                 0      592      +592     
Flag | Coverage Δ
combined | 59.24% <95.00%> (?)

Flags with carried forward coverage won't be shown.


