
Conversation

Owner

@SF-Zhou SF-Zhou commented Dec 28, 2025

Optimize the entry cleanup process after EntryByVal/Ref is dropped. In practice, cleanup is only necessary when the reference count reaches zero.

Q: Why not check whether the value is None inside EntryByVal/Ref::Drop, to avoid redundant try_remove_entry calls?

We must decrement the reference count only after calling state.mutex.unlock(). If we were to decrement the reference count first and it hit zero, a concurrent map.remove(key) might assume the entry is no longer in use and deallocate the state. This would cause the subsequent state.mutex.unlock() in the current thread to fail.

Furthermore, once state.mutex.unlock() is executed, the value within the state could be immediately modified by another thread. Therefore, the current thread can no longer rely on the state of the value to determine whether try_remove_entry is required.
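The two constraints above can be sketched in Rust. This is a minimal, hypothetical model — `State`, `LockMap`, and `EntryGuard` are illustrative names, not necessarily the crate's real definitions, and the per-key mutex hold is elided — showing a drop path that decrements the count last and attempts removal only when the count reached zero, with a re-check under the shard lock:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

struct State {
    #[allow(dead_code)]
    mutex: Mutex<()>, // the per-key lock a guard holds (elided in this sketch)
    ref_count: AtomicUsize,
}

struct LockMap {
    shard: Mutex<HashMap<String, Arc<State>>>,
}

struct EntryGuard<'a> {
    map: &'a LockMap,
    key: String,
    state: Arc<State>,
}

impl LockMap {
    fn new() -> Self {
        LockMap { shard: Mutex::new(HashMap::new()) }
    }

    fn lock(&self, key: &str) -> EntryGuard<'_> {
        let mut map = self.shard.lock().unwrap();
        let state = map
            .entry(key.to_string())
            .or_insert_with(|| {
                Arc::new(State {
                    mutex: Mutex::new(()),
                    ref_count: AtomicUsize::new(0),
                })
            })
            .clone();
        // Register the reference while the shard lock is held, so a
        // concurrent try_remove_entry cannot free the state under us.
        state.ref_count.fetch_add(1, Ordering::AcqRel);
        drop(map);
        EntryGuard { map: self, key: key.to_string(), state }
    }

    // Called only by the guard whose decrement reached zero; re-checks
    // the count under the shard lock, because another thread may have
    // re-acquired the entry in the meantime.
    fn try_remove_entry(&self, key: &str) {
        let mut map = self.shard.lock().unwrap();
        if let Some(state) = map.get(key) {
            if state.ref_count.load(Ordering::Acquire) == 0 {
                map.remove(key);
            }
        }
    }
}

impl Drop for EntryGuard<'_> {
    fn drop(&mut self) {
        // The real guard unlocks state.mutex here, BEFORE decrementing,
        // for the reason given above. Only the thread whose fetch_sub
        // returns 1 (i.e. the count dropped to zero) attempts removal.
        if self.state.ref_count.fetch_sub(1, Ordering::AcqRel) == 1 {
            self.map.try_remove_entry(&self.key);
        }
    }
}

fn main() {
    let map = LockMap::new();
    let a = map.lock("key");
    let b = map.lock("key");
    drop(a); // count 2 -> 1: no removal attempted
    assert!(map.shard.lock().unwrap().contains_key("key"));
    drop(b); // count 1 -> 0: entry removed via try_remove_entry
    assert!(!map.shard.lock().unwrap().contains_key("key"));
}
```

Note that try_remove_entry re-reads the count under the shard lock: the decision "my fetch_sub saw 1" only nominates a cleanup candidate, while the shard lock arbitrates against a concurrent re-acquisition of the same key.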

Copilot AI review requested due to automatic review settings December 28, 2025 13:43

codecov bot commented Dec 28, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (82e7c3e) to head (153f836).
⚠️ Report is 1 commit behind head on main.
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff            @@
##              main       #24   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files            3         3           
  Lines          892       871   -21     
=========================================
- Hits           892       871   -21     

☔ View full report in Codecov by Sentry.

Copilot AI left a comment

Pull request overview

This PR optimizes the entry cleanup mechanism in LockMap by refactoring how reference counting and entry removal are handled. The key change moves the reference count decrement from inside the shard lock to outside, improving concurrency by reducing lock hold time.

Key changes:

  • Strengthened atomic memory ordering (Relaxed → AcqRel/Acquire) to ensure correctness with the new cleanup timing
  • Refactored unlock() into try_remove_entry(), which checks the reference count before removal instead of unconditionally decrementing inside the lock
  • Moved the fetch_sub operation from inside the shard lock to the Drop implementations, reducing critical section duration
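The effect of the stronger fetch_sub ordering can be illustrated with a small standalone sketch (not the crate's code): with `AcqRel`, exactly one of several racing droppers observes the count falling from 1 to 0 and takes responsibility for cleanup, and the release/acquire pairing means that thread also observes the writes the other droppers made before their decrements:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Eight "guards" race to drop their reference. fetch_sub returns the
    // previous value, so exactly one thread sees 1 and runs the cleanup.
    let ref_count = Arc::new(AtomicUsize::new(8));
    let cleanups = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let ref_count = Arc::clone(&ref_count);
            let cleanups = Arc::clone(&cleanups);
            thread::spawn(move || {
                if ref_count.fetch_sub(1, Ordering::AcqRel) == 1 {
                    // Sole winner: safe place to remove the entry.
                    cleanups.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(cleanups.load(Ordering::Relaxed), 1);
}
```

With `Relaxed` the count would still reach zero exactly once, but the winning thread would have no ordering guarantee over the other droppers' earlier writes to the entry, which matters once cleanup happens outside the shard lock.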


Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot AI left a comment

Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 2 comments.



Copilot AI left a comment

Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 1 comment.



@SF-Zhou SF-Zhou merged commit 436955c into main Dec 28, 2025
7 checks passed
@SF-Zhou SF-Zhou deleted the optimize-unlock branch December 28, 2025 14:24
