optimize entry cleanup #24
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@           Coverage Diff            @@
##              main       #24   +/- ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files            3         3
  Lines          892       871   -21
=========================================
- Hits           892       871   -21
```

☔ View full report in Codecov by Sentry.
Pull request overview
This PR optimizes the entry cleanup mechanism in LockMap by refactoring how reference counting and entry removal are handled. The key change moves the reference count decrement from inside the shard lock to outside, improving concurrency by reducing lock hold time.
Key changes:
- Strengthened atomic memory ordering (Relaxed → AcqRel/Acquire) to ensure correctness with the new cleanup timing
- Refactored `unlock()` into `try_remove_entry()`, which checks the removal conditions before removing rather than unconditionally decrementing inside the lock
- Moved the `fetch_sub` operation from inside the shard lock to the `Drop` implementations, reducing critical-section duration (see the sketch after this list)
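A minimal sketch of the resulting shape, not LockMap's actual implementation: `State`, `Shard`, `Guard`, and `acquire` are invented stand-ins, and only `try_remove_entry`, the `Drop`-side `fetch_sub`, and the AcqRel/Acquire orderings come from this PR. The point is that the decrement lives in `Drop`, outside the shard lock:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

// Illustrative per-key state; the real one also holds the entry's
// value and a per-entry mutex.
struct State {
    ref_count: AtomicUsize,
}

struct Shard {
    map: Mutex<HashMap<String, Arc<State>>>,
}

impl Shard {
    // Acquire an entry, creating it if absent. The increment happens
    // under the shard lock, so a zero count re-checked under that lock
    // cannot race with a revival of the entry.
    fn acquire(&self, key: &str) -> Arc<State> {
        let mut map = self.map.lock().unwrap();
        let state = map
            .entry(key.to_string())
            .or_insert_with(|| Arc::new(State { ref_count: AtomicUsize::new(0) }))
            .clone();
        state.ref_count.fetch_add(1, Ordering::AcqRel);
        state
    }

    // Take the shard lock only when the count may have reached zero,
    // and re-check before removing: another thread may have re-acquired
    // the entry in the meantime.
    fn try_remove_entry(&self, key: &str, state: &Arc<State>) {
        let mut map = self.map.lock().unwrap();
        if state.ref_count.load(Ordering::Acquire) == 0 {
            map.remove(key);
        }
    }
}

struct Guard<'a> {
    shard: &'a Shard,
    key: String,
    state: Arc<State>,
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        // The decrement now happens outside the shard lock. AcqRel pairs
        // the release of this thread's writes with the acquire of other
        // droppers', which is why Relaxed is no longer sufficient here.
        if self.state.ref_count.fetch_sub(1, Ordering::AcqRel) == 1 {
            self.shard.try_remove_entry(&self.key, &self.state);
        }
    }
}
```

Only the dropper that observes the count hitting zero takes the shard lock at all; in the common case of other live references, `Drop` is a single atomic decrement.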
Optimize the entry cleanup process after `EntryByVal/Ref` is dropped. In practice, cleanup is only necessary when the reference count reaches zero.

Q: Why not check whether the `value` is `None` inside `EntryByVal/Ref::Drop` to avoid redundant `try_remove_entry` calls?

We must decrement the reference count only after calling `state.mutex.unlock()`. If we were to decrement the reference count first and it hit zero, a concurrent `map.remove(key)` might assume the entry is no longer in use and deallocate the state. This would cause the subsequent `state.mutex.unlock()` in the current thread to fail.

Furthermore, once `state.mutex.unlock()` has executed, the value within the state can immediately be modified by another thread. Therefore, the current thread can no longer rely on the state of the value to determine whether `try_remove_entry` is required.
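To make the ordering constraint concrete, here is a hedged sketch of the drop sequence, with an `AtomicBool` standing in for the per-entry mutex (`EntryState`, `locked`, and `release` are illustrative names, not LockMap's actual API):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

// Illustrative per-entry state; the real one also stores the value.
struct EntryState {
    locked: AtomicBool,      // stand-in for state.mutex
    ref_count: AtomicUsize,
}

fn release(state: &EntryState, try_remove_entry: impl FnOnce()) {
    // 1. Unlock first. From this point on, another thread may lock the
    //    entry and change `value`, so its current contents can no longer
    //    be used to decide whether cleanup is needed.
    state.locked.store(false, Ordering::Release);

    // 2. Decrement only after unlocking. Decrementing first could let a
    //    concurrent map.remove(key) observe a zero count and free the
    //    state while this thread still needed it for the unlock.
    if state.ref_count.fetch_sub(1, Ordering::AcqRel) == 1 {
        try_remove_entry();
    }
}
```

Swapping steps 1 and 2 is exactly the hazard described above: a zero count observed by `map.remove(key)` before the unlock would deallocate the state out from under the current thread.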