Conversation
Add `exclusion_status` enum `ExclusionStatuses` to `LeaderboardEntry` and `MedalExclusionRecord`. The new settings allow for fine-tuned display rules and are set up to deprecate the `excluded` and `show_when_excluded` fields.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings.
📝 Walkthrough
Replaces boolean exclusion flags with a new enum-based `exclusion_status`.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Client (Browser)
    participant FE as Frontend components
    participant API as Server (Django views / serializers)
    participant Utils as scoring.utils (assign_exclusions_/assign_ranks_)
    participant DB as Database (models)
    User->>FE: Request leaderboard page
    FE->>API: GET /api/leaderboard
    API->>DB: Query LeaderboardEntry (filter by exclusion_status)
    DB-->>API: Entries with exclusion_status
    API->>Utils: process entries (assign_exclusions_ → assign_ranks_/assign_prizes_)
    Utils-->>API: processed entries (ranks, prize%, exclusion flags)
    API->>FE: JSON response (includes exclusion_status)
    FE->>FE: Filter/render rows (hide/show per exclusion_status & user staff)
    FE-->>User: Rendered leaderboard
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
🚀 Preview Environment
Your preview environment is ready!
Details
ℹ️ Preview Environment Info
Isolation:
Limitations:
Cleanup:
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx (1)
42-80: ⚠️ Potential issue | 🟠 Major — Show rank for `EXCLUDE_PRIZE_AND_SHOW` entries.
Status (1) is supposed to take rank, but the current condition hides the rank for any non-`INCLUDE` entry, so prize-excluded entries lose their rank display.
🐛 Suggested fix

```diff
   const t = useTranslations();
+  const hidesRank =
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW ||
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED ||
+    exclusion_status === ExclusionStatuses.EXCLUDE;
   const highlight =
     user?.id === userId || exclusion_status !== ExclusionStatuses.INCLUDE;
@@
-      {exclusion_status !== ExclusionStatuses.INCLUDE ? (
+      {hidesRank ? (
         <>
           <ExcludedEntryTooltip />
         </>
       ) : (
```

front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx (1)
70-96: ⚠️ Potential issue | 🟠 Major — Show rank for `EXCLUDE_PRIZE_AND_SHOW` entries.
Status (1) should still take rank, but the current condition hides the rank for any non-`INCLUDE` entry.
🐛 Suggested fix

```diff
   const t = useTranslations();
+  const hidesRank =
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW ||
+    exclusion_status === ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED ||
+    exclusion_status === ExclusionStatuses.EXCLUDE;
@@
-      {exclusion_status != ExclusionStatuses.INCLUDE ? (
+      {hidesRank ? (
         <>
           <ExcludedEntryTooltip />
         </>
       ) : (
```

scoring/views.py (1)
250-255: ⚠️ Potential issue | 🟡 Minor — Total entries should align with rank-eligible statuses.
`total_entries` currently counts entries up to `EXCLUDE_AND_SHOW_IN_ADVANCED`, which includes rows excluded from ranks. This can inflate totals. Consider counting only rank-eligible statuses.

🔧 Suggested fix
```diff
-            exclusion_status__lte=ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED,
+            exclusion_status__lte=ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW,
```
860-875: ⚠️ Potential issue | 🟡 Minor — Normalize `exclusion_status` when importing CSV.
`row.get` returns strings/empties; passing these directly to an IntegerField can raise or store unexpected values. Consider converting to int with a safe default.

🔧 Suggested fix
```diff
-        exclusion_status = row.get("exclusion_status")
+        exclusion_status = row.get("exclusion_status")
+        exclusion_status = (
+            int(exclusion_status)
+            if exclusion_status not in (None, "")
+            else ExclusionStatuses.INCLUDE
+        )
 ...
-            exclusion_status=exclusion_status,
+            exclusion_status=exclusion_status,
```
760-768: ⚠️ Potential issue | 🔴 Critical — Command must update the deprecated `excluded` field to maintain consistency with `exclusion_status`.
The command only sets `exclusion_status` but leaves `excluded` and `show_when_excluded` at their default values (both False). At runtime, `scoring/utils.py:667` filters entries by `excluded=False` to calculate leaderboard statistics. Entries marked `exclusion_status=EXCLUDE` will still have `excluded=False`, causing them to be incorrectly included in stats calculations. Either update both deprecated fields here to match the `exclusion_status` logic, or refactor the statistics query to check `exclusion_status` instead of the deprecated fields.
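One way to keep the legacy booleans consistent during the deprecation window is a small helper applied wherever `exclusion_status` is set. This is a hedged sketch, not the PR's actual code: the enum values, the helper name, and the exact boolean mapping are assumptions for illustration.

```python
from enum import IntEnum


class ExclusionStatuses(IntEnum):
    # Assumed values; the real enum lives in scoring/models.py.
    INCLUDE = 0
    EXCLUDE_PRIZE_AND_SHOW = 1
    EXCLUDE_AND_SHOW = 2
    EXCLUDE_AND_SHOW_IN_ADVANCED = 3
    EXCLUDE = 4


def sync_deprecated_flags(entry):
    """Derive the deprecated booleans from the new enum so legacy
    readers (e.g. filters on excluded=False) keep working."""
    entry.excluded = entry.exclusion_status != ExclusionStatuses.INCLUDE
    entry.show_when_excluded = entry.exclusion_status in (
        ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW,
        ExclusionStatuses.EXCLUDE_AND_SHOW,
        ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED,
    )
```

Calling such a helper in the management command (or in the model's `save`) would avoid the stats-query inconsistency described above until the boolean fields are removed.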
🤖 Fix all issues with AI agents
In `@scoring/views.py`:
- Around line 71-78: The rank threshold calculation uses
entries.filter(exclusion_status=ExclusionStatuses.INCLUDE).count(), which omits
entries like ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW that still consume ranks;
update the count to include all rank-eligible statuses (e.g., replace the
.filter(...) call with a filter that includes both ExclusionStatuses.INCLUDE and
ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW or the inverse by excluding only
non-rank-eligible statuses), so the max(... * 0.05) uses the correct total;
adjust the expression around rank__lte=max(...) accordingly.
🧹 Nitpick comments (2)
scoring/models.py (1)
345-355: Consider indexing `exclusion_status` to replace the old `excluded` index.
If queries now filter on `exclusion_status`, the previous `db_index` on `excluded` won't help and could introduce a perf regression. Adding an index (and migration) keeps filtering fast.

♻️ Suggested adjustment
```diff
-    exclusion_status: ExclusionStatuses = models.IntegerField(
-        max_length=200,
+    exclusion_status: ExclusionStatuses = models.IntegerField(
+        db_index=True,
+        max_length=200,
         choices=ExclusionStatuses.choices,
         default=ExclusionStatuses.INCLUDE,
         help_text="""This sets the exclusion status of this entry.
```

front_end/src/types/scoring.ts (1)
52-68: Prefer `ExclusionStatuses` for `exclusion_status` type safety.
Now that the enum exists, using it directly prevents invalid values and keeps the type aligned with backend semantics.

♻️ Suggested refactor

```diff
-  exclusion_status: number;
+  exclusion_status: ExclusionStatuses;
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx
- front_end/src/types/scoring.ts
- scoring/admin.py
- scoring/management/commands/update_global_bot_leaderboard.py
- scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py
- scoring/models.py
- scoring/serializers.py
- scoring/utils.py
- scoring/views.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-20T16:49:51.584Z
Learnt from: aseckin
Repo: Metaculus/metaculus PR: 4001
File: front_end/src/app/(futureeval)/futureeval/components/futureeval-leaderboard-table.tsx:15-22
Timestamp: 2026-01-20T16:49:51.584Z
Learning: The mockTranslate function in front_end/src/app/(futureeval)/futureeval/components/futureeval-leaderboard-table.tsx is an intentional workaround to satisfy the entryLabel function signature from shared AIB utils while keeping FutureEval translation-free.
Applied to files:
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx
- front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx
📚 Learning: 2026-01-15T19:29:58.940Z
Learnt from: hlbmtc
Repo: Metaculus/metaculus PR: 4075
File: authentication/urls.py:24-26
Timestamp: 2026-01-15T19:29:58.940Z
Learning: In this codebase, DRF is configured to use IsAuthenticated as the default in REST_FRAMEWORK['DEFAULT_PERMISSION_CLASSES'] within metaculus_web/settings.py. Therefore, explicit permission_classes([IsAuthenticated]) decorators are unnecessary on DRF views unless a view needs to override the default. When reviewing Python files, verify that views relying on the default are not redundantly decorated, and flag cases where permissions are being over-specified or when a non-default permission is explicitly required.
Applied to files:
- scoring/models.py
- scoring/admin.py
- scoring/serializers.py
- scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py
- scoring/management/commands/update_global_bot_leaderboard.py
- scoring/utils.py
- scoring/views.py
🧬 Code graph analysis (10)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/table_row.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
scoring/models.py (1)
projects/permissions.py (1)
choices(10-15)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py (1)
scoring/models.py (2)
LeaderboardEntry(314-380)
MedalExclusionRecord(410-495)
scoring/management/commands/update_global_bot_leaderboard.py (2)
scoring/models.py (3)
Leaderboard(105-292)
LeaderboardEntry(314-380)
ExclusionStatuses(17-22)
front_end/src/types/scoring.ts (1)
LeaderboardEntry(60-75)
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/table_row.tsx (1)
scoring/models.py (1)
ExclusionStatuses(17-22)
scoring/utils.py (2)
scoring/models.py (1)
ExclusionStatuses(17-22)
projects/models.py (1)
BotLeaderboardStatus(248-252)
front_end/src/types/scoring.ts (1)
scoring/models.py (2)
ExclusionStatuses(17-22)
LeaderboardEntry(314-380)
scoring/views.py (2)
scoring/models.py (3)
Leaderboard(105-292)
LeaderboardEntry(314-380)
ExclusionStatuses(17-22)
front_end/src/types/scoring.ts (1)
LeaderboardEntry(60-75)
🪛 Ruff (0.14.14)
scoring/admin.py
[warning] 156-156: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py
[warning] 7-7: Unused function argument: schema_editor
(ARG001)
[warning] 27-27: Unused function argument: schema_editor
(ARG001)
[warning] 56-58: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
[warning] 60-104: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build Docker Image / Build Docker Image
🔇 Additional comments (11)
scoring/models.py (3)
17-22: `ExclusionStatuses` enum looks good.
Clear mapping and explicit values make the new status system easy to reason about.
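A rough illustration of why ordered, explicit values matter here: range filters like `exclusion_status__lte=EXCLUDE_PRIZE_AND_SHOW` only work because the statuses are laid out from most to least included. A stdlib sketch with assumed values (the real class is a Django choices enum in `scoring/models.py`, and the ordering below is a guess consistent with the review's `__lte` comparisons):

```python
from enum import IntEnum


class ExclusionStatuses(IntEnum):
    # Assumed ordering: everything <= EXCLUDE_PRIZE_AND_SHOW still takes a rank.
    INCLUDE = 0
    EXCLUDE_PRIZE_AND_SHOW = 1
    EXCLUDE_AND_SHOW = 2
    EXCLUDE_AND_SHOW_IN_ADVANCED = 3
    EXCLUDE = 4


def takes_rank(status: ExclusionStatuses) -> bool:
    # Mirrors the rank-eligibility checks suggested throughout this review.
    return status <= ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW
```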
193-196: No review notes for this TODO block.
450-460: LGTM: `exclusion_status` added to `MedalExclusionRecord`.
This aligns with the shared enum and keeps defaults explicit.
scoring/admin.py (1)
156-156: LGTM: admin `list_display` now surfaces `exclusion_status`.
scoring/serializers.py (1)
15-40: LGTM: serializer now exposes `exclusion_status` alongside legacy fields.
front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
107-123: LGTM: status-based filtering matches the new enum semantics.
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx (1)
92-103: LGTM: exclusion-status gating is clear and consistent.
scoring/views.py (1)
84-87: Advanced-only entries may be exposed to standard users.
`EXCLUDE_AND_SHOW_IN_ADVANCED` is included for all non-staff. If this endpoint backs the standard leaderboard view, advanced-only entries will leak. Please confirm the intended audience and gate this filter on an explicit "advanced" flag if needed.
scoring/migrations/0020_leaderboardentry_and_medalexclusionrecord_exclusion_status.py (1)
7-24: Migration + backfill mapping looks consistent.
Forward and reverse mappings align with the deprecated boolean semantics and provide a clear downgrade path.
Also applies to: 27-51, 60-103
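The forward/reverse backfill can be pictured as a pair of pure functions over the old booleans. The mapping below is an illustrative guess (the migration's actual value choices may differ); what the review is checking is only that the two directions round-trip cleanly:

```python
def forward(excluded, show_when_excluded):
    # Old booleans -> assumed enum ints (0=INCLUDE, 2=EXCLUDE_AND_SHOW, 4=EXCLUDE).
    if not excluded:
        return 0
    return 2 if show_when_excluded else 4


def backward(exclusion_status):
    # Enum int -> old booleans; statuses that show while excluded map to True.
    excluded = exclusion_status != 0
    show_when_excluded = exclusion_status in (1, 2, 3)
    return excluded, show_when_excluded
```

Under this sketch every boolean pair survives a forward-then-backward round trip, which is the property a reversible data migration needs.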
scoring/utils.py (2)
43-44: Exclusion status propagation through ranks/prizes looks consistent.Rank gating, prize pool participation, and prize assignment align with the new enum semantics.
Also applies to: 468-528, 741-742
545-553: Confirm medal eligibility for `EXCLUDE_PRIZE_AND_SHOW`.
Medals are currently limited to `INCLUDE`. If `EXCLUDE_PRIZE_AND_SHOW` entries should still be medal-eligible (since they take rank), this will skip them.

🔧 Possible adjustment (if medal-eligible)
```diff
-    entry_count = len(
-        [e for e in entries if e.exclusion_status == ExclusionStatuses.INCLUDE]
-    )
+    entry_count = len(
+        [
+            e
+            for e in entries
+            if e.exclusion_status <= ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW
+        ]
+    )
 ...
-        if entry.exclusion_status != ExclusionStatuses.INCLUDE:
+        if entry.exclusion_status > ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW:
             continue
```
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```python
                rank__lte=max(
                    3,
                    np.ceil(
                        entries.filter(
                            exclusion_status=ExclusionStatuses.INCLUDE
                        ).count()
                        * 0.05
                    ),
```
Rank threshold should include rank‑eligible exclusions.
EXCLUDE_PRIZE_AND_SHOW entries still take rank, but the count only includes INCLUDE, which can undercount and shrink the top‑% window. Consider counting all rank‑eligible statuses.
🔧 Suggested fix
```diff
-                        entries.filter(
-                            exclusion_status=ExclusionStatuses.INCLUDE
-                        ).count()
+                        entries.filter(
+                            exclusion_status__lte=ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW
+                        ).count()
```

🤖 Prompt for AI Agents
In `@scoring/views.py` around lines 71 - 78, The rank threshold calculation uses
entries.filter(exclusion_status=ExclusionStatuses.INCLUDE).count(), which omits
entries like ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW that still consume ranks;
update the count to include all rank-eligible statuses (e.g., replace the
.filter(...) call with a filter that includes both ExclusionStatuses.INCLUDE and
ExclusionStatuses.EXCLUDE_PRIZE_AND_SHOW or the inverse by excluding only
non-rank-eligible statuses), so the max(... * 0.05) uses the correct total;
adjust the expression around rank__lte=max(...) accordingly.
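The top-percent window being debated reduces to a tiny function: at least three ranks, otherwise 5% of the rank-eligible entry count, rounded up. A standalone sketch (function and parameter names are illustrative, not the project's API):

```python
import math


def rank_threshold(eligible_count, fraction=0.05, floor=3):
    """Smallest rank cutoff: max of a fixed floor and ceil(fraction * count).

    The review's point is that eligible_count should include every status
    that consumes a rank, not only fully-included entries, otherwise the
    window shrinks.
    """
    return max(floor, math.ceil(eligible_count * fraction))
```

For example, a leaderboard with 200 rank-eligible entries gets a cutoff of 10, while small boards bottom out at the floor of 3.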
front_end/src/app/(main)/(leaderboards)/leaderboard/components/leaderboard_table/index.tsx
…ntry-exclusion-subtlety
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx`:
- Around line 152-171: The map callback in leaderboardEntries.map currently
re-checks exclusion_status and returns nothing when the condition fails,
producing undefined slots; remove the inner conditional entirely and always
return the TableRow for each mapped entry (keep the same props: key using
entry.user?.id ?? entry.aggregation_method, rowEntry={entry}, userId,
maxCoverage, withPrizePool, isAdvanced). Rely on the earlier filteredEntries
logic (the filtering by ExclusionStatuses done where filteredEntries is
computed) so you only map over already-filtered entries and avoid implicit
undefineds from the map.
In `@scoring/management/commands/update_global_bot_leaderboard.py`:
- Around line 936-940: The code in update_global_bot_leaderboard.py currently
does direct dict access user.metadata["bot_details"] which can raise KeyError or
TypeError if metadata is None or missing; change the check in the block that
handles uid ints to safely fetch bot_details via
user.metadata.get("bot_details") (or default to {}), then check
bot_details.get("display_in_leaderboard") and set excluded accordingly; ensure
you also guard for user.metadata being None before calling .get and prefer a
safe local variable (e.g., bot_details = (user.metadata or
{}).get("bot_details", {})) so the logic around excluded remains correct.
🧹 Nitpick comments (2)
front_end/src/types/scoring.ts (1)
69-69: Use the `ExclusionStatuses` enum type instead of `number`.
The `ExclusionStatuses` enum is defined in this file but `exclusion_status` is typed as `number`, losing the type safety the enum provides. Comparisons in downstream components (e.g., `entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW`) would benefit from the stricter typing.

Proposed fix

```diff
-  exclusion_status: number;
+  exclusion_status: ExclusionStatuses;
```

front_end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx (1)
59-63: Use strict equality (`===`) instead of `==`.
Lines 60 and 62: the comparison with `ExclusionStatuses` values should use `===` for idiomatic TypeScript.

Proposed fix

```diff
       (entry) =>
-        entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW ||
+        entry.exclusion_status <= ExclusionStatuses.EXCLUDE_AND_SHOW ||
         (isAdvanced &&
-          entry.exclusion_status ==
+          entry.exclusion_status ===
             ExclusionStatuses.EXCLUDE_AND_SHOW_IN_ADVANCED)
```
...end/src/app/(main)/(leaderboards)/leaderboard/components/project_leaderboard_table/index.tsx
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
scoring/utils.py (2)
860-875: ⚠️ Potential issue | 🟡 Minor — `exclusion_status` from CSV is not type-converted and `None` breaks on a non-nullable field.
`row.get("exclusion_status")` returns `None` for any CSV that predates this field (backward-compat breakage). Passing `None` directly to a `NOT NULL` `IntegerField` in `bulk_create` will raise a DB-level error. Additionally, the raw string value (e.g., `"2"`) is never coerced to `int`, which may also fail depending on the DB adapter path taken by `bulk_create`.

🐛 Proposed fix
```diff
-        exclusion_status = row.get("exclusion_status")
+        _es_raw = row.get("exclusion_status")
+        exclusion_status = int(_es_raw) if _es_raw is not None else ExclusionStatuses.INCLUDE
```
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 860 - 875, row.get("exclusion_status") can be None or a string and must be normalized before constructing LeaderboardEntry to avoid inserting NULL into the NOT NULL IntegerField; convert the value to an int (e.g., int(str_val)) after handling empty/None and non-numeric cases and provide a safe default (such as 0 or another domain-appropriate sentinel) and use that normalized integer when building new_entry (refer to the exclusion_status variable and the LeaderboardEntry constructor in this diff).
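The normalization suggested in this comment can also be written as a small standalone helper, which keeps the coercion logic testable. A sketch under the assumption that the default status is `INCLUDE` (represented here as the illustrative value 0):

```python
def normalize_exclusion_status(raw, default=0):
    """Coerce a CSV cell to an int status.

    Handles a missing column (None), empty/whitespace-only strings, and
    numeric strings; anything non-numeric raises ValueError loudly
    rather than silently storing garbage.
    """
    if raw is None or str(raw).strip() == "":
        return default
    return int(str(raw).strip())
```

Such a helper would be called once per row before building the `LeaderboardEntry` kwargs, so `bulk_create` only ever sees integers.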
668-676: ⚠️ Potential issue | 🟠 Major — `calculate_medals_points_at_time` at line 676 must be updated to use the `exclusion_status` field.
The `totals` queryset filters with `filter(excluded=False)`, which relies on the deprecated boolean field. Since the PR migrates to `exclusion_status` (confirmed active in migrations, views, and utils throughout the codebase), this must change to `filter(exclusion_status=ExclusionStatuses.INCLUDE)` to avoid silently incorrect participant counts in medal points calculations and rank percentile updates.
Verify each finding against the current code and only fix it if needed. In `@scoring/utils.py` around lines 668 - 676, In calculate_medals_points_at_time replace the deprecated filter(excluded=False) on relevant_entries_qs used to build totals with filter(exclusion_status=ExclusionStatuses.INCLUDE) so totals only counts included participants; ensure ExclusionStatuses is imported/available in the module and update the totals queryset definition (the variable totals derived from relevant_entries_qs) accordingly to use exclusion_status=ExclusionStatuses.INCLUDE.
🧹 Nitpick comments (1)
scoring/admin.py (1)
203-214: Suppress Ruff RUF012 by annotating mutable class attributes with `ClassVar`.
Ruff flags `inlines`, `readonly_fields`, and `actions` (and `list_display` at line 450) as mutable class-attribute defaults. Annotating with `ClassVar` is the idiomatic fix.

♻️ Proposed fix

```diff
+from typing import ClassVar
 ...
 class LeaderboardAdmin(admin.ModelAdmin):
     ...
-    inlines = [MedalExclusionRecordInline]
-    readonly_fields = [
+    inlines: ClassVar[list] = [MedalExclusionRecordInline]
+    readonly_fields: ClassVar[list] = [
         "entries_list_link",
         "leaderboard_action_buttons",
         "entries_preview",
     ]
     ...
-    actions = [
+    actions: ClassVar[list] = [
         "make_primary_leaderboard",
         "update_leaderboards",
         "force_update_leaderboards",
         "force_finalize_and_assign_medals_leaderboards",
     ]
```

Apply the same annotation to `list_display` at line 450.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scoring/admin.py` around lines 203 - 214, Annotate the mutable admin class attributes with typing.ClassVar to suppress Ruff RUF012: import ClassVar from typing, then change the declarations for inlines, readonly_fields, actions (and the list_display declaration noted later) to be typed as ClassVar[list[...]] (or ClassVar[tuple[...]]/ClassVar[Sequence[...]] as appropriate) instead of plain assignment so the attributes are explicit class-level constants (e.g., inlines: ClassVar[list] = [...], readonly_fields: ClassVar[list[str]] = [...], actions: ClassVar[list[str]] = [...], and list_display: ClassVar[list[str]] = [...]). Ensure you update the admin class where these symbols (inlines, readonly_fields, actions, list_display) are defined and add the import for ClassVar at the top of the module.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scoring/utils.py`:
- Around line 747-752: assign_prizes_ currently calls float(prize_pool)
unguarded which raises TypeError when prize_pool is None; update assign_prizes_
to treat a missing prize_pool as zero (e.g., coerce prize_pool to 0 or
Decimal(0) before computing) and use that value when calculating entry.prize
(the assignment inside assign_prizes_ that multiplies entry.percent_prize).
Ensure the same falsy-guard behavior used for minimum_prize_percent (if
prize_pool else 0) is applied here so leaderboards without a prize_pool don't
crash.
---
Outside diff comments:
In `@scoring/utils.py`:
- Around line 860-875: row.get("exclusion_status") can be None or a string and
must be normalized before constructing LeaderboardEntry to avoid inserting NULL
into the NOT NULL IntegerField; convert the value to an int (e.g., int(str_val))
after handling empty/None and non-numeric cases and provide a safe default (such
as 0 or another domain-appropriate sentinel) and use that normalized integer
when building new_entry (refer to the exclusion_status variable and the
LeaderboardEntry constructor in this diff).
- Around line 668-676: In calculate_medals_points_at_time replace the deprecated
filter(excluded=False) on relevant_entries_qs used to build totals with
filter(exclusion_status=ExclusionStatuses.INCLUDE) so totals only counts
included participants; ensure ExclusionStatuses is imported/available in the
module and update the totals queryset definition (the variable totals derived
from relevant_entries_qs) accordingly to use
exclusion_status=ExclusionStatuses.INCLUDE.
---
Duplicate comments:
In `@scoring/utils.py`:
- Around line 435-438: The listed functions (assign_exclusions_, assign_ranks_,
assign_prize_percentages_, assign_medals_, assign_prizes_) are annotated as
returning list[LeaderboardEntry] but perform in-place mutations and do not
return anything; change their return type annotations to -> None (or remove the
return annotation) so the signature accurately reflects they return nothing, and
update any related docstrings or callers that might expect a returned list to
avoid confusion.
---
Nitpick comments:
In `@scoring/admin.py`:
- Around line 203-214: Annotate the mutable admin class attributes with
typing.ClassVar to suppress Ruff RUF012: import ClassVar from typing, then
change the declarations for inlines, readonly_fields, actions (and the
list_display declaration noted later) to be typed as ClassVar[list[...]] (or
ClassVar[tuple[...]]/ClassVar[Sequence[...]] as appropriate) instead of plain
assignment so the attributes are explicit class-level constants (e.g., inlines:
ClassVar[list] = [...], readonly_fields: ClassVar[list[str]] = [...], actions:
ClassVar[list[str]] = [...], and list_display: ClassVar[list[str]] = [...]).
Ensure you update the admin class where these symbols (inlines, readonly_fields,
actions, list_display) are defined and add the import for ClassVar at the top of
the module.
… by score, misc comment improvements
new settings allow for fine-tuned display rules for leaderboard entries
add `exclusion_status` enum `ExclusionStatuses` to `LeaderboardEntry` and `MedalExclusionRecord`; set up to deprecate `excluded` and `show_when_excluded` fields
Pics:
Here are some new buttons at the top of Admin:
(screenshot)
This is the new way to view leaderboard entries and medal exclusions on Admin:
(screenshot)
(usernames redacted, note that exclusion records are given to the top 5 users in order - redaction color matches)
This is the result on the relevant leaderboard: (note lack of prize for non-excluded entry 2 which corresponds to the exclusion record set at "Exclude Prize Only")
(screenshot)
Summary by CodeRabbit