
feat(digest): add entire digest command for team activity highlights #491

Open
mvanhorn wants to merge 1 commit into entireio:main from mvanhorn:feat/digest-command

Conversation

@mvanhorn

Summary

Adds /entire-digest, a Claude Code skill that turns your team's AI coding sessions into a highlights newsletter - the funniest exchanges, cleverest prompts, and usage stats. Think of it as a team Slack channel recap, but for your AI pair programming sessions.

The skill is installed automatically by entire enable and backed by a new entire digest CLI command that reads checkpoint data from the entire/checkpoints/v1 branch.

What it looks like

Invoke /entire-digest in Claude Code (or run entire digest from the terminal) and get something like:


This Week's Highlights

1. "If they were never committed to this branch, then why did you just commit a fix for them?" - Sheldon

Feb 21, fix-recording-data-loss branch

Claude: They were never committed to this branch. The conflict markers were only in your working tree.
Sheldon: If they were never committed to this branch, then why did you just commit a fix for them to this branch?
Claude: You're right - that's a good catch. I should revert my commit and just discard those working tree changes instead.

Sheldon catches Claude in a clean logical contradiction: "they were never committed" followed by... committing a fix for them. This is why you review the AI's work - it'll confidently narrate itself into a corner and not notice until a human points at the obvious.

2. "Thank you, Captain Obvious" - Mike

Feb 21, multi-segment recording branch

Mike: How do I deploy the LiveKit agent to production?
Claude: LiveKit agents are deployed using the LiveKit CLI. The general approach involves...
Mike: Thank you, Captain Obvious. I didn't ask for a lecture on backward compatibility theory. I asked HOW TO DEPLOY.

Classic frustrated-developer-with-AI moment. Mike asked a specific deployment question, Claude served up a five-paragraph essay on general concepts, and Mike was having none of it. We've all been there.

3. "Your URL doesn't work." - Sheldon

Feb 19, sentry-debugging branch

Claude: In the Cloudflare dashboard, that's: https://dash.cloudflare...
Sheldon: Your URL doesn't work.

Three words. No anger, no explanation, no "here's the 404 I got." Claude confidently served up a fabricated URL and Sheldon shut it down with the flattest possible delivery.

Cleverest Prompts

1. "Help me understand why that's the best possible option" - Robert

Feb 21, fix-recording-data-loss branch

Claude recommended adding a UUID lock token field for concurrency control. Robert wasn't buying it:

Robert: I'm not in love with the idea of creating a new field just for a UUID lock token. Help me understand why that's the best possible option.
Claude: It's not.

Then Claude walked through why the existing system already handled the edge case. One sentence saved a pointless schema change. The gold standard for working with AI: don't accept recommendations, make the AI defend them.

2. "diff this against what you said 10 minutes ago" - Sheldon

Feb 22, feat/auth-flow

Using the AI's own conversation history as a diffing tool. Sheldon noticed Claude contradicted its earlier recommendation and called it out by making Claude do the comparison itself. Meta-debugging at its finest.


How it works

The command

entire digest                       # Last 7 days (default)
entire digest --period 30d          # Last 30 days
entire digest --period today        # Just today
entire digest --author mike         # One person's sessions
entire digest --short               # Stats only, no prompts
entire digest --format json         # JSON for programmatic use
entire digest --format markdown     # Markdown output
entire digest --ai-curate=false     # Heuristic scoring only (no Claude call)

The pipeline

  1. Collect - Reads committed checkpoints from entire/checkpoints/v1. Iterates all sessions per checkpoint (not just the latest), extracting prompts, conversations, branches, timestamps, and token usage.

  2. Score - Heuristic scoring ranks conversations by "interestingness." Scores the entire conversation arc, not just the opening prompt - so a boring "fix this" followed by "Really?" and "STILL WRONG" scores high because the follow-up exchanges contribute. Signals include sarcasm ("Captain Obvious"), single-word disbelief ("Really?"), context-amnesia complaints ("Have you forgotten everything?"), AI self-reference ("Are you stuck?"), frustration, and celebration.

  3. Deduplicate & diversify - Cross-session dedup collapses identical prompts that appeared in multiple sessions. Diversity-aware ranking skips near-duplicate conversations (>60% word overlap) so the output has variety instead of 9 copies of the same prompt.

  4. Classify (optional) - When --ai-curate is enabled (default), calls Claude to classify the top-scoring conversations into editorial highlights with commentary. Falls back to heuristic-only output if Claude is unavailable.

  5. Format - Three output formats:

    • terminal - Color-coded, pager-enabled output for the terminal
    • markdown - Clean markdown for sharing
    • json - Structured output with top_conversations array, scored and sorted, including full exchange context

The Claude Code skill

The real magic is the /entire-digest skill that ships with entire enable. When invoked inside Claude Code, it:

  • Finds all Entire-enabled repos on the machine
  • Runs entire digest --format json against each
  • Curates the results into a two-section newsletter:
    • This Week's Highlights (3-5 items) - logical contradictions, escalating exchanges, frustrated developer moments, terse personality
    • Cleverest Prompts (2-3 items) - creative problem-solving, meta-debugging, inventive AI workflows
  • Adds brief stats as context

The skill uses JSON format so Claude Code gets the full conversation exchanges (both human and assistant messages), not just the human's prompts. This lets it identify conversation arcs - escalating frustration, logical contradictions, celebration after a long struggle.

What's in the box

New files

| File | What it does |
| --- | --- |
| cmd/entire/cli/digest.go | Cobra command definition and runner |
| cmd/entire/cli/digest/digest.go | Core pipeline - checkpoint reading, prompt/conversation extraction, cross-session dedup |
| cmd/entire/cli/digest/format.go | Terminal, markdown, and JSON formatters with diversity-aware ranking |
| cmd/entire/cli/digest/classify.go | AI curation via Claude subprocess |
| cmd/entire/cli/digest/score.go | Heuristic scoring (sarcasm, disbelief, frustration, celebration, amnesia, AI self-reference) |
| cmd/entire/cli/digest/digest_test.go | 841 lines of tests covering scoring, dedup, diversity, formats, multi-session |
| .claude/skills/entire-digest/SKILL.md | Claude Code skill for inline digest delivery |

Modified files

| File | Change |
| --- | --- |
| cmd/entire/cli/agent/claudecode/hooks.go | Skill installation via entire enable, digestSkillContent constant |
| cmd/entire/cli/agent/claudecode/hooks_test.go | Tests for skill install/uninstall/idempotency |
| cmd/entire/cli/root.go | Register digest subcommand |
| cmd/entire/cli/setup.go | Install skill on entire enable |
| cmd/entire/cli/lifecycle.go | Session start tip mentioning /entire-digest |
| cmd/entire/cli/agent/agent.go | ReadSessionContent(ctx, id, index) for multi-session reads |
| cmd/entire/cli/summarize/claude.go | Shared StripClaudeEnvVars helper |

Design decisions

Score conversation arcs, not just anchor prompts. Follow-up user messages contribute to the conversation score (dampened by /2). A boring "fix the deployment" followed by "Really?" (score 9) and "No." (score 7) gets 0 + (9+7)/2 = 8 points from the arc alone. Conversations with 4+ exchanges get a +2 bonus because rapid-fire volleys are almost always more entertaining.
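The arc-scoring arithmetic above can be sketched as follows (function and parameter names are hypothetical, not the PR's actual identifiers):

```go
package main

import "fmt"

// scoreArc sketches the rule described above: follow-up message scores
// contribute at half weight, and conversations with 4+ exchanges get +2.
func scoreArc(anchorScore int, followUpScores []int, exchanges int) int {
	total := anchorScore
	sum := 0
	for _, s := range followUpScores {
		sum += s
	}
	total += sum / 2 // follow-ups dampened by /2
	if exchanges >= 4 {
		total += 2 // rapid-fire volleys are almost always more entertaining
	}
	return total
}

func main() {
	// "fix the deployment" (0) + "Really?" (9) + "No." (7) over 3 exchanges
	fmt.Println(scoreArc(0, []int{9, 7}, 3)) // prints 8
}
```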

Cross-session dedup + diversity-aware ranking. The same prompt appearing in 9 different sessions used to fill every top slot. Now: (1) cross-session dedup collapses identical prompt text, keeping the highest-scoring instance; (2) diversity-aware ranking skips candidates with >60% word overlap with already-selected items.
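A minimal sketch of the >60% word-overlap check, assuming overlap is measured as the fraction of the candidate's distinct words already present in a selected item (the PR doesn't specify the exact formula):

```go
package main

import (
	"fmt"
	"strings"
)

// wordOverlap returns the fraction of the candidate's distinct words
// that also appear in an already-selected conversation.
func wordOverlap(candidate, selected string) float64 {
	candWords := map[string]bool{}
	for _, w := range strings.Fields(strings.ToLower(candidate)) {
		candWords[w] = true
	}
	selWords := map[string]bool{}
	for _, w := range strings.Fields(strings.ToLower(selected)) {
		selWords[w] = true
	}
	if len(candWords) == 0 {
		return 0
	}
	shared := 0
	for w := range candWords {
		if selWords[w] {
			shared++
		}
	}
	return float64(shared) / float64(len(candWords))
}

func main() {
	// 3 of the candidate's 4 distinct words are shared, so it would be
	// skipped under a 0.6 threshold.
	fmt.Println(wordOverlap("fix the deployment now", "fix the deployment") > 0.6) // prints true
}
```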

JSON format for the skill, not markdown. The skill originally used --format markdown, which only gave Claude Code the human's prompts. Switching to JSON gives it the full exchanges array (human + assistant messages), so it can identify conversational patterns - logical contradictions, escalating frustration, celebration after struggle.

Two editorial sections, not one. "Highlights" (emotional/personality moments) and "Cleverest Prompts" (creative/inventive AI usage) get separate sections because clever prompts kept getting crowded out by frustrated-developer moments. Frustration has stronger signal and dominated when everything was in one bucket.

Expanded scoring signals. Beyond the original frustration/celebration word lists, scoring now detects: sarcasm ("Captain Obvious", "thanks for nothing"), single-word disbelief ("Really?", "No."), context-amnesia complaints ("have you forgotten"), and AI self-reference questions ("are you stuck", "why did you"). This ensures wit and personality score competitively with raw frustration keywords.
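The expanded signal detection might look roughly like this (phrase lists abbreviated and illustrative; the PR's real lists and weights are not shown here):

```go
package main

import (
	"fmt"
	"strings"
)

// detectSignals tags a user message with the heuristic signal categories
// described above.
func detectSignals(msg string) []string {
	m := strings.ToLower(strings.TrimSpace(msg))
	var signals []string
	for _, p := range []string{"captain obvious", "thanks for nothing"} {
		if strings.Contains(m, p) {
			signals = append(signals, "sarcasm")
			break
		}
	}
	// Single-word disbelief: the whole message is one terse reaction.
	if m == "really?" || m == "no." {
		signals = append(signals, "disbelief")
	}
	if strings.Contains(m, "have you forgotten") {
		signals = append(signals, "amnesia")
	}
	for _, p := range []string{"are you stuck", "why did you"} {
		if strings.Contains(m, p) {
			signals = append(signals, "self-reference")
			break
		}
	}
	return signals
}

func main() {
	fmt.Println(detectSignals("Thank you, Captain Obvious.")) // prints [sarcasm]
}
```

Matching whole-message equality for disbelief (rather than substring search) is what keeps "Really?" scoring high without flagging every message that merely contains the word.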

--ai-curate=false for the skill. The skill always passes this flag because Claude Code itself is the curator. Running a nested Claude subprocess inside Claude Code would be redundant (and slow).

Test plan

  • go test ./cmd/entire/cli/digest/... - All digest package tests (scoring, dedup, diversity, formats, multi-session)
  • go test ./cmd/entire/cli/agent/claudecode/... - Skill install/uninstall/update tests
  • entire enable in a repo installs the skill at ~/.claude/skills/entire-digest/SKILL.md
  • entire digest in a repo with checkpoints produces terminal output
  • entire digest --format json produces valid JSON with top_conversations array
  • entire digest --format markdown produces clean markdown
  • entire digest --short shows stats only
  • entire digest --author <name> filters to one person
  • entire digest --period today / --period 30d / --period all filter correctly
  • /entire-digest in Claude Code produces a two-section newsletter with highlights and clever prompts
  • Same prompt in multiple sessions appears at most once in output
  • Output has variety - no theme dominates all slots

🤖 Generated with Claude Code

Adds `entire digest` CLI command and `/entire-digest` Claude Code skill
that turn checkpoint data into a team highlights newsletter - funniest
exchanges, cleverest prompts, and usage stats.

Pipeline: collect checkpoints, score conversation arcs with heuristic
signals (sarcasm, disbelief, frustration, amnesia, celebration),
cross-session dedup, diversity-aware ranking, optional AI curation.

Formats: terminal (color-coded, pager), markdown, JSON (full exchanges
for Claude Code skill). Skill installed automatically by `entire enable`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@mvanhorn mvanhorn requested a review from a team as a code owner February 25, 2026 04:57