A lightweight Claude Cowork skill that audits the efficiency of your scheduled agents. It samples the most recent run per agent, estimates token usage, and produces a comparison report with efficiency flags.
If you run scheduled agents in Cowork (job scrapers, daily briefings, processing pipelines, etc.), this skill helps you understand which agents are expensive relative to their output — without requiring exact token metrics.
## What It Does

For each agent, the audit:
- Reads the most recent session transcript (last 10 messages only, to stay cheap)
- Counts tool calls and categorizes them (web search, database reads/writes, file ops)
- Extracts yield metrics from the agent's run summary (items found vs. added, etc.)
- Estimates token weight using a simple heuristic
- Flags agents as OK, Review (high cost relative to yield), or Lean (low activity, likely exited early)
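The README doesn't spell out the token heuristic or the flag thresholds, so here is a minimal sketch of how such an estimate-and-flag pass might look. All constants, function names, and thresholds below are illustrative assumptions, not the skill's actual values; only the OK / Review / Lean flag names come from the skill itself.

```python
# Illustrative sketch of the estimate-and-flag pass.
# All per-message/per-tool-call weights and thresholds are assumptions.

def estimate_tokens(messages: int, tool_calls: int,
                    per_message: int = 600, per_tool_call: int = 900) -> int:
    """Rough token weight: a fixed cost per message plus a heavier
    cost per tool call (tool results tend to dominate context)."""
    return messages * per_message + tool_calls * per_tool_call

def flag_agent(est_tokens: int, items_added: int, messages: int) -> str:
    """Assign OK / Review / Lean using simple illustrative thresholds."""
    if messages <= 6:                      # barely ran -> likely exited early
        return "Lean"
    if items_added == 0 and est_tokens > 20_000:
        return "Review"                    # high cost, zero yield
    if items_added and est_tokens / items_added > 20_000:
        return "Review"                    # high cost per item added
    return "OK"

# Example: a discovery agent with 30 messages, 18 tool calls, 1 item added
est = estimate_tokens(messages=30, tool_calls=18)
print(est, flag_agent(est, items_added=1, messages=30))  # → 34200 Review
```

With these (assumed) thresholds, a run like the Social Impact example in the sample report below (~28K tokens, 1 item added) lands in Review, while a short briefing run stays OK.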
The output is a single markdown report saved to your workspace folder.
## Design

This skill is intentionally lightweight. It was built with the constraint that the audit itself should cost roughly the same as a single agent run (~12-18K estimated tokens). It samples rather than censuses, reads summaries rather than full transcripts, and produces compact output.
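The "sample rather than census" rule can be sketched as reading only the tail of each agent's newest transcript. The directory layout, `.jsonl` session format, and 10-message window below are assumptions about how transcripts might be stored, not the skill's actual storage:

```python
# Illustrative: keep the audit cheap by sampling only the tail of the
# most recent session per agent. File layout and format are assumptions.
import json
from pathlib import Path

def sample_latest_run(agent_dir: Path, tail: int = 10) -> list[dict]:
    """Return the last `tail` messages from the newest session file."""
    sessions = sorted(agent_dir.glob("*.jsonl"),
                      key=lambda p: p.stat().st_mtime)
    if not sessions:
        return []
    with sessions[-1].open() as f:
        messages = [json.loads(line) for line in f if line.strip()]
    return messages[-tail:]
```

Reading a fixed-size tail keeps per-agent cost bounded regardless of how long the underlying run was.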
## Installation

```
/plugin marketplace add LukeFusion/agent-efficiency-audit
/plugin install agent-efficiency-audit@agent-efficiency-audit
```
## Usage

Once installed, trigger the audit by asking Claude in any Cowork session:
- "Audit my agents"
- "How are my agents doing?"
- "Run an agent efficiency report"
- "How much are my scheduled tasks costing?"
You can also schedule it to run automatically (e.g., every 5 days) using Cowork's scheduled tasks.
## Example Report

```markdown
# Agent Efficiency Audit — 2026-04-03

## Summary

5 agents audited. 4 flagged OK, 1 flagged for Review (Social Impact discovery
running heavy relative to yield).

## Per-Agent Breakdown

| Agent          | Type       | Last Run     | Messages | Tool Calls | Est. Tokens | Yield        | Flag   |
|----------------|------------|--------------|----------|------------|-------------|--------------|--------|
| Energy/Climate | Discovery  | Apr 3 1:04am | 24       | 14         | ~22K        | 5 added / 15 | OK     |
| Data Platform  | Discovery  | Apr 3 1:02am | 18       | 10         | ~16K        | 2 added / 8  | OK     |
| Social Impact  | Discovery  | Apr 3 1:08am | 30       | 18         | ~28K        | 1 added / 12 | Review |
| Job Processing | Processing | Apr 3 5:04am | 40       | 22         | ~45K        | 3 scored     | OK     |
| Morning Brief  | Briefing   | Apr 3 7:01am | 8        | 4          | ~5K         | completed    | OK     |

## Tool Call Breakdown

| Agent          | Web Search | DB Read | DB Write | File Ops | Other |
|----------------|------------|---------|----------|----------|-------|
| Energy/Climate | 6          | 3       | 4        | 0        | 1     |
| Data Platform  | 4          | 2       | 3        | 0        | 1     |
| Social Impact  | 8          | 4       | 2        | 1        | 3     |
| Job Processing | 0          | 6       | 4        | 8        | 4     |
| Morning Brief  | 1          | 0       | 0        | 2        | 1     |

## Flags & Observations

- Social Impact discovery used 28K est. tokens but only added 1 role from 12
  evaluated. The high web search count (8) relative to yield suggests broad
  searches returning low-relevance results. Consider narrowing search terms
  or adding filters.

## Comparison to Prior Audit

First audit — no baseline yet.
```
## License

MIT