Problem
Agents running inside Moat containers have no built-in way to:
- Look up Moat documentation when they encounter unfamiliar behaviour
- Report bugs or unexpected errors back to the Moat team
- Diagnose and explain permission/network errors to the user
Users (and agents) currently have to leave the container context to search docs or file GitHub issues manually. There is also no automatic context attached to reports, so bug reports tend to be sparse.
Proposed solution
Inject a `moat` skill into every container at run time. The skill would be auto-invoked by Claude when relevant (e.g. on permission errors, network blocks, unexpected proxy behaviour) and would provide:
1. Documentation lookup
- Answers questions about Moat concepts (grants, network policy, ports, services, hooks, etc.)
- Links to the relevant section of https://majorcontext.com/moat/ for deeper reading
2. Bug / feedback reporting
- Opens a pre-filled GitHub issue against `majorcontext/moat`
- Automatically attaches non-private environment context, for example:
  - Moat version / run ID
  - Grant types active in this run (e.g. `github`, `claude`) — not the credential values
  - Network policy in effect
  - Configured ports / services (names only)
  - Agent type (e.g. `claude`)
  - OS / platform info
- Omits anything private (tokens, secrets, `MOAT_*_URL` values with credentials)
3. Permission / network error diagnosis
- When Claude hits a blocked host or a 407 from the proxy, the skill explains why (network policy, missing grant, etc.) and suggests the fix (e.g. "add `--allow` flag", "run `moat grant <name>`")
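The diagnosis step could be sketched as a simple mapping from the observed failure to a Moat-level explanation. The remedy commands (`moat grant <name>`, `--allow`) follow the examples above; the function signature and inputs (HTTP status, target host, allow-list) are illustrative assumptions, not Moat's actual diagnosis logic:

```python
def diagnose(status=None, host=None, allowed_hosts=()):
    """Translate a proxy/network failure into a Moat-level explanation.

    Illustrative sketch only; the error shapes are assumptions about
    what the skill would observe inside the container.
    """
    if status == 407:
        # 407 Proxy Authentication Required: the proxy wanted credentials
        # it never received, which usually means a grant is missing.
        return "Missing grant for this host. Try: moat grant <name>"
    if host is not None and host not in allowed_hosts:
        # The network policy never allowed this host in the first place.
        return f"Host {host!r} is blocked by the network policy. Try: --allow {host}"
    return "No Moat-specific cause found; likely an ordinary network error."
```

The skill's prompt context would carry this decision table in prose; encoding it as a tool is only one option.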
Implementation sketch
The skill could be delivered as a CLAUDE.md / AGENTS.md fragment and a small MCP tool (or just prompt context) injected into the container's Claude Code configuration at startup. The bug-reporting tool would call the GitHub API (or `gh`) with a structured template and the pre-collected context.
The skill description should be broad enough for Claude to auto-invoke it whenever Moat-related confusion arises, without requiring the user to know to ask for it.
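A minimal sketch of the bug-reporting half. The `MOAT_*` environment variable names here are hypothetical stand-ins for however Moat actually exposes run metadata; the redaction pattern and template sections are likewise illustrative. The `gh issue create` flags (`--repo`, `--title`, `--body`) are real GitHub CLI options:

```python
import json
import os
import platform
import re

# Anything matching PRIVATE is treated as credential-bearing and is
# never included in a report (tokens, secrets, MOAT_*_URL values).
PRIVATE = re.compile(r"TOKEN|SECRET|KEY|PASSWORD|^MOAT_.+_URL$")

def collect_report_context(env=None):
    """Gather the non-private context block for a pre-filled issue."""
    env = dict(os.environ if env is None else env)
    safe = {k: v for k, v in env.items()
            if k.startswith("MOAT_") and not PRIVATE.search(k)}
    return {
        "moat_version": safe.get("MOAT_VERSION", "unknown"),
        "run_id": safe.get("MOAT_RUN_ID", "unknown"),
        # Grant *names* only, never the credential values.
        "grants": [g for g in safe.get("MOAT_GRANTS", "").split(",") if g],
        "network_policy": safe.get("MOAT_NETWORK_POLICY", "unknown"),
        "agent": safe.get("MOAT_AGENT", "unknown"),
        "platform": platform.platform(),
    }

def build_issue_command(title, summary, context):
    """Compose a `gh issue create` invocation with a structured template."""
    body = (f"## What happened\n{summary}\n\n"
            "## Environment (auto-collected, non-private)\n"
            + json.dumps(context, indent=2))
    return ["gh", "issue", "create",
            "--repo", "majorcontext/moat",
            "--title", title,
            "--body", body]
```

The returned argument list could be run via `subprocess.run` by the MCP tool, or shown to the user for confirmation before anything leaves the container.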
Why this matters
Moat containers are intentionally isolated — that isolation makes it harder to get help when something goes wrong. A built-in skill closes that loop: agents can self-diagnose common issues and file high-quality bug reports with zero friction, improving both the user experience and the quality of feedback the Moat team receives.