RunWhen Platform MCP lets your coding agent (such as Cursor, Claude, Continue, or Copilot) talk to the RunWhen platform — workspace chat, issues, SLXs, run sessions, and the Tool Builder — over the Model Context Protocol (MCP).
- Workspace chat: Ask the RunWhen AI assistant about your infrastructure. It has access to issue search, task/SLX search, run sessions, resource discovery, knowledge base, graphing, and Mermaid diagrams. Supports selecting an assistant (persona) via `persona_name`.
- Task authoring (Tool Builder): Write bash or Python scripts locally, validate them against the RunWhen contract, run them against live infrastructure, and commit them as SLXs. Use `get_workspace_context` to load `RUNWHEN.md` conventions before writing.
- Direct data access: List workspaces, issues, SLXs, and run sessions; get runbooks and the config index; search tasks and resources. Plus create and update chat rules and commands.
- Python 3.10 or newer
- RunWhen account and API token (see Getting a token)
- Any MCP client (Cursor, Claude Desktop, Continue, etc.)
1. Install the server:

   ```bash
   pip install runwhen-platform-mcp
   ```

   Or from source (use a venv and then point your MCP client at the venv’s `runwhen-platform-mcp`):

   ```bash
   git clone https://github.com/runwhen-contrib/runwhen-platform-mcp.git
   cd runwhen-platform-mcp
   python3 -m venv .venv
   source .venv/bin/activate   # Windows: .venv\Scripts\activate
   pip install -e .
   ```

2. Set environment variables (see Configuration): `RW_API_URL`, `RUNWHEN_TOKEN`, and optionally `DEFAULT_WORKSPACE`.

3. Add the server to your MCP client using the config below. Replace `your-jwt-token` and `your-workspace` with your RunWhen token and workspace name.
Add the following to your MCP client config:
```json
{
  "mcpServers": {
    "runwhen": {
      "command": "runwhen-platform-mcp",
      "env": {
        "RW_API_URL": "https://papi.beta.runwhen.com",
        "RUNWHEN_TOKEN": "your-jwt-token",
        "DEFAULT_WORKSPACE": "your-workspace"
      }
    }
  }
}
```

If you installed from source into a venv, use the full path to the venv’s `runwhen-platform-mcp` as `command` (e.g. `/path/to/runwhen-platform-mcp/.venv/bin/runwhen-platform-mcp`). Find it with `which runwhen-platform-mcp` after activating the venv.
Configure the RunWhen MCP server in your client as shown below. Use the JSON block from Getting started; only the location of the config differs by client.
Go to Cursor Settings → MCP → New MCP Server (or edit `.cursor/mcp.json`). Paste the config from Getting started. If you use a venv, set `command` to the full path to `.venv/bin/runwhen-platform-mcp`.
VS Code supports MCP servers through GitHub Copilot. Add the config to your workspace or user settings:
- Workspace: `.vscode/mcp.json` in your project root
- User: `settings.json` → `"mcp.servers"` key
```bash
git clone https://github.com/runwhen-contrib/runwhen-platform-mcp.git
cd runwhen-platform-mcp
python -m venv .venv
.venv\Scripts\activate
pip install -e .
```

Then add to `.vscode/mcp.json`:
```json
{
  "mcpServers": {
    "runwhen": {
      "command": "C:\\path\\to\\runwhen-platform-mcp\\.venv\\Scripts\\runwhen-platform-mcp.exe",
      "env": {
        "RW_API_URL": "https://papi.beta.runwhen.com",
        "RUNWHEN_TOKEN": "your-jwt-token",
        "DEFAULT_WORKSPACE": "your-workspace"
      }
    }
  }
}
```

Replace `C:\\path\\to\\` with the actual path where you cloned the repo. To find the exact path, run `where runwhen-platform-mcp` in a terminal with the venv activated.
Tip: On Windows, pip installs console scripts as `.exe` files in `.venv\Scripts\`. Always use the full absolute path with backslashes in the MCP config.
```json
{
  "mcpServers": {
    "runwhen": {
      "command": "/path/to/runwhen-platform-mcp/.venv/bin/runwhen-platform-mcp",
      "env": {
        "RW_API_URL": "https://papi.beta.runwhen.com",
        "RUNWHEN_TOKEN": "your-jwt-token",
        "DEFAULT_WORKSPACE": "your-workspace"
      }
    }
  }
}
```

Add the config to:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/claude/claude_desktop_config.json`
Use the same `mcpServers.runwhen` block as in Getting started.
Any client that supports MCP over stdio can use this server. Register a local MCP server with:
- Command: `runwhen-platform-mcp` (or the full path to the venv’s `runwhen-platform-mcp` if you installed from source)
- Env: `RW_API_URL`, `RUNWHEN_TOKEN`, and optionally `DEFAULT_WORKSPACE`
See your client’s docs for where to add MCP servers (e.g. Continue, Codex, Gemini CLI, etc.).
The MCP server supports streamable HTTP so your editor can connect over HTTPS without a local Python install.
RunWhen operates a shared endpoint for the beta environment:
https://mcp.beta.runwhen.com/mcp
Use your RunWhen beta JWT or Personal Access Token (same as local mode) in the Authorization header. Official docs: RunWhen MCP Server — Remote server (HTTP).
Example `mcpServers` block (all remote clients below use this shape):
```json
{
  "mcpServers": {
    "runwhen": {
      "url": "https://mcp.beta.runwhen.com/mcp",
      "headers": {
        "Authorization": "Bearer your-runwhen-token"
      }
    }
  }
}
```

Important: Use `/mcp` with no trailing slash. The server redirects `/mcp/` → `/mcp`, which can break some MCP clients.
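To illustrate the wire format, here is a minimal sketch of the headers and JSON-RPC `initialize` message a streamable-HTTP client POSTs to the endpoint. The `protocolVersion` value and `clientInfo` fields are illustrative placeholders; consult the MCP specification and your client for the exact handshake.

```python
def build_initialize_request(token: str, request_id: int = 1) -> tuple[dict, dict]:
    """Build headers and JSON-RPC body for an MCP `initialize` call."""
    headers = {
        "Content-Type": "application/json",
        # Streamable HTTP responses may arrive as plain JSON or as an SSE stream.
        "Accept": "application/json, text/event-stream",
        "Authorization": f"Bearer {token}",
    }
    body = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # illustrative spec revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return headers, body

headers, body = build_initialize_request("your-runwhen-token")
```

Subsequent `tools/list` and `tools/call` messages use the same headers and endpoint.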
Workspace: Pass `workspace_name` on tools that support it when you need a specific workspace. RunWhen’s hosted service is configured for the beta API; self-hosted deployments often set `DEFAULT_WORKSPACE` in server environment variables.
To run the server yourself (Docker, Kubernetes, etc.), set `url` to your own hostname (for example `https://mcp.your-domain.com/mcp`) and use the same Bearer token pattern. See Running the server in HTTP mode yourself below.
- Open Cursor Settings → MCP → New MCP Server, or edit `.cursor/mcp.json` in your project (or user config, depending on how you scope MCP).
- Add the `mcpServers.runwhen` block above (`https://mcp.beta.runwhen.com/mcp` for hosted beta, or your self-hosted URL) and your Bearer token.
- Reload MCP / restart Cursor if the client does not pick up changes immediately.
Remote MCP support depends on your Cursor version; if url + headers are not accepted, use the local command install instead.
- Add the same `mcpServers` entry to `.vscode/mcp.json` (workspace) or to user `settings.json` under the key your VS Code build uses for MCP servers (for example `mcp.servers` — check the VS Code MCP documentation for the current schema).
- Use `url` and `headers` as in the JSON block above.
Availability of remote MCP in VS Code evolves with Copilot; confirm in release notes if url-based servers are enabled for your version.
1. Edit the Claude Desktop config file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/claude/claude_desktop_config.json`
2. Merge the `mcpServers.runwhen` object from the JSON block above (hosted or self-hosted URL) into the top-level `mcpServers` map (alongside any other servers you already have).
3. Fully quit and restart Claude Desktop.
Any client that supports remote or HTTP MCP (streamable HTTP) can use the same url + headers pattern. For a local-only client, use the stdio command + env setup in Getting started.
Running the server in HTTP mode yourself:
Using Docker:
```bash
docker run -p 8000:8000 \
  -e RW_API_URL=https://papi.beta.runwhen.com \
  ghcr.io/runwhen-contrib/runwhen-platform-mcp:latest
```

Or locally:
```bash
export MCP_TRANSPORT=http
export MCP_HOST=0.0.0.0
export MCP_PORT=8000
export FASTMCP_STATELESS_HTTP=true
export RW_API_URL=https://papi.beta.runwhen.com
runwhen-platform-mcp
```

The server exposes:

- `/mcp/` — Streamable HTTP MCP endpoint (POST for tool calls, GET for SSE)
- `/health` — Health check (200 OK with version info)
- `/livez` — Kubernetes liveness probe
Authentication in HTTP mode: Clients send a RunWhen token via the `Authorization: Bearer <token>` header. The server validates it against PAPI's `whoami` endpoint — both JWTs and Personal Access Tokens work. No `RUNWHEN_TOKEN` env var is needed on the server side; each client authenticates with its own token.
| Variable | Required | Description |
|---|---|---|
| `MCP_TRANSPORT` | Yes | Set to `http` to enable remote mode (default: `stdio`). |
| `MCP_HOST` | No | Bind address (default: `0.0.0.0`). |
| `MCP_PORT` | No | Listen port (default: `8000`). |
| `FASTMCP_STATELESS_HTTP` | No | Set to `true` for horizontal scaling behind a load balancer. |
| `RW_API_URL` | Yes | PAPI base URL. Used for token verification and API calls. |
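The defaults in the table can be resolved with a small helper along these lines (an illustrative sketch, not the server's actual config loader):

```python
import os

def load_http_config(env=os.environ) -> dict:
    """Resolve HTTP-mode settings from environment variables,
    applying the defaults documented in the table above."""
    return {
        "transport": env.get("MCP_TRANSPORT", "stdio"),
        "host": env.get("MCP_HOST", "0.0.0.0"),
        "port": int(env.get("MCP_PORT", "8000")),
        # Any value other than the string "true" leaves stateless mode off.
        "stateless": env.get("FASTMCP_STATELESS_HTTP", "").lower() == "true",
    }

cfg = load_http_config({"MCP_TRANSPORT": "http", "MCP_PORT": "9000"})
```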
If you work across multiple RunWhen environments (e.g. beta and production, or separate workspaces), you can register multiple MCP servers. Important: only enable one at a time unless you specifically need cross-environment workflows — multiple active servers with identical tool names confuse LLM agents.
Use `MCP_SERVER_LABEL` to give each server a clear identity:
```json
{
  "mcpServers": {
    "runwhen": {
      "command": "runwhen-platform-mcp",
      "env": {
        "RW_API_URL": "https://papi.app.runwhen.com",
        "RUNWHEN_TOKEN": "your-prod-token",
        "DEFAULT_WORKSPACE": "my-prod-workspace",
        "MCP_SERVER_LABEL": "prod"
      }
    },
    "runwhen-beta": {
      "command": "runwhen-platform-mcp",
      "env": {
        "RW_API_URL": "https://papi.beta.runwhen.com",
        "RUNWHEN_TOKEN": "your-beta-token",
        "DEFAULT_WORKSPACE": "my-beta-workspace",
        "MCP_SERVER_LABEL": "beta"
      }
    }
  }
}
```

The server includes its label, environment, and workspace in its name and instructions so agents can route tool calls to the correct instance. See `mcp-multi-env.json` for a full example.
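When `MCP_SERVER_LABEL` is unset, the label is auto-derived from `RW_API_URL`; presumably this keys off the environment component of the hostname. A plausible sketch of that heuristic (an assumption for illustration, not the actual implementation):

```python
from urllib.parse import urlparse

def derive_label(rw_api_url: str) -> str:
    """Guess an environment label from the PAPI hostname, e.g.
    papi.beta.runwhen.com -> 'beta', papi.app.runwhen.com -> 'app'.
    This heuristic is assumed, not taken from the server's source."""
    host = urlparse(rw_api_url).hostname or ""
    parts = host.split(".")
    # Take the component after the 'papi' subdomain, if present.
    if len(parts) >= 2 and parts[0] == "papi":
        return parts[1]
    return host or "unknown"

assert derive_label("https://papi.beta.runwhen.com") == "beta"
```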
After the server is connected, try:
What workspaces do I have access to?
or:
Summarize the current issues in my workspace.
Your client should call `list_workspaces` or `get_workspace_issues` and show the result. For the full chat experience, try:
Using workspace chat, what tasks are watching my production namespace?
The server exposes these tools, grouped by use case.
- Workspace intelligence (10 tools)
  - `workspace_chat` — Ask the RunWhen AI assistant about your infrastructure (issues, tasks, run sessions, resources, knowledge base). Optional `persona_name` to select an assistant.
  - `list_workspaces` — List workspaces you have access to.
  - `get_workspace_chat_config` — Get resolved chat rules and commands (metadata). Optional `persona_name`.
  - `get_workspace_issues` — Current issues; optional severity filter (1–4).
  - `get_workspace_slxs` — List SLXs (health checks and tasks).
  - `get_run_sessions` — Recent run session results.
  - `get_workspace_config_index` — Workspace config and resource relationships.
  - `get_issue_details` — Details for a specific issue by ID.
  - `get_slx_runbook` — Runbook definition for an SLX.
  - `search_workspace` — Search tasks, resources, and config by keyword.
- Chat rules and commands (8 tools)
  - `list_chat_rules` — List chat rules (optional filters: `scope_type`, `scope_id`, `is_active`).
  - `get_chat_rule` — Get a chat rule by ID (full content).
  - `create_chat_rule` — Create a rule (`name`, `ruleContent`, `scopeType`, `scopeId`, `isActive`).
  - `update_chat_rule` — Update a rule by ID.
  - `list_chat_commands` — List chat commands (slash-commands).
  - `get_chat_command` — Get a command by ID (full content).
  - `create_chat_command` — Create a command (`name`, `commandContent`, `scopeType`, `scopeId`).
  - `update_chat_command` — Update a command by ID.
- CodeBundle Registry (3 tools)
  - `search_registry` — Search the public CodeBundle Registry for reusable automation. Always check before writing custom scripts.
  - `get_registry_codebundle` — Get full details of a specific codebundle (tasks, SLIs, env vars, source URL).
  - `deploy_registry_codebundle` — Deploy a registry codebundle as an SLX. Generates native codebundle YAML (different from `commit_slx`, which embeds inline scripts).
- Task authoring — Tool Builder (9 tools)
  - `get_workspace_context` — Load `RUNWHEN.md` from the project. Call before writing scripts so the agent follows your conventions.
  - `validate_script` — Validate a script against the RunWhen contract (`main`, issue format, FD 3 for bash).
  - `run_script` — Run a script on a RunWhen runner; returns a run ID.
  - `get_run_status` — Status of a run (RUNNING, SUCCEEDED, FAILED).
  - `get_run_output` — Parsed output (issues, stdout, stderr, report).
  - `run_script_and_wait` — Run a script and wait for full results (run + poll + output).
  - `commit_slx` — Commit a tested script as an SLX (task + optional SLI; supports `sli_script` or `cron_schedule`).
  - `get_workspace_secrets` — List secret keys (e.g. `kubeconfig`).
  - `get_workspace_locations` — List runner locations. Location auto-resolves for `run_script`, `commit_slx`, etc.; this tool is only needed when multiple workspace runners exist and you need to choose.
| Variable | Required | Description |
|---|---|---|
| `RW_API_URL` | Yes | RunWhen API base URL (e.g. `https://papi.beta.runwhen.com`). Agent URL is derived (subdomain `papi` → `agentfarm`). |
| `RUNWHEN_TOKEN` | Yes | RunWhen API token (JWT or Personal Access Token). Used for both API and Agent. |
| `DEFAULT_WORKSPACE` | No | Default workspace so tools don’t need `workspace_name` every time. |
| `MCP_SERVER_LABEL` | No | Human-readable label for this server instance (e.g. `prod`, `beta`). Included in the server name and instructions for multi-environment setups. Auto-derived from `RW_API_URL` if not set. |
| `RUNWHEN_CONTEXT_FILE` | No | Override path to `RUNWHEN.md`; otherwise auto-discovered from cwd. |
| `RUNWHEN_REGISTRY_URL` | No | CodeBundle Registry URL (default: `https://registry.runwhen.com`). Public API, no auth required. |
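The table notes that the Agent URL is derived by mapping the `papi` subdomain to `agentfarm`. A sketch of that mapping (assumed behavior, for illustration only):

```python
from urllib.parse import urlparse, urlunparse

def derive_agent_url(rw_api_url: str) -> str:
    """Swap the leading 'papi' subdomain for 'agentfarm', e.g.
    https://papi.beta.runwhen.com -> https://agentfarm.beta.runwhen.com.
    Sketch of the documented derivation; the real code may differ."""
    parts = urlparse(rw_api_url)
    host = parts.hostname or ""
    if host.startswith("papi."):
        host = "agentfarm." + host[len("papi."):]
    return urlunparse(parts._replace(netloc=host))

assert derive_agent_url("https://papi.beta.runwhen.com") == "https://agentfarm.beta.runwhen.com"
```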
See `.env.example` in the repo.
- Personal Access Token (recommended, up to 180 days): RunWhen UI → Profile → Personal Tokens.
- Email/password (short-lived): `POST {RW_API_URL}/api/v3/token/` with `{"email": "...", "password": "..."}`.
- Browser: Dev Tools → Network → copy `Authorization: Bearer ...` from any API request.
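For the email/password option, the request can be assembled like this (a sketch: the snippet builds the URL and JSON body without sending them, and the shape of the response is not shown here):

```python
import json

RW_API_URL = "https://papi.beta.runwhen.com"  # your PAPI base URL

def build_token_request(email: str, password: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for the short-lived token endpoint.
    POST this body with Content-Type: application/json; the response
    carries the JWT to use as RUNWHEN_TOKEN."""
    url = f"{RW_API_URL}/api/v3/token/"
    body = json.dumps({"email": email, "password": password}).encode()
    return url, body

url, body = build_token_request("you@example.com", "********")
```

Prefer a Personal Access Token where possible; these JWTs expire quickly.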
Workspace roles: `readonly`, `readandrun`, `readandrunwithassistant`, `readwrite`, `admin`.
- Read and Run with Assistant (`readandrunwithassistant`): Run tasks only when tied to an assistant (persona) you’re allowed to use. Applies to run sessions (e.g. the Run button in the UI), not Tool Builder script runs.
- Workspace chat: Use `persona_name` in `workspace_chat` / `get_workspace_chat_config` to use chat in the context of an assistant you’re allowed to use.
- Tool Builder run (`run_script`, `run_script_and_wait`): Uses the author/run API; currently admin only. No "run with assistant" for MCP script execution today.
- `commit_slx`: Requires admin or readwrite.
- Workspace chat: The server forwards `workspace_chat` to the RunWhen Agent (AgentFarm), which has many internal tools. You ask in natural language; optional `persona_name` selects the assistant.
- Tool Builder flow: Search registry (`search_registry`) → load context (`get_workspace_context`) → write script → validate → get secrets/locations → test with `run_script_and_wait` → iterate → `commit_slx` → verify with `get_workspace_slxs`.
- Knowledge base: Full CRUD via `list_knowledge_base_articles`, `create_knowledge_base_article`, `update_knowledge_base_article`, `delete_knowledge_base_article`. Search also works inside `workspace_chat`.
- CodeBundle Registry: Search for existing automation before building custom. The registry at `registry.runwhen.com` is public and requires no authentication.
Put a `RUNWHEN.md` in your project root with infrastructure rules (DBs, naming, severity, etc.). The server discovers it by walking up from the current working directory. Agents should call `get_workspace_context` before writing scripts.
- Template: `runwhen_platform_mcp/docs/RUNWHEN.md.template`
- Example: `runwhen_platform_mcp/docs/RUNWHEN.md.example`
- Flow and SLI patterns: `runwhen_platform_mcp/docs/tool-builder-flow.md`
| Component | Path | Description |
|---|---|---|
| MCP server | `runwhen_platform_mcp/` | Python package; run via `runwhen-platform-mcp` or `python -m runwhen_platform_mcp.server`. |
| Docs | `runwhen_platform_mcp/docs/` | Tool Builder flow, `RUNWHEN.md` template/example. |
| Tests | `tests/` | Pytest tests; run with `pytest tests/ -v` (see `requirements-dev.txt`). |
| Skills | `skills/` | Reusable AI workflow skills (`SKILL.md`) — discovered by Cursor, Copilot, and Claude. Symlinked at `.github/skills/` for Copilot auto-discovery. |
| Rules & agents | `rules/`, `agents/` | Optional Cursor rules and agent personas. |
| Docker | `Dockerfile` | Container image for remote HTTP deployment. Published to `ghcr.io/runwhen-contrib/runwhen-platform-mcp`. |
| Cursor plugin | `.cursor-plugin/`, `mcp.json` | Plugin metadata and example MCP config. |
| Copilot instructions | `.github/copilot-instructions.md` | Always-on instructions for GitHub Copilot. |
The MCP server is client-agnostic; client-specific pieces (`.cursor-plugin/`, `.github/copilot-instructions.md`) are optional.
```bash
pip install -e .
pip install -r requirements-dev.txt
pytest tests/ -v
```

Optional Git hooks (Ruff check + format, same as CI):

```bash
pip install pre-commit       # or install with: pip install -e ".[dev]"
pre-commit install
pre-commit run --all-files   # first-time / manual check
```

CI runs tests on push and PRs to `main` (`.github/workflows/ci.yaml`).
Optional repository secrets `RUNWHEN_MCP_URL` (full streamable HTTP MCP URL, e.g. `https://mcp.<env>.runwhen.com/mcp`, no trailing slash) and `RUNWHEN_TOKEN` (the same Bearer token MCP clients use) enable a remote MCP HTTP smoke step that exercises `initialize`, `tools/list`, `list_workspaces`, and `get_workspace_issues` for workspace `t-oncall` (the workflow sets `RW_SMOKE_WORKSPACE=t-oncall`). If either secret is unset, the step is skipped with a notice.

PyPI — On every push to `main` (including merges), `.github/workflows/pypi.yaml` publishes to PyPI via `runwhen-contrib/github-actions/publish-pypi` with date-based versioning (`YYYY.MM.DD.N`). Configure `PYPI_TOKEN` (and optionally `SLACK_BOT_TOKEN` / `slack_channel`) in repo secrets.

Docker (GHCR and GCP) — Pull requests that touch image-related paths (see `.github/workflows/docker.yaml`) build and push a preview image (`pr-{branch}-{sha}`). Pushes to `main` use `.github/workflows/release.yml`: the workflow runs on each merge to `main`, and a new image is built and pushed only if that merge changes the image-related paths (package code, `Dockerfile`, `pyproject.toml`, `requirements.txt`, or `docker.yaml`). README-only (or other non-image) merges skip the Docker job, so `latest` and version tags are not republished for doc-only changes. Run Actions → Release → Run workflow to force a full run including Docker regardless of paths.
Apache-2.0