15 changes: 15 additions & 0 deletions SKILL.md
@@ -90,6 +90,21 @@ fi
echo "VENDORED_GSTACK: $_VENDORED"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
# Multi-LLM orchestration (llm-cli-gateway)
_LLM_GW="unavailable"
_LLM_GW_CLAUDE="no"
_LLM_GW_CODEX="no"
_LLM_GW_GEMINI="no"
if command -v llm-cli-gateway >/dev/null 2>&1; then
_LLM_GW="available"
command -v claude >/dev/null 2>&1 && _LLM_GW_CLAUDE="yes"
command -v codex >/dev/null 2>&1 && _LLM_GW_CODEX="yes"
command -v gemini >/dev/null 2>&1 && _LLM_GW_GEMINI="yes"
fi
echo "LLM_GATEWAY: $_LLM_GW"
[ "$_LLM_GW" = "available" ] && echo "LLM_GW_CLAUDE: $_LLM_GW_CLAUDE"
[ "$_LLM_GW" = "available" ] && echo "LLM_GW_CODEX: $_LLM_GW_CODEX"
[ "$_LLM_GW" = "available" ] && echo "LLM_GW_GEMINI: $_LLM_GW_GEMINI"
```

If `PROACTIVE` is `"false"`, do not proactively suggest gstack skills AND do not
44 changes: 44 additions & 0 deletions contrib/add-tool/README.md
@@ -0,0 +1,44 @@
# Adding an External Tool to gstack

This directory contains integrations for external development tools that
enhance gstack's workflow skills with specialized capabilities.

## Structure

Each tool integration lives in its own directory:

contrib/add-tool/<tool-name>/
├── README.md # What the tool does and how the integration works
├── tools.json # Routing contract: which gstack skills use which tools
├── detection.sh # Bash fragment appended to preamble for detection
├── install.sh # Idempotent install script
└── uninstall.sh # Clean removal script

## How it works

1. **Detection**: A bash block in the preamble checks if the tool binary
exists and outputs status variables (available/unavailable, version, etc.)

2. **Resolver**: A TypeScript resolver reads `tools.json` and generates
conditional markdown blocks for each skill template. The block is skipped
entirely when the tool is not detected.

3. **Template**: Skills that benefit from the tool include `{{TOOL_CONTEXT}}`
in their SKILL.md.tmpl, placed after `{{LLM_GATEWAY_CONTEXT}}` where present;
otherwise after `{{LEARNINGS_SEARCH}}`.
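To illustrate step 1, a detection fragment can be exercised standalone by placing a stub binary on `PATH` — a sketch only; the stub name and version string are placeholders, and real detection relies on the actual tool being installed:

```shell
#!/usr/bin/env bash
# Sketch: exercise a detection fragment without the real tool installed,
# by putting a stub binary on PATH. Stub name/version are illustrative.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho 1.1.0\n' > "$tmpdir/llm-cli-gateway"
chmod +x "$tmpdir/llm-cli-gateway"
export PATH="$tmpdir:$PATH"

# Same shape as detection.sh: set a status variable, emit one line per fact
_LLM_GW="unavailable"
command -v llm-cli-gateway >/dev/null 2>&1 && _LLM_GW="available"
echo "LLM_GATEWAY: $_LLM_GW"

rm -rf "$tmpdir"
```

Because the fragment only runs `command -v` and `echo`, it stays well under the 50ms detection budget.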

## Requirements for a tool integration

- Tool MUST be optional — gstack works without it
- Detection MUST be fast (< 50ms) — it runs on every skill invocation
- Resolver output MUST be concise — avoid prompt bloat
- Install script MUST be idempotent
- Uninstall script MUST leave gstack in a clean state
- tools.json MUST include min_version for compatibility gating
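A minimal `tools.json` skeleton satisfying these requirements might look like the following — the field names mirror the llm-gateway integration in this PR, but the tool name and values here are hypothetical:

```json
{
  "tool": "my-tool",
  "homepage": "https://example.com/my-tool",
  "mcp_server_name": "my-tool-mcp",
  "detection": { "binary": "my-tool", "min_version": "1.0.0" },
  "integrations": {
    "review": {
      "phase": "analysis",
      "context": "what the tool contributes to this skill",
      "tools": [
        { "tool": "my_tool_call", "when": "situation in which the skill should reach for it" }
      ]
    }
  }
}
```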

## Existing integrations

- [llm-gateway](llm-gateway/) — Multi-LLM orchestration via MCP (Gemini + Codex + Claude,
async parallel reviews, session continuity)
- [sqry](sqry/) — AST-based semantic code search via MCP (caller/callee and call-path tracing,
  cycle detection, complexity metrics)
41 changes: 41 additions & 0 deletions contrib/add-tool/llm-gateway/README.md
@@ -0,0 +1,41 @@
# llm-cli-gateway Integration for gstack

[llm-cli-gateway](https://github.com/verivus-oss/llm-cli-gateway) provides
unified multi-LLM orchestration via 23 MCP tools. This integration adds Gemini
as a third review voice, async parallel orchestration, and session continuity
to gstack skills.

## Install

bash contrib/add-tool/llm-gateway/install.sh [claude|codex|all]

## What it does

When llm-cli-gateway is installed and configured as an MCP server, gstack
skills gain a "Multi-LLM Orchestration" section with contextual tool
recommendations. For example:

- `/review` gets async parallel Gemini + Codex reviews with cross-model synthesis
- `/investigate` gets Gemini hypothesis validation alongside Codex second opinion
- `/plan-eng-review` gets multi-model architecture feedback
- `/ship` gets parallel pre-ship reviews from all available models

See `tools.json` for the complete routing table.

## Relationship to existing multi-LLM

gstack already invokes Codex via shell subprocess (`codex exec`). This
integration does NOT replace that — it adds complementary capabilities:

| Existing | Gateway adds |
|----------|-------------|
| Codex via `codex exec` (Bash) | Codex via `mcp__llm-cli-gw__codex_request` (MCP, structured) |
| Claude subagent (Agent tool) | Gemini as third voice (new) |
| Sequential blocking calls | Async parallel orchestration (new) |
| Stateless invocations | Session continuity (new) |

## Uninstall

bash contrib/add-tool/llm-gateway/uninstall.sh

This removes the gstack integration. llm-cli-gateway itself remains installed.
16 changes: 16 additions & 0 deletions contrib/add-tool/llm-gateway/detection.sh
@@ -0,0 +1,16 @@
# Multi-LLM orchestration (llm-cli-gateway)
# Reference fragment — inlined by preamble.ts resolver
_LLM_GW="unavailable"
_LLM_GW_CLAUDE="no"
_LLM_GW_CODEX="no"
_LLM_GW_GEMINI="no"
if command -v llm-cli-gateway >/dev/null 2>&1; then
_LLM_GW="available"
command -v claude >/dev/null 2>&1 && _LLM_GW_CLAUDE="yes"
command -v codex >/dev/null 2>&1 && _LLM_GW_CODEX="yes"
command -v gemini >/dev/null 2>&1 && _LLM_GW_GEMINI="yes"
fi
echo "LLM_GATEWAY: $_LLM_GW"
[ "$_LLM_GW" = "available" ] && echo "LLM_GW_CLAUDE: $_LLM_GW_CLAUDE"
[ "$_LLM_GW" = "available" ] && echo "LLM_GW_CODEX: $_LLM_GW_CODEX"
[ "$_LLM_GW" = "available" ] && echo "LLM_GW_GEMINI: $_LLM_GW_GEMINI"
124 changes: 124 additions & 0 deletions contrib/add-tool/llm-gateway/install.sh
@@ -0,0 +1,124 @@
#!/usr/bin/env bash
# Install llm-cli-gateway as a gstack multi-LLM orchestration add-in.
# Idempotent — safe to run multiple times.
set -e

AGENT="${1:-claude}"
MIN_VERSION="1.1.0"

echo "=== llm-cli-gateway integration for gstack ==="
echo ""

# 1. Check for llm-cli-gateway
if ! command -v llm-cli-gateway >/dev/null 2>&1; then
echo "llm-cli-gateway not found on PATH."
echo ""
echo "Install via npm:"
echo " npm install -g llm-cli-gateway"
echo ""
echo "Or clone and build:"
echo " git clone https://github.com/verivus-oss/llm-cli-gateway.git"
echo " cd llm-cli-gateway && npm install && npm run build && npm link"
echo ""
echo "Then re-run this script."
exit 1
fi

# 2. Check version
GW_VERSION=$(llm-cli-gateway --version 2>/dev/null || echo "0.0.0")
echo "Found llm-cli-gateway $GW_VERSION"

version_lt() {
# Portable semver comparison (no sort -V, works on macOS + Linux)
local IFS=.
local i a=($1) b=($2)
for ((i=0; i<3; i++)); do
local ai=${a[i]:-0} bi=${b[i]:-0}
if [ "$ai" -lt "$bi" ] 2>/dev/null; then return 0; fi
if [ "$ai" -gt "$bi" ] 2>/dev/null; then return 1; fi
done
return 1 # equal
}

if version_lt "$GW_VERSION" "$MIN_VERSION"; then
echo "llm-cli-gateway $MIN_VERSION+ required. Please upgrade:"
echo " npm install -g llm-cli-gateway@latest"
exit 1
fi

# 3. Report CLI availability
echo ""
echo "CLI availability:"
command -v claude >/dev/null 2>&1 && echo " claude: yes" || echo " claude: no (optional — install for Claude orchestration)"
command -v codex >/dev/null 2>&1 && echo " codex: yes" || echo " codex: no (optional — install for Codex orchestration)"
command -v gemini >/dev/null 2>&1 && echo " gemini: yes" || echo " gemini: no (optional — install for Gemini orchestration)"

# 4. Configure MCP for the target agent
echo ""
echo "Configuring MCP server for $AGENT..."

configure_claude() {
local settings="$HOME/.claude/settings.json"
if [ ! -f "$settings" ]; then
mkdir -p "$HOME/.claude"
echo '{}' > "$settings"
fi
# Add llm-cli-gw MCP server if not present
node -e "
const fs = require('fs');
const s = JSON.parse(fs.readFileSync('$settings', 'utf-8'));
if (!s.mcpServers) s.mcpServers = {};
if (!s.mcpServers['llm-cli-gw']) {
s.mcpServers['llm-cli-gw'] = {
command: 'llm-cli-gateway',
args: [],
env: {}
};
fs.writeFileSync('$settings', JSON.stringify(s, null, 2));
console.log('Added llm-cli-gw MCP server to ' + '$settings');
} else {
console.log('llm-cli-gw MCP server already configured in ' + '$settings');
}
"
}

configure_codex() {
local config="$HOME/.codex/config.toml"
if [ ! -f "$config" ]; then
mkdir -p "$HOME/.codex"
echo "" > "$config"
fi
if ! grep -q 'llm-cli-gw' "$config" 2>/dev/null; then
cat >> "$config" << 'TOML'

[[mcp_servers]]
name = "llm-cli-gw"
command = "llm-cli-gateway"
args = []
TOML
echo "Added llm-cli-gw MCP server to $config"
else
echo "llm-cli-gw MCP server already configured in $config"
fi
}

case "$AGENT" in
claude) configure_claude ;;
codex) configure_codex ;;
all) configure_claude; configure_codex ;;
*) echo "Warning: Auto-configuration not supported for $AGENT. Configure MCP manually." ;;
esac

# 5. Regenerate gstack skills (picks up {{LLM_GATEWAY_CONTEXT}} resolver)
GSTACK_DIR="${GSTACK_ROOT:-$HOME/.claude/skills/gstack}"
if [ -f "$GSTACK_DIR/package.json" ]; then
echo ""
echo "Regenerating gstack skill docs..."
(cd "$GSTACK_DIR" && bun run gen:skill-docs --host all 2>/dev/null) || {
echo "Warning: Could not regenerate skill docs. Run manually:"
echo " cd $GSTACK_DIR && bun run gen:skill-docs --host all"
}
fi

echo ""
echo "Done. llm-cli-gateway multi-LLM orchestration is now available in gstack skills."
66 changes: 66 additions & 0 deletions contrib/add-tool/llm-gateway/tools.json
@@ -0,0 +1,66 @@
{
"tool": "llm-cli-gateway",
"homepage": "https://github.com/verivus-oss/llm-cli-gateway",
"mcp_server_name": "llm-cli-gw",
"detection": {
"binary": "llm-cli-gateway",
"min_version": "1.1.0"
},
"integrations": {
"review": {
"phase": "multi-llm-review",
"context": "multi-LLM code review",
"tools": [
{ "tool": "gemini_request_async", "when": "dispatch Gemini review in parallel with Codex", "requires_cli": "gemini" },
{ "tool": "codex_request_async", "when": "dispatch Codex review in parallel with Gemini", "requires_cli": "codex" },
{ "tool": "llm_job_status", "when": "poll async job completion" },
{ "tool": "llm_job_result", "when": "collect finished review results" },
{ "tool": "session_create", "when": "establish review session for follow-up clarification" }
]
},
"investigate": {
"phase": "hypothesis-validation",
"context": "cross-model root cause validation",
"tools": [
{ "tool": "gemini_request", "when": "validate root cause hypothesis with a fresh perspective", "requires_cli": "gemini" },
{ "tool": "codex_request", "when": "get Codex second opinion on the suspected root cause", "requires_cli": "codex" },
{ "tool": "session_create", "when": "establish investigation session for iterative hypothesis testing" }
]
},
"plan-eng-review": {
"phase": "architecture-review",
"context": "multi-LLM architecture review",
"tools": [
{ "tool": "gemini_request", "when": "get Gemini perspective on architecture decisions and trade-offs", "requires_cli": "gemini" },
{ "tool": "codex_request", "when": "get Codex cold-read of the architecture plan", "requires_cli": "codex" },
{ "tool": "session_create", "when": "establish review session for follow-up questions" }
]
},
"plan-ceo-review": {
"phase": "strategic-review",
"context": "multi-LLM strategic assessment",
"tools": [
{ "tool": "gemini_request", "when": "get Gemini perspective on scope and strategic direction", "requires_cli": "gemini" },
{ "tool": "codex_request", "when": "get Codex assessment of feasibility and technical risk", "requires_cli": "codex" }
]
},
"ship": {
"phase": "pre-ship-review",
"context": "multi-LLM pre-ship verification",
"tools": [
{ "tool": "gemini_request_async", "when": "dispatch Gemini pre-ship review in parallel", "requires_cli": "gemini" },
{ "tool": "codex_request_async", "when": "dispatch Codex pre-ship review in parallel", "requires_cli": "codex" },
{ "tool": "llm_job_status", "when": "poll async job completion" },
{ "tool": "llm_job_result", "when": "collect pre-ship review results" }
]
},
"retro": {
"phase": "multi-perspective-analysis",
"context": "multi-LLM retrospective analysis",
"tools": [
{ "tool": "gemini_request", "when": "get Gemini perspective on patterns, trends, and blind spots", "requires_cli": "gemini" },
{ "tool": "codex_request", "when": "get Codex analysis of code quality trends", "requires_cli": "codex" }
]
}
}
}
64 changes: 64 additions & 0 deletions contrib/add-tool/llm-gateway/uninstall.sh
@@ -0,0 +1,64 @@
#!/usr/bin/env bash
# Remove llm-cli-gateway integration from gstack.
# Does NOT uninstall llm-cli-gateway itself — only removes the gstack integration.
set -e

echo "=== Removing llm-cli-gateway integration from gstack ==="

# 1. Remove MCP config entries (best-effort)
if command -v node >/dev/null 2>&1; then
node -e "
const fs = require('fs');
const settings = process.env.HOME + '/.claude/settings.json';
try {
const s = JSON.parse(fs.readFileSync(settings, 'utf-8'));
if (s.mcpServers && s.mcpServers['llm-cli-gw']) {
delete s.mcpServers['llm-cli-gw'];
fs.writeFileSync(settings, JSON.stringify(s, null, 2));
console.log('Removed llm-cli-gw MCP server from Claude settings');
}
} catch(e) {}
" 2>/dev/null || true
fi

# 2. Remove from Codex config (best-effort, uses node for portability — no sed -i)
CODEX_CONFIG="$HOME/.codex/config.toml"
if [ -f "$CODEX_CONFIG" ] && grep -q 'llm-cli-gw' "$CODEX_CONFIG" 2>/dev/null; then
if command -v node >/dev/null 2>&1; then
node -e "
const fs = require('fs');
const config = '$CODEX_CONFIG';
try {
const lines = fs.readFileSync(config, 'utf-8').split('\n');
const out = [];
let skip = false;
for (let i = 0; i < lines.length; i++) {
if (lines[i].trim() === '[[mcp_servers]]') {
// Look ahead: is this the llm-cli-gw block?
const block = lines.slice(i, i + 5).join('\n');
if (block.includes('llm-cli-gw')) { skip = true; continue; }
}
if (skip) {
if (lines[i].trim() === '' || lines[i].startsWith('[[')) { skip = false; }
else { continue; }
}
if (!skip) out.push(lines[i]);
}
fs.writeFileSync(config, out.join('\n'));
console.log('Removed llm-cli-gw MCP server from Codex config');
} catch(e) {}
" 2>/dev/null || true
else
echo "Warning: node not available. Manually remove llm-cli-gw from $CODEX_CONFIG"
fi
fi

# 3. Regenerate gstack skills ({{LLM_GATEWAY_CONTEXT}} emits nothing without gateway)
GSTACK_DIR="${GSTACK_ROOT:-$HOME/.claude/skills/gstack}"
if [ -f "$GSTACK_DIR/package.json" ]; then
echo "Regenerating gstack skill docs..."
(cd "$GSTACK_DIR" && bun run gen:skill-docs --host all 2>/dev/null) || true
fi

echo "Done. llm-cli-gateway integration removed. The gateway itself is still installed."
echo "To fully uninstall: npm uninstall -g llm-cli-gateway"
2 changes: 2 additions & 0 deletions investigate/SKILL.md.tmpl
@@ -63,6 +63,8 @@ Gather context before forming any hypothesis.

{{LEARNINGS_SEARCH}}

{{LLM_GATEWAY_CONTEXT}}

Output: **"Root cause hypothesis: ..."** — a specific, testable claim about what is wrong and why.

---